Re: [one-users] Making scheduler allocation aware

2010-11-10 Thread Shashank Rachamalla
Hi Javier

Thanks for the inputs but I came across another problem while testing:

If OpenNebula receives multiple VM requests in a short span of time, the
scheduler might make placement decisions for all of these VMs using the host
monitoring information from the last monitoring cycle. Ideally, fresh host
monitoring information should be taken into account before processing each
pending request, as the previous requests might have already changed the
host's state. This can result in overcommitting when a host is being used
close to its full capacity.

*Is there any workaround that helps the scheduler overcome this problem?*

Steps to reproduce the problem scenario:

Host 1: total memory = 3GB
Host 2: total memory = 2GB
Assume Host1 and Host2 have the same number of CPU cores (Host1 will therefore
have a higher RANK value).

VM1: memory = 2GB
VM2: memory = 2GB

Start VM1 and VM2 immediately one after the other: both VM1 and VM2 will
come up on Host1, thus overcommitting it.

Start VM1 and VM2 with an intermediate delay of 60 sec: VM1 will come up on
Host1 and VM2 will come up on Host2. This works because OpenNebula will have
fetched a fresh set of host monitoring information in that time.
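
For illustration of how this ties to the monitoring cycle: shortening the host
monitoring interval narrows the window of stale data, although it does not
remove the race for requests that arrive within a single cycle. A rough sketch,
assuming the oned.conf parameter name and that the value is in seconds:

# fragment of $ONE_LOCATION/etc/oned.conf (assumption: parameter name and unit)
HOST_MONITORING_INTERVAL = 60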


On 4 November 2010 02:04, Javier Fontan  wrote:

> Hello,
>
> It looks fine to me. I think that taking out the memory the hypervisor
> may be consuming is key to making it work.
>
> Bye
>
> On Wed, Nov 3, 2010 at 8:32 PM, Rangababu Chakravarthula
>  wrote:
> > Javier
> >
> > Yes we are using KVM and OpenNebula 1.4.
> >
> > We have been having this problem for a long time, and we were doing all
> > kinds of validations ourselves before submitting the request to OpenNebula
> > (there should be enough memory in the cloud to match the requested memory,
> > and there should be at least one host with memory > requested memory).
> > We had to do this because OpenNebula would schedule to an arbitrary
> > host based on its existing logic.
> > So at last we decided we needed to make OpenNebula aware of the memory
> > allocated to running VMs on each host, and started this discussion.
> >
> > Thanks for taking up this issue as priority. Appreciate it.
> >
> > Shashank came up with this patch to kvm.rb. Please take a look and let us
> > know if that will work until we get a permanent solution.
> >
> >
> 
> >
> > # Sum the maximum memory (in kB) of all running VMs, as reported by libvirt
> > $mem_allocated_for_running_vms = 0
> > running_vms = `virsh list | grep running | tr -s ' ' ' ' | cut -f2 -d' '`
> > running_vms.split(/\n/).each do |vm|
> >     dominfo = `virsh dominfo #{vm}`
> >     dominfo.split(/\n/).each do |line|
> >         if line.match(/^Max memory/)
> >             # "Max memory:  524288 kB" -> take the kB value
> >             $mem_allocated_for_running_vms += line.split(" ")[2].to_i
> >         end
> >     end
> > end
> >
> > # Memory (in kB) to set aside for the hypervisor itself
> > $mem_used_by_base_hypervisor = [some xyz kb that we want to set aside for hypervisor]
> >
> > $free_memory = $total_memory.to_i -
> >     ($mem_allocated_for_running_vms.to_i + $mem_used_by_base_hypervisor.to_i)
> >
> >
> ==
> >
> > Ranga
> >
> > On Wed, Nov 3, 2010 at 2:16 PM, Javier Fontan  wrote:
> >>
> >> Hello,
> >>
> >> Sorry for the delay in the response.
> >>
> >> It looks like the problem is how OpenNebula calculates available memory.
> >> For xen >= 3.2 there is a reliable way to get available memory, which is
> >> calling "xm info" and reading the "max_free_memory" attribute.
> >> Unfortunately, for kvm or xen < 3.2 there is no such attribute. I
> >> suppose you are using kvm as you mention the "free" command.
> >>
> >> I began analyzing the kvm IM probe that gathers memory information, and
> >> there is a problem in the way it gets total memory. Here is how it
> >> currently gets memory information:
> >>
> >> TOTALMEMORY: runs virsh info that gets the real physical memory
> >> installed in the machine
> >> FREEMEMORY: runs free command and gets the free column data without
> >> buffers and cache
> >> USEDMEMORY: runs top command and gets used memory from it (this counts
> >> buffers and cache)
> >>
> >> This is a big problem, as those values do not match one another (I
> >> don't really know how I failed to see this before). Here is the
> >> monitoring data from a host without VMs.
> >>
> >> --8<--
> >> TOTALMEMORY=8193988
> >> USEDMEMORY=7819952
> >> FREEMEMORY=7911924
> >> -->8--
> >>
> >> As you can see, it makes no sense at all. Even the TOTALMEMORY obtained
> >> from virsh info is very misleading for oned, as the host Linux
> >> instance does not have access to all that memory (some is consumed by
> >> the hypervisor itself), as seen by calling the free command:
> >>
> >> --8<--
> >>              total       used       free     shared    buffers     cached
> >> Mem:       8193988    7819192     374796          0      64176    7473992
> >> -->8--
> >>
> >> I am also copying this text as an issue to solve this problem
> >> http://dev.opennebula.org/issues/388. It is
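
The kind of consistent accounting discussed in this thread can be sketched from
a single free call (illustration only; this is a sketch of the idea, not the
actual probe fix tracked in the issue above):

# sketch: derive all three values from one snapshot so they agree with each
# other, counting buffers/cache as available memory (values in kB)
out=`free -k`
total=`echo "$out" | awk '/^Mem:/ {print $2}'`
used=`echo "$out" | awk '/buffers\/cache:/ {print $3}'`
free_mem=`echo "$out" | awk '/buffers\/cache:/ {print $4}'`
echo "TOTALMEMORY=$total"
echo "USEDMEMORY=$used"
echo "FREEMEMORY=$free_mem"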

[one-users] Creating VMs images in CentOS

2010-11-10 Thread Fernando Morgenstern
Hello,

I'm using OpenNebula with CentOS 5.5 + Xen, but I am not sure about the "right"
way of creating images.


For example, if I want to create a VM to run CentOS, which steps should I follow?

I saw that to create a Xen domU you can use, for example:

virt-install \
--paravirt \
--name webserver01 \
--ram 512 \
--file /vm/example.img \
--file-size 10 \
--nographics \
--location http://mirrors.kernel.org/centos/5.3/os/x86_64/

Is this the correct way of creating it?

Then, after installing via xm console, should I transfer this to the frontend,
create a configuration file for the image, and deploy it with OpenNebula?
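
For reference, that transfer-and-deploy step can be sketched like this (the
host name, paths and template name are made up; the template contents mirror
the other examples on this list):

# copy the installed image to the frontend (example paths only)
scp /vm/example.img oneadmin@frontend:/var/lib/one/images/centos55.img

# centos55.one -- a minimal VM template pointing at that image, e.g.:
#   NAME   = centos55
#   MEMORY = 512
#   OS     = [ bootloader = "/usr/bin/pygrub" ]   # Xen needs a bootloader or kernel/initrd
#   DISK   = [ source = "/var/lib/one/images/centos55.img", target = "sda", readonly = "no" ]
onevm create centos55.one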

Regards,

Fernando Marcelo
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Couldn't add one cluster node

2010-11-10 Thread Ruben S. Montero
Hi

Could you execute the probes on the remote host and post the
output? Just log in to the node and run:

/var/tmp/one/remotes/im/run_probes kvm
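
If the probes run cleanly they should print plain KEY=VALUE lines that oned can
parse, something along these lines (the values are only illustrative, borrowed
from the monitoring figures quoted elsewhere in this digest):

TOTALMEMORY=8193988
USEDMEMORY=7819952
FREEMEMORY=7911924

plus CPU and network counters. Anything else mixed into that output (Ruby
errors, missing commands) is what makes oned fail to parse the host
information, as in the oned.log parse error quoted below.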

Cheers

Ruben

On Mon, Nov 8, 2010 at 5:15 PM, Andrea Turli  wrote:
> Dear all,
>
> I'm using OpenNebula 2.0 and I'm having problems adding a node to my cluster,
> which is based on CentOS 5.5, KVM and NFS.
> After creating a new node with
> onehost create mynode im_kvm vmm_kvm tm_nfs
>
> the monitor fails after a while and these are the error messages I can see:
>
> $ tail -f /srv/cloud/one/var/sched.log
> Mon Nov  8 17:04:08 2010 [HOST][E]: Exception raised: Unable to transport
> XML to server and get XML response back.  HTTP response: 502
> Mon Nov  8 17:04:08 2010 [POOL][E]: Could not retrieve pool info from ONE
>
> $ tail -f /srv/cloud/one/var/oned.log
> Mon Nov  8 17:06:52 2010 [InM][I]: Monitoring host 161.27.221.5 (6)
> Mon Nov  8 17:07:25 2010 [InM][D]: Host 6 successfully monitored.
> Mon Nov  8 17:07:25 2010 [ONE][E]: syntax error, unexpected $end, expecting
> VARIABLE at line 2, columns 1:2
> Mon Nov  8 17:07:25 2010 [InM][E]: Error parsing host information:
>
> Where can I find more log information?
> I've seen a similar problem described in [one-users] "Unable to transport
> XML to server ...",
> but the proposed solution does not seem to work for me.
>
> Any hints?
>
> Thank you
>
> --
> Andrea Turli
> Ricercatore
> Direzione Ricerca e Innovazione
> andrea.tu...@eng.it
>
> Engineering Ingegneria Informatica spa
> Via Riccardo Morandi, 32 00148 Roma (RM)
> Tel. +39 06 8307 4710
> Fax +39 06 8307 4200
> www.eng.it
>
>
> ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
>



-- 
Dr. Ruben Santiago Montero
Associate Professor (Profesor Titular), Complutense University of Madrid

URL: http://dsa-research.org/doku.php?id=people:ruben
Weblog: http://blog.dsa-research.org/?author=7
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Opennebula on Gentoo

2010-11-10 Thread George L. Emigh
Thank you,

I submitted the modified, working ebuild to Gentoo in the hope that it gets
included in Portage.

When the maintenance release is available I can submit an updated one.

The gentoo bug report is here if it is of interest to anyone.
https://bugs.gentoo.org/show_bug.cgi?id=344969

George

On Wednesday November 10 2010, Ruben S. Montero wrote:
  > Hi George,
  > 
  > Thanks for the heads-up on this. This is basically because of Werror.
  > We'll take care of this for the next maintenance release
  > (http://dev.opennebula.org/issues/406)
  > 
  > Cheers
  > 
  > Ruben
  > 
  > On Wed, Nov 10, 2010 at 8:42 PM, George L. Emigh
  > 
  >  wrote:
  > > I was changing a beta ebuild of opennebula to build and install
  > > opennebula 2.0 on a gentoo system (since Fedora 13 has not been usable
  > > with Opennebula), in gentoo the ebuild has CFLAGS set as defined in
  > > the make.conf and that gets applied to the compile of opennebula.
  > > 
  > > So in my instance, CFLAGS is set to "-march=native -O2 -pipe
  > > -mfpmath=sse"
  > > 
  > > and opennebula fails to build with:
  > > 
  > > gcc -o src/template/template_parser.o -c -march=native -O2 -pipe
  > > -mfpmath=sse -rdynamic -g -Wall -Werror -DSQLITE_DB -DMYSQL_DB
  > > -DHAVE_ERRNO_AS_DEFINE=1 - DUNIV_LINUX -Iinclude -I/usr/include
  > > -I/usr/include/mysql -
  > > I/usr/include/libxml2 src/template/template_parser.c
  > > cc1: warnings being treated as errors
  > > template_parser.c: In function 'template_lex':
  > > template_parser.l:89: error: ignoring return value of 'fwrite',
  > > declared with attribute warn_unused_result
  > > scons: *** [src/template/template_parser.o] Error 1
  > > scons: building terminated because of errors.
  > > 
  > > If I remove the -O2 it builds fine, and I will just do that and move
  > > forward for now, but opennebula should probably not fail to build with
  > > -O2
  > > 
  > > 
  > > Thank You
  > > --
  > > George L. Emigh
  > > ___
  > > Users mailing list
  > > Users@lists.opennebula.org
  > > http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
  > 
  > --
  > Dr. Ruben Santiago Montero
  > Associate Professor (Profesor Titular), Complutense University of Madrid
  > 
  > URL: http://dsa-research.org/doku.php?id=people:ruben
  > Weblog: http://blog.dsa-research.org/?author=7

-- 
George L. Emigh
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Opennebula on Gentoo

2010-11-10 Thread Ruben S. Montero
Hi George,

Thanks for the heads-up on this. This is basically because of Werror.
We'll take care of this for the next maintenance release
(http://dev.opennebula.org/issues/406)

Cheers

Ruben

On Wed, Nov 10, 2010 at 8:42 PM, George L. Emigh
 wrote:
>
> I was changing a beta ebuild of opennebula to build and install opennebula 2.0
> on a gentoo system (since Fedora 13 has not been usable with Opennebula), in
> gentoo the ebuild has CFLAGS set as defined in the make.conf and that gets
> applied to the compile of opennebula.
>
> So in my instance, CFLAGS is set to "-march=native -O2 -pipe -mfpmath=sse"
>
> and opennebula fails to build with:
>
> gcc -o src/template/template_parser.o -c -march=native -O2 -pipe -mfpmath=sse
> -rdynamic -g -Wall -Werror -DSQLITE_DB -DMYSQL_DB -DHAVE_ERRNO_AS_DEFINE=1 -
> DUNIV_LINUX -Iinclude -I/usr/include -I/usr/include/mysql -
> I/usr/include/libxml2 src/template/template_parser.c
> cc1: warnings being treated as errors
> template_parser.c: In function 'template_lex':
> template_parser.l:89: error: ignoring return value of 'fwrite', declared with
> attribute warn_unused_result
> scons: *** [src/template/template_parser.o] Error 1
> scons: building terminated because of errors.
>
> If I remove the -O2 it builds fine, and I will just do that and move forward
> for now, but opennebula should probably not fail to build with -O2
>
>
> Thank You
> --
> George L. Emigh
> ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org



--
Dr. Ruben Santiago Montero
Associate Professor (Profesor Titular), Complutense University of Madrid

URL: http://dsa-research.org/doku.php?id=people:ruben
Weblog: http://blog.dsa-research.org/?author=7
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] Opennebula on Gentoo

2010-11-10 Thread George L. Emigh
I was changing a beta ebuild of OpenNebula to build and install OpenNebula 2.0
on a Gentoo system (since Fedora 13 has not been usable with OpenNebula). In
Gentoo the ebuild has CFLAGS set as defined in make.conf, and those flags get
applied to the OpenNebula compile.

So in my instance, CFLAGS is set to "-march=native -O2 -pipe -mfpmath=sse"

and OpenNebula fails to build with:

gcc -o src/template/template_parser.o -c -march=native -O2 -pipe -mfpmath=sse
-rdynamic -g -Wall -Werror -DSQLITE_DB -DMYSQL_DB -DHAVE_ERRNO_AS_DEFINE=1
-DUNIV_LINUX -Iinclude -I/usr/include -I/usr/include/mysql
-I/usr/include/libxml2 src/template/template_parser.c
cc1: warnings being treated as errors
template_parser.c: In function 'template_lex':
template_parser.l:89: error: ignoring return value of 'fwrite', declared with 
attribute warn_unused_result
scons: *** [src/template/template_parser.o] Error 1
scons: building terminated because of errors.

If I remove the -O2 it builds fine, and I will just do that and move forward
for now, but OpenNebula should probably not fail to build with -O2.
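
In the ebuild itself, one possible way to handle this (only a sketch, assuming
the ebuild inherits Gentoo's flag-o-matic eclass) is to strip the offending
flag instead of asking users to edit make.conf:

# sketch only -- the rest of src_compile stays as in the beta ebuild
inherit flag-o-matic

src_compile() {
    # drop -O2 so -Werror does not turn the fwrite warn_unused_result warning
    # into a build failure
    filter-flags -O2
    # ... existing scons invocation from the beta ebuild ...
}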


Thank You
-- 
George L. Emigh
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] vm still in pending state problem

2010-11-10 Thread Tino Vazquez
Dear all,

I think the problem lies here:

Tue Oct 26 16:20:00 2010 [VMM][I]: Error: Had a bootloader specified,
but no disks are bootable

This may go away if in the image template definition of
"Centos_base_image" something like the following is set:

TARGET="sda"

Could you send us the deployment.0 file (in $ONE_LOCATION/var/)?

Regards,

-Tino

--
Constantino Vázquez Blanco | dsa-research.org/tinova
Virtualization Technology Engineer / Researcher
OpenNebula Toolkit | opennebula.org



2010/10/26 Saravanan S :
>
> From: Saravanan S 
> Date: 2010/10/26
> Subject: Re: [one-users] vm still in pending state problem
> To: Łukasz Oleś 
>
>
> Hi,
>
> Please find the logs below, to create vm and dmesg
>
> # cat /var/log/one/4.log
> Tue Oct 26 16:18:07 2010 [DiM][I]: New VM state is ACTIVE.
> Tue Oct 26 16:18:07 2010 [LCM][I]: New VM state is PROLOG.
> Tue Oct 26 16:18:07 2010 [VM][I]: Virtual Machine has no context
> Tue Oct 26 16:18:27 2010 [TM][I]: tm_clone.sh:
> test2.sios.com:/var/lib/one//images/c25a93f8b92350dd5a95b28e39a10e7cd9c992f4
> 127.0.0.1:/srv/cloud/one/var/4/images/disk.0
> Tue Oct 26 16:18:27 2010 [TM][I]: tm_clone.sh: DST:
> /srv/cloud/one/var/4/images/disk.0
> Tue Oct 26 16:18:27 2010 [TM][I]: tm_clone.sh: Creating directory
> /srv/cloud/one/var/4/images
> Tue Oct 26 16:18:27 2010 [TM][I]: tm_clone.sh: Executed "mkdir -p
> /srv/cloud/one/var/4/images".
> Tue Oct 26 16:18:27 2010 [TM][I]: tm_clone.sh: Executed "chmod a+w
> /srv/cloud/one/var/4/images".
> Tue Oct 26 16:18:27 2010 [TM][I]: tm_clone.sh: Cloning
> /var/lib/one//images/c25a93f8b92350dd5a95b28e39a10e7cd9c992f4
> Tue Oct 26 16:18:27 2010 [TM][I]: tm_clone.sh: Executed "cp -r
> /var/lib/one//images/c25a93f8b92350dd5a95b28e39a10e7cd9c992f4
> /srv/cloud/one/var/4/images/disk.0".
> Tue Oct 26 16:18:27 2010 [TM][I]: tm_clone.sh: Executed "chmod a+w
> /srv/cloud/one/var/4/images/disk.0".
> Tue Oct 26 16:18:28 2010 [TM][I]: tm_mkswap.sh: Creating directory
> /srv/cloud/one/var/4/images
> Tue Oct 26 16:18:28 2010 [TM][I]: tm_mkswap.sh: Executed "mkdir -p
> /srv/cloud/one/var/4/images".
> Tue Oct 26 16:18:28 2010 [TM][I]: tm_mkswap.sh: Executed "chmod a+w
> /srv/cloud/one/var/4/images".
> Tue Oct 26 16:18:28 2010 [TM][I]: tm_mkswap.sh: Creating 512Mb image in
> /srv/cloud/one/var/4/images/disk.1
> Tue Oct 26 16:18:28 2010 [TM][I]: tm_mkswap.sh: Executed "/bin/dd
> if=/dev/zero of=/srv/cloud/one/var/4/images/disk.1 bs=1 count=1 seek=512M".
> Tue Oct 26 16:18:28 2010 [TM][I]: tm_mkswap.sh: Initializing swap space
> Tue Oct 26 16:18:28 2010 [TM][I]: tm_mkswap.sh: Executed "/sbin/mkswap
> /srv/cloud/one/var/4/images/disk.1".
> Tue Oct 26 16:18:28 2010 [TM][I]: tm_mkswap.sh: Executed "chmod a+w
> /srv/cloud/one/var/4/images/disk.1".
> Tue Oct 26 16:18:28 2010 [LCM][I]: New VM state is BOOT
> Tue Oct 26 16:18:28 2010 [VMM][I]: Generating deployment file:
> /var/lib/one/4/deployment.0
> Tue Oct 26 16:18:29 2010 [VMM][I]: Command execution fail:
> /lib/remotes/vmm/xen/deploy 127.0.0.1 /var/lib/one/4/deployment.0
> Tue Oct 26 16:18:29 2010 [VMM][I]: STDERR follows.
> Tue Oct 26 16:18:29 2010 [VMM][I]: Error: Had a bootloader specified, but no
> disks are bootable
> Tue Oct 26 16:18:29 2010 [VMM][I]: ExitCode: 1
> Tue Oct 26 16:18:29 2010 [VMM][E]: Error deploying virtual machine
> Tue Oct 26 16:18:30 2010 [DiM][I]: New VM state is FAILED
> Tue Oct 26 16:18:30 2010 [TM][W]: Ignored: LOG - 4 tm_delete.sh: Deleting
> /srv/cloud/one/var/4/images
>
> Tue Oct 26 16:18:30 2010 [TM][W]: Ignored: LOG - 4 tm_delete.sh: Executed
> "rm -rf /srv/cloud/one/var/4/images".
>
> Tue Oct 26 16:18:30 2010 [TM][W]: Ignored: TRANSFER SUCCESS 4 -
>
> # cat /var/log/one/5.log
> Tue Oct 26 16:19:37 2010 [DiM][I]: New VM state is ACTIVE.
> Tue Oct 26 16:19:37 2010 [LCM][I]: New VM state is PROLOG.
> Tue Oct 26 16:19:37 2010 [VM][I]: Virtual Machine has no context
> Tue Oct 26 16:19:58 2010 [TM][I]: tm_clone.sh:
> test2.sios.com:/var/lib/one//images/c25a93f8b92350dd5a95b28e39a10e7cd9c992f4
> 127.0.0.1:/srv/cloud/one/var/5/images/disk.0
> Tue Oct 26 16:19:58 2010 [TM][I]: tm_clone.sh: DST:
> /srv/cloud/one/var/5/images/disk.0
> Tue Oct 26 16:19:58 2010 [TM][I]: tm_clone.sh: Creating directory
> /srv/cloud/one/var/5/images
> Tue Oct 26 16:19:58 2010 [TM][I]: tm_clone.sh: Executed "mkdir -p
> /srv/cloud/one/var/5/images".
> Tue Oct 26 16:19:58 2010 [TM][I]: tm_clone.sh: Executed "chmod a+w
> /srv/cloud/one/var/5/images".
> Tue Oct 26 16:19:58 2010 [TM][I]: tm_clone.sh: Cloning
> /var/lib/one//images/c25a93f8b92350dd5a95b28e39a10e7cd9c992f4
> Tue Oct 26 16:19:58 2010 [TM][I]: tm_clone.sh: Executed "cp -r
> /var/lib/one//images/c25a93f8b92350dd5a95b28e39a10e7cd9c992f4
> /srv/cloud/one/var/5/images/disk.0".
> Tue Oct 26 16:19:58 2010 [TM][I]: tm_clone.sh: Executed "chmod a+w
> /srv/cloud/one/var/5/images/disk.0".
> Tue Oct 26 16:19:59 2010 [TM][I]: tm_mkswap.sh: Creating directory
> /srv/cloud/one/var/5/images
> Tue Oct 26 16:19:59 2010 [TM][I]: tm_mkswap.sh: Executed "mkdir -p
> /srv

Re: [one-users] tm_common.sh.... typo??

2010-11-10 Thread Javier Fontan
I have made a change in 0eb75b414fe0bc6a829b8b24b0f59b80ed1513e7
(branch bug-389) that sets the $SED variable to different values for
Linux and for anything else. Tell me if that seems correct; I think it is
the most straightforward way to fix the problem.
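
For reference, a rough sketch of what such a platform switch could look like
(the exact values used in the commit may differ):

# pick a sed invocation per platform, then use $SED everywhere
if [ "`uname -s`" = "Linux" ]; then
    SED="sed -r"    # GNU sed enables extended regexps with -r
else
    SED="sed -E"    # BSD sed enables extended regexps with -E
fi

function fix_dir_slashes
{
    dirname "$1/file" | $SED 's/\/+/\//g'
}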

On Wed, Nov 3, 2010 at 6:30 PM, Tiago Batista  wrote:
> During normal operation, the error is not triggered but when I was
> experimenting with zfs-fuse I was able to trigger it.
>
> I will try to demonstrate the problem with an interactive shell...
>
> If I use the following function (note the \+ escaping) it works as
> expected:
>
> function fix_dir_slashes {
>    dirname "$1/file" | sed -e 's/\/\+/\//g';
> }
>
> If however I use the -E switch, for some reason it fails as the
> equivalent gnu switch is -r I think.
>
> $ function fix_dir_slashes { dirname "$1/file" | sed -e 's/\/\+/\//g'; }
> $ fix_dir_slashes /usr/bin//drunk
> /usr/bin/drunk
>
> $ function fix_dir_slashes { dirname "$1/file" | sed -E 's/\/\+/\//g'; }
> $ fix_dir_slashes /usr/bin//drunk
> sed: invalid option -- E
> (...)
>
> I later found out that this is implementation dependent.
>
> The behavior above is from sed 4.1.5 on a scientific linux (rhel based)
> system.
>
> On my gentoo desktop, using sed 4.2.1 I do not get an error, but a
> silent failure to perform the substitution (unless I remove the \+
> escaping, effectively using the regexp from the file):
>
> $ function fix_dir_slashes { dirname "$1/file" | sed -E 's/\/\+/\//g'; }
> $ fix_dir_slashes /usr/bin//drunk
> /usr/bin//drunk
>
> $ function fix_dir_slashes { dirname "$1/file" | sed -E 's/\/+/\//g'; }
> $ fix_dir_slashes /usr/bin//drunk
> /usr/bin/drunk
>
> $ function fix_dir_slashes { dirname "$1/file" | sed -e 's/\/\+/\//g'; }
> $ fix_dir_slashes /usr/bin//drunk
> /usr/bin/drunk
>
>
> At the moment this does not qualify as a bug for me, as all the linux
> systems I know do not fail when paths with multiple slashes are used,
> and the problem with the -E was never triggered with the default tm_nfs
> scripts.
>
> After searching some more and comparing the bsd and the gnu versions of
> sed, I think I may have a solution that works on both platforms (in fact
> the one above!) As I do not see the need for extended regular
> expressions to perform such a simple replacement when escaping the +
> char would solve it...
>
> While at it, I propose using the $SED variable set above,
> therefore defining the function as:
>
> function fix_dir_slashes { dirname "$1/file" | $SED -e 's/\/\+/\//g'; }
>
> Can you try this on BSD sed as I do not have a BSD shell at the moment?
>
> Tiago
>
> On Wed, Nov 3, 2010 at 4:09 PM, Javier Fontan  wrote:
>> Hello,
>>
>> We are using -E as BSD version needs that parameter for this regexp to
>> work and in our tests was working also on linux. Can you tell us what
>> is the error. Is it sed that does not recognize that command or does
>> not perform the intended task?
>>
>> Bye
>>
>> On Mon, Oct 25, 2010 at 3:30 PM, Tiago Batista  
>> wrote:
>>> Hello all
>>>
>>> From tm_common.sh:
>>>
>>> # Takes out uneeded slashes. Repeated and final directory slashes:
>>> # /some//path///somewhere/ -> /some/path/somewhere
>>> function fix_dir_slashes
>>> {
>>>    dirname "$1/file" | sed -E 's/\/+/\//g'
>>> }
>>>
>>>
>>> Should this be (notice the sed argument)?
>>>
>>> # Takes out uneeded slashes. Repeated and final directory slashes:
>>> # /some//path///somewhere/ -> /some/path/somewhere
>>> function fix_dir_slashes
>>> {
>>>    dirname "$1/file" | sed -e 's/\/+/\//g'
>>> }
>>>
>>>
>>> I have looked for it, but found no documentation on the -E argument.
>>>
>>> Note that with the tm_nfs drivers this is not an issue (i think...),
>>> but for some reason, when building transfer driver similar to that
>>> found in 
>>> http://mperedim.wordpress.com/2010/09/26/opennebula-zfs-and-xen-%E2%80%93-part-2-instant-cloning/
>>> this becomes visible as I do call this code...
>>>
>>> Tiago
>>> ___
>>> Users mailing list
>>> Users@lists.opennebula.org
>>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>>
>>
>>
>>
>> --
>> Javier Fontan, Grid & Virtualization Technology Engineer/Researcher
>> DSA Research Group: http://dsa-research.org
>> Globus GridWay Metascheduler: http://www.GridWay.org
>> OpenNebula Virtual Infrastructure Engine: http://www.OpenNebula.org
>>
>



-- 
Javier Fontan, Grid & Virtualization Technology Engineer/Researcher
DSA Research Group: http://dsa-research.org
Globus GridWay Metascheduler: http://www.GridWay.org
OpenNebula Virtual Infrastructure Engine: http://www.OpenNebula.org
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] VMs stuck in PEND state

2010-11-10 Thread Fernando Morgenstern
Hello,

After applying this patch and adding the host again, I had a couple of different
errors (like ruby not being installed on the node) which I was able to fix.
But now I got stuck again during VM startup; in the log I see the following
error:

Wed Nov 10 17:22:38 2010 [LCM][I]: New VM state is BOOT
Wed Nov 10 17:22:38 2010 [VMM][I]: Generating deployment file: 
/var/lib/one/5/deployment.0
Wed Nov 10 17:22:38 2010 [VMM][E]: No kernel or bootloader defined and no 
default provided.
Wed Nov 10 17:22:38 2010 [VMM][E]: deploy_action, error generating deployment 
file: /var/lib/one/5/deployment.0
Wed Nov 10 17:22:38 2010 [DiM][I]: New VM state is FAILED
Wed Nov 10 17:22:38 2010 [TM][W]: Ignored: LOG - 5 tm_delete.sh: Deleting 
/var/lib/one//5/images
Wed Nov 10 17:22:38 2010 [TM][W]: Ignored: LOG - 5 tm_delete.sh: Executed "rm 
-rf /var/lib/one//5/images".
Wed Nov 10 17:22:38 2010 [TM][W]: Ignored: TRANSFER SUCCESS 5 -

The thread that I found here on the list about this doesn't contain a solution
for this problem, so I'm not sure how to proceed.

Regards,

Fernando.

On 10/11/2010, at 11:49, openneb...@nerling.ch wrote:

> Yes, it is related to the bug http://dev.opennebula.org/issues/385.
> I attached a patch for /usr/lib/one/mads/one_im_ssh.rb.
> ## Save it on the /tmp
> #cd /usr/lib/one/mads/
> #patch -p0 < /tmp/one_im_ssh.rb.patch
> 
> Quoting Fernando Morgenstern :
> 
>> Hi,
>> 
>> The tmp folder in my host is empty.
>> 
>> Here is the output of the commands:
>> 
>> $ onehost show 1
>> HOST 1 INFORMATION
>> ID: 1
>> NAME  : node01
>> CLUSTER   : default
>> STATE : ERROR
>> IM_MAD: im_xen
>> VM_MAD: vmm_xen
>> TM_MAD: tm_nfs
>> 
>> HOST SHARES
>> MAX MEM   : 0
>> USED MEM (REAL)   : 0
>> USED MEM (ALLOCATED)  : 0
>> MAX CPU   : 0
>> USED CPU (REAL)   : 0
>> USED CPU (ALLOCATED)  : 0
>> RUNNING VMS   : 0
>> 
>> MONITORING INFORMATION
>> 
>> $ onehost show -x 1
>> 
>>  1
>>  node01
>>  3
>>  im_xen
>>  vmm_xen
>>  tm_nfs
>>  1289394482
>>  default
>>  
>>1
>>0
>>0
>>0
>>0
>>0
>>0
>>0
>>0
>>0
>>0
>>0
>>0
>>0
>>  
>>  
>> 
>> 
>> Thanks again for your answers.
>> 
>> 
>> On 10/11/2010, at 11:22, openneb...@nerling.ch wrote:
>> 
>>> Hallo Fernando.
>>> try to log in the host and look if there is a folder names /tmp/one.
>>> If not it could be related to the bug: http://dev.opennebula.org/issues/385
>>> 
>>> please post the output from:
>>> #onehost show 1
>>> #onehost show -x 1
>>> 
>>> I thought before your host has an id of 0.
>>> 
>>> Marlon Nerling
>>> 
>>> Quoting Fernando Morgenstern :
>>> 
 Hello,
 
 Thanks for the answer.
 
 You are right, the host is showing an error state and i didn't verified 
 it. How can i know what is causing the error in host?
 
 $ onehost list
 ID NAME  CLUSTER  RVM   TCPU   FCPU   ACPUTMEMFMEM STAT
  1 node01default0  0  0100  0K  0K  err
 
 $ onevm show 0
 VIRTUAL MACHINE 0 INFORMATION
 ID : 0
 NAME   : ttylinux
 STATE  : DONE
 LCM_STATE  : LCM_INIT
 START TIME : 11/09 19:06:37
 END TIME   : 11/09 19:11:09
 DEPLOY ID: : -
 
 VIRTUAL MACHINE MONITORING
 NET_RX : 0
 USED MEMORY: 0
 USED CPU   : 0
 NET_TX : 0
 
 VIRTUAL MACHINE TEMPLATE
 CPU=0.1
 DISK=[
 DISK_ID=0,
 READONLY=no,
 SOURCE=/home/oneadmin/ttylinux.img,
 TARGET=hda ]
 FEATURES=[
 ACPI=no ]
 MEMORY=64
 NAME=ttylinux
 NIC=[
 BRIDGE=br0,
 IP=*,
 MAC=02:00:5d:9f:d0:68,
 NETWORK=Small network,
 NETWORK_ID=0 ]
 VMID=0
 
 $ onehost show 0
 Error: [HostInfo] Error getting HOST [0].
 
 Thanks!
 
On 10/11/2010, at 06:45, openneb...@nerling.ch wrote:
 
> Hallo Fernando.
> Could you please post the output of:
> #onehost list
> #onevm show 0
> #onehost show 0
> 
> It seems that none of your Hosts are enabled!
> Tue Nov  9 20:31:18 2010 [HOST][D]: Discovered Hosts (enabled):
> 
> best regards
> 
> Marlon Nerling
> 
> Quoting Fernando Morgenstern :
> 
>> Hello,
>> 
>> This is the first time that i'm using open nebula, so i tried to do it 
>> with express script which ran fine. I'm using CentOS 5.5 with Xen.
>> 
>> The first thing that i'm trying to do is getting the following vm 
>> running:
>> 
>> NAME   = ttylinux
>> CPU= 0.1
>> MEMORY = 64
>> 
>> DISK   = [
>> source   = "/var/lib/one/images/ttylinux.img",
>> target   = "hda",
>> readonly = "no" ]
>> 
>> NIC= [ NETWORK = "Small network" ]
>> 
>> FEATURES=[ acpi="no" ]
>> 
>

Re: [one-users] VMs stuck in PEND state

2010-11-10 Thread opennebula

Yes, it is related to the bug http://dev.opennebula.org/issues/385.
I attached a patch for /usr/lib/one/mads/one_im_ssh.rb.
## Save it in /tmp
#cd /usr/lib/one/mads/
#patch -p0 < /tmp/one_im_ssh.rb.patch

Quoting Fernando Morgenstern :


Hi,

The tmp folder in my host is empty.

Here is the output of the commands:

$ onehost show 1
HOST 1 INFORMATION
ID: 1
NAME  : node01
CLUSTER   : default
STATE : ERROR
IM_MAD: im_xen
VM_MAD: vmm_xen
TM_MAD: tm_nfs

HOST SHARES
MAX MEM   : 0
USED MEM (REAL)   : 0
USED MEM (ALLOCATED)  : 0
MAX CPU   : 0
USED CPU (REAL)   : 0
USED CPU (ALLOCATED)  : 0
RUNNING VMS   : 0

MONITORING INFORMATION

$ onehost show -x 1

  1
  node01
  3
  im_xen
  vmm_xen
  tm_nfs
  1289394482
  default
  
1
0
0
0
0
0
0
0
0
0
0
0
0
0
  
  


Thanks again for your answers.


On 10/11/2010, at 11:22, openneb...@nerling.ch wrote:


Hallo Fernando.
try to log in the host and look if there is a folder names /tmp/one.
If not it could be related to the bug: http://dev.opennebula.org/issues/385

please post the output from:
#onehost show 1
#onehost show -x 1

I thought before your host has an id of 0.

Marlon Nerling

Quoting Fernando Morgenstern :


Hello,

Thanks for the answer.

You are right, the host is showing an error state and i didn't  
verified it. How can i know what is causing the error in host?


$ onehost list
 ID NAME      CLUSTER  RVM   TCPU   FCPU   ACPU   TMEM   FMEM STAT
  1 node01    default    0      0      0    100     0K     0K  err


$ onevm show 0
VIRTUAL MACHINE 0 INFORMATION
ID : 0
NAME   : ttylinux
STATE  : DONE
LCM_STATE  : LCM_INIT
START TIME : 11/09 19:06:37
END TIME   : 11/09 19:11:09
DEPLOY ID: : -

VIRTUAL MACHINE MONITORING
NET_RX : 0
USED MEMORY: 0
USED CPU   : 0
NET_TX : 0

VIRTUAL MACHINE TEMPLATE
CPU=0.1
DISK=[
 DISK_ID=0,
 READONLY=no,
 SOURCE=/home/oneadmin/ttylinux.img,
 TARGET=hda ]
FEATURES=[
 ACPI=no ]
MEMORY=64
NAME=ttylinux
NIC=[
 BRIDGE=br0,
 IP=*,
 MAC=02:00:5d:9f:d0:68,
 NETWORK=Small network,
 NETWORK_ID=0 ]
VMID=0

$ onehost show 0
Error: [HostInfo] Error getting HOST [0].

Thanks!

On 10/11/2010, at 06:45, openneb...@nerling.ch wrote:


Hallo Fernando.
Could you please post the output of:
#onehost list
#onevm show 0
#onehost show 0

It seems that none of your Hosts are enabled!
Tue Nov  9 20:31:18 2010 [HOST][D]: Discovered Hosts (enabled):

best regards

Marlon Nerling

Quoting Fernando Morgenstern :


Hello,

This is the first time that i'm using open nebula, so i tried to  
do it with express script which ran fine. I'm using CentOS 5.5  
with Xen.


The first thing that i'm trying to do is getting the following  
vm running:


NAME   = ttylinux
CPU= 0.1
MEMORY = 64

DISK   = [
source   = "/var/lib/one/images/ttylinux.img",
target   = "hda",
readonly = "no" ]

NIC= [ NETWORK = "Small network" ]

FEATURES=[ acpi="no" ]


So i used:

onevm create ttylinux.one

The issue is that it keeps stuck in PEND state:

onevm list
 ID USER     NAME     STAT CPU    MEM HOSTNAME        TIME
  3 oneadmin ttylinux pend   0     0K          00 00:20:03


At oned log the following messages keeps repeating:

Tue Nov  9 20:29:48 2010 [ReM][D]: VirtualMachinePoolInfo method invoked
Tue Nov  9 20:29:59 2010 [ReM][D]: VirtualMachinePoolInfo method invoked
Tue Nov  9 20:30:18 2010 [ReM][D]: HostPoolInfo method invoked
Tue Nov  9 20:30:18 2010 [ReM][D]: VirtualMachinePoolInfo method invoked

And at sched.log, i see this:

Tue Nov  9 20:31:18 2010 [HOST][D]: Discovered Hosts (enabled):
Tue Nov  9 20:31:18 2010 [VM][D]: Pending virtual machines : 3
Tue Nov  9 20:31:18 2010 [RANK][W]: No rank defined for VM
Tue Nov  9 20:31:18 2010 [SCHED][I]: Select hosts
PRI HID
---
Virtual Machine: 3


Any ideas of what might be happening?

I saw other threads here at the list with similar problems, but  
their solution didn't applied to my case.


Best Regards,

Fernando.
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org




___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org



___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.

Re: [one-users] Strange behaviour when creating new VM from image (ONE 2.0)

2010-11-10 Thread Javier Fontan
Can you send us the vm log file located at $ONE_LOCATION/var/58/vm.log
or /var/log/58.log?

Bye

On Tue, Nov 9, 2010 at 3:35 PM, Oscar Elfving  wrote:
> Hello,
> I am trying to create a new VM from a raw disk image, I have added it in the
> template like this:
> DISK = [
>         TYPE = "disk",
>         SOURCE = "/srv/cloud/var/images/ws1.img",
>         CLONE = no
> ]
> But ONE just creates it, links it and then immediately deletes it, why is
> this?
> oned.log:
> Tue Nov  9 15:33:18 2010 [TM][D]: Message received: LOG - 58 tm_ln.sh:
> Creating directory /srv/cloud/one/var/58/images
> Tue Nov  9 15:33:18 2010 [TM][D]: Message received: LOG - 58 tm_ln.sh:
> Executed "mkdir -p /srv/cloud/one/var/58/images".
> Tue Nov  9 15:33:18 2010 [TM][D]: Message received: LOG - 58 tm_ln.sh:
> Executed "chmod a+w /srv/cloud/one/var/58/images".
> Tue Nov  9 15:33:18 2010 [TM][D]: Message received: LOG - 58 tm_ln.sh: Link
> /srv/cloud/var/images/ws1.img
> Tue Nov  9 15:33:18 2010 [TM][D]: Message received: LOG - 58 tm_ln.sh:
> Executed "ln -s /srv/cloud/var/images/ws1.img
> /srv/cloud/one/var/58/images/disk.0".
> Tue Nov  9 15:33:18 2010 [TM][D]: Message received: TRANSFER SUCCESS 58 -
> Tue Nov  9 15:33:18 2010 [TM][D]: Message received: LOG - 58 tm_delete.sh:
> Deleting /srv/cloud/one/var/58/images
> Tue Nov  9 15:33:18 2010 [TM][D]: Message received: LOG - 58 tm_delete.sh:
> Executed "rm -rf /srv/cloud/one/var/58/images".
> Tue Nov  9 15:33:18 2010 [TM][D]: Message received: TRANSFER SUCCESS 58 -
> Best regards,
> Oscar Elfving
>
> ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
>



-- 
Javier Fontan, Grid & Virtualization Technology Engineer/Researcher
DSA Research Group: http://dsa-research.org
Globus GridWay Metascheduler: http://www.GridWay.org
OpenNebula Virtual Infrastructure Engine: http://www.OpenNebula.org
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] EC2 driver

2010-11-10 Thread Waterbley Tom
Dear,

Has somebody an updated driver which supports monitoring instances on Amazon 
EC2?
Also I want to stop and resume instances instead of terminate them.

Can someone help me?

Tom
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] VMs stuck in PEND state

2010-11-10 Thread Fernando Morgenstern
Hello,

Thanks for the answer.

You are right, the host is showing an error state and I hadn't verified it. How
can I find out what is causing the error on the host?

$ onehost list
  ID NAME      CLUSTER  RVM   TCPU   FCPU   ACPU   TMEM   FMEM STAT
   1 node01    default    0      0      0    100     0K     0K  err

$ onevm show 0
VIRTUAL MACHINE 0 INFORMATION   
ID : 0   
NAME   : ttylinux
STATE  : DONE
LCM_STATE  : LCM_INIT
START TIME : 11/09 19:06:37  
END TIME   : 11/09 19:11:09  
DEPLOY ID: : -   

VIRTUAL MACHINE MONITORING  
NET_RX : 0   
USED MEMORY: 0   
USED CPU   : 0   
NET_TX : 0   

VIRTUAL MACHINE TEMPLATE
CPU=0.1
DISK=[
  DISK_ID=0,
  READONLY=no,
  SOURCE=/home/oneadmin/ttylinux.img,
  TARGET=hda ]
FEATURES=[
  ACPI=no ]
MEMORY=64
NAME=ttylinux
NIC=[
  BRIDGE=br0,
  IP=*,
  MAC=02:00:5d:9f:d0:68,
  NETWORK=Small network,
  NETWORK_ID=0 ]
VMID=0

$ onehost show 0
Error: [HostInfo] Error getting HOST [0].

Thanks!

On 10/11/2010, at 06:45, openneb...@nerling.ch wrote:

> Hallo Fernando.
> Could you please post the output of:
> #onehost list
> #onevm show 0
> #onehost show 0
> 
> It seems that none of your Hosts are enabled!
> Tue Nov  9 20:31:18 2010 [HOST][D]: Discovered Hosts (enabled):
> 
> best regards
> 
> Marlon Nerling
> 
> Quoting Fernando Morgenstern :
> 
>> Hello,
>> 
>> This is the first time that i'm using open nebula, so i tried to do it with 
>> express script which ran fine. I'm using CentOS 5.5 with Xen.
>> 
>> The first thing that i'm trying to do is getting the following vm running:
>> 
>> NAME   = ttylinux
>> CPU= 0.1
>> MEMORY = 64
>> 
>> DISK   = [
>>  source   = "/var/lib/one/images/ttylinux.img",
>>  target   = "hda",
>>  readonly = "no" ]
>> 
>> NIC= [ NETWORK = "Small network" ]
>> 
>> FEATURES=[ acpi="no" ]
>> 
>> 
>> So i used:
>> 
>> onevm create ttylinux.one
>> 
>> The issue is that it keeps stuck in PEND state:
>> 
>> onevm list
>>   ID USER NAME STAT CPU MEMHOSTNAMETIME
>>3 oneadmin ttylinux pend   0  0K 00 00:20:03
>> 
>> 
>> At oned log the following messages keeps repeating:
>> 
>> Tue Nov  9 20:29:48 2010 [ReM][D]: VirtualMachinePoolInfo method invoked
>> Tue Nov  9 20:29:59 2010 [ReM][D]: VirtualMachinePoolInfo method invoked
>> Tue Nov  9 20:30:18 2010 [ReM][D]: HostPoolInfo method invoked
>> Tue Nov  9 20:30:18 2010 [ReM][D]: VirtualMachinePoolInfo method invoked
>> 
>> And at sched.log, i see this:
>> 
>> Tue Nov  9 20:31:18 2010 [HOST][D]: Discovered Hosts (enabled):
>> Tue Nov  9 20:31:18 2010 [VM][D]: Pending virtual machines : 3
>> Tue Nov  9 20:31:18 2010 [RANK][W]: No rank defined for VM
>> Tue Nov  9 20:31:18 2010 [SCHED][I]: Select hosts
>>  PRI HID
>>  ---
>> Virtual Machine: 3
>> 
>> 
>> Any ideas of what might be happening?
>> 
>> I saw other threads here at the list with similar problems, but their 
>> solution didn't applied to my case.
>> 
>> Best Regards,
>> 
>> Fernando.
>> ___
>> Users mailing list
>> Users@lists.opennebula.org
>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>> 
> 
> 
> ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] client for nebula ?

2010-11-10 Thread Javier Fontan
Hello,

On Tue, Nov 2, 2010 at 1:26 PM, Zeeshan Ali Shah  wrote:
> 1) What needs to be installed for the client? Like the euca-tools for
> Eucalyptus, are there any client-specific packages for users?

There is no specific client-only package for the OpenNebula CLI. I
think that a gem containing the needed files could be the best way to
distribute them, but right now I cannot tell you when or if it will be
done. The needed files from the installation are:

bin/one*
lib/ruby/OpenNebula.rb
lib/ruby/OpenNebula/*
lib/ruby/client_utilities.rb
lib/ruby/command_parse.rb

I am writing this from memory, maybe some other files are needed.

> 2) How does authentication work? Does the username:password in ~/.one/one_auth
> have to be the same as on the frontend?

The authentication works the same.

If the server is located on a machine different from the clients, make
sure to set the ONE_XMLRPC env var; it is explained here:
http://opennebula.org/documentation:rel2.0:cg.
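
For example, a minimal client-side setup could look like this (the frontend
host name is made up):

export ONE_XMLRPC=http://frontend.example.org:2633/RPC2
export ONE_AUTH=$HOME/.one/one_auth   # username:password, same account as on the frontend
onevm list                            # any CLI command now talks to the remote oned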

Also take into account that the ONE CLI does not transfer files; the paths
must be local to the frontend.

Bye

-- 
Javier Fontan, Grid & Virtualization Technology Engineer/Researcher
DSA Research Group: http://dsa-research.org
Globus GridWay Metascheduler: http://www.GridWay.org
OpenNebula Virtual Infrastructure Engine: http://www.OpenNebula.org
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] VMs stuck in PEND state

2010-11-10 Thread Fernando Morgenstern
Hi,

The tmp folder in my host is empty.

Here is the output of the commands:

$ onehost show 1
HOST 1 INFORMATION  
ID: 1   
NAME  : node01
CLUSTER   : default 
STATE : ERROR   
IM_MAD: im_xen  
VM_MAD: vmm_xen 
TM_MAD: tm_nfs  

HOST SHARES 
MAX MEM   : 0   
USED MEM (REAL)   : 0   
USED MEM (ALLOCATED)  : 0   
MAX CPU   : 0   
USED CPU (REAL)   : 0   
USED CPU (ALLOCATED)  : 0   
RUNNING VMS   : 0   

MONITORING INFORMATION  

$ onehost show -x 1

  1
  node01
  3
  im_xen
  vmm_xen
  tm_nfs
  1289394482
  default
  
1
0
0
0
0
0
0
0
0
0
0
0
0
0
  
  


Thanks again for your answers.


On 10/11/2010, at 11:22, openneb...@nerling.ch wrote:

> Hallo Fernando.
> try to log in the host and look if there is a folder names /tmp/one.
> If not it could be related to the bug: http://dev.opennebula.org/issues/385
> 
> please post the output from:
> #onehost show 1
> #onehost show -x 1
> 
> I thought before your host has an id of 0.
> 
> Marlon Nerling
> 
> Quoting Fernando Morgenstern :
> 
>> Hello,
>> 
>> Thanks for the answer.
>> 
>> You are right, the host is showing an error state and i didn't verified it. 
>> How can i know what is causing the error in host?
>> 
>> $ onehost list
>>  ID NAME  CLUSTER  RVM   TCPU   FCPU   ACPUTMEMFMEM STAT
>>   1 node01default0  0  0100  0K  0K  err
>> 
>> $ onevm show 0
>> VIRTUAL MACHINE 0 INFORMATION
>> ID : 0
>> NAME   : ttylinux
>> STATE  : DONE
>> LCM_STATE  : LCM_INIT
>> START TIME : 11/09 19:06:37
>> END TIME   : 11/09 19:11:09
>> DEPLOY ID: : -
>> 
>> VIRTUAL MACHINE MONITORING
>> NET_RX : 0
>> USED MEMORY: 0
>> USED CPU   : 0
>> NET_TX : 0
>> 
>> VIRTUAL MACHINE TEMPLATE
>> CPU=0.1
>> DISK=[
>>  DISK_ID=0,
>>  READONLY=no,
>>  SOURCE=/home/oneadmin/ttylinux.img,
>>  TARGET=hda ]
>> FEATURES=[
>>  ACPI=no ]
>> MEMORY=64
>> NAME=ttylinux
>> NIC=[
>>  BRIDGE=br0,
>>  IP=*,
>>  MAC=02:00:5d:9f:d0:68,
>>  NETWORK=Small network,
>>  NETWORK_ID=0 ]
>> VMID=0
>> 
>> $ onehost show 0
>> Error: [HostInfo] Error getting HOST [0].
>> 
>> Thanks!
>> 
>> On 10/11/2010, at 06:45, openneb...@nerling.ch wrote:
>> 
>>> Hallo Fernando.
>>> Could you please post the output of:
>>> #onehost list
>>> #onevm show 0
>>> #onehost show 0
>>> 
>>> It seems that none of your Hosts are enabled!
>>> Tue Nov  9 20:31:18 2010 [HOST][D]: Discovered Hosts (enabled):
>>> 
>>> best regards
>>> 
>>> Marlon Nerling
>>> 
Quoting Fernando Morgenstern :
>>> 
 Hello,
 
 This is the first time that i'm using open nebula, so i tried to do it 
 with express script which ran fine. I'm using CentOS 5.5 with Xen.
 
 The first thing that i'm trying to do is getting the following vm running:
 
 NAME   = ttylinux
 CPU= 0.1
 MEMORY = 64
 
 DISK   = [
 source   = "/var/lib/one/images/ttylinux.img",
 target   = "hda",
 readonly = "no" ]
 
 NIC= [ NETWORK = "Small network" ]
 
 FEATURES=[ acpi="no" ]
 
 
 So i used:
 
 onevm create ttylinux.one
 
 The issue is that it keeps stuck in PEND state:
 
 onevm list
  ID USER NAME STAT CPU MEMHOSTNAMETIME
   3 oneadmin ttylinux pend   0  0K 00 00:20:03
 
 
 At oned log the following messages keeps repeating:
 
 Tue Nov  9 20:29:48 2010 [ReM][D]: VirtualMachinePoolInfo method invoked
 Tue Nov  9 20:29:59 2010 [ReM][D]: VirtualMachinePoolInfo method invoked
 Tue Nov  9 20:30:18 2010 [ReM][D]: HostPoolInfo method invoked
 Tue Nov  9 20:30:18 2010 [ReM][D]: VirtualMachinePoolInfo method invoked
 
 And at sched.log, i see this:
 
 Tue Nov  9 20:31:18 2010 [HOST][D]: Discovered Hosts (enabled):
 Tue Nov  9 20:31:18 2010 [VM][D]: Pending virtual machines : 3
 Tue Nov  9 20:31:18 2010 [RANK][W]: No rank defined for VM
 Tue Nov  9 20:31:18 2010 [SCHED][I]: Select hosts
PRI HID
---
 Virtual Machine: 3
 
 
 Any ideas of what might be happening?
 
 I saw other threads here at the list with similar problems, but their 
 solution didn't applied to my case.
 
 Best Regards,
 
 Fernando.
 ___
 Users mailing list
 Users@list

Re: [one-users] VMs stuck in PEND state

2010-11-10 Thread opennebula

Hallo Fernando.
try to log in to the host and look if there is a folder named /tmp/one.
If not it could be related to the bug: http://dev.opennebula.org/issues/385

please post the output from:
#onehost show 1
#onehost show -x 1

I thought before your host has an id of 0.

Marlon Nerling

Quoting Fernando Morgenstern :


Hello,

Thanks for the answer.

You are right, the host is showing an error state and i didn't  
verified it. How can i know what is causing the error in host?


$ onehost list
  ID NAME      CLUSTER  RVM   TCPU   FCPU   ACPU   TMEM   FMEM STAT
   1 node01    default    0      0      0    100     0K     0K  err

$ onevm show 0
VIRTUAL MACHINE 0 INFORMATION
ID : 0
NAME   : ttylinux
STATE  : DONE
LCM_STATE  : LCM_INIT
START TIME : 11/09 19:06:37
END TIME   : 11/09 19:11:09
DEPLOY ID: : -

VIRTUAL MACHINE MONITORING
NET_RX : 0
USED MEMORY: 0
USED CPU   : 0
NET_TX : 0

VIRTUAL MACHINE TEMPLATE
CPU=0.1
DISK=[
  DISK_ID=0,
  READONLY=no,
  SOURCE=/home/oneadmin/ttylinux.img,
  TARGET=hda ]
FEATURES=[
  ACPI=no ]
MEMORY=64
NAME=ttylinux
NIC=[
  BRIDGE=br0,
  IP=*,
  MAC=02:00:5d:9f:d0:68,
  NETWORK=Small network,
  NETWORK_ID=0 ]
VMID=0

$ onehost show 0
Error: [HostInfo] Error getting HOST [0].

Thanks!

On 10/11/2010, at 06:45, openneb...@nerling.ch wrote:


Hallo Fernando.
Could you please post the output of:
#onehost list
#onevm show 0
#onehost show 0

It seems that none of your Hosts are enabled!
Tue Nov  9 20:31:18 2010 [HOST][D]: Discovered Hosts (enabled):

best regards

Marlon Nerling

Quoting Fernando Morgenstern :


Hello,

This is the first time that i'm using open nebula, so i tried to  
do it with express script which ran fine. I'm using CentOS 5.5  
with Xen.


The first thing that i'm trying to do is getting the following vm running:

NAME   = ttylinux
CPU= 0.1
MEMORY = 64

DISK   = [
 source   = "/var/lib/one/images/ttylinux.img",
 target   = "hda",
 readonly = "no" ]

NIC= [ NETWORK = "Small network" ]

FEATURES=[ acpi="no" ]


So i used:

onevm create ttylinux.one

The issue is that it keeps stuck in PEND state:

onevm list
  ID USER     NAME     STAT CPU    MEM HOSTNAME        TIME
   3 oneadmin ttylinux pend   0     0K          00 00:20:03


At oned log the following messages keeps repeating:

Tue Nov  9 20:29:48 2010 [ReM][D]: VirtualMachinePoolInfo method invoked
Tue Nov  9 20:29:59 2010 [ReM][D]: VirtualMachinePoolInfo method invoked
Tue Nov  9 20:30:18 2010 [ReM][D]: HostPoolInfo method invoked
Tue Nov  9 20:30:18 2010 [ReM][D]: VirtualMachinePoolInfo method invoked

And at sched.log, i see this:

Tue Nov  9 20:31:18 2010 [HOST][D]: Discovered Hosts (enabled):
Tue Nov  9 20:31:18 2010 [VM][D]: Pending virtual machines : 3
Tue Nov  9 20:31:18 2010 [RANK][W]: No rank defined for VM
Tue Nov  9 20:31:18 2010 [SCHED][I]: Select hosts
PRI HID
---
Virtual Machine: 3


Any ideas of what might be happening?

I saw other threads here at the list with similar problems, but  
their solution didn't applied to my case.


Best Regards,

Fernando.
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org




___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org



___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Strange behaviour when creating new VM from image (ONE 2.0)

2010-11-10 Thread Oscar Elfving
I found what was wrong: I needed to set TARGET in the DISK definition,
otherwise it can't generate the deployment.0 file.
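
For reference, the working DISK section would look something like this (the
target device name is just an example; use any free target):

DISK = [
        TYPE   = "disk",
        SOURCE = "/srv/cloud/var/images/ws1.img",
        TARGET = "hda",
        CLONE  = no
]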

Best regards,
Oscar Elfving

On Wed, Nov 10, 2010 at 11:21 AM, Javier Fontan  wrote:

> Can you send us the vm log file located at $ONE_LOCATION/var/58/vm.log
> or /var/log/58.log?
>
> Bye
>
> On Tue, Nov 9, 2010 at 3:35 PM, Oscar Elfving  wrote:
> > Hello,
> > I am trying to create a new VM from a raw disk image, I have added it in
> the
> > template like this:
> > DISK = [
> > TYPE = "disk",
> > SOURCE = "/srv/cloud/var/images/ws1.img",
> > CLONE = no
> > ]
> > But ONE just creates it, links it and then immediately deletes it, why is
> > this?
> > oned.log:
> > Tue Nov  9 15:33:18 2010 [TM][D]: Message received: LOG - 58 tm_ln.sh:
> > Creating directory /srv/cloud/one/var/58/images
> > Tue Nov  9 15:33:18 2010 [TM][D]: Message received: LOG - 58 tm_ln.sh:
> > Executed "mkdir -p /srv/cloud/one/var/58/images".
> > Tue Nov  9 15:33:18 2010 [TM][D]: Message received: LOG - 58 tm_ln.sh:
> > Executed "chmod a+w /srv/cloud/one/var/58/images".
> > Tue Nov  9 15:33:18 2010 [TM][D]: Message received: LOG - 58 tm_ln.sh:
> Link
> > /srv/cloud/var/images/ws1.img
> > Tue Nov  9 15:33:18 2010 [TM][D]: Message received: LOG - 58 tm_ln.sh:
> > Executed "ln -s /srv/cloud/var/images/ws1.img
> > /srv/cloud/one/var/58/images/disk.0".
> > Tue Nov  9 15:33:18 2010 [TM][D]: Message received: TRANSFER SUCCESS 58 -
> > Tue Nov  9 15:33:18 2010 [TM][D]: Message received: LOG - 58
> tm_delete.sh:
> > Deleting /srv/cloud/one/var/58/images
> > Tue Nov  9 15:33:18 2010 [TM][D]: Message received: LOG - 58
> tm_delete.sh:
> > Executed "rm -rf /srv/cloud/one/var/58/images".
> > Tue Nov  9 15:33:18 2010 [TM][D]: Message received: TRANSFER SUCCESS 58 -
> > Best regards,
> > Oscar Elfving
> >
> > ___
> > Users mailing list
> > Users@lists.opennebula.org
> > http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
> >
> >
>
>
>
> --
> Javier Fontan, Grid & Virtualization Technology Engineer/Researcher
> DSA Research Group: http://dsa-research.org
> Globus GridWay Metascheduler: http://www.GridWay.org
> OpenNebula Virtual Infrastructure Engine: http://www.OpenNebula.org
>
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Express Installation Script.

2010-11-10 Thread KING LABS
That's a good thing to know. Once again, thanks Daniel for helping me with an
immediate solution.

Also, I would like to ask about /usr/lib/one/remotes: what is its role?
I have to copy this every time I reboot the node. Currently I am running a
single-node setup to understand and build the POC.



On Wed, Nov 10, 2010 at 5:17 PM, Jaime Melis  wrote:

> Hello,
>
> thank you both for reporting these issues. There is indeed a problem with
> /var/lock/one and /var/run/one directories getting removed on system
> restart. The problem is not OpenNebula Express but the binary packages
> themselves. We have opened an issue to provide init scripts which will
> recreate these directories.
>
> This will be fixed in the upcoming 2.0.1 maintenance release.
>
> There is one other thing that is still unclear to us: Daniel, you said that
> /var/lib/one wasn't created with the correct permissions, could you please
> elaborate? We think 755 are the correct permissions for that directory and
> not 777. Why do you need it to be world-writable?
>
> Regards,
> Jaime
>
> On Sat, Nov 6, 2010 at 8:49 AM,  wrote:
>
>> I think that the error is related to "export ONE_AUTH=/$HOME/.one-auth"
>> and it means that there is no one_auth file. This file should contain
>> username:pasword of the opennebula user. but it might also be from some
>> broken/missing ruby dependencies. You can also try install ruby-full from
>> a package manager.
>>
>> Now about installing nebula. Using the express install is the easiest way
>> of getting things working(I am still talking of installing on a Ubuntu
>> distribution).
>> 1. You run the install on the client machine(using sudo install.sh or
>> something). The install script creates a "oneadmin" user and generates a
>> "rsa" key for this user. When it generates the "node-install.sh" it copies
>> that key to the node-install script.
>>
>> 2. If you add a node using "tm_ssh"( i have only used ssh, never nfs
>> because i am new at this too) then OpenNebula will get the host info by
>> using a "scp" to copy the "remotes" folder in the node /tmp/one and then
>> will connect using "ssh" to the node and call those ruby scripts.
>>
>> NOTE1: one start must be done from the oneadmin user( so log in as
>> oneadmin, because this is why the installation script creates it) and DO
>> NOT USE sudo. If you use SUDO for "sudo start one", nebula will try  to
>> ssh as root.
>> NOTE2: to log in as oneadmin (i don't know the default oneadmin password)
>> i do a sudo passwd oneadmin and input another password.
>>
>> 3. The commands:
>> >> 1. export ONE_AUTH=/$HOME/.one-auth
>> >> 2. export ONE_XMLRPC=http://localhost:2633/RPC2
>> >> 3. sudo mkdir /var/run/one
>> >> 4. sudo mkdir /var/lock/one
>> >> 5. sudo chmod 0777 /var/run/one
>> >> 6. sudo chmod 0777 /var/lock/one
>> >> 7. one start
>> Need to be performed on the client machine from the "oneadmin" user
>> logged in.
>> NOTE1: The install script should had created the /$HOME/.one-auth
>> containing oneadmin:oneadmin inside.
>> NOTE2: The password in the one_auth file does NOT NEED TO match the
>> password of the oneadmin user. They are two separate things. The one_auth
>> file is used for opennebula requests for client validation.
>>
>> 4. To install opennebula-node just run the node-install.sh on each node.
>> The node-install script also creates a oneadmin user. And more important,
>> it creates a  $HOME/.ssh (hidden folder, use Ctrl+H to see it in a file
>> manager). In this folder it creates(if not already existing) a file called
>> authorized_keys. Here the "rsa" key generated on the client is placed.
>> This file contains all the "rsa" keys used by anywone which wants to be
>> able to connect remotely to this node trough ssh. If the key is not
>> present a password is requested when issuing a ssh.
>>
>> NOTE1: after running node-install, generate a password for oneadmin user
>> and log in as oneadmin. If you remain logged as other user the nebula
>> client will not be able to connect to the node to get info.
>>
>> NOTE2: this steps only enable onehost add and onevm submit methods to
>> work. Migrate and onevm stop will fail because when migrating the nebula
>> nodes communicate directly. And when issuing a stop the node will try to
>> save the state of the virtual machine and copy back the machine to the
>> nebula client. This two methods will fail because the nodes do not have
>> the "rsa" key of the other nodes in their $HOME/.ssh/authorized_keys file.
>> And also the nebula client does not have the keys of the nebula nodes. So,
>> on each node, do a "ssh-keygen -t rsa". It will generate a id_rsa.pub.
>> Copy the key from the .pub file to the authorized_keys file on the nebula
>> client and of the other nebula nodes. Do this for each node. If the
>> authorized_keys file does not exist create it but see in the node-install
>> sh how it is created. VERY IMPORTANT: it must have certain access rights
>> and owner. A chmod 0600 and chown -R oneadmin $HOME/.ssh  is

Re: [one-users] Express Installation Script.

2010-11-10 Thread Daniel . MOLDOVAN
I am sorry, I didn't say that 755 are the wrong permissions, just that,
being new to OpenNebula, I have encountered some "permission denied"
errors, and by making the folder world-writable I have avoided them. I
mentioned it only as a quick fix to give everybody access to /var/lib/one. I
assume my problem originated from the oneadmin user being created without
the necessary access rights, because oneadmin also needs write permissions and
I think (excuse me if I am wrong) 755 means read-only for anyone other than
root. So my oneadmin user created by the install script does not have the
write access needed for every OpenNebula operation, as the one.db and
virtual machine files (images and deployment files) are there.

But i insist, this might just be due to an incorrect installation process
performed by me and not due to incorrect access rights.

Regards,
Daniel


> Hello,
>
>
> thank you both for reporting these issues. There is indeed a problem with
>  /var/lock/one and /var/run/one directories getting removed on system
> restart. The problem is not OpenNebula Express but the binary packages
> themselves. We have opened an issue to provide init scripts which will
> recreate these directories.
>
> This will be fixed in the upcoming 2.0.1 maintenance release.
>
>
> There is one other thing that is still unclear to us: Daniel, you said
> that /var/lib/one wasn't created with the correct permissions, could you
> please elaborate? We think 755 are the correct permissions for that
> directory and not 777. Why do you need it to be world-writable?
>
> Regards,
> Jaime

Re: [one-users] Express Installation Script.

2010-11-10 Thread Jaime Melis
Hi,

The scripts under /usr/lib/one/remotes are responsible for the VMM and IM
actions. You have to copy them again because by default they are copied to
/tmp, which is generally not persistent across reboots. The default path for
these scripts has been changed to /var/tmp to avoid this issue, and the fix
will be included in the upcoming 2.0.1 release.

If you don't want to wait for the next release, follow this thread where we
previously discussed this problem:
http://lists.opennebula.org/htdig.cgi/users-opennebula.org/2010-October/003102.html
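In the meantime you can push them to the node by hand after a reboot,
roughly like this (node01 is a placeholder hostname, /tmp/one is the default
remote location mentioned in this thread, and the exact layout the drivers
expect may vary, so treat it as a sketch):

    ssh oneadmin@node01 'mkdir -p /tmp/one'
    scp -rp /usr/lib/one/remotes/* oneadmin@node01:/tmp/one/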

regards,
Jaime

On Wed, Nov 10, 2010 at 1:32 PM, KING LABS  wrote:

> That's a good thing to know. Once again, thanks Daniel for helping me with
> an immediate solution.
>
> I would also like to ask about /usr/lib/one/remotes and what its role is;
> I have to copy it every time I reboot the node. Currently I am running a
> single node setup to understand and build the POC.

Re: [one-users] The URL Does not work

2010-11-10 Thread Javier Fontan
Hello,

Where did you get that url? AFAIK it never existed.

The documentation is located at
http://opennebula.org/documentation:documentation

Bye

On Wed, Nov 10, 2010 at 9:42 AM,   wrote:
> I found this website, which is supposed to contain the documentation/manual
> for building private (and even hybrid or public) clouds, but it does not
> work. Here is the URL of the website:
>
> http://wiki.opennebula.org
>
> Is there any other alternative, or suggestion?
>
> Thank you.



-- 
Javier Fontan, Grid & Virtualization Technology Engineer/Researcher
DSA Research Group: http://dsa-research.org
Globus GridWay Metascheduler: http://www.GridWay.org
OpenNebula Virtual Infrastructure Engine: http://www.OpenNebula.org


Re: [one-users] VM fails to start in OpenNebula 2.0

2010-11-10 Thread Tino Vazquez
Hi,

Could you please check permissions on /var/lib/one//10/images/disk.0?
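For example, something like this will show the owner and mode (run it as
oneadmin on the machine that holds the image directory, which depends on
your TM driver):

    ls -l /var/lib/one//10/images/disk.0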

Regards,

-Tino

--
Constantino Vázquez Blanco | dsa-research.org/tinova
Virtualization Technology Engineer / Researcher
OpenNebula Toolkit | opennebula.org



On Tue, Nov 2, 2010 at 12:55 PM, KING LABS  wrote:
> /var/lib/one//10/images/disk.0


Re: [one-users] Express Installation Script.

2010-11-10 Thread Jaime Melis
Hello,

thank you both for reporting these issues. There is indeed a problem with
/var/lock/one and /var/run/one directories getting removed on system
restart. The problem is not OpenNebula Express but the binary packages
themselves. We have opened an issue to provide init scripts which will
recreate these directories.

This will be fixed in the upcoming 2.0.1 maintenance release.

There is one other thing that is still unclear to us: Daniel, you said that
/var/lib/one wasn't created with the correct permissions, could you please
elaborate? We think 755 are the correct permissions for that directory and
not 777. Why do you need it to be world-writable?

Regards,
Jaime

On Sat, Nov 6, 2010 at 8:49 AM,  wrote:

> I think that the error is related to "export ONE_AUTH=/$HOME/.one-auth"
> and it means that there is no one_auth file. This file should contain the
> username:password of the OpenNebula user. It might also come from some
> broken/missing Ruby dependencies; you can also try installing ruby-full from
> a package manager.
>
> Now about installing nebula. Using the express install is the easiest way
> of getting things working (I am still talking about installing on an Ubuntu
> distribution).
> 1. You run the install on the client machine (using sudo install.sh or
> something). The install script creates a "oneadmin" user and generates an
> "rsa" key for this user. When it generates "node-install.sh", it copies
> that key into the node-install script.
>
> 2. If you add a node using "tm_ssh" (I have only used ssh, never nfs,
> because I am new at this too), then OpenNebula will get the host info by
> using "scp" to copy the "remotes" folder to /tmp/one on the node and then
> connecting with "ssh" to the node to call those Ruby scripts.
>
> NOTE1: one start must be done as the oneadmin user (so log in as oneadmin;
> this is why the installation script creates it) and DO NOT USE sudo. If you
> use sudo to start it ("sudo one start"), nebula will try to ssh as root.
> NOTE2: to log in as oneadmin (I don't know the default oneadmin password)
> I do a "sudo passwd oneadmin" and set a new password.
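> In other words (su - is just one convenient way to switch to the oneadmin
> account once it has a password):
>
>     sudo passwd oneadmin   # set a password for the oneadmin account
>     su - oneadmin          # work as oneadmin from here on, without sudo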
>
> 3. The commands:
>
>    1. export ONE_AUTH=/$HOME/.one-auth
>    2. export ONE_XMLRPC=http://localhost:2633/RPC2
>    3. sudo mkdir /var/run/one
>    4. sudo mkdir /var/lock/one
>    5. sudo chmod 0777 /var/run/one
>    6. sudo chmod 0777 /var/lock/one
>    7. one start
>
> These need to be performed on the client machine while logged in as the
> "oneadmin" user.
> NOTE1: The install script should have created the /$HOME/.one-auth file
> with oneadmin:oneadmin inside.
> NOTE2: The password in the one_auth file does NOT NEED to match the
> password of the oneadmin user. They are two separate things: the one_auth
> file is what OpenNebula uses to validate client requests.
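> If the installer did not create it, you can recreate it by hand (a quick
> sketch using the path and the default oneadmin:oneadmin credentials
> mentioned above):
>
>     echo 'oneadmin:oneadmin' > $HOME/.one-auth
>     export ONE_AUTH=$HOME/.one-auth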
>
> 4. To install opennebula-node, just run node-install.sh on each node.
> The node-install script also creates a oneadmin user and, more importantly,
> it creates a $HOME/.ssh directory (hidden folder; use Ctrl+H to see it in a
> file manager). In this folder it creates (if not already existing) a file
> called authorized_keys, where the "rsa" key generated on the client is
> placed. This file contains the "rsa" public keys of anyone who wants to be
> able to connect remotely to this node through ssh. If the key is not
> present, a password is requested when issuing ssh.
>
> NOTE1: after running node-install, generate a password for the oneadmin
> user and log in as oneadmin. If you remain logged in as another user, the
> nebula client will not be able to connect to the node to get info.
>
> NOTE2: these steps only enable the onehost add and onevm submit methods to
> work. Migrate and onevm stop will fail because, when migrating, the nebula
> nodes communicate directly, and when issuing a stop the node will try to
> save the state of the virtual machine and copy the machine back to the
> nebula client. These two methods will fail because the nodes do not have
> the "rsa" key of the other nodes in their $HOME/.ssh/authorized_keys file,
> and the nebula client does not have the keys of the nebula nodes either. So,
> on each node, do an "ssh-keygen -t rsa". It will generate an id_rsa.pub.
> Copy the key from the .pub file to the authorized_keys file on the nebula
> client and on the other nebula nodes. Do this for each node. If the
> authorized_keys file does not exist, create it, but check in node-install.sh
> how it is created. VERY IMPORTANT: it must have certain access rights and
> owner. A chmod 0600 and a chown -R oneadmin $HOME/.ssh are necessary, but
> search the node-install script for the correct values.
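> As a rough sketch of that key exchange (the "frontend" hostname is a
> placeholder, everything runs as oneadmin, and the exact ownership and
> permission values should be taken from node-install.sh):
>
>     ssh-keygen -t rsa                       # on each node, accept the defaults
>     cat ~/.ssh/id_rsa.pub | \
>       ssh oneadmin@frontend 'cat >> ~/.ssh/authorized_keys'
>     chmod 0600 ~/.ssh/authorized_keys       # on every machine that received a key
>     chown -R oneadmin ~/.ssh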
>
> On Fri, November 5, 2010, 6:42 pm, KING LABS wrote:
> > Hi Daniel,
> >
> > What you said is right, I am still struggling to get things right. I
> > don't find the OpenNebula docs to be straightforward for a newbie; can I
> > ask you for help?
> >
> > I am hoping you can brief me on the steps to install OpenNebula from
> > source or using the express script 

Re: [one-users] ONE-2 vs Ubuntu 10.10 : Error on deploying image when arch type not set in deployment file

2010-11-10 Thread Javier Fontan
Hello,

The correct way to add this parameter is in the deployment file generation,
that is, in LibVirtDriverKVM.cc. We hope to have a solution for the next
minor release.
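In the meantime, a quick way to see which guest architectures libvirt on the
node actually reports (virsh capabilities is a standard libvirt command; the
grep is only there to shorten the output):

    virsh capabilities | grep -E 'os_type|arch name'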

Bye

On Fri, Oct 22, 2010 at 5:00 PM, Olivier Sallou  wrote:
> hi,
> I face an issue when deploying on an Ubuntu 10.10 node.
>
> I have an error:
> "internal error no supported architecture for os type 'hvm'"
>
> Virtualization is ok, it works if I create a VM locally with virt-manager.
>
> I took a ONE deployment file generated on a CentOS node (where the same VM
> was successfully deployed) and tried it on the Ubuntu node.  It failed.
>
> Then I manually updated the os type element from
>
>     <type>hvm</type>
> to
>     <type arch='x86_64'>hvm</type>
>
> and deployment was successful with the virsh command.
>
> It seems that there is no default "arch" for os, which results in the virsh
> failure.
>
> Is it possible to specify the arch attribute somewhere in the ONE scripts?
>
> Thanks
>
> Olivier



-- 
Javier Fontan, Grid & Virtualization Technology Engineer/Researcher
DSA Research Group: http://dsa-research.org
Globus GridWay Metascheduler: http://www.GridWay.org
OpenNebula Virtual Infrastructure Engine: http://www.OpenNebula.org


Re: [one-users] VMs stuck in PEND state

2010-11-10 Thread opennebula

Hello Fernando.
Could you please post the output of:
#onehost list
#onevm show 0
#onehost show 0

It seems that none of your Hosts are enabled!
Tue Nov  9 20:31:18 2010 [HOST][D]: Discovered Hosts (enabled):
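If a host was added but is disabled, or no host was ever added, something
along these lines should fix it (only a sketch; the driver names assume a
Xen node with the ssh transfer manager, so use whatever is configured in
your oned.conf, and the host ID and hostname are placeholders):

    onehost list                                  # shows ID and state of each host
    onehost enable 0                              # re-enable a disabled host by ID
    onehost create node01 im_xen vmm_xen tm_ssh   # or register the node if it is missing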

best regards

Marlon Nerling

Quoting Fernando Morgenstern:


Hello,

This is the first time that I'm using OpenNebula, so I tried to do it with
the express script, which ran fine. I'm using CentOS 5.5 with Xen.

The first thing that I'm trying to do is get the following VM running:

NAME   = ttylinux
CPU= 0.1
MEMORY = 64

DISK   = [
  source   = "/var/lib/one/images/ttylinux.img",
  target   = "hda",
  readonly = "no" ]

NIC= [ NETWORK = "Small network" ]

FEATURES=[ acpi="no" ]


So I used:

onevm create ttylinux.one

The issue is that it stays stuck in the PEND state:

onevm list
   ID USER     NAME     STAT CPU     MEM HOSTNAME        TIME
    3 oneadmin ttylinux pend   0      0K          00 00:20:03


In the oned log, the following messages keep repeating:

Tue Nov  9 20:29:48 2010 [ReM][D]: VirtualMachinePoolInfo method invoked
Tue Nov  9 20:29:59 2010 [ReM][D]: VirtualMachinePoolInfo method invoked
Tue Nov  9 20:30:18 2010 [ReM][D]: HostPoolInfo method invoked
Tue Nov  9 20:30:18 2010 [ReM][D]: VirtualMachinePoolInfo method invoked

And in sched.log, I see this:

Tue Nov  9 20:31:18 2010 [HOST][D]: Discovered Hosts (enabled):
Tue Nov  9 20:31:18 2010 [VM][D]: Pending virtual machines : 3
Tue Nov  9 20:31:18 2010 [RANK][W]: No rank defined for VM
Tue Nov  9 20:31:18 2010 [SCHED][I]: Select hosts
PRI HID
---
Virtual Machine: 3


Any ideas of what might be happening?

I saw other threads here on the list with similar problems, but their
solutions didn't apply to my case.


Best Regards,

Fernando.






[one-users] The URL Does not work

2010-11-10 Thread mohab.elnaggar
I found this website, which is supposed to contain the documentation/manual
for building private (and even hybrid or public) clouds, but it does not
work. Here is the URL of the website:

http://wiki.opennebula.org

Is there any other alternative, or suggestion?

Thank you.


[one-users] Need Help please

2010-11-10 Thread mohab.elnaggar
Dear Sir/ Madam,

I am new to OpenNebula. I have downloaded OpenNebula 2.0 and would like to
use it to build my own cloud. I would also like to know more about public
and hybrid clouds in order to know how to work with them easily. Could you
point me to, or send me, the manual/documentation needed to build such
cloud(s)? And if there are any tutorials that teach step by step how to do
so, that would be great.

Thank you,
I would be grateful if you replied.

Mohab Hassan ElNaggar.