Re: [one-users] Images problem

2010-09-15 Thread Csom Gyula
Hi,

though I haven't tried any of the following approaches myself... they may still help you :)

There's a component called "scp-wave" [1] in the ONE ecosystem catalogue that 
may speed up
image transfers.

You may implement a special TM MAD that supports read-write snapshots when cloning images,
for instance one that uses LVM2 as the storage backend. The existing LVM-based TM MAD [2]
could be a good starting point (it currently seems to copy images, so you would change it
to do snapshotting instead).
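
To illustrate the snapshot idea only (the volume group, LV names and snapshot size below are placeholders, and a real tm_clone.sh also has to parse its SRC/DST arguments and handle remote hosts over ssh):

  # instead of creating a fresh LV and copying the whole golden image into it, e.g.
  #   lvcreate -L 4G -n $DST_LV $VG && dd if=$SRC_IMAGE of=/dev/$VG/$DST_LV
  # take a copy-on-write snapshot of the golden LV, which is nearly instant:
  lvcreate -s -L 1G -n $DST_LV /dev/$VG/$GOLDEN_LV   # 1G = room for the VM's own writes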

Cheers,
Gyula

---
[1] scp-wave: http://www.opennebula.org/software:ecosystem:scp-wave
[2] LVM-based TM MAD doc: http://www.opennebula.org/documentation:rel1.4:sm#lvm
the clone script: 
http://dev.opennebula.org/projects/opennebula/repository/revisions/master/entry/src/tm_mad/lvm/tm_clone.sh

From: users-boun...@lists.opennebula.org [users-boun...@lists.opennebula.org] on behalf of Luca Lorenzini [lorenzini.l...@gmail.com]
Sent: 15 September 2010 17:10
To: users@lists.opennebula.org
Subject: [one-users] Images problem

Hi, I'm an Italian student and I'm using OpenNebula 1.4 for my thesis. I need
to build a virtual lab that will allow students to use a preconfigured VM
through a browser. My problem is that every time a student requests a VM,
OpenNebula copies the "virgin" VM image (it's about 4 GB), which takes a long
time. I know that I could link to the image, but I need the students to have
root access to the VM, and with linking there would be concurrent use of the
image. So my question: is there a way to keep the image "virgin" without copying it?

Luca
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] In Onemc LCM state is unknown

2010-07-30 Thread Csom Gyula
Hi,

it's just a tip: you may try restarting your VMs, e.g.:

  onevm restart 44

This worked for us when powering off the VM from within the VM itself.
Your situation is not the same, but similar...

Hope it helps,

Cheers
Gyula


Feladó: 
users-boun...@lists.opennebula.org 
[users-boun...@lists.opennebula.org] 
; meghatalmazó: Mirza Baig [waseem_...@yahoo.com]
Küldve: 2010. július 30. 16:33
Címzett: users@lists.opennebula.org
Tárgy: [one-users] In Onemc LCM state is unknown


Hi,



Due to some reasons my frontend and node machines got powered off. After
restarting the systems, in onemc all running images are showing as below and I
am unable to connect to those images. Please help me in bringing the images up.



IdUser  NameVM State LCM State   Cpu
Memory  Host   VNC Port Time

44   oneadmin   tty9a  activeunknown 0  
131072  192.168.138.2416  3d 2:26:35
[console] [details] [log]



Thanks & Regards,
Waseem



___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] vm State Issue

2010-07-28 Thread Csom Gyula
Hi,
to my understanding, ONE shutdown uses virsh shutdown for the KVM VMM [1]. However, virsh
shutdown is not guaranteed to succeed. As the virsh man page [2] says:

shutdown domain-id
Gracefully shuts down a domain. This coordinates with the domain OS to perform
graceful shutdown, so there is no guarantee that it will succeed, and may take a
variable length of time depending on what services must be shutdown in the domain.

You may check whether your guest OS supports ACPI...
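
For example, something like this might help to verify both ends (the domain name one-44 is just a placeholder; the ONE link under [3] describes how to enable the ACPI feature for the KVM driver):

  # inside the guest: is acpid running to handle the power-button event?
  ps aux | grep [a]cpid

  # on the host: does the libvirt domain definition contain the ACPI feature?
  virsh -c qemu:///system dumpxml one-44 | grep -A 2 '<features>'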

Cheers,
Gyula

---

[1] KVM VMM MAD: 
http://dev.opennebula.org/projects/opennebula/repository/revisions/master/entry/src/vmm_mad/kvm/one_vmm_kvm.rb
[2] virsh man: http://linux.die.net/man/1/virsh
[3] There are some posts on 'graceful shutdown of KVM guests' you might find 
useful too:
http://www.mail-archive.com/k...@vger.kernel.org/msg04032.html (OpenBSD)
http://www.codigomanso.com/en/2009/11/solved-qemu-kvm-virtual-machine-ignores-shutdown-and-reset/
 (Ubuntu)
http://www.mail-archive.com/k...@vger.kernel.org/msg27699.html (Debian)
http://libvirt.org/formatdomain.html#elementsFeatures (libvirt)
http://www.opennebula.org/documentation:rel1.4:kvmg#features (ONE)


From: users-boun...@lists.opennebula.org [users-boun...@lists.opennebula.org] on behalf of sand...@kqinfotech.com [sand...@kqinfotech.com]
Sent: 28 July 2010 13:36
To: users@lists.opennebula.org
Subject: [one-users] vm State Issue

Hi,
I am using OpenNebula 1.4. I used Ubuntu 10.04 for all nodes (front
node/worker nodes).
I deployed OpenNebula with a few worker nodes with the KVM hypervisor.
There is a lot of confusion about VM states.

When I shut down or suspend a VM from the front node, it shows the status
"shutdown / save / suspend" at the front node.
But if I check on the worker node with virsh or virt-manager, it shows the VM status as
running.

I am not able to change the VM state in any way from the front node.
I just want to know whether this is the default behavior or I did something wrong with the
OpenNebula configuration.

There was the same problem with 1.2.
I migrated to 1.4 in the hope that the problem would be solved with version 1.4.
So, kindly reply with a solution to the problem.


--
Thanks & Regards,
Sandeep Kapse

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] integrating cgroup into OpenNebula-KVM?

2010-07-13 Thread Csom Gyula
Hi Shi!

As soon as our solution passes the smoke tests, I'll post it here :)

For the current release we just use the cpu subsystem (and maybe cpuacct as well, in order
to collect runtime stats). For a later release we may also support net_cls in order to ensure
bandwidth QoS (BTW: the basic idea comes from a Red Hat article [1] which is also a nice
overview of virtualization).

To my understanding, overcommitting CPU resources is known to be problematic only for SMP
guests; at least the spin lock problem doesn't seem to affect single-CPU machines [2]. Some
background: earlier we decided to use the convention from my previous post just for
simplicity. The reason behind it: our original plan was to implement a custom scheduler and
we didn't want to deal with complex situations. Since then we have switched back to the
built-in scheduler, hence we should revisit our CPU/VCPU policy for single-CPU machines.
Thanks for your feedback :)

Cheers,
Gyula

---

[1] http://www.redhat.com/f/pdf/rhev/DOC-KVM.pdf

[2] To my understanding the spin lock [3] problem is the following:
* KVM implements VCPUs as Linux threads. When CPU utilization is close to 100%, KVM may
  schedule the threads of the same SMP guest onto the same physical CPU. Then the spin lock
  problem means the following:
* One of the VCPU threads holds a lock (at the guest OS level).
* The other VCPU thread tries to acquire it by spinning, that is, it checks the lock
  frequently but does so without going to sleep.
* Since both VCPU threads live on the same physical CPU, the spinning thread keeps the lock
  holder from running, so the lock cannot be released. A deadlock situation...

[3] http://en.wikipedia.org/wiki/Spinlock


From: Shi Jin [jinzish...@gmail.com]
Sent: 13 July 2010 19:28
To: Csom Gyula
Cc: opennebula user list
Subject: Re: [one-users] integrating cgroup into OpenNebula-KVM?

Thank you very much Gyula.
I am very interested in learning your solutions. So please post it.

Curious to know, what cgroups subsystems are you using? I am only considering
cpu. Are you using anything else, like cpuset or memory?

A note on CPU overcommitting: do you see a problem in overcommitting single-CPU
VMs, i.e., multiple small-size VMs (vCPU=1) sharing a physical core? Your
note seems to suggest the problem is only with SMP guests.
I think this is a very important feature. Without running multiple VMs on a
single core, I feel there is not much need for cgroups really. The current ONE
seems good enough if we always set CPU=VCPU in the ONE template. What I wanted
to have is CPU=1 while vCPU=0.5 or even 0.25.

Thanks.
Shi

2010/7/13 Csom Gyula <c...@interface.hu>:
Hi,
regarding ONE plans I have no clue :) Otherwise, in our system (currently under development)
we are using cgroups as well (especially in order to guarantee CPU performance,
which is required for VMs like web application servers and the like). We are using cgroups in the
following way:

1. The VM CPU number is technically bound to VCPU (both at OpenNebula and libvirt).
2. We are using cpu shares in order to give each VM the proper share.
3. We are using the ONE hook system [1] in order to trigger the cgroups script.

We are using the following conventions:
4. System share: 90% goes to VMs and 10% goes to the system itself.
5. We are not overcommitting CPU resources since KVM has problems with such environments [2]:
   * the physical CPU number must be equal to the VCPU number
   * the total number of VCPUs on a given host cannot exceed the number of physical CPUs

BTW: our solution will reach alpha state this month; if you are interested I might post it here.

Cheers,
Gyula

---

[1] http://www.opennebula.org/documentation:rel1.4:oned_conf#hook_system
[2] SMP overcommitting problem:
http://www.mail-archive.com/k...@vger.kernel.org/msg32079.html
http://www.mail-archive.com/k...@vger.kernel.org/msg33739.html. The root cause seems to be
spin locks: they might cause deadlocks in SMP systems when overcommitting host resources.
The problem is also named "lock holder preemption"; you might find related articles on the web,
for instance:
http://www.amd64.org/fileadmin/user_upload/pub/2008-Friebel-LHP-GI_OS.pdf.

From: users-boun...@lists.opennebula.org [users-boun...@lists.opennebula.org] on behalf of Shi Jin [jinzish...@gmail.com]
Sent: 13 July 2010 1:28
To: opennebula user list
Subject: [one-users] integrating cgroup into OpenNebula-KVM?

Hi there,

Red Hat is going to include cgroups in the new RHEL-6, which is a great way to
do quality of service (QoS) control on resources such as VM CPU, memory, network etc.

Re: [one-users] integrating cgroup into OpenNebula-KVM?

2010-07-13 Thread Csom Gyula
Hi,
regarding ONE plans I have no clue :) Otherwise, in our system (currently under development)
we are using cgroups as well (especially in order to guarantee CPU performance,
which is required for VMs like web application servers and the like). We are using cgroups in the
following way:

1. The VM CPU number is technically bound to VCPU (both at OpenNebula and libvirt).
2. We are using cpu shares in order to give each VM the proper share.
3. We are using the ONE hook system [1] in order to trigger the cgroups script (a rough sketch follows the list).

We are using the following conventions:
4. System share: 90% goes to VMs and 10% goes to the system itself.
5. We are not overcommitting CPU resources since KVM has problems with such environments [2]:
* the physical CPU number must be equal to the VCPU number
* the total number of VCPUs on a given host cannot exceed the number of physical CPUs
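
Just to give a flavour of what such a hook script can look like (this is only a sketch, not our production code: the cgroup mount point, the 1024-shares-per-VCPU policy and the way the VM id and VCPU count are passed in are all assumptions):

  #!/bin/bash
  # cgroups_hook.sh <vmid> <vcpus>, triggered as a RUNNING hook
  VMID=$1
  VCPUS=$2
  CG=/sys/fs/cgroup/cpu/one-$VMID            # assumed cpu cgroup mount point

  mkdir -p $CG
  echo $((VCPUS * 1024)) > $CG/cpu.shares    # proportional share per VCPU
  # move the qemu/kvm process of deployment one-<vmid> into the group
  PID=$(pgrep -f "qemu.*one-$VMID" | head -1)
  [ -n "$PID" ] && echo $PID > $CG/tasks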

BTW: our solution will reach alpha state this month; if you are interested I might post it here.

Cheers,
Gyula

---

[1] http://www.opennebula.org/documentation:rel1.4:oned_conf#hook_system
[2] SMP overcommitting problem:
http://www.mail-archive.com/k...@vger.kernel.org/msg32079.html
http://www.mail-archive.com/k...@vger.kernel.org/msg33739.html. The root cause seems to be
spin locks: they might cause deadlocks in SMP systems when overcommitting host resources.
The problem is also named "lock holder preemption"; you might find related articles on the web,
for instance:
http://www.amd64.org/fileadmin/user_upload/pub/2008-Friebel-LHP-GI_OS.pdf.


From: users-boun...@lists.opennebula.org [users-boun...@lists.opennebula.org] on behalf of Shi Jin [jinzish...@gmail.com]
Sent: 13 July 2010 1:28
To: opennebula user list
Subject: [one-users] integrating cgroup into OpenNebula-KVM?

Hi there,

Red Hat is going to include cgroups in the new RHEL-6, which is a great way to
do quality of service (QoS) control on resources such as VM CPU, memory, network etc.
Especially regarding CPU power: I remember the OpenNebula template has a CPU variable,
but under KVM it is not really used for anything other than scheduling.
With cgroups, CPU could have a real meaning, used to give each VM its proper
share of the system's computing power.
I wonder if there are any plans to integrate this into OpenNebula.

Thank you very much.

--
Shi Jin, Ph.D.

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] hooks on migration

2010-07-07 Thread Csom Gyula
Hi,
I've run through the codebase to try to understand the situation :)

1. When a live migration succeeds, the VMM driver [1] seems to call the LCM
deploy_success_action() [2] method, which in turn sets the VM's LCM state to RUNNING.

vm->set_state(VirtualMachine::RUNNING);
vmpool->update(vm);

2. According to Hook.h [3] and PoolSQL.h [4] registered update hooks are 
executed on every pool update.

   do_hooks(objsql, Hook::UPDATE);

That is VMPool a PoolSQL descendant should trigger its registered update hooks.

3. According to VirtualMachinePool [5] and VirtualMachineHook [6], the hook
registered for the RUNNING event is an update hook: it is a VirtualMachineStateHook
descendant, which itself is a VirtualMachineStateMapHook descendant, which appears
to be an update hook.

So the RUNNING hook's do_hook method seems to be triggered on every vmpool update:

  VirtualMachineStateMapHook(...):
      Hook(name, cmd, args, Hook::UPDATE, remote){};

4. According to the VirtualMachineStateHook code [7], the do_hook method seems to
trigger the registered script if (1) the state has changed and (2) the current state is
the registered target LCM/VM state (i.e. RUNNING and ACTIVE):

  if ( prev_lcm == cur_lcm && prev_vm == cur_vm ) //Still in the same state
  {
      return;
  }

  if ( cur_lcm == lcm && cur_vm == this->vm )
  {
      ...
      hmd->execute(...)
  }

So after all it seems that RUNNING hooks are executed whenever a live migration
succeeds (since I guess a RUNNING VM is also an ACTIVE VM).


Note that I've done just a quick code walk-through and I'm not a C++ programmer
(in fact I have no C++ experience), so I might be absolutely wrong :)

I guess, Szabolcs, we might simply test the use case :)
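
For reference, the kind of hook we are talking about is registered in oned.conf roughly like this (a sketch based on the 1.4 hook system docs; the script path and arguments are placeholders, not the actual ebtables hook shipped with ONE):

  VM_HOOK = [
      name      = "ebtables-running",
      on        = "RUNNING",
      command   = "/usr/local/bin/ebtables-vnet.sh",   # hypothetical script path
      arguments = "$VMID",
      remote    = "yes" ]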

Cheers,
Gyula

[1] VMM driver:
http://dev.opennebula.org/projects/opennebula/repository/revisions/master/entry/src/vmm/VirtualMachineManagerDriver.cc

[2] LCM deploy_success_action
http://dev.opennebula.org/projects/opennebula/repository/revisions/master/entry/src/lcm/LifeCycleStates.cc

[3] Hook.h
http://dev.opennebula.org/projects/opennebula/repository/revisions/master/entry/include/Hook.h

[4] PoolSQL.h
http://dev.opennebula.org/projects/opennebula/repository/revisions/master/entry/include/PoolSQL.h

[5] VirtualMachinePool.cc
http://dev.opennebula.org/projects/opennebula/repository/revisions/master/entry/src/vm/VirtualMachinePool.cc

[6] VirtualMachineHook.h
http://dev.opennebula.org/projects/opennebula/repository/revisions/master/entry/include/VirtualMachineHook.h

[7] VirtualMachineHook.cc
http://dev.opennebula.org/projects/opennebula/repository/revisions/master/entry/src/vm/VirtualMachineHook.cc

Feladó: users-boun...@lists.opennebula.org [users-boun...@lists.opennebula.org] 
; meghatalmazó: Jaime Melis [j.me...@fdi.ucm.es]
Küldve: 2010. július 7. 12:13
Címzett: Székelyi Szabolcs
Másolatot kap: users@lists.opennebula.org
Tárgy: Re: [one-users] hooks on migration

Hello,

In OpenNebula the hooks can only be executed on the following events:

- CREATE, when the VM is created (onevm create)
- RUNNING, after the VM is successfully booted
- SHUTDOWN, after the VM is shutdown
- STOP, after the VM is stopped (including VM image transfers)
- DONE, after the VM is deleted or shutdown

Therefore after a migration the ebtables script will not be executed.

Regards,
Jaime

2010/7/5 Székelyi Szabolcs :
> Hello,
>
> I'd like to ask about the operation of the hook system. We're using the
> recommended ebtables way to separate virtual networks. The question is, what
> happens if a VM is live-migrated from a host to the other: does the hook
> script that sets up ebtables run at that time as well to set up the proper
> rules on the destination host?
>
> Thanks,
> --
> Szabolcs
>
> ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Image upload within the core

2010-07-01 Thread Csom Gyula
Thanks for the explanations! 

Cheers,
Gyula

From: tinov...@gmail.com [tinov...@gmail.com] on behalf of Tino Vazquez [tin...@fdi.ucm.es]
Sent: 30 June 2010 15:14
To: Csom Gyula
Cc: desha...@gmail.com; users@lists.opennebula.org
Subject: Re: [one-users] Image upload within the core

Hi Csom,

comments inline,

2010/6/24 Csom Gyula :
> Hi Tino!
>
> Thanks for your response! I've got the idea:) Some questions regarding the 
> details:
>
> * Private cloud:
> 1. Will remote terminals be supported for uploads? scp or such?

In principle nothing other than filesystem permissions will impede
such behavior (although it won't be offered out-of-the-box in v1.6)

>
> 2. Will metadata and file upload happen in one "transaction" from the CLI
> perspective, or in separate transactions? E.g., will the CLI offer a single
> command for image creation, a command that behind the scenes does the image
> upload (through scp or such) and the metadata creation? Or will the CLI
> provide two separate commands instead: one for upload and one for metadata creation?

This will happen in one operation, atomically (if the
copy is not feasible, then the image won't be created).

>
> * Public and private cloud integration:
> 3. Similar question but now applied to the OCA API (since OCCI and EC2 
> servers seem to
> use OCA for backend access): will the OCA API offer a single method for image 
> creation?
> or will it provide separate commands for upload and metadata instead?

OCA will provide two separate methods (allocate and enable). The CLI
(oneimage) will use allocate, then copy the image, then enable the
image (or delete it in case of copy failure).
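
In other words, the flow from the CLI side would be roughly the following (only a sketch: the sub-command names, template file and repository path are illustrative, not the final CLI syntax):

  ID=$(oneimage allocate golden.template)            # register the metadata only
  if cp /path/to/golden.img "$IMAGE_REPO/$ID.img"    # copy the physical file
  then
      oneimage enable $ID                            # the image becomes usable
  else
      oneimage delete $ID                            # roll back on copy failure
  fi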

Best regards,

-Tino

>
> Cheers,
> Gyula
> 
> From: tinov...@gmail.com [tinov...@gmail.com] on behalf of Tino Vazquez [tin...@fdi.ucm.es]
> Sent: 24 June 2010 16:13
> To: Csom Gyula
> Cc: desha...@gmail.com; users@lists.opennebula.org
> Subject: Re: [one-users] Image upload within the core
>
> Hi Csom,
>
> Our approach to image upload is:
>
> * Private clouds: The ImagePool will handle the metadata of the
> images. The physical files would be handled by the CLI, transferring
> files using the unix filesystem copy command. This will offer the
> possibility to update the file and eventually delete it.
>
> * Public clouds: Current repository manager in the OCCI and EC2
> servers will be swapped with the ImagePool. Uploading of images will
> be performed using pure http as currently.
>
> We appreciate any feedback, comment on this.
>
> Best regards,
>
> -Tino
>
> --
> Constantino Vázquez, Grid & Virtualization Technology
> Engineer/Researcher: http://www.dsa-research.org/tinova
> DSA Research Group: http://dsa-research.org
> Globus GridWay Metascheduler: http://www.GridWay.org
> OpenNebula Virtual Infrastructure Engine: http://www.OpenNebula.org
>
>
>
> 2010/6/24 Csom Gyula :
>> Hi!
>>
>> Thanks Todd for your response! It was really helpful... maybe not directly,
>> but it gave me a useful tip to look for cloud-oriented data transfer
>> solutions.
>>
>> I Googled the topic but so far haven't found much. After all here's the list:
>>
>> * There's GridFTP you proposed.
>> * CDMI [1], which is a rather complex standard; I don't even know whether it
>> supports file uploads or not :))
>> * UDT [2], which is a brand new technology that meanwhile seems to be the fastest
>> solution among data transfer methods (Supercomputing Bandwidth Challenge
>> Winner in 2006, 2008 and 2009).
>> * And of course one can always choose well-known protocols like scp, sftp,
>> pure http, etc.
>>
>> Cheers
>> Gyula
>>
>> ---
>>
>> [1] CDMI: 
>> http://cloud-standards.org/wiki/index.php?title=SNIA_Cloud_Data_Management_Interface_%28CDMI%29
>> [2] UDT: http://en.wikipedia.org/wiki/UDP-based_Data_Transfer_Protocol, 
>> http://udt.sourceforge.net/
>> 
>> From: Todd Deshane [desha...@gmail.com]
>> Sent: 23 June 2010 23:53
>> To: Csom Gyula
>> Cc: users@lists.opennebula.org
>> Subject: Re: [one-users] Image upload within the core
>>
>> Hi Gyula,
>>
>> I can't answer the roadmap question, but just to let you know about a
>> couple related projects in case you were unaware of them.
>>
>> For sending large amounts of data (such as disk images), Nimbus [1]
>> uses GridFTP [2].
>>
>> Another really promising project for creating base images and
>> filesystem stacks is a project called Stacklet [3,4].

Re: [one-users] Image upload within the core

2010-06-24 Thread Csom Gyula
Hi Tino!

Thanks for your response! I've got the idea :) Some questions regarding the
details:

* Private cloud:
1. Will remote terminals be supported for uploads? scp or such?

2. Will metadata and file upload happen in one "transaction" from the CLI perspective,
or in separate transactions? E.g., will the CLI offer a single command for image creation,
a command that behind the scenes does the image upload (through scp or such) and
the metadata creation? Or will the CLI provide two separate commands instead:
one for upload and one for metadata creation?

* Public and private cloud integration:
3. Similar question but now applied to the OCA API (since OCCI and EC2 servers 
seem to 
use OCA for backend access): will the OCA API offer a single method for image 
creation? 
or will it provide separate commands for upload and metadata instead?

Cheers,
Gyula

From: tinov...@gmail.com [tinov...@gmail.com] on behalf of Tino Vazquez [tin...@fdi.ucm.es]
Sent: 24 June 2010 16:13
To: Csom Gyula
Cc: desha...@gmail.com; users@lists.opennebula.org
Subject: Re: [one-users] Image upload within the core

Hi Csom,

Our approach to image upload is:

* Private clouds: The ImagePool will handle the metadata of the
images. The physical files would be handled by the CLI, transferring
files using the unix filesystem copy command. This will offer the
possibility to update the file and eventually delete it.

* Public clouds: Current repository manager in the OCCI and EC2
servers will be swapped with the ImagePool. Uploading of images will
be performed using pure http as currently.

We appreciate any feedback, comment on this.

Best regards,

-Tino

--
Constantino Vázquez, Grid & Virtualization Technology
Engineer/Researcher: http://www.dsa-research.org/tinova
DSA Research Group: http://dsa-research.org
Globus GridWay Metascheduler: http://www.GridWay.org
OpenNebula Virtual Infrastructure Engine: http://www.OpenNebula.org



2010/6/24 Csom Gyula :
> Hi!
>
> Thanks Todd for your response! It was really helpful... maybe not directly,
> but it gave me a useful tip to look for cloud-oriented data transfer
> solutions.
>
> I Googled the topic but so far haven't found much. After all here's the list:
>
> * There's GridFTP you proposed.
> * CDMI [1], which is a rather complex standard; I don't even know whether it
> supports file uploads or not :))
> * UDT [2], which is a brand new technology that meanwhile seems to be the fastest
> solution among data transfer methods (Supercomputing Bandwidth Challenge
> Winner in 2006, 2008 and 2009).
> * And of course one can always choose well-known protocols like scp, sftp,
> pure http, etc.
>
> Cheers
> Gyula
>
> ---
>
> [1] CDMI: 
> http://cloud-standards.org/wiki/index.php?title=SNIA_Cloud_Data_Management_Interface_%28CDMI%29
> [2] UDT: http://en.wikipedia.org/wiki/UDP-based_Data_Transfer_Protocol, 
> http://udt.sourceforge.net/
> ____
> From: Todd Deshane [desha...@gmail.com]
> Sent: 23 June 2010 23:53
> To: Csom Gyula
> Cc: users@lists.opennebula.org
> Subject: Re: [one-users] Image upload within the core
>
> Hi Gyula,
>
> I can't answer the roadmap question, but just to let you know about a
> couple related projects in case you were unaware of them.
>
> For sending large amounts of data (such as disk images), Nimbus [1]
> uses GridFTP [2].
>
> Another really promising project for creating base images and
> filesystem stacks is a project called Stacklet [3,4].
>
> Hope that helps.
>
> Thanks,
> Todd
>
> [1] http://www.nimbusproject.org/
> [2] http://www.globus.org/toolkit/data/gridftp/
> [3] http://stacklet.com/
> [4] http://bitbucket.org/stacklet/stacklet/
>
> On Wed, Jun 23, 2010 at 3:09 PM, Csom Gyula  wrote:
>> Hi!
>>
>>
>> Do you have plans to support image uploads? Is it on your 1.6 roadmap? I ask 
>> this
>> in order to coordinate our (extension) development with your roadmap. Some
>> background:
>>
>>
>> Currently we are in the process of specifying our golden image management
>> service, e.g.:
>>
>> what features to provide exactly? how to implement it in a maintainable
>> manner (e.g.
>>
>> starting with ONE v1.4, then smoothly migrating to v1.6)?
>>
>>
>> We've found that one of the biggest challenges is the image upload 
>> functionality.
>>
>> A clean solution would implement the upload service within the core. 
>> Meanwhile
>> XML-RPC used by the RequestManager is inappropriate for large file uploads.
>> XML-RPC provides nothing but base64 types for binary data, meanwhile base64
>> coding-decoding of large files is a pain...

Re: [one-users] onehost problem with kvm

2010-06-24 Thread Csom Gyula
Hi,

Oops, yes, the master branch is for the current development line, sorry for the
mistake :)

Back to the eth interface... Please find attached a simple patch that can adapt
the NIC names used by the hosts to the kvm.rb script. It works like this:

The bash script reads the BRIDGE setting from oned.conf and writes this value
into kvm.rb. The script should be run when oned is started, but it can be
executed manually as well. Something like this:

  nic_patch.sh [apply]

See the script for more.
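
In case the attachment doesn't survive the archive, the core of the idea is roughly the following (just a sketch: the file locations and the exact way BRIDGE appears in oned.conf are assumptions):

  #!/bin/bash
  # nic_patch.sh: copy the BRIDGE setting from oned.conf into kvm.rb's NETINTERFACE
  ONE_DIR=${ONE_LOCATION:-/srv/cloud/one}               # assumed install location
  ONE_CONF=$ONE_DIR/etc/oned.conf
  KVM_RB=$ONE_DIR/lib/im_probes/kvm.rb

  # pick up the first BRIDGE = "..." value from oned.conf
  BRIDGE=$(sed -n 's/^ *BRIDGE *= *"\{0,1\}\([^" ,]*\).*/\1/p' "$ONE_CONF" | head -1)

  # rewrite the NETINTERFACE constant so monitoring reads the right NIC
  [ -n "$BRIDGE" ] && sed -i "s/^\( *\)NETINTERFACE *=.*/\1NETINTERFACE = \"$BRIDGE\"/" "$KVM_RB"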

Cheers,
Gyula

Feladó: Javier Fontan [jfon...@gmail.com]
Küldve: 2010. június 24. 15:06
Címzett: Csom Gyula
Másolatot kap: Andrea Turli; users@lists.opennebula.org
Tárgy: Re: [one-users] onehost problem with kvm

Hello,

I made the same mistake. That path in the git repository is for the
development version, the foundation for 1.6, so it is normal that the
files are different. To check the file that comes in the tar.gz you
have to look at one-1.4 branch, or more specifically its release tag:
http://dev.opennebula.org/projects/opennebula/repository/revisions/release-1.4/entry/src/im_mad/kvm/kvm.rb

This bug is solved in one-1.4 branch but 1.4.1 tar.gz is unfortunately
not yet released.

Bye

2010/6/23 Csom Gyula :
> Hi,
>
> We met a similar problem and it turned out that the IM MAD responsible for
> KVM-specific monitoring is kvm.rb
> (http://dev.opennebula.org/projects/opennebula/repository/revisions/master/entry/src/im_mad/kvm/kvm.rb).
> Unfortunately the script found in the ONE v1.4 download was not in
> sync with the codebase, namely the virsh call at the beginning was erroneous...
>
> You may check whether kvm.rb of your installation contains the following line 
> at the
> begining:
>
>  nodeinfo_text = `virsh -c qemu:///system nodeinfo`
>
> If not, then you may simply correct it...
>
> Note that: kvm.rb is found in the ONE lib directory within the im_probes 
> subdir.
>
>
> You may also check whether NIC names used in the hosts are in synch with the 
> kvm.rb
> setting, otherwise NETRX, NETTX might not receive monitoring values at all. 
> kvm.rb looks
> for network monitoring values through the eth1 interface by default. If this 
> is not your case
> you might change the setting, eg.:
>
>  NETINTERFACE = "real nic name"
>
> Cheers,
> Gyula
>
> 
> From: users-boun...@lists.opennebula.org [users-boun...@lists.opennebula.org] on behalf of Andrea Turli [andrea.tu...@eng.it]
> Sent: 23 June 2010 17:04
> To: users@lists.opennebula.org
> Subject: [one-users] onehost problem with kvm
>
> Dear all,
>
> I'm starting today a new installation of OpenNebula 1.4 with the really 
> useful OpenNebula Express guide at 
> http://dev.opennebula.org/projects/opennebula-express/wiki
> Unfortunately there seems to be a communication problem between my front-end
> (Ubuntu 10.04 - KVM - NFS) and my node (Ubuntu 10.04 - KVM - NFS)
>
> Here are some outputs from the front-end:
>
> $ onehost list
>  ID NAME  RVM   TCPU   FCPU   ACPUTMEMFMEM STAT
>   2 grids21.eng.it  0  0  0100   0 7891240   on
>
>
> onead...@lisa:~$ onehost show 2
> HOST 2 INFORMATION
> ID: 2
> NAME  : grids21.eng.it
> STATE : MONITORED
> IM_MAD: im_kvm
> VM_MAD: vmm_kvm
> TM_MAD: tm_nfs
>
> HOST SHARES
> MAX MEM   : 0
> USED MEM (REAL)   : 547284
> USED MEM (ALLOCATED)  : 0
> MAX CPU   : 0
> USED CPU (REAL)   : 0
> USED CPU (ALLOCATED)  : 0
> RUNNING VMS   : 0
>
> MONITORING INFORMATION
> ARCH=x86_64
> CPUSPEED=
> FREECPU=0.0
> FREEMEMORY=7891364
> HOSTNAME=bart
> HYPERVISOR=kvm
> MODELNAME=Intel(R) Xeon(R) CPU5130  @ 2.00GHz
> NETRX=0
> NETTX=0
> TOTALCPU=
> TOTALMEMORY=
> USEDCPU=0.0
> USEDMEMORY=547284
>
> Here also my oned.conf:
>
> Wed Jun 23 16:54:52 2010 [InM][I]: --Mark--
> Wed Jun 23 16:54:52 2010 [InM][D]: Host 2 successfully monitored.
> Wed Jun 23 16:55:30 2010 [ReM][D]: HostInfo method invoked
> Wed Jun 23 16:55:36 2010 [ReM][D]: HostPoolInfo method invoked
> Wed Jun 23 16:55:52 2010 [InM][I]: Monitoring host grids21.eng.it (2)
> Wed Jun 23 16:56:17 2010 [InM][D]: Host 2 successfully monitored.
> Wed Jun 23 16:56:28 2010 [ReM][D]: HostInfo method invoked
> Wed Jun 23 16:57:22 2010 [InM][I]: Monitoring host grids21.eng.it (2)
> Wed Jun 23 16:57:47 2010 [InM][D]: Host 2 successfully monitored.
> Wed Jun 23 16:58:52 2010 [InM][I]: Monitoring host grids21.eng.it (2)
> Wed Jun 23 16:59:17 2010 [InM][D]: Host 2 successfully monitored.

Re: [one-users] Image upload within the core

2010-06-24 Thread Csom Gyula
Hi!

Thanks Todd for your response! It was really helpful... maybe not directly, but
it gave me a useful tip to look for cloud-oriented data transfer solutions.

I Googled the topic but so far haven't found much. After all here's the list:

* There's GridFTP you proposed.
* CDMI [1], which is a rather complex standard; I don't even know whether it supports
file uploads or not :))
* UDT [2], which is a brand new technology that meanwhile seems to be the fastest
solution among data transfer methods (Supercomputing Bandwidth Challenge Winner
in 2006, 2008 and 2009).
* And of course one can always choose well-known protocols like scp, sftp, pure
http, etc.

Cheers
Gyula

---

[1] CDMI: 
http://cloud-standards.org/wiki/index.php?title=SNIA_Cloud_Data_Management_Interface_%28CDMI%29
[2] UDT: http://en.wikipedia.org/wiki/UDP-based_Data_Transfer_Protocol, 
http://udt.sourceforge.net/

From: Todd Deshane [desha...@gmail.com]
Sent: 23 June 2010 23:53
To: Csom Gyula
Cc: users@lists.opennebula.org
Subject: Re: [one-users] Image upload within the core

Hi Gyula,

I can't answer the roadmap question, but just to let you know about a
couple related projects in case you were unaware of them.

For sending large amounts of data (such as disk images), Nimbus [1]
uses GridFTP [2].

Another really promising project for creating base images and
filesystem stacks is a project called Stacklet [3,4].

Hope that helps.

Thanks,
Todd

[1] http://www.nimbusproject.org/
[2] http://www.globus.org/toolkit/data/gridftp/
[3] http://stacklet.com/
[4] http://bitbucket.org/stacklet/stacklet/

On Wed, Jun 23, 2010 at 3:09 PM, Csom Gyula  wrote:
> Hi!
>
>
> Do you have plans to support image uploads? Is it on your 1.6 roadmap? I ask 
> this
> in order to coordinate our (extension) development with your roadmap. Some
> background:
>
>
> Currently we are in the process of specifying our golden image management
> service, e.g.:
>
> what features to provide exactly? how to implement it in a maintainable
> manner (e.g.
>
> starting with ONE v1.4, then smoothly migrating to v1.6)?
>
>
> We've found that one of the biggest challenges is the image upload 
> functionality.
>
> A clean solution would implement the upload service within the core. Meanwhile
> XML-RPC used by the RequestManager is inappropriate for large file uploads.
> XML-RPC provides nothing but base64 types for binary data, meanwhile base64
> coding-decoding of large files is a pain... Also xmlrpc-c used by ONE seems to
> represent binary data in byte arrays in memory which is unacceptable for 
> large,
> multi-GB files...
>
>
> So if we decide to implement the upload service we will likely choose an 
> alternative
> approach, eg. a separate channel for text data (XML-RPC) and another one for 
> large
> files (scp, http whatever seems most reasonable). Meanwhile we'd like to 
> adhere to
> the ONE mainline as much as possible.
>
>
> BTW: We are aware of the upcoming image repository feature [1]. To our
>
> understanding (after a quick code walk through) it provides image 
> metadata-only
>
> service.
> .
>
> Cheers,
> Gyula
>
>
> [1] http://dev.opennebula.org/issues/200
> ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>



--
Todd Deshane
http://todddeshane.net
http://runningxen.com
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] Image upload within the core

2010-06-23 Thread Csom Gyula
Hi!


Do you have plans to support image uploads? Is it on your 1.6 roadmap? I ask 
this
in order to coordinate our (extension) development with your roadmap. Some
background:


Currently we are in the process of specifying our golden image management service,
e.g.:

what features to provide exactly? how to implement it in a maintainable manner
(e.g. starting with ONE v1.4, then smoothly migrating to v1.6)?


We've found that one of the biggest challenges is the image upload functionality.

A clean solution would implement the upload service within the core. However,
the XML-RPC interface used by the RequestManager is inappropriate for large file uploads:
XML-RPC provides nothing but the base64 type for binary data, and base64
encoding/decoding of large files is a pain... Also, the xmlrpc-c library used by ONE seems to
hold binary data in in-memory byte arrays, which is unacceptable for large,
multi-GB files...


So if we decide to implement the upload service, we will likely choose an alternative
approach, e.g. a separate channel for text data (XML-RPC) and another one for large
files (scp, http, whatever seems most reasonable). Meanwhile we'd like to adhere to
the ONE mainline as much as possible.


BTW: We are aware of the upcoming image repository feature [1]. To our
understanding (after a quick code walk-through) it provides an image metadata-only
service.

Cheers,
Gyula


[1] http://dev.opennebula.org/issues/200
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] onehost problem with kvm

2010-06-23 Thread Csom Gyula
Hi,

We met a similar problem and it turned out that the IM MAD responsible for KVM-specific
monitoring is kvm.rb
(http://dev.opennebula.org/projects/opennebula/repository/revisions/master/entry/src/im_mad/kvm/kvm.rb).
Unfortunately the script found in the ONE v1.4 download was not in
sync with the codebase, namely the virsh call at the beginning was erroneous...

You may check whether kvm.rb in your installation contains the following line at the
beginning:

  nodeinfo_text = `virsh -c qemu:///system nodeinfo`

If not, then you may simply correct it...

Note that kvm.rb is found in the ONE lib directory within the im_probes subdir.


You may also check whether the NIC names used on the hosts are in sync with the kvm.rb
setting, otherwise NETRX and NETTX might not receive monitoring values at all. kvm.rb looks
for network monitoring values through the eth1 interface by default. If this is not your case
you might change the setting, e.g.:

  NETINTERFACE = "real nic name"

Cheers,
Gyula


From: users-boun...@lists.opennebula.org [users-boun...@lists.opennebula.org] on behalf of Andrea Turli [andrea.tu...@eng.it]
Sent: 23 June 2010 17:04
To: users@lists.opennebula.org
Subject: [one-users] onehost problem with kvm

Dear all,

I'm starting today a new installation of OpenNebula 1.4 with the really useful 
OpenNebula Express guide at 
http://dev.opennebula.org/projects/opennebula-express/wiki
Unfortunately there seems to be a communication problem between my front-end
(Ubuntu 10.04 - KVM - NFS) and my node (Ubuntu 10.04 - KVM - NFS)

Here are some outputs from the front-end:

$ onehost list
  ID NAME  RVM   TCPU   FCPU   ACPUTMEMFMEM STAT
   2 grids21.eng.it  0  0  0100   0 7891240   on


onead...@lisa:~$ onehost show 2
HOST 2 INFORMATION
ID: 2
NAME  : grids21.eng.it
STATE : MONITORED
IM_MAD: im_kvm
VM_MAD: vmm_kvm
TM_MAD: tm_nfs

HOST SHARES
MAX MEM   : 0
USED MEM (REAL)   : 547284
USED MEM (ALLOCATED)  : 0
MAX CPU   : 0
USED CPU (REAL)   : 0
USED CPU (ALLOCATED)  : 0
RUNNING VMS   : 0

MONITORING INFORMATION
ARCH=x86_64
CPUSPEED=
FREECPU=0.0
FREEMEMORY=7891364
HOSTNAME=bart
HYPERVISOR=kvm
MODELNAME=Intel(R) Xeon(R) CPU5130  @ 2.00GHz
NETRX=0
NETTX=0
TOTALCPU=
TOTALMEMORY=
USEDCPU=0.0
USEDMEMORY=547284

Here also my oned.conf:

Wed Jun 23 16:54:52 2010 [InM][I]: --Mark--
Wed Jun 23 16:54:52 2010 [InM][D]: Host 2 successfully monitored.
Wed Jun 23 16:55:30 2010 [ReM][D]: HostInfo method invoked
Wed Jun 23 16:55:36 2010 [ReM][D]: HostPoolInfo method invoked
Wed Jun 23 16:55:52 2010 [InM][I]: Monitoring host grids21.eng.it (2)
Wed Jun 23 16:56:17 2010 [InM][D]: Host 2 successfully monitored.
Wed Jun 23 16:56:28 2010 [ReM][D]: HostInfo method invoked
Wed Jun 23 16:57:22 2010 [InM][I]: Monitoring host grids21.eng.it (2)
Wed Jun 23 16:57:47 2010 [InM][D]: Host 2 successfully monitored.
Wed Jun 23 16:58:52 2010 [InM][I]: Monitoring host grids21.eng.it (2)
Wed Jun 23 16:59:17 2010 [InM][D]: Host 2 successfully monitored.
Wed Jun 23 17:00:22 2010 [InM][I]: Monitoring host grids21.eng.it (2)
Wed Jun 23 17:00:47 2010 [InM][D]: Host 2 successfully monitored.
Wed Jun 23 17:01:52 2010 [InM][I]: Monitoring host grids21.eng.it (2)


I think there is a problem in retrieving the CPUSPEED, TOTALCPU and TOTALMEMORY 
and for this reason I cannot run any VM.
It seems really similar to this post
http://lists.opennebula.org/pipermail/users-opennebula.org/2010-February/001453.html
but I cannot find a solution there.

Thank you in advance for your help and suggestions,
Andrea




--
Andrea Turli
Ricercatore
Direzione Ricerca e Innovazione
andrea.tu...@eng.it

Engineering Ingegneria Informatica spa
Via Riccardo Morandi, 32 00148 Roma (RM)
Tel. +39 06 8307 4710
Fax +39 06 8307 4200
www.eng.it


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Cannot suspend or migrate/livemigrate a VM

2010-06-21 Thread Csom Gyula
Hi Daniele,

we are just testing live migration as well, with one of my colleagues. So far we
have discovered the following that might help you:

The built-in ONE KVM driver [1] uses the ssh libvirt transport [2], that is, ONE calls
virsh passing a qemu+ssh URI, something like this:

virsh -c qemu:///system migrate --live deployment_id qemu+ssh://dest_host/system

So one reason for the B->A failure could be the lack of SSH credentials on host B.
You might test whether oneadmin can ssh into host A from host B. If not, you
may copy oneadmin's private key to host B or configure an ssh agent.
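
A quick way to check (run as oneadmin on host B; host_a stands for host A's name):

  ssh oneadmin@host_a hostname   # should print host A's hostname without prompting for a password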

If the above doesn't work you may debug libvirt [3] to see what's happening
behind the scenes. You may issue something like this on both hosts A and B:

sudo su -c "export LIBVIRT_DEBUG=1; export 
LIBVIRT_LOG_OUTPUTS=\"1:file:/tmp/libvirt.log\"; /etc/init.d/libvirt-bin 
restart"

Then check libvirt logs...

Cheers,
Gyula

---

[1] ONE KVM driver:
http://dev.opennebula.org/projects/opennebula/repository/revisions/master/entry/src/vmm_mad/kvm/one_vmm_kvm.rb

[2] libvirt transports: http://libvirt.org/remote.html#Remote_transports

[3] libvirt logging: http://libvirt.org/logging.html


From: users-boun...@lists.opennebula.org [users-boun...@lists.opennebula.org] on behalf of Daniele Fetoni [daniele.fet...@hotmail.it]
Sent: 21 June 2010 14:53
To: tin...@fdi.ucm.es; users@lists.opennebula.org
Subject: Re: [one-users] Cannot suspend or migrate/livemigrate a VM

Hi Tino, and thanks for reply

I am still suffering from this... and it's exactly as you said: from host A to B
everything is OK, but if I try to move the VM back from B to A, (live)migration
fails.
Sadly, I cannot check whether I can migrate to C... we are using just a few computers,
two hosts and a PC without VT as the frontend.
As soon as I can test this (I need another machine with VT, we are using KVM)
I'll tell you.
Thanks again!!!

Daniele

> From: tin...@fdi.ucm.es
> Date: Mon, 21 Jun 2010 13:16:09 +0200
> Subject: Re: [one-users] Cannot suspend or migrate/livemigrate a VM
> To: daniele.fet...@hotmail.it
> CC: users@lists.opennebula.org
>
> Hi Daniele,
>
> Are you still suffering this? If I get you correctly, you cannot
> (live)migrate one VM more than once.
>
> * So you migrate one VM from hostA to hostB, that goes allright.
> * You migrate from hostB to hostA, and that fails
> * Can you migrate from hostB to hostC?
>
> Regards,
>
> -Tino
>
> --
> Constantino Vázquez, Grid & Virtualization Technology
> Engineer/Researcher: http://www.dsa-research.org/tinova
> DSA Research Group: http://dsa-research.org
> Globus GridWay Metascheduler: http://www.GridWay.org
> OpenNebula Virtual Infrastructure Engine: http://www.OpenNebula.org
>
>
>
> On Fri, Jun 18, 2010 at 4:06 PM, Daniele Fetoni
>  wrote:
> > Maybe, I have solved my issue, but I still have one question.
> > Now I can migrate/livemigrate or suspend VMs, but I can do one of these
> > operations just ONE TIME: if I migrate a VM, then re-migrate it to the first
> > host, I get an error on qemu:///session, because qemu doesn't find the VM
> > domain.
> >
> > I wonder if this is normal, or I have still a problem.
> > If so, I'll report specific logs.
> >
> > Thanks again
> >
> > Daniele Fetoni
> >
> > 
> > From: daniele.fet...@hotmail.it
> > To: users@lists.opennebula.org
> > Date: Fri, 18 Jun 2010 12:38:08 +0200
> > Subject: [one-users] Cannot suspend or migrate/livemigrate a VM
> >
> >
> > Hi
> >
> > I'm using opennebula 1.4, qemu-kvm and a nfs shared folder.
> > When I try to suspend a VM previously created and working, or even if I try
> > to migrate/livemigrate it, I obtain always the same error:
> >
> > Fri Jun 18 12:23:03 2010 [LCM][I]: New VM state is BOOT
> > Fri Jun 18 12:23:03 2010 [VMM][I]: Generating deployment file:
> > /srv/cloud/one/var/0/deployment.0
> > Fri Jun 18 12:23:04 2010 [LCM][I]: New VM state is RUNNING
> > Fri Jun 18 12:23:22 2010 [LCM][I]: New VM state is SAVE_SUSPEND
> > Fri Jun 18 12:23:23 2010 [VMM][I]: Command execution fail: 'touch
> > /srv/cloud/one/var/0/images/checkpoint;virsh --connect qemu:///system save
> > one-0 /srv/cloud/one/var/0/images/checkpoint'
> > Fri Jun 18 12:23:23 2010 [VMM][I]: STDERR follows.
> > Fri Jun 18 12:23:23 2010 [VMM][I]: error: Failed to save domain one-0 to
> > /srv/cloud/one/var/0/images/checkpoint
> > Fri Jun 18 12:23:23 2010 [VMM][I]: error: operation failed: failed to create
> > '/srv/cloud/one/var/0/images/checkpoint'
> > Fri Jun 18 12:23:23 2010 [VMM][I]: ExitCode: 1
> > Fri Jun 18 12:23:23 2010 [VMM][E]: Error saving VM state, -
> > Fri Jun 18 12:23:23 2010 [LCM][I]: Fail to save VM state. Assuming that the
> > VM is still RUNNING (will poll VM).
> > Fri Jun 18 12:23:23 2010 [VMM][I]: VM running but new state from monitor is
> > PAUSED.
> > Fri Jun 18 12:23:23 2010 [LCM][I]: VM is suspended.
> > Fri Jun 18 12:23:23 2010 [DiM][I]: New VM state is SUSPENDED
> >
> > I have seen in archives this