Re: [one-users] LDAP Authentication - default driver unavailable

2012-02-10 Thread Shantanu Pavgi


On Feb 10, 2012, at 12:17 PM, Shantanu Pavgi wrote:


I am trying to configure LDAP authentication in OpenNebula 3.2.1. I have 
followed the steps mentioned in the documentation - 
http://opennebula.org/documentation:rel3.2:ldap . I have created a symlink 
which points the default directory to the ldap directory, as mentioned in the 
documentation. However, I am getting the following error in the logs:

{{{

Fri Feb 10 11:52:42 2012 [ReM][D]: VirtualMachinePoolInfo method invoked
Fri Feb 10 11:52:42 2012 [AuM][D]: Message received: AUTHENTICATE FAILURE 1 
Authentication driver 'default' not available

}}}

The AUTH_MAD is configured as follows:
{{{
AUTH_MAD = [
executable = "one_auth_mad",
arguments  = "--authn ssh,x509,ldap,server_cipher,server_x509"
#arguments  = "--authz quota --authn ssh,x509,ldap,server_cipher,server_x509"
]
}}}

Any ideas on what the problem might be here? Please let me know if I should 
post any additional details.



I added the 'default' driver type to the AUTH_MAD arguments and it is making 
the LDAP call now. I am seeing new errors, but the "driver 'default' not 
available" error is resolved.
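
For the archives, the resulting AUTH_MAD section looks roughly like this (the
exact argument order may differ; the relevant part is the added 'default'
entry):

{{{
AUTH_MAD = [
executable = "one_auth_mad",
arguments  = "--authn default,ssh,x509,ldap,server_cipher,server_x509"
]
}}}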

--
Shantanu
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] Disk order meaningful ?

2012-02-10 Thread Gian Uberto Lauri
Hello gentlemen!

It seems that in OpenNebula 2.2 these two snippets of a VM template are not
equivalent

-[1]
DISK = [
TYPE=fs,
SIZE = 128,
FORMAT = "ext3",
TARGET = "hdc"]
DISK = [
SOURCE = "file:///home/oneadmin/one-images/swap.qcow",
TYPE=swap,
SIZE = 200,
TARGET = "hdd"]
DISK = [
IMAGE = "venuscdebianbase", 
TARGET = "hda"]


-[2]
DISK = [
IMAGE = "venuscdebianbase", 
TARGET = "hda"]
DISK = [
TYPE=fs,
SIZE = 128,
FORMAT = "ext3",
TARGET = "hdc"]
DISK = [
SOURCE = "file:///home/oneadmin/one-images/swap.qcow",
TYPE=swap,
SIZE = 200,
TARGET = "hdd"]


Both declare a swap area, a file system image and a registered OS image.

But with the declarations ordered as in [1], the machine does not boot,
since it seems to try to boot from the wrong disk.
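
Is the expected workaround to pin the boot device explicitly in the OS section,
e.g. something like the following? (Untested on my side, so treat the attribute
name as a guess.)

OS = [
  BOOT = "hd" ]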

Does OpenNebula 3.x have the same behaviour?

--
ing. Gian Uberto Lauri
Ricercatore / Researcher
Laboratorio Ricerca e Sviluppo / Research & Development Lab.
Area Calcolo Distribuito / Distributed Computation Area

gianuberto.la...@eng.it

Engineering Ingegneria Informatica spa
Corso Stati Uniti 23/C, 35127 Padova (PD) 

Tel. +39-049.8283.571 | main(){printf(&unix["\021%six\012\0"], 
Fax  +39-049.8283.569 |(unix)["have"]+"fun"-0x60);}   
Skype: gian.uberto.lauri  |  David Korn, AT&T Bell Labs 

http://www.eng.it |  ioccc best One Liner, 1987 

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] LDAP Authentication - default driver unavailable

2012-02-10 Thread Shantanu Pavgi

I am trying to configure LDAP authentication in OpenNebula 3.2.1. I have 
followed the steps mentioned in the documentation - 
http://opennebula.org/documentation:rel3.2:ldap . I have created a symlink 
which points the default directory to the ldap directory, as mentioned in the 
documentation. However, I am getting the following error in the logs:

{{{

Fri Feb 10 11:52:42 2012 [ReM][D]: VirtualMachinePoolInfo method invoked
Fri Feb 10 11:52:42 2012 [AuM][D]: Message received: AUTHENTICATE FAILURE 1 
Authentication driver 'default' not available

}}}

The AUTH_MAD is configured as follows:
{{{
AUTH_MAD = [
executable = "one_auth_mad",
arguments  = "--authn ssh,x509,ldap,server_cipher,server_x509"
#arguments  = "--authz quota --authn ssh,x509,ldap,server_cipher,server_x509"
]
}}}

Any ideas on what the problem might be here? Please let me know if I should 
post any additional details.

--
Thanks,
Shantanu
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] libvirt not allowing access to /dev/kvm

2012-02-10 Thread Jaime Melis
Hi Michael,

to figure out what's wrong, can you send us:
$ grep -Ev '^(#|$)' /etc/libvirt/qemu.conf
$ grep -Ev '^(#|$)' /etc/libvirt/libvirtd.conf

I'm aware you already sent part of your qemu.conf... but I'd like to
know if there's anything else besides what you pasted.

cheers,
Jaime


On Wed, Feb 8, 2012 at 7:53 PM, Michael Brown  wrote:
> I think I've finally nailed the root cause of my troubles. I posted this
> on http://serverfault.com/q/358118/2101 but you guys may be able to
> answer with more authority:
>
> I have a fresh Open Nebula 3.2.1 installation which I'm trying to get
> working and manage some freshly-installed debian squeeze kvm hosts.
>
> My problem is that when Open Nebula deploys VMs the KVM process does not
> have access to the /dev/kvm device on the host.
>
> I've set up everything according to documentation:
> root@onhost1:~# ls -al /dev/kvm
> crw-rw 1 root kvm 10, 232 Feb 8 11:24 /dev/kvm
>
> root@onhost1:~# id oneadmin
> uid=500(oneadmin) gid=500(oneadmin)
> groups=500(oneadmin),106(kvm),108(libvirt)
>
> libvirt/qemu.conf has:
> user = "oneadmin"
> group = "oneadmin"
>
> When libvirt creates VMs they do not have any of the secondary groups
> set so the process doesn't have access to /dev/kvm via file permissions.
> OK, fair enough, though the Open Nebula documentation seems to indicate
> it should be set up this way.
>
> I've tried mounting cgroups to try and resolve this problem. After I do
> so, the kvm process has the following cgroup entry:
>
> 1:devices,cpu:/libvirt/qemu/one-29
>
> corresponding to:
>
> /dev/cgroup/libvirt/qemu/one-29/devices.list:c 10:232 rwm
>
> My limited understanding of how cgroups work suggests that this ought to
> allow the process to access /dev/kvm, but no go.
>
> I can make things work by adding an ACL entry (setfacl -m u:oneadmin:rw
> /dev/kvm) but that doesn't Seem Right. Shouldn't Open Nebula/libvirt be
> handling this?
>
> * What are the Correct Changes to make?
> * Should the documentation be changed?
> * Have I missed something?
>
>
> --
> Michael Brown               | `One of the main causes of the fall of
> Systems Consultant          | the Roman Empire was that, lacking zero,
> Net Direct Inc.             | they had no way to indicate successful
> ☎: +1 519 883 1172 x5106    | termination of their C programs.' - Firth
>
>
> ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org



-- 
Jaime Melis
Project Engineer
OpenNebula - The Open Source Toolkit for Cloud Computing
www.OpenNebula.org | jme...@opennebula.org
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] VM groups & deadlocks

2012-02-10 Thread Carlos Martín Sánchez
Hi,

That functionality is not supported. VMs can be put 'on hold' with the
'onevm hold' command. You could write a higher-layer application to manage
these groups of VMs using this hold state.
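
A minimal sketch of such a wrapper, assuming the VMs of a test group sit in the
hold state until there is capacity for the whole group (the IDs and the
capacity check are placeholders):

GROUP_VMS="41 42 43"          # the 3 VMs needed by one test run
for id in $GROUP_VMS; do
    onevm hold $id            # keep them away from the scheduler
done
# ... wait until there is free capacity for all 3 VMs at once ...
for id in $GROUP_VMS; do
    onevm release $id         # release the whole group together
done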

Take a look at the OpenNebula Service Manager [1] ecosystem project. It
was developed for OpenNebula 2.0, and I can't really tell if it is
incompatible with the 3.x series. I'm guessing that the interaction with
OpenNebula is limited to basic VM operations such as deploy and shutdown,
so it might be worth giving it a try.

Regards.

[1] http://opennebula.org/software:ecosystem:oneservice
--
Carlos Martín, MSc
Project Engineer
OpenNebula - The Open Source Toolkit for Data Center Virtualization
www.OpenNebula.org | cmar...@opennebula.org |
@OpenNebula


2012/1/26 Tomáš Nechutný 

> Hello,
>
> I have a problem in mind. Suppose I have an OpenNebula setup with
> capacity for 4 VMs of a certain type. I also have a separate machine
> which runs some integration tests at random times, with tests T1 and T2
> both requiring 3 VMs. When both tests start at the same time, a deadlock
> can occur. Is it possible to somehow schedule groups of VMs so that such
> deadlocks can't occur? Thank you.
> ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] OpenNebula 3.2 need memory and CPU over-commit.

2012-02-10 Thread Carlos Martín Sánchez
Hi Maxim,

CPU overcommitment can be set up with the CPU / VCPU attributes. Let's say
you have an 8-core host, and you want to run a max. of 16 VMs, each with 2
virtual CPUs:

CPU = 0.5
VCPU = 2


Unfortunately, OpenNebula doesn't have any equivalent for memory
overcommitment. Maybe we should add a new attribute, VMEMORY, and use the
same idea as with VCPU:

MEMORY: requested host memory
VMEMORY: amount of memory that the VM guest will see
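
For example, under this (hypothetical, not yet implemented) scheme a VM
template could request:

MEMORY  = 2048    # reserved on the host
VMEMORY = 4096    # seen by the guest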


We'd like to hear your thoughts. Would that be enough for your use-case, or
do you have any other suggestion to manage ballooning?


And now, how to force the memory overcommitment: you can edit the host
monitoring script to report double the real memory. Multiply $total_memory by 2
in /var/lib/one/remotes/im/kvm.d/kvm.rb, then execute 'onehost sync' as
oneadmin on the front-end. OpenNebula will update the poll script in the next
host monitoring cycle.
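
A hedged sketch of the procedure (the exact assignment in kvm.rb may differ
between versions):

# On the front-end, as oneadmin:
# 1. Edit /var/lib/one/remotes/im/kvm.d/kvm.rb and double the reported total,
#    e.g. turn the line that sets $total_memory into:
#      $total_memory = $total_memory.to_i * 2
# 2. Tell OpenNebula to refresh the probes on the hosts (applied in the next
#    monitoring cycle):
onehost sync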


Cheers
--
Carlos Martín, MSc
Project Engineer
OpenNebula - The Open Source Toolkit for Data Center Virtualization
www.OpenNebula.org | cmar...@opennebula.org |
@OpenNebula


On Tue, Feb 7, 2012 at 4:25 AM, Maxim Mikheev  wrote:

>  Hi Steven,
>
> Thank you for your answer. Unfortunately for me, OpenNebula 3.2.1 does have
> such checks.
> Parameters of a host:
> USEDCPU = 230.4
> CPUSPEED = 1400
> NETTX = 168512869
> HYPERVISOR = kvm
> TOTALCPU = 6400
> ARCH = x86_64
> NETRX = 4634446500
> FREEMEMORY = 164233748
> FREECPU = 6169.6
> TOTALMEMORY = 264610312
> MODELNAME = AMD Opteron(TM) Processor 6272
> USEDMEMORY = 100376564
> HOSTNAME = s0
>
> The system is running 4 VMs with a total allocation of 64 CPUs, and a VM
> with a template of CPU=4 and MEMORY=5000 never starts (it stays in the
> Pending state). When I turn off any of the other running VMs, this VM
> immediately starts.
>
> I have seen some old e-mails (2010-2011) which requested such behavior, and
> it was implemented. But I couldn't find out how to adjust it. I would like to
> turn this feature off, or to make 2x (or other) overcommitment possible.
> Can anyone suggest how to change the rules?
>
> Regards,
>Max
>
> On 02/06/2012 09:55 PM, Steven C Timm wrote:
>
> Hi Max--
> We are using OpenNebula 2.0 here, which does not have any protection against
> either CPU or memory over-commit.
> When you are running OpenNebula with KVM, at least with OpenNebula 2.0, it
> counts not the total amount of memory that the user requested but the amount
> currently being used by the VM, which is usually less. I don't have direct
> experience with OpenNebula 3.2, but the same must be possible there.
>
> -Original Message-
> From: users-boun...@lists.opennebula.org
> [mailto:users-boun...@lists.opennebula.org] On Behalf Of Maxim Mikheev
> Sent: Monday, February 06, 2012 7:59 PM
> To: users@lists.opennebula.org
>
> Subject: [one-users] OpenNebula 3.2 need memory and CPU over-commit.
>
> Hi Everyone,
>
> We are using OpenNebula on Ubuntu 11.10. Our VMs usually never use the whole
> system at the same time, but when they are active they usually require a huge
> amount of computational power.
>
> OpenNebula has protection against overcommitment of memory and CPU. I have a
> question: how can I manually set up overcommitment by a factor of 2 for CPU
> and memory?
> A significant amount of memory is helpful for our calculations, but all this
> extra memory is cache. It means that nothing bad will happen if the cache size
> is decreased for a VM. The cache can be dynamically scaled down by the balloon
> driver, which is the default for libvirt and KVM in Ubuntu.
> So I am absolutely sure that I would like to over-commit memory.
> There is no problem for the processor either. Our calculations usually take
> several days, and an extra day for one VM does not change anything.
> I have already used this over-commitment scheme on Proxmox and it works well.
>
> Regards,
>  Max
> ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
>
> ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
>
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] Next IRC Session, 21 Feb

2012-02-10 Thread Daniel Molina
Dear community,

I would like to let you know that the next OpenNebula IRC Session will
take place on Tuesday, 21 Feb 2012 16:00 UTC on the #opennebula
channel (Freenode). In this session we will discuss:

     * Questions about our latest stable release 3.2
       * [1] OpenNebula 3.2 features
       * [2] OpenNebula 3.2.1 features
     * Upcoming release
       * [3] OpenNebula 3.4 Sprint 0 new features
       * [4] OpenNebula 3.4 new features

I hope you can join us!

More info: http://www.opennebula.org/community:irc?&#irc_sessions_scheduling

[1] http://www.opennebula.org/software:rnotes:rn-rel3.2
[2] http://www.opennebula.org/software:rnotes:rn-rel3.2.1
[3] http://dev.opennebula.org/rb/queries/1?sprint_id=38
[4] http://dev.opennebula.org/rb/queries/1?sprint_id=29

Cheers

--
Daniel Molina
Project Engineer
OpenNebula - The Open Source Toolkit for Data Center Virtualization
www.OpenNebula.org | dmol...@opennebula.org | @OpenNebula
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] (thanks) Re: experiences with distributed FS?

2012-02-10 Thread Hans-Joachim Ehlers
Just a few remarks on performance (all IMHO):

1) Do not use any kind of asynchronous writes. In case of a serious problem your 
data might not contain what you expect.
2) If possible, use jumbo frames. Handling roughly 100K interrupts (1500-byte 
frames) versus 10K interrupts (9000-byte frames) on a 1GbE link makes quite a 
difference for the CPU and the network adapter (see the example after these 
remarks).
3) Since all data eventually goes down to disk, the disk subsystem will determine 
your overall speed. Use SSDs for IOPS-intensive applications.*

*
a) 4 x 2TB SATA disks can handle about 4 * 100 IOPS, which makes 400 IOPS. In 
the case of 4K writes, your total reachable write speed would be 1.6 MB/s. Now 
think what happens if a VM starts swapping.
b) Slow disk systems cannot be made faster by higher software layers, except in 
cases where you do not care about your data (async writes, for example).
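
As a concrete example for remark 2), jumbo frames are typically enabled by
raising the interface MTU on each host (the interface name is just an example,
and the switches in between must support the larger MTU as well):

ip link set dev eth0 mtu 9000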

@ Michael (XtreemFS)
AFAIK GPFS uses VirtFS (v9fs). Maybe it's an option for XtreemFS as well.
I found some information about GPFS/KVM and VIRTIO on scribd (sic): 
http://www.scribd.com/darkcompanion/d/54839002-xVI05-Open-Source-Virtualization-With-KVM
Maybe it's useful for you as well.


Have a nice weekend
Hajo


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Open vSwitch port of VM instance

2012-02-10 Thread Carlos Martín Sánchez
Hi,

You can return any extra attribute in the polling script [1].
Modify /var/lib/one/remotes/vmm/kvm/poll, and then execute 'onehost sync'.
This will update the contents of /var/lib/one/remotes in the next host
monitoring cycle.
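
As an illustration, the poll script could run something like the following for
each VM network interface and report the result as an extra attribute (the
interface name 'vnet0' is only an example; mapping VM NICs to interface names
is left to your setup):

$ ovs-vsctl get Interface vnet0 ofport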

Cheers

[1] http://opennebula.org/documentation:rel3.2:devel-vmm#poll_information
--
Carlos Martín, MSc
Project Engineer
OpenNebula - The Open Source Toolkit for Data Center Virtualization
www.OpenNebula.org | cmar...@opennebula.org |
@OpenNebula


On Thu, Feb 9, 2012 at 10:50 PM, Greg Stabler  wrote:

> Hi,
>
> I'm trying to get, from within OpenNebula, the Open vSwitch port number that
> my virtual machine is attached to on a bridge. Ideally, I'd like to
> have OpenNebula store that information in the Network template of the
> virtual machine so I could retrieve it later. However, I'm not entirely
> sure what code to modify to make that happen. I was thinking somewhere in
> the kvm deploy or polling code.
>
> I'm using OpenNebula 3.0 and I have the Open vSwitch hook enabled. Any
> help or suggestions in the right direction would be greatly appreciated.
>
> Thank you,
> Greg
>
> ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
>
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] (thanks) Re: experiences with distributed FS?

2012-02-10 Thread Michael Berlin

Hi,

On 02/09/2012 01:50 PM, richard -rw- weinberger wrote:

On Thu, Feb 9, 2012 at 1:38 PM, João Pagaime  wrote:

here's a short summary by FS:
• (RW)… you are using FUSE.


No, I'm not using FUSE.
My OpenNebula cluster is built on top of ocfs2.


Richard meant that the XtreemFS client implementation does use FUSE just 
as many other distributed file systems do.


Regarding the general FUSE performance discussion:

On 02/09/2012 11:49 AM, richard -rw- weinberger wrote:
[...]
> Hmm, you are using FUSE.
> Performance measurements would be really nice to have.
>

Any suggestions on how to conduct OpenNebula-specific measurements are welcome.

On our mailing list I wrote about the write throughput performance of 
XtreemFS: http://groups.google.com/group/xtreemfs/msg/f5a70a1780d9f4f9


Write throughput is usually limited by the execution time of the write 
operation, since the application on top does not issue the next write() before 
the previous one has returned. Therefore we also allow asynchronous writes, 
which acknowledge a number of write()s to the application before they are 
actually confirmed by the storage server. To be on the safe side in that case, 
you have to execute fsync() and evaluate the return value of close(). As 
written in the post mentioned above, with asynchronous writes you are able to 
almost max out a GbE link (up to 100 MB/s write speed), but it also incurs a 
lot of overhead: I saw up to 70% CPU usage for the XtreemFS client during that 
test.
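
For throughput measurements that account for asynchronous writes, the test
should force the final flush, e.g. with dd's conv=fsync (file name, size and
mount point are placeholders):

$ dd if=/dev/zero of=/mnt/xtreemfs/testfile bs=1M count=1024 conv=fsync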


> Writing fancy file systems using FUSE is easy. Making them fast and scalable
> is a damn hard job and often impossible.

I fully agree that kernel-based file systems in Linux will always have a 
lower overhead than their FUSE counterparts. But the overhead is mainly 
caused by the structure of the Linux kernel: all data read and written 
by FUSE file systems has to be copied between kernel space and user 
space. If this were optimized, the overhead would be much less significant.


In general, the overhead of a FUSE implementation is the cost of the 
"fanciness". If a required feature is only available in a FUSE-based 
file system, you would rather use that than wait for a kernel implementation 
that never appears. Writing a distributed file system in the kernel is a damn 
hard job and often impossible. Therefore FUSE file systems are an alternative.


The scalability (of a distributed file system) is independent of the 
choice of a kernel or userspace implementation. That's a matter of the 
design of the file system.


In the end it's up to the user. If there's a kernel-based file system 
available which suits your needs, then you can use that (as in your 
case). If not, you may be willing to pay the price of the overhead, since 
the FUSE-based file system has a lot more to offer.


Best regards,
Michael
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org