Re: [one-users] oned cannot start after upgrading to opennebula 3.6

2013-09-11 Thread Lukman Fikri
So, am I supposed to uninstall the 3.4 version first using apt-get remove?
Well, I read in the documentation that if we used the 3.4 version, we were
unable to upgrade it directly to 4.2,
so I upgraded it to the 3.6 version first.

Thank you



From: cmar...@opennebula.org
Date: Wed, 11 Sep 2013 18:00:37 +0200
Subject: Re: [one-users] oned cannot start after upgrading to opennebula 3.6
To: lukman.fi...@outlook.com
CC: users@lists.opennebula.org

Hi,
It could be several things. You said that you executed install.sh as oneadmin, so
I assume you did a self-contained installation, which means that you probably
have both the 3.4 and 3.6 versions installed.
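As a quick sketch of recovering from the "ONE_AUTH file not present" error above: the file just holds the oneadmin credentials, one `username:password` line. The commands below are illustrative only (they write to a scratch directory instead of the real default, $HOME/.one/one_auth, so they are safe to try anywhere):

```shell
# Illustrative sketch: recreate the ONE_AUTH credentials file.
# The real default location is $HOME/.one/one_auth; a scratch directory
# is used here so the commands are safe to run anywhere.
AUTH_DIR=$(mktemp -d)
export ONE_AUTH="$AUTH_DIR/one_auth"
echo "oneadmin:password" > "$ONE_AUTH"   # format: username:password
cat "$ONE_AUTH"
```

With a self-contained install, also double-check which `oned` binary is first in the oneadmin user's PATH, since both versions may provide one.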


Is there any reason to use OpenNebula 3.6 instead of the last stable, 4.2?
Regards

--
Join us at OpenNebulaConf2013 in Berlin, 24-26 September, 2013


--
Carlos Martín, MSc
Project Engineer
OpenNebula - The Open-source Solution for Data Center Virtualization
www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula




On Wed, Sep 11, 2013 at 9:41 AM, Lukman Fikri  wrote:





Hello,

Previously, I already had OpenNebula 3.4.1 installed on my machine.
I wanted to upgrade it to the 3.6 version, so I downloaded the tarball from
http://downloads.opennebula.org/

I extracted it to a certain user's home directory (not the oneadmin home
directory), then executed ./install.sh as the oneadmin user.
However, I cannot start the oned process now:

oneadmin@cloud1:/home/lukmanf$ one start


Could not open log file
Could not open log file


oned failed to start
scheduler failed to start


oneadmin@cloud1:/home/lukmanf$ onehost list
ONE_AUTH file not present




Could you tell me what went wrong, or what mistake might have happened?
Thank you in advance,

-
Lukman Fikri
  

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] How to connect to VMs

2013-09-11 Thread Kenneth
 

What network model are you using when you create virtual networks -
Default, 802.1Q, ebtables, Open vSwitch, etc.?

Try Default first, then define the bridge interface of your OpenNebula
hosts, such as br0 or br1. You should already know that the network
interfaces of the OpenNebula host nodes are bridged.

On the virtual network, define your IP range of 192.168.42.x IPs. Assign
this virtual network to your VMs. (You may need to configure your VM to
use this address.)
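For instance, a minimal ranged virtual network template along those lines might look like the sketch below. The bridge name br0 and the address range are illustrative assumptions, not values from this thread; check the syntax against the documentation of your OpenNebula version:

```text
NAME            = "lan42"
TYPE            = RANGED
BRIDGE          = "br0"
NETWORK_ADDRESS = "192.168.42.128"
NETWORK_SIZE    = "16"
```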

On 09/12/2013 07:38 AM, Johannes Schuster wrote:

> Hi,
>
> OpenNebula is running without problems.
> I created a virtual network (192.168.20.1 - 192.168.20.5) and 2 VMs
> (using the "ttylinux - kvm" from the OpenNebula Marketplace). I can
> access them through VNC in Sunstone and they are up and running. They
> have IPs (192.168.20.1 and 192.168.20.2) and I can successfully ping
> among themselves.
>
> But now I want to access them from the outside. I have a frontend and 2
> hosts. They have 192.168.42.x IPs. Regardless from which computer I try
> I get the error "Destination Host Unreachable" (ping 192.168.20.1).
>
> How do I connect to the VMs?
>
> Thank you and best regards,
>
> Johannes
> ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org

--

Thanks,
Kenneth
 

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] domU.cfg and deployment.0

2013-09-11 Thread kenny . kenny
Hello,
I'm having a problem with deployment.0.
 
I can create a domU via xm create with this domU.cfg:
 
bootloader = '/usr/bin/pygrub'
vcpus   = '1'
memory  = '512'
root    = '/dev/xvda1 ro'
disk    = [ 'file:/var/lib/one//datastores/0/89/disk.0,xvda1,w', ]
#name   = 'ubuntu'
# Networking
dhcp    = 'dhcp'
vif     = [ 'mac=00:16:3E:8F:D9:05' ]
This is my OpenNebula template:
 
CLUSTER_100="100"
CPU="1"
DISK=[DRIVER="file:",IMAGE_ID="16",READONLY="no",TARGET="xvda1"]
GRAPHICS=[LISTEN="0.0.0.0",PASSWD="123456",TYPE="VNC"]
MEMORY="256"
NIC=[NETWORK_ID="1"]
OS=[BOOT="xvda1",BOOTLOADER="/usr/lib/xen/bin/pygrub",ROOT="/dev/xvda1 ro"]
REQUIREMENTS="CLUSTER_ID=\"100\""
VCPU="1"
 
and this is my deployment.0
name = 'one-89'
#O CPU_CREDITS = 256
memory = '256'
vcpus = '1'
bootloader = "/usr/lib/xen/bin/pygrub"
disk = [ 'file:/var/lib/one//datastores/0/89/disk.0,xvda1,w', ]
vif = [ ' mac=02:00:0a:00:03:6c,ip=10.40.3.108,bridge=virbr0', ]
vfb = ['type=vnc,vnclisten=0.0.0.0,vncdisplay=89,vncpasswd=123456']
My domU doesn't load when I start it through OpenNebula (I get an initramfs error).
The deployment.0 is just missing this line: root = '/dev/xvda1 ro'. How can I pass
it to OpenNebula via the template?
thanks.
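One possibility worth checking (this is a hedged suggestion, assuming the installed OpenNebula version supports the RAW attribute, which passes text through verbatim to the generated deployment file for the hypervisor; confirm against your version's template documentation):

```text
RAW = [ TYPE = "xen", DATA = "root = '/dev/xvda1 ro'" ]
```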
 
 
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Xen Paravirtualized error - cfg and template

2013-09-11 Thread kenny . kenny
I solved this problem; I just put sda1 in the TARGET parameter.
Do you know what I need to do to pass this parameter, root = '/dev/xvda1 ro',
through OpenNebula?
 

From: kenny.ke...@bol.com.br
Sent: Monday, 9 September 2013 14:06
To: Ruben S. Montero < rsmont...@opennebula.org >
Subject: Re: [one-users] Xen Paravirtualized error - cfg and template
Thanks Ruben. 
Do you know what I need to do in the template file to have sda1 and sda2 (like in my xen.cfg) in the deployment.0?
 
 

On 09/09/2013 13:22, Ruben S. Montero < rsmont...@opennebula.org > wrote:
That's the context device; either add a target for it (as part of the CONTEXT definition) or remove the context section: http://opennebula.org/documentation:rel4.2:template#context_section
 
CONTEXT=[NETWORK="YES",SSH_PUBLIC_KEY="$USER[SSH_PUBLIC_KEY]"]
 
Cheers


On Mon, Sep 9, 2013 at 6:19 PM,  wrote:

Thanks for the reply. I took a look at the deployment.0 and I see this:
name = 'one-45'
#O CPU_CREDITS = 256
memory = '256'
vcpus = '1'
bootloader = "/usr/lib/xen/bin/pygrub"
disk = [
    'file:/var/lib/one//datastores/0/45/disk.0,sda,w',
    'file:/var/lib/one//datastores/0/45/disk.1,sdb,w',
    'tap:aio:/var/lib/one//datastores/0/45/disk.2,hda,r',
]
vif = [ ' mac=02:00:0a:00:03:6a,ip=10.0.3.106,bridge=virbr0', ]
vfb = ['type=vnc,vnclisten=0.0.0.0,vncdisplay=45,vncpasswd=123456']
I didn't configure the disk in bold. Why did OpenNebula create this?
 
My template:
CLUSTER_100="100"
CONTEXT=[NETWORK="YES",SSH_PUBLIC_KEY="$USER[SSH_PUBLIC_KEY]"]
CPU="1"
DISK=[DEV_PREFIX="sd",DRIVER="file:",IMAGE_ID="7",READONLY="no"]
DISK=[DEV_PREFIX="sd",DRIVER="file:",READONLY="no",SIZE="256",TYPE="swap"]
GRAPHICS=[LISTEN="0.0.0.0",PASSWD="123456",TYPE="VNC"]
MEMORY="256"
NIC=[NETWORK_ID="1"]
OS=[BOOTLOADER="/usr/lib/xen/bin/pygrub",ROOT="sda"]
REQUIREMENTS="CLUSTER_ID=\"100\""
VCPU="1"
 
Thanks.
 
 
 
 
 
 

On 09/09/2013 05:25, Ruben S. Montero < rsmont...@opennebula.org > wrote:
Hi
 
Take a look at a file named "deployment.0"; it is the .cfg file generated by OpenNebula. Compare this file with your working cfg file; you are probably not using the right bus/mapping. OpenNebula uses a per-disk approach to automatically set targets (sda, sdb, ...) while your template uses a partition-based layout (sda1, sda2, ...). So you'll probably need to set the TARGET attribute, either in the Image template or in the DISK attribute of the VM template.
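A minimal sketch of what setting the target could look like in the VM template (the IMAGE_ID and target values here are illustrative assumptions, not taken from the thread):

```text
DISK = [ IMAGE_ID   = "7",
         DEV_PREFIX = "sd",
         TARGET     = "sda1" ]
```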
 
Cheers
 
Ruben


On Sun, Sep 8, 2013 at 9:33 PM,  wrote:

Hello,
I have Xen 4.2 running OK on an Ubuntu 12.10 machine.
 
When I create the virtual machine with xm create ubuntu.cfg, it is OK.
But when I create the virtual machine with the same disk image through OpenNebula (4.2), I get this error in the domU (I see the error via xm console on the Xen host):
 
[    1.274565] device-mapper: ioctl: 4.22.0-ioctl (2011-10-19) initialised: dm-de...@redhat.com
[    1.274590] EFI Variables Facility v0.08 2004-May-17
[    1.274975] TCP cubic registered
[    1.275354] NET: Registered protocol family 10
[    1.276538] NET: Registered protocol family 17
[    1.276557] Registering the dns_resolver key type
[    1.276762] registered taskstats version 1
[    6.388086] XENBUS: Waiting for devices to initialise: 25s...20s...15s...10s...5s...0s...235s...230s...225s...220s...215s...210s...205s...200s...195s...190s...185s...180s...175s...170s...165s...160s...155s...150s...145s...140s...135s...130s...125s...120s...115s...110s...105s...100s...
Gave up waiting for root device.  Common problems:
 - Boot args (cat /proc/cmdline)
   - Check rootdelay= (did the system wait long enough?)
   - Check root= (did the system wait for the right device?)
 - Missing modules (cat /proc/modules; ls /dev)
ALERT!  /dev/xvda2 does not exist.  Dropping to a shell!
my ubuntu.cfg :
#kernel  = "/vmlinuz"
#ramdisk = "/initrd.gz"
bootloader = '/usr/lib/xen/bin/pygrub'
name   = "ubuntu3"
memory = '512'
dhcp   = 'dhcp'
vif    = ['mac=00:00:00:8F:D9:46']
disk   = [
   'file:/vms/images/ubuntu3/disk.img,sda2,w',
   'file:/vms/images/ubuntu3/swap.img,sda1,w',
   ]
on_poweroff = 'destroy'
on_reboot   = 'restart'
on_crash    = 'restart'
Does anybody know what the problem is?
How can I "convert" a Xen .cfg into an OpenNebula template?
 
Thanks.
 
___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


 
-- 



-- 
Join us at OpenNebulaConf2013 in Berlin, 24-26 September, 2013

-- 

Ruben S. Montero, PhD
Project co-Lead and Chief Architect
OpenNebula - The Open Source Solution for Data Center Virtualization
www.OpenNebula.org | rsmont...@opennebula.org | @OpenNebula

 



 
 
___

[one-users] How to connect to VMs

2013-09-11 Thread Johannes Schuster

Hi,

OpenNebula is running without problems.
I created a virtual network (192.168.20.1 - 192.168.20.5) and 2 VMs 
(using the "ttylinux - kvm" from the OpenNebula Marketplace). I can 
access them through VNC in Sunstone and they are up and running. They 
have IPs (192.168.20.1 and 192.168.20.2) and I can successfully ping 
among themselves.


But now I want to access them from the outside. I have a frontend and 2 
hosts. They have 192.168.42.x IPs. Regardless from which computer I try 
I get the error "Destination Host Unreachable" (ping 192.168.20.1).


How do I connect to the VMs?

Thank you and best regards,

Johannes
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] File system performance testing suite tailored to OpenNebula

2013-09-11 Thread Carlo Daffara
Actually the point is that *it is* possible to get near-native performance,
when appropriate tuning or precautions are taken.
Take as an example the graphs on page 5:
the throughput is *higher* with XFS as the host filesystem than the raw device (BD
in the graph) for the filesystem workload, and using XFS it's within 10% (apart
from ext3, which has a higher performance hit); for the database workload it's
JFS that's on a par or slightly faster.
Another important factor is latency (added latency due to multiple stacked FS),
and again, the graph on page 6 shows that specific combinations of
guest/host FS have very small added latencies due to filesystem stacking.
It is also clear that the default ext4 used in many guest VMs is absolutely
sub-optimal for write workloads, where JFS is twice as fast.
Other aspects to consider:
The default I/O scheduler in Linux is *abysmal* for VM workloads. Deadline is
the clear winner, along with noop for SSD disks. Other small touches may be
tuning the default readahead for rotational media (and removing it for SSDs),
increasing the retention of read-cache pages, and increasing (a little) the
flush time of the write cache, which even with a 5-second sweep time increases
the IOPS rate for write workloads by increasing the opportunities for optimizing
the disk head path, and on and on...
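For reference, a quick way to see which scheduler each disk is currently using (this only reads sysfs; switching, e.g. to deadline, means writing the scheduler name to the same file as root):

```shell
# Illustrative: print the I/O scheduler configured for each block device;
# the entry shown in [brackets] by the kernel is the active one.
found=0
for q in /sys/block/*/queue/scheduler; do
  if [ -r "$q" ]; then
    echo "$q: $(cat "$q")"
    found=$((found + 1))
  fi
done
echo "devices inspected: $found"
```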
So, my point is that it is possible, with relatively small effort, to get
near-disk performance from KVM with libvirt (same concept, with different
aspects, for Xen).
It's a fascinating area of work, and we had one of our people spend two
weeks doing nothing but tests with a Windows VM running a benchmark
application, over a large number of different fs/kvm parameters. We found
a lot of interesting cases :-)
cheers
carlo daffara
cloudweavers

- Original message -
From: "João Pagaime"
To: users@lists.opennebula.org
Sent: Wednesday, 11 September 2013 19:31:07
Subject: Re: [one-users] File system performance testing suite tailored to
OpenNebula

thanks  for pointing out the paper

I've glanced at it, and it somewhat confirmed my impressions about write
operations (which are very relevant in transactional environments): the
penalty on write operations doesn't seem to be negligible.

best regards,
João

On 11-09-2013 14:55, Carlo Daffara wrote:
> Not a simple answer; however, this article by Le and Huang provides quite some
> details:
> https://www.usenix.org/legacy/event/fast12/tech/full_papers/Le.pdf
> we ended up using ext4 and xfs mainly, with btrfs for mirrored disks or for 
> very slow rotational media.
> Raw is good if you are able to map disks directly and you don't change them, 
> but our results find that the difference is not that great- but the 
> inconvenience is major :-)
> When using kvm and virtio, the actual loss in IO performance is not very high 
> for the majority of workloads. Windows is a separate issue- ntfs has very 
> poor performance on small blocks for sparse writes, and this tends to 
> increase the apparent inefficiency of kvm.
> Actually, using the virtio device drivers the penalty is very small for most 
> workloads; we tested a windows7 machine both as native (physical) and 
> virtualized using a simple crystalmark test, and we found that using virtio 
> the 4k random io write test is just 15% slower, while the sequential ones are 
> much faster virtualized (thanks to the linux native page cache).
> We use for the intensive io workloads a combination of a single ssd plus one 
> or more rotative disks, combined using enhanceio.
> We observed an increase of the available IOPS for random write (especially 
> important for database servers, AD machines...) of 8 times using 
> consumer-grade ssds.
> cheers,
> Carlo Daffara
> cloudweavers
>
> - Original message -
> From: "João Pagaime"
> To: users@lists.opennebula.org
> Sent: Wednesday, 11 September 2013 15:20:19
> Subject: Re: [one-users] File system performance testing suite tailored to
> OpenNebula
>
> Hello all,
>
> the topic is very interesting
>
> I wonder if anyone could answer this:
>
> what is the penalty of using a file-system on top of a file-system? that
> is what happens when the VM disk is a regular file on the hypervisor's
> filesystem. I mean: the VM has its own file-system and then the
> hypervisor maps that vm-disk on a regular file on another filesystem
> (the hypervisor filesystem). Thus the file-system on top of a
> file-system issue
>
> putting the question the other way around: what is the benefit of using
> raw disk-device (local disk, LVM, iSCSI, ...) as an open-nebula datastore?
>
> didn't test this but I feel the benefit should be substantial
>
> anyway simple bonnie++ tests within a VM show heavy penalties, comparing
> tests running in the VM and outside (directly on the hypervisor). That
> isn't of course an opennebula-related performance issue, but a more
> general technology challenge
>
> best regards,
> João
>
>
>
>
On 11-09-2013 13:10, Gerry O'Br

Re: [one-users] File system performance testing suite tailored to OpenNebula

2013-09-11 Thread João Pagaime

thanks  for pointing out the paper

I've glanced at it, and it somewhat confirmed my impressions about write
operations (which are very relevant in transactional environments): the
penalty on write operations doesn't seem to be negligible.


best regards,
João

On 11-09-2013 14:55, Carlo Daffara wrote:

Not a simple answer; however, this article by Le and Huang provides quite some
details:
https://www.usenix.org/legacy/event/fast12/tech/full_papers/Le.pdf
we ended up using ext4 and xfs mainly, with btrfs for mirrored disks or for 
very slow rotational media.
Raw is good if you are able to map disks directly and you don't change them, 
but our results find that the difference is not that great- but the 
inconvenience is major :-)
When using kvm and virtio, the actual loss in IO performance is not very high 
for the majority of workloads. Windows is a separate issue- ntfs has very poor 
performance on small blocks for sparse writes, and this tends to increase the 
apparent inefficiency of kvm.
Actually, using the virtio device drivers the penalty is very small for most 
workloads; we tested a windows7 machine both as native (physical) and 
virtualized using a simple crystalmark test, and we found that using virtio the 
4k random io write test is just 15% slower, while the sequential ones are much 
faster virtualized (thanks to the linux native page cache).
We use for the intensive io workloads a combination of a single ssd plus one or 
more rotative disks, combined using enhanceio.
We observed an increase of the available IOPS for random write (especially 
important for database servers, AD machines...) of 8 times using consumer-grade 
ssds.
cheers,
Carlo Daffara
cloudweavers

- Original message -
From: "João Pagaime"
To: users@lists.opennebula.org
Sent: Wednesday, 11 September 2013 15:20:19
Subject: Re: [one-users] File system performance testing suite tailored to
OpenNebula

Hello all,

the topic is very interesting

I wonder if anyone could answer this:

what is the penalty of using a file-system on top of a file-system? that
is what happens when the VM disk is a regular file on the hypervisor's
filesystem. I mean: the VM has its own file-system and then the
hypervisor maps that vm-disk on a regular file on another filesystem
(the hypervisor filesystem). Thus the file-system on top of a
file-system issue

putting the question the other way around: what is the benefit of using
raw disk-device (local disk, LVM, iSCSI, ...) as an open-nebula datastore?

didn't test this but I feel the benefit should be substantial

anyway simple bonnie++ tests within a VM show heavy penalties, comparing
tests running in the VM and outside (directly on the hypervisor). That
isn't of course an opennebula-related performance issue, but a more
general technology challenge

best regards,
João




On 11-09-2013 13:10, Gerry O'Brien wrote:

Hi Carlo,

   Thanks for the reply. I should really look at XFS for the
replication and performance.

   Do you have any thoughts on my second questions about qcow2 copies
form /datastores/1 to /datastores/0 in a single filesystem?

 Regards,
   Gerry


On 11/09/2013 12:53, Carlo Daffara wrote:

It's difficult to provide an indication of what a typical workload
may be, as it depends greatly on the
I/O properties of the VMs that run inside (we found the
"internal" load of OpenNebula itself to be basically negligible).
For example, if you have lots of sequential I/O heavy VMs you may get
benefits from one kind, while transactional and random I/O VMs may be
more suitably served by other file systems.
We tend to use fio for benchmarks (http://freecode.com/projects/fio)
that is included in most linux distributions; it provides for
flexible selection of read-vs-write patterns, can select different
probability distributions and includes a few common presets (like
file server, mail server etc.)
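As a concrete illustration, a small fio job file for a mixed random-I/O, roughly VM-like workload might look like this (all values are illustrative assumptions, not recommendations; see the fio documentation for the full parameter list):

```ini
; hypothetical mixed random read/write job, roughly VM-like
[vm-mix]
ioengine=libaio
rw=randrw
rwmixread=70
bs=4k
size=256m
iodepth=16
runtime=60
time_based
```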
Selecting the underlying file system for the store is thus extremely
dependent on application, features and load. For example, in some
configurations we use BTRFS with compression (slow rotative devices,
especially when there are several of them in parallel), in others we
use ext4 (good, all-around balanced) and in others XFS. For example,
XFS supports filesystem replication in a way similar to that of zfs
(not as sophisticated, though), with excellent performance for multiple
parallel I/O operations.
ZFS in our tests tends to be extremely slow outside of a few "sweet
spots", a fact confirmed by external benchmarks like this one:
http://www.phoronix.com/scan.php?page=article&item=zfs_linux_062&num=3
We tried it (and we continue to do so, both for the FUSE and native
kernel versions) but for the moment the performance hit is excessive
despite the nice feature set. BTRFS continues to improve nicely, and a
set of patches to implement send/receive like ZFS is here:
https://btrfs.wiki.kernel.org/index.php/Design_notes_on_Send/Receive
but it is still marked as experimental.

I personally *love* ZFS,

Re: [one-users] oned cannot start after upgrading to opennebula 3.6

2013-09-11 Thread Carlos Martín Sánchez
Hi,

It could be several things. You said that you executed install.sh as oneadmin,
so I assume you did a self-contained installation, which means that you
probably have both the 3.4 and 3.6 versions installed.

Is there any reason to use OpenNebula 3.6 instead of the last stable, 4.2?

Regards

--
Join us at OpenNebulaConf2013  in Berlin, 24-26
September, 2013
--
Carlos Martín, MSc
Project Engineer
OpenNebula - The Open-source Solution for Data Center Virtualization
www.OpenNebula.org | cmar...@opennebula.org |
@OpenNebula


On Wed, Sep 11, 2013 at 9:41 AM, Lukman Fikri wrote:

> Hello,
>
> Previously, I already had opennebula 3.4.1 installed on my machine.
> I want to upgrade it to 3.6 version, so i downloaded the tarball from
> http://downloads.opennebula.org/
> i extract it to certain user home directory (not the oneadmin home
> directory)
> then i executed ./install.sh as oneadmin user
> However, i cannot start the oned process now
>
> oneadmin@cloud1:/home/lukmanf$ one start
> Could not open log file
> Could not open log file
> oned failed to start
> scheduler failed to start
> oneadmin@cloud1:/home/lukmanf$ onehost list
> ONE_AUTH file not present
>
>
> could you tell me what went wrong or the mistake that possibly happened?
> thank you in advance,
>
> -
> Lukman Fikri
>
> ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
>
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] oneimage QCOW2 problem: Error copying image in the datastore: Not allowed to copy image file

2013-09-11 Thread Carlos Martín Sánchez
Well, yes. If I register a new image with the path
/datastores/0//deployment.0 I could get your vnc password, for
example. Or if I point it to the context cdrom image, I could get some
variables that may contain important information. And, of course, I could
copy one of your images or running VM disks.

Cheers


--
Join us at OpenNebulaConf2013  in Berlin, 24-26
September, 2013
--
Carlos Martín, MSc
Project Engineer
OpenNebula - The Open-source Solution for Data Center Virtualization
www.OpenNebula.org | cmar...@opennebula.org |
@OpenNebula


On Wed, Sep 11, 2013 at 2:05 PM, Gerry O'Brien  wrote:

> Hi,
>
> By using /datastores instead of /var/lib/one/datastores, have I opened
> a security hole?
>
>
>
> On 11/09/2013 12:51, Carlos Martín Sánchez wrote:
>
>> Hi,
>>
>> On Wed, Sep 11, 2013 at 1:06 PM, Gerry O'Brien  wrote:
>>
>>  Hi Carlos,
>>>
>>>  I appreciate the security issues. I'm just wondering why
>>> /var/lib/one/datastores is not a safe directory by default given it is
>>> the
>>> default location for datastores?
>>>
>>>  Oneadmin's home /var/lib/one is restricted by default, because it
>> contains
>> the one_auth file, the database one.db... And /var/lib/one/datastores must
>> also be restricted, because a user should not be able to copy another
>> registered image in there. I hope this makes sense.
>>
>> Cheers
>> --
>> Join us at OpenNebulaConf2013  in Berlin,
>> 24-26
>>
>> September, 2013
>> --
>> Carlos Martín, MSc
>> Project Engineer
>> OpenNebula - The Open-source Solution for Data Center Virtualization
>> www.OpenNebula.org  | cmar...@opennebula.org|
>> @OpenNebula  
>>
>>
>>
>>   Regards,
>>>  Gerry
>>>
>>>
>>>
>>> On 11/09/2013 11:51, Carlos Martín Sánchez wrote:
>>>
>>>  Hi,

 Tue Sep 10 14:32:48 2013 [ImM][E]: cp: Not allowed to copy images from

  /var/lib/one/ /etc/one/ /var/lib/one/
>
>  The dir /var/lib/one is a restricted dir, and OpenNebula won't allow
 you
 to
 copy images from there. Otherwise, you could copy the DB or other
 authentication files. That's why it works from /datastores.

 See [1] for more information.

 Best regards.

 [1]
 http://opennebula.org/documentation:rel4.2:fs_ds#configuring_the_filesystem_datastores



 --
 Join us at OpenNebulaConf2013  in Berlin,
 24-26

 September, 2013
 --
 Carlos Martín, MSc
 Project Engineer
 OpenNebula - The Open-source Solution for Data Center Virtualization
 www.OpenNebula.org | cmar...@opennebula.org |
 @OpenNebula>

> >
>


 On Tue, Sep 10, 2013 at 4:59 PM, Gerry O'Brien 
 wrote:

   Hi,

>   This seems to be a general issue not specific to QCOW2. For the
> moment
> I've solved the issue by mounting the datastores (which are NFS exports
> for
> a filestore) on the root partition at /datastores and creating a symlink
> from /var/lib/one/datastores to /datastores.
>
>Is this correct?
>
>   Gerry
>
>
> On 10/09/2013 14:38, Gerry O'Brien wrote:
>
>   Hi,
>
>>   I get the following error when trying to create an image from a QCOW2
>> file: "Error copying image in the datastore: Not allowed to copy image
>> file /var/lib/one/datastores/1/DELETEME.qcow2"
>>
>>
>>   Below are the commands I use to create the QCOW2 file before trying
>> to create the image named DELETEME using oneimage. The QCOW2 file has
>> been created with a backing file.
>>
>>   This used to work in Opennebula 3. I have made sure the use
>> oneadmin
>> is also in the cloud group in case it is some kind of permissions
>> file.
>>
>>   Any ideas?
>>
>>   Regards,
>>   Gerry
>>
>>
>>
>> qemu-img create -f qcow2 \
>>   -o backing_file=/var/lib/one/datastores/1/e1e1735dada84a7c6290001b9a244ebe \
>>   /var/lib/one/datastores/1/DELETEME.qcow2
>>
>> qemu-img info /var/lib/one/datastores/1/DELETEME.qcow2
>> image: /var/lib/one/datastores/1/DELETEME.qcow2
>> file format: qcow2
>> virtual size: 50G (53687091200 bytes)
>> disk size: 12K
>> cluster_size: 65536
>> backing file: /var/lib/one/datastores/1/e1e1735dada84a7

[one-users] Number of KVM hosts per cluster

2013-09-11 Thread Dmitri Chebotarov
Hi

Is there a best practice for the number of KVM hosts per cluster?

The concern I have is storage performance with a large number of hosts per
cluster (on the system datastore).

The cluster's system datastore is on a NetApp NAS (NFS).
I have 40 hosts. I was thinking of going with 5 clusters, 8 hosts in each
cluster. This should distribute the load on the storage side between 5
different volumes (5 system DS).

Does anyone have a recommendation on the subject?

--
Thank you,

Dmitri Chebotarov
VCL Sys Eng, Engineering & Architectural Support, TSD - Ent Servers & Messaging
223 Aquia Building, Ffx, MSN: 1B5
Phone: (703) 993-6175 | Fax: (703) 993-3404
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] File system performance testing suite tailored to OpenNebula

2013-09-11 Thread Carlo Daffara
No (XFS on Linux does not perform snapshots); it uses xfsdump, which allows for
progressive dumps, with differential backups to a remote XFS server. It uses a
concept of "levels" (0 to 9), where 0 is a full backup, and you can take
differential backups at different levels. Some pointers are here:
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/xfsbackuprestore.html
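A minimal sketch of that level-based cycle (the filesystem, destination paths and session labels are assumptions for illustration; the commands require root and an XFS filesystem, so treat this as illustrative only):

```shell
# level 0: full dump of an XFS-mounted /home to a file
xfsdump -l 0 -L home-full -M media0 -f /backup/home.level0 /home
# level 1: only what changed since the last lower-level dump
xfsdump -l 1 -L home-incr -M media1 -f /backup/home.level1 /home
# restore: replay the full dump, then the incremental on top
xfsrestore -f /backup/home.level0 /mnt/restore
xfsrestore -f /backup/home.level1 /mnt/restore
```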
cheers
carlo daffara
cloudweavers

- Original message -
From: "Gerry O'Brien"
To: "Carlo Daffara"
Cc: "Users OpenNebula"
Sent: Wednesday, 11 September 2013 16:38:41
Subject: Re: [one-users] File system performance testing suite tailored to
OpenNebula

I presume this uses the XFS snapshot facility?

On 11/09/2013 14:57, Carlo Daffara wrote:
> As for the second part of the question, having a single filesystem helps in 
> reducing the copy cost.
> We have moved from the underlying FS to a distributed fs that does r/w 
> snapshots, and changed the tm scripts to convert
> copies into snapshot operations, so we have a little bit more flexibility in 
> managing the filesystems and stores.
> cheers
> carlo daffara
> cloudweavers
>
> - Original message -
> From: "Gerry O'Brien"
> To: "Users OpenNebula"
> Sent: Wednesday, 11 September 2013 13:16:52
> Subject: [one-users] File system performance testing suite tailored to
> OpenNebula
>
> Hi,
>
>   Are there any recommendations for a file system performance testing
> suite tailored to OpenNebula typical workloads? I would like to compare
> the performance of zfs v. ext4. One of the reasons for considering zfs
> is that it allows replication to a remote site using snapshot streaming.
> Normal nightly backups, using something like rsync, are not suitable for
> virtual machine images where a single block change means the whole image
> has to be copied. The amount of change is too great.
>
>   On a related issue, does it make sense to have datastores 0 and 1
> in a single file system so that the instantiation of non-persistent
> images does not require a copy from one file system to another? I have
> in mind the case where the original image is a qcow2 image.
>
>   Regards,
>   Gerry
>


-- 
Gerry O'Brien

Systems Manager
School of Computer Science and Statistics
Trinity College Dublin
Dublin 2
IRELAND

00 353 1 896 1341

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] File system performance testing suite tailored to OpenNebula

2013-09-11 Thread Gerry O'Brien

I presume this uses the XFS snapshot facility?

On 11/09/2013 14:57, Carlo Daffara wrote:

As for the second part of the question, having a single filesystem helps in 
reducing the copy cost.
We have moved from the underlying FS to a distributed fs that does r/w 
snapshots, and changed the tm scripts to convert
copies into snapshot operations, so we have a little bit more flexibility in 
managing the filesystems and stores.
cheers
carlo daffara
cloudweavers

- Original message -
From: "Gerry O'Brien"
To: "Users OpenNebula"
Sent: Wednesday, 11 September 2013 13:16:52
Subject: [one-users] File system performance testing suite tailored to
OpenNebula

Hi,

  Are there any recommendations for a file system performance testing
suite tailored to OpenNebula typical workloads? I would like to compare
the performance of zfs v. ext4. One of the reasons for considering zfs
is that it allows replication to a remote site using snapshot streaming.
Normal nightly backups, using something like rsync, are not suitable for
virtual machine images where a single block change means the whole image
has to be copied. The amount of change is too great.

  On a related issue, does it make sense to have datastores 0 and 1
in a single file system so that the instantiation of non-persistent
images does not require a copy from one file system to another? I have
in mind the case where the original image is a qcow2 image.

  Regards,
  Gerry




--
Gerry O'Brien

Systems Manager
School of Computer Science and Statistics
Trinity College Dublin
Dublin 2
IRELAND

00 353 1 896 1341

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] File system performance testing suite tailored to OpenNebula

2013-09-11 Thread Carlo Daffara
As for the second part of the question, having a single filesystem helps in 
reducing the copy cost.
We have moved from the underlying FS to a distributed fs that does r/w 
snapshots, and changed the tm scripts to convert
copies into snapshot operations, so we have a little bit more flexibility in 
managing the filesystems and stores.
cheers
carlo daffara
cloudweavers

- Original message -
From: "Gerry O'Brien" 
To: "Users OpenNebula" 
Sent: Wednesday, 11 September 2013 13:16:52
Subject: [one-users] File system performance testing suite tailored to 
OpenNebula

Hi,

 Are there any recommendations for a file system performance testing 
suite tailored to OpenNebula typical workloads? I would like to compare 
the performance of zfs v. ext4. One of the reasons for considering zfs 
is that it allows replication to a remote site using snapshot streaming. 
Normal nightly backups, using something like rsync, are not suitable for 
virtual machine images where a single block change means the whole image 
has to be copied. The amount of change is too great.

 On a related issue, does it make sense to have datastores 0 and 1 
in a single file system so that the instantiation of non-persistent 
images does not require a copy from one file system to another? I have 
in mind the case where the original image is a qcow2 image.

 Regards,
 Gerry

-- 
Gerry O'Brien

Systems Manager
School of Computer Science and Statistics
Trinity College Dublin
Dublin 2
IRELAND

00 353 1 896 1341



Re: [one-users] File system performance testing suite tailored to OpenNebula

2013-09-11 Thread Carlo Daffara
Not a simple answer; however, this article by Le and Huang provides quite some 
details:
https://www.usenix.org/legacy/event/fast12/tech/full_papers/Le.pdf
we ended up using ext4 and xfs mainly, with btrfs for mirrored disks or for 
very slow rotational media.
Raw is good if you are able to map disks directly and you don't change them, 
but our results find that the difference is not that great, while the 
inconvenience is major :-)
When using kvm and virtio, the actual loss in IO performance is not very high 
for the majority of workloads. Windows is a separate issue: ntfs has very poor 
performance on small blocks for sparse writes, and this tends to increase the 
apparent inefficiency of kvm.
Using the virtio device drivers, the penalty is very small for most 
workloads; we tested a Windows 7 machine both native (physical) and 
virtualized using a simple CrystalMark test, and found that with virtio the 
4k random write test is just 15% slower, while the sequential tests are much 
faster virtualized (thanks to the Linux native page cache).
For intensive IO workloads we use a combination of a single SSD plus one or 
more rotational disks, combined using EnhanceIO.
We observed an 8x increase in available IOPS for random writes (especially 
important for database servers, AD machines...) using consumer-grade SSDs.
cheers,
Carlo Daffara
cloudweavers

- Original message -
From: "João Pagaime" 
To: users@lists.opennebula.org
Sent: Wednesday, 11 September 2013 15:20:19
Subject: Re: [one-users] File system performance testing suite tailored to 
OpenNebula

Hello all,

the topic is very interesting

I wonder if anyone could answer this:

What is the penalty of using a file system on top of a file system? That 
is what happens when the VM disk is a regular file on the hypervisor's 
filesystem. I mean: the VM has its own file system, and the hypervisor 
maps that vm-disk onto a regular file on another filesystem (the 
hypervisor filesystem). Hence the file-system-on-top-of-a-file-system issue.

Putting the question the other way around: what is the benefit of using a 
raw disk device (local disk, LVM, iSCSI, ...) as an OpenNebula datastore?

I haven't tested this, but I feel the benefit should be substantial.

Anyway, simple bonnie++ tests within a VM show heavy penalties when 
comparing a test running in the VM and outside (directly on the 
hypervisor). That isn't an OpenNebula-related performance issue, of 
course, but a more general technology challenge.

best regards,
João




Em 11-09-2013 13:10, Gerry O'Brien escreveu:
> Hi Carlo,
>
>   Thanks for the reply. I should really look at XFS for the 
> replication and performance.
>
>   Do you have any thoughts on my second question about qcow2 copies 
> from /datastores/1 to /datastores/0 in a single filesystem?
>
> Regards,
>   Gerry
>
>
> On 11/09/2013 12:53, Carlo Daffara wrote:
>> It's difficult to provide an indication of what a typical workload 
>> may be, as it depends greatly on the
>> I/O properties of the VMs that run inside (we found the 
>> "internal" load of OpenNebula itself to be basically negligible).
>> For example, if you have lots of sequential I/O heavy VMs you may get 
>> benefits from one kind, while transactional and random I/O VMs may be 
>> more suitably served by other file systems.
>> We tend to use fio for benchmarks (http://freecode.com/projects/fio) 
>> that is included in most linux distributions; it provides for 
>> flexible selection of read-vs-write patterns, can select different 
>> probability distributions and includes a few common presets (like 
>> file server, mail server etc.)
>> Selecting the bottom file system for the store thus depends heavily 
>> on application, features and load. For example, in some configurations 
>> we use BTRFS with compression (slow rotational devices, especially 
>> when there are several of them in parallel), in others ext4 (a good, 
>> all-around balanced choice) and in others XFS. XFS, for example, 
>> supports filesystem replication in a way similar to that of zfs 
>> (not as sophisticated, though), with excellent performance for 
>> multiple parallel I/O operations.
>> ZFS in our tests tends to be extremely slow outside of a few "sweet 
>> spots"; a fact confirmed by external benchmarks like this one:
>> http://www.phoronix.com/scan.php?page=article&item=zfs_linux_062&num=3 We 
>> tried it (and we continue to do so, both for the FUSE and native 
>> kernel version) but for the moment the performance hit is excessive 
>> despite the nice feature set. BTRFS continues to improve nicely, and a 
>> set of patches to implement send/receive like ZFS are here: 
>> https://btrfs.wiki.kernel.org/index.php/Design_notes_on_Send/Receive 
>> but it is still marked as experimental.
>>
>> I personally *love* ZFS, and the feature set is unparalleled. 
>> Unfortunately, the poor license choice means that it never got the 
>> kind of hammering and tuning that other linux kernel filesystems get.

Re: [one-users] Persistent and non persistent images - is there a way to convert between them? Image permissions.

2013-09-11 Thread Carlos Martín Sánchez
Hi Pentium,

On Tue, Sep 10, 2013 at 1:28 PM, Pentium100  wrote:

> Hi,
>
> Let's say a user created a VM using a non persistent image (a template).
> Is there a way to now clone the image and make it persistent (remember -
> the user does not own the original non persistent image) without losing
> data.
>

Yes, the user can do a disk snapshot [1]. This saves the disk as a new
image once the VM is shut down (or immediately if the snapshot is live).


> Alternatively, is there a way to allow a user to clone the image but not
> allow him to use the original (clone, make persistent, then use)?
>

The USE permission allows both actions [2], so there is no simple way to
change it.
Maybe it is enough for you to change the template creation wizard and
filter for Images owned by the connected user & persistent ones.

Regards

[1] http://opennebula.org/documentation:rel4.2:vm_guide_2#disk_snapshoting
[2] http://opennebula.org/documentation:rel4.2:api

--
Join us at OpenNebulaConf2013  in Berlin, 24-26
September, 2013
--
Carlos Martín, MSc
Project Engineer
OpenNebula - The Open-source Solution for Data Center Virtualization
www.OpenNebula.org  | cmar...@opennebula.org |
@OpenNebula  


[one-users] Countdown for the First OpenNebula Conference - Closing Registration

2013-09-11 Thread Tino Vazquez
Dear OpenNebula community,

As you may be aware, we are holding the first OpenNebula Conference
[1] in Berlin, this 24-26 September. The conference is the perfect
place to learn about practical Cloud Computing, aimed at cloud users,
developers, executives and IT managers to help them tackle their
computational and business challenges. The goal is to foster fruitful
and educational discussions around Cloud Computing and OpenNebula.

We just want to make sure you don't miss this chance to learn what
Cloud Computing is about. The registration closing date is getting closer,
so seize the moment and register now [2]!

This conference is highly valuable for those who want to understand
and benefit from this massive mainstream trend, with speakers
covering topics from a bird's-eye view of Cloud Computing as a deep
change in IT to very specialised talks covering tricky technical
details. So, whether you are a tech layman or a devops hacker, this
is the right spot for you.

The conference attendees will be in for a treat:

* Keynotes speakers include Daniel Concepción from Produban – Bank
Santander Group, Thomas Higon from Akamai, Steven Timm from FermiLab,
André von Deetzen from Deutsche Post E-Post, Jordi Farrés from
European Space Agency, Karanbir Singh from CentOS Project, and Ignacio
M. Llorente and Rubén S. Montero from the OpenNebula Project.

 * The talks are organized in three tracks (user experiences and case
studies; integration with other cloud tools; interoperability and HPC
clouds) and include speakers from leading organizations like
CloudWeavers, Terradue, NetWays, INRIA, BBC, inovex, AGS Group,
Hedera, NetOpenServices, KTH, CESNET or CESCA.

 * The Hands-on Tutorial will show how to build, configure and operate
your own OpenNebula cloud.

 * The Hacking and Open Space Sessions will provide an opportunity to
discuss burning ideas, and meet face to face to discuss development.

 * The Lightning Talks will provide an opportunity to present new
projects, products, features, integrations, experiences, use cases,
collaboration invitations, quick tips or demonstrations. This session
is an opportunity for ideas to get the attention they deserve.

What's not to like? See you all in Berlin!

The OpenNebula Team

[1] http://opennebulaconf.com/
[2] http://opennebulaconf.com/registration/


Re: [one-users] Cloud init with Open Nebula

2013-09-11 Thread Javier Fontan
Rejoice! The OpenNebula provider for cloud-init was merged. Hopefully the
new releases will come with OpenNebula support:

https://code.launchpad.net/~vlastimil-holer/cloud-init/opennebula/+merge/184278

Thanks to Vlastimil Holer for the effort.

Zeeshan, I would try the metadata server Ricardo is pointing out. It
makes the environment look like EC2, and as he says you could use
cloud-init right now.
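For readers new to cloud-init, a minimal #cloud-config user-data sketch is below. The keys are standard cloud-init modules; whether they reach the guest through the new OpenNebula datasource or an EC2-style metadata server depends on your setup, and the SSH key is a placeholder:

```yaml
#cloud-config
hostname: one-vm
packages:
  - htop
ssh_authorized_keys:
  - ssh-rsa AAAA... oneadmin@frontend   # placeholder key, replace with yours
runcmd:
  - echo "contextualized by cloud-init" > /etc/motd
```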

On Tue, Sep 10, 2013 at 1:16 PM, Javier Fontan  wrote:
> There's an ongoing work adding OpenNebula support to cloud-init. I
> hope it gets included soon:
>
> https://code.launchpad.net/~vlastimil-holer/cloud-init/opennebula/+merge/184278
>
> Any help or idea is appreciated
>
> On Tue, Sep 10, 2013 at 12:33 PM, Olivier Sallou
>  wrote:
>>
>> On 09/10/2013 12:30 PM, Zeeshan Ali Shah wrote:
>>
>> Hi, Any one tried to work with cloudinit with open nebula ?
>>
>> https://cloudinit.readthedocs.org/en/latest/topics/examples.html
>>
>> OpenNebula is not supported by cloud-init yet, but it is possible to develop new
>> "providers" and contribute to the upstream effort.
>>
>> Olivier
>>
>>
>> --
>>
>> Regards
>>
>> Zeeshan Ali Shah
>> System Administrator - PDC HPC
>> PhD researcher (IT security)
>> Kungliga Tekniska Hogskolan
>> +46 8 790 9115
>> http://www.pdc.kth.se/members/zashah
>>
>>
>> ___
>> Users mailing list
>> Users@lists.opennebula.org
>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>
>>
>> --
>> Olivier Sallou
>> IRISA / University of Rennes 1
>> Campus de Beaulieu, 35000 RENNES - FRANCE
>> Tel: 02.99.84.71.95
>>
>> gpg key id: 4096R/326D8438  (keyring.debian.org)
>> Key fingerprint = 5FB4 6F83 D3B9 5204 6335  D26D 78DC 68DB 326D 8438
>>
>>
>> ___
>> Users mailing list
>> Users@lists.opennebula.org
>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>
>
>
>
> --
> Join us at OpenNebulaConf2013 in Berlin from the 24th to the 26th of
> September 2013!
>
> Javier Fontán Muiños
> Developer
> OpenNebula - The Open Source Toolkit for Data Center Virtualization
> www.OpenNebula.org | @OpenNebula | github.com/jfontan



-- 
Join us at OpenNebulaConf2013 in Berlin from the 24th to the 26th of
September 2013!

Javier Fontán Muiños
Developer
OpenNebula - The Open Source Toolkit for Data Center Virtualization
www.OpenNebula.org | @OpenNebula | github.com/jfontan


Re: [one-users] File system performance testing suite tailored to OpenNebula

2013-09-11 Thread Liu, Guang Jun (Gene)
It's something I am looking for too. I am considering ZFS because
large image clones happen concurrently.

Regards,
Gene
On 13-09-11 07:16 AM, Gerry O'Brien wrote:
> Hi,
>
> Are there any recommendations for a file system performance
> testing suite tailored to OpenNebula typical workloads? I would like
> to compare the performance of zfs v. ext4. One of the reasons for
> considering zfs is that it allows replication to a remote site using
> snapshot streaming. Normal nightly backups, using something like
> rsync, are not suitable for virtual machine images where a single
> block change means the whole image has to be copied. The amount of
> change is too great.
>
> On a related issue, does it make sense to have datastores 0 and 1
> in a single file system so that the instantiation of non-persistent
> images does not require a copy from one file system to another? I have
> in mind the case where the original image is a qcow2 image.
>
> Regards,
> Gerry
>



Re: [one-users] File system performance testing suite tailored to OpenNebula

2013-09-11 Thread João Pagaime

Hello all,

the topic is very interesting

I wonder if anyone could answer this:

What is the penalty of using a file system on top of a file system? That 
is what happens when the VM disk is a regular file on the hypervisor's 
filesystem. I mean: the VM has its own file system, and the hypervisor 
maps that vm-disk onto a regular file on another filesystem (the 
hypervisor filesystem). Hence the file-system-on-top-of-a-file-system issue.

Putting the question the other way around: what is the benefit of using a 
raw disk device (local disk, LVM, iSCSI, ...) as an OpenNebula datastore?

I haven't tested this, but I feel the benefit should be substantial.

Anyway, simple bonnie++ tests within a VM show heavy penalties when 
comparing a test running in the VM and outside (directly on the 
hypervisor). That isn't an OpenNebula-related performance issue, of 
course, but a more general technology challenge.


best regards,
João




Em 11-09-2013 13:10, Gerry O'Brien escreveu:

Hi Carlo,

  Thanks for the reply. I should really look at XFS for the 
replication and performance.


  Do you have any thoughts on my second question about qcow2 copies 
from /datastores/1 to /datastores/0 in a single filesystem?


Regards,
  Gerry


On 11/09/2013 12:53, Carlo Daffara wrote:
It's difficult to provide an indication of what a typical workload 
may be, as it depends greatly on the
I/O properties of the VMs that run inside (we found the 
"internal" load of OpenNebula itself to be basically negligible).
For example, if you have lots of sequential I/O heavy VMs you may get 
benefits from one kind, while transactional and random I/O VMs may be 
more suitably served by other file systems.
We tend to use fio for benchmarks (http://freecode.com/projects/fio) 
that is included in most linux distributions; it provides for 
flexible selection of read-vs-write patterns, can select different 
probability distributions and includes a few common presets (like 
file server, mail server etc.)
Selecting the bottom file system for the store thus depends heavily 
on application, features and load. For example, in some configurations 
we use BTRFS with compression (slow rotational devices, especially 
when there are several of them in parallel), in others ext4 (a good, 
all-around balanced choice) and in others XFS. XFS, for example, 
supports filesystem replication in a way similar to that of zfs 
(not as sophisticated, though), with excellent performance for 
multiple parallel I/O operations.
ZFS in our tests tends to be extremely slow outside of a few "sweet 
spots"; a fact confirmed by external benchmarks like this one:
http://www.phoronix.com/scan.php?page=article&item=zfs_linux_062&num=3 We 
tried it (and we continue to do so, both for the FUSE and native 
kernel version) but for the moment the performance hit is excessive 
despite the nice feature set. BTRFS continues to improve nicely, and a 
set of patches to implement send/receive like ZFS are here: 
https://btrfs.wiki.kernel.org/index.php/Design_notes_on_Send/Receive 
but it is still marked as experimental.

I personally *love* ZFS, and the feature set is unparalleled. 
Unfortunately, the poor license choice means that it never got the 
kind of hammering and tuning that other linux kernel filesystems get.

regards,
carlo daffara
cloudweavers

- Original message -
From: "Gerry O'Brien" 
To: "Users OpenNebula" 
Sent: Wednesday, 11 September 2013 13:16:52
Subject: [one-users] File system performance testing suite tailored to 
OpenNebula


Hi,

  Are there any recommendations for a file system performance 
testing

suite tailored to OpenNebula typical workloads? I would like to compare
the performance of zfs v. ext4. One of the reasons for considering zfs
is that it allows replication to a remote site using snapshot streaming.
Normal nightly backups, using something like rsync, are not suitable for
virtual machine images where a single block change means the whole image
has to be copied. The amount of change is too great.

  On a related issue, does it make sense to have datastores 0 and 1
in a single file system so that the instantiation of non-persistent
images does not require a copy from one file system to another? I have
in mind the case where the original image is a qcow2 image.

  Regards,
  Gerry








Re: [one-users] File system performance testing suite tailored to OpenNebula

2013-09-11 Thread Gerry O'Brien

Hi Carlo,

  Thanks for the reply. I should really look at XFS for the replication 
and performance.


  Do you have any thoughts on my second question about qcow2 copies 
from /datastores/1 to /datastores/0 in a single filesystem?


Regards,
  Gerry


On 11/09/2013 12:53, Carlo Daffara wrote:

It's difficult to provide an indication of what a typical workload may be, as 
it depends greatly on the
I/O properties of the VMs that run inside (we found the "internal" load of 
OpenNebula itself to be basically negligible).
For example, if you have lots of sequential I/O heavy VMs you may get benefits 
from one kind, while transactional and random I/O VMs may be more suitably 
served by other file systems.
We tend to use fio for benchmarks (http://freecode.com/projects/fio) that is 
included in most linux distributions; it provides for flexible selection of 
read-vs-write patterns, can select different probability distributions and 
includes a few common presets (like file server, mail server etc.)
Selecting the bottom file system for the store thus depends heavily on 
application, features and load. For example, in some configurations we use 
BTRFS with compression (slow rotational devices, especially when there are 
several of them in parallel), in others ext4 (a good, all-around balanced 
choice) and in others XFS. XFS, for example, supports filesystem replication 
in a way similar to that of zfs (not as sophisticated, though), with 
excellent performance for multiple parallel I/O operations.
ZFS in our tests tends to be extremely slow outside of a few "sweet spots"; a 
fact confirmed by external benchmarks like this one:
http://www.phoronix.com/scan.php?page=article&item=zfs_linux_062&num=3 We tried 
it (and we continue to do so, both for the FUSE and native kernel version) but 
for the moment the performance hit is excessive despite the nice feature set. 
BTRFS continues to improve nicely, and a set of patches to implement 
send/receive like ZFS are here: 
https://btrfs.wiki.kernel.org/index.php/Design_notes_on_Send/Receive but it is 
still marked as experimental.

I personally *love* ZFS, and the feature set is unparalleled. Unfortunately, 
the poor license choice means that it never got the kind of hammering and 
tuning that other linux kernel filesystems get.
regards,
carlo daffara
cloudweavers

- Original message -
From: "Gerry O'Brien" 
To: "Users OpenNebula" 
Sent: Wednesday, 11 September 2013 13:16:52
Subject: [one-users] File system performance testing suite tailored to 
OpenNebula

Hi,

  Are there any recommendations for a file system performance testing
suite tailored to OpenNebula typical workloads? I would like to compare
the performance of zfs v. ext4. One of the reasons for considering zfs
is that it allows replication to a remote site using snapshot streaming.
Normal nightly backups, using something like rsync, are not suitable for
virtual machine images where a single block change means the whole image
has to be copied. The amount of change is too great.

  On a related issue, does it make sense to have datastores 0 and 1
in a single file system so that the instantiation of non-persistent
images does not require a copy from one file system to another? I have
in mind the case where the original image is a qcow2 image.

  Regards,
  Gerry




--
Gerry O'Brien

Systems Manager
School of Computer Science and Statistics
Trinity College Dublin
Dublin 2
IRELAND

00 353 1 896 1341



Re: [one-users] oneimage QCOW2 problem: Error copying image in the datastore: Not allowed to copy image file

2013-09-11 Thread Gerry O'Brien

Hi,

By using /datastores instead of /var/lib/one/datastores, have I 
opened a security hole?



On 11/09/2013 12:51, Carlos Martín Sánchez wrote:

Hi,

On Wed, Sep 11, 2013 at 1:06 PM, Gerry O'Brien  wrote:


Hi Carlos,

 I appreciate the security issues. I'm just wondering why
/var/lib/one/datastores is not a safe directory by default given it is the
default location for datastores?


Oneadmin's home /var/lib/one is restricted by default, because it contains
the one_auth file, the database one.db... And /var/lib/one/datastores must
also be restricted, because a user should not be able to copy another
registered image in there. I hope this makes sense.

Cheers
--
Join us at OpenNebulaConf2013  in Berlin, 24-26
September, 2013
--
Carlos Martín, MSc
Project Engineer
OpenNebula - The Open-source Solution for Data Center Virtualization
www.OpenNebula.org  | cmar...@opennebula.org |
@OpenNebula  




 Regards,
 Gerry



On 11/09/2013 11:51, Carlos Martín Sánchez wrote:


Hi,

Tue Sep 10 14:32:48 2013 [ImM][E]: cp: Not allowed to copy images from
/var/lib/one/ /etc/one/ /var/lib/one/


The dir /var/lib/one is a restricted dir, and OpenNebula won't allow you
to
copy images from there. Otherwise, you could copy the DB or other
authentication files. That's why it works from /datastores.

See [1] for more information.

Best regards.

[1]
http://opennebula.org/documentation:rel4.2:fs_ds#configuring_the_filesystem_datastores


--
Join us at OpenNebulaConf2013 in Berlin, 24-26 September, 2013
--
Carlos Martín, MSc
Project Engineer
OpenNebula - The Open-source Solution for Data Center Virtualization
www.OpenNebula.org | cmar...@opennebula.org |
@OpenNebula




On Tue, Sep 10, 2013 at 4:59 PM, Gerry O'Brien  wrote:

  Hi,

 This seems to be a general issue not specific to QCOW2. For the moment
I've solved the issue by mounting the datastores (which are NFS exports
for a filestore) on the root partition at /datastores and creating a
symlink from /var/lib/one/datastores to /datastores.

   Is this correct?

  Gerry


On 10/09/2013 14:38, Gerry O'Brien wrote:

  Hi,

  I get the following error when trying to create an image from a QCOW2
file: "Error copying image in the datastore: Not allowed to copy image
file /var/lib/one/datastores/1/DELETEME.qcow2"

  Below are the commands I use to create the QCOW2 file before trying
to create the image named DELETEME using oneimage. The QCOW2 file has
been created with a backing file.

  This used to work in OpenNebula 3. I have made sure the user oneadmin
is also in the cloud group in case it is some kind of permissions issue.

  Any ideas?

  Regards,
  Gerry



qemu-img create -f qcow2 -o backing_file=/var/lib/one/datastores/1/e1e1735dada84a7c6290001b9a244ebe /var/lib/one/datastores/1/DELETEME.qcow2

qemu-img info /var/lib/one/datastores/1/DELETEME.qcow2
image: /var/lib/one/datastores/1/DELETEME.qcow2
file format: qcow2
virtual size: 50G (53687091200 bytes)
disk size: 12K
cluster_size: 65536
backing file: /var/lib/one/datastores/1/e1e1735dada84a7c6290001b9a244ebe



ls -la /var/lib/one/datastores/1/DELETEME.qcow2
-rw-r--r-- 1 oneadmin oneadmin 197632 Sep 10 13:27 /var/lib/one/datastores/1/DELETEME.qcow2

oneimage create -d default --name DELETEME --path /var/lib/one/datastores/1/DELETEME.qcow2 --prefix hd --type OS --driver qcow2 --persistent






Below is a similar error message when using the sunstone GUI


Tue Sep 10 14:32:48 2013 [ImM][I]: Copying /var/lib/one/datastores/1/VlabC_1.qcow2
to repository for image 37
Tue Sep 10 14:32:48 2013 [ReM][D]: Req:7232 UID:0 ImageAllocate result
SUCCESS, 37
Tue Sep 10 14:32:48 2013 [ReM][D]: Req:4064 UID:0 ImageInfo invoked, 37
Tue Sep 10 14:32:48 2013 [ReM][D]: Req:4064 UID:0 ImageInfo result
SUCCESS, "37 [base64-encoded image/datastore XML truncated]

Re: [one-users] File system performance testing suite tailored to OpenNebula

2013-09-11 Thread Carlo Daffara
It's difficult to provide an indication of what a typical workload may be, as 
it depends greatly on the
I/O properties of the VMs that run inside (we found the "internal" load of 
OpenNebula itself to be basically negligible).
For example, if you have lots of sequential I/O heavy VMs you may get benefits 
from one kind, while transactional and random I/O VMs may be more suitably 
served by other file systems.
We tend to use fio for benchmarks (http://freecode.com/projects/fio) that is 
included in most linux distributions; it provides for flexible selection of 
read-vs-write patterns, can select different probability distributions and 
includes a few common presets (like file server, mail server etc.)
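As a concrete starting point, a fio job file for the transactional (4k random write) case might look like the sketch below; the values are illustrative and should be tuned to the datastore under test:

```ini
; 4k random-write job, roughly mimicking a database-style guest workload
[global]
ioengine=libaio   ; async I/O, similar to what a KVM/virtio guest issues
direct=1          ; bypass the page cache to measure the device/filesystem
runtime=60
time_based

[randwrite-4k]
rw=randwrite
bs=4k
size=2g
numjobs=4
iodepth=16
```

Run it with `fio jobfile.fio` on each candidate filesystem and compare the reported IOPS.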
Selecting the bottom file system for the store thus depends heavily on 
application, features and load. For example, in some configurations we use 
BTRFS with compression (slow rotational devices, especially when there are 
several of them in parallel), in others ext4 (a good, all-around balanced 
choice) and in others XFS. XFS, for example, supports filesystem replication 
in a way similar to that of zfs (not as sophisticated, though), with 
excellent performance for multiple parallel I/O operations.
ZFS in our tests tends to be extremely slow outside of a few "sweet spots"; a 
fact confirmed by external benchmarks like this one:
http://www.phoronix.com/scan.php?page=article&item=zfs_linux_062&num=3 We tried 
it (and we continue to do so, both for the FUSE and native kernel version) but 
for the moment the performance hit is excessive despite the nice feature set. 
BTRFS continues to improve nicely, and a set of patches to implement 
send/receive like ZFS are here: 
https://btrfs.wiki.kernel.org/index.php/Design_notes_on_Send/Receive but it is 
still marked as experimental.

I personally *love* ZFS, and the feature set is unparalleled. Unfortunately, 
the poor license choice means that it never got the kind of hammering and 
tuning that other linux kernel filesystems get.
regards,
carlo daffara
cloudweavers

- Original message -
From: "Gerry O'Brien" 
To: "Users OpenNebula" 
Sent: Wednesday, 11 September 2013 13:16:52
Subject: [one-users] File system performance testing suite tailored to 
OpenNebula

Hi,

 Are there any recommendations for a file system performance testing 
suite tailored to OpenNebula typical workloads? I would like to compare 
the performance of zfs v. ext4. One of the reasons for considering zfs 
is that it allows replication to a remote site using snapshot streaming. 
Normal nightly backups, using something like rsync, are not suitable for 
virtual machine images where a single block change means the whole image 
has to be copied. The amount of change is too great.

 On a related issue, does it make sense to have datastores 0 and 1 
in a single file system so that the instantiation of non-persistent 
images does not require a copy from one file system to another? I have 
in mind the case where the original image is a qcow2 image.

 Regards,
 Gerry

-- 
Gerry O'Brien

Systems Manager
School of Computer Science and Statistics
Trinity College Dublin
Dublin 2
IRELAND

00 353 1 896 1341



Re: [one-users] oneimage QCOW2 problem: Error copying image in the datastore: Not allowed to copy image file

2013-09-11 Thread Carlos Martín Sánchez
Hi,

On Wed, Sep 11, 2013 at 1:06 PM, Gerry O'Brien  wrote:

> Hi Carlos,
>
> I appreciate the security issues. I'm just wondering why
> /var/lib/one/datastores is not a safe directory by default given it is the
> default location for datastores?
>

Oneadmin's home /var/lib/one is restricted by default, because it contains
the one_auth file, the database one.db... And /var/lib/one/datastores must
also be restricted, because a user should not be able to copy another
registered image in there. I hope this makes sense.
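For reference, these paths are driven by the RESTRICTED_DIRS and SAFE_DIRS attributes of the filesystem datastore configuration in oned.conf (4.2-era; the staging path below is hypothetical, so check the fs_ds guide before relying on this sketch):

```
# Paths users may NOT register images from (protects one_auth, one.db, ...)
RESTRICTED_DIRS = "/var/lib/one/ /etc/one/"

# Explicit exceptions that users may register images from
SAFE_DIRS = "/var/tmp /var/lib/one/staging"
```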

Cheers
--
Join us at OpenNebulaConf2013  in Berlin, 24-26
September, 2013
--
Carlos Martín, MSc
Project Engineer
OpenNebula - The Open-source Solution for Data Center Virtualization
www.OpenNebula.org  | cmar...@opennebula.org |
@OpenNebula  



> Regards,
> Gerry
>
>
>
> On 11/09/2013 11:51, Carlos Martín Sánchez wrote:
>
>> Hi,
>>
>> Tue Sep 10 14:32:48 2013 [ImM][E]: cp: Not allowed to copy images from
>>
>>> /var/lib/one/ /etc/one/ /var/lib/one/
>>>
>>
>> The dir /var/lib/one is a restricted dir, and OpenNebula won't allow you
>> to
>> copy images from there. Otherwise, you could copy the DB or other
>> authentication files. That's why it works from /datastores.
>>
>> See [1] for more information.
>>
>> Best regards.
>>
>> [1]
>> http://opennebula.org/documentation:rel4.2:fs_ds#configuring_the_filesystem_datastores
>>
>>
>> --
>> Join us at OpenNebulaConf2013  in Berlin,
>> 24-26
>>
>> September, 2013
>> --
>> Carlos Martín, MSc
>> Project Engineer
>> OpenNebula - The Open-source Solution for Data Center Virtualization
>> www.OpenNebula.org | cmar...@opennebula.org |
>> @OpenNebula
>>
>>
>>
>> On Tue, Sep 10, 2013 at 4:59 PM, Gerry O'Brien  wrote:
>>
>>  Hi,
>>>
>>>  This seems to be a general issue not specific to QCOW2. For the
>>> moment
>>> I've solved the issue by mounting the datastores (which are NFS exports
>>> for
>>> a filestore) on the root partition at /datastores and created a symlink
>>> from /var/lib/one/datastores to /datastores.
>>>
>>>   Is this correct?
>>>
>>>  Gerry
>>>
>>>
>>> On 10/09/2013 14:38, Gerry O'Brien wrote:
>>>
>>>  Hi,

  I get the following error when trying to create an image from a
 QCOW2
 file:"Error copying image in the datastore: Not allowed to copy
 image
 file /var/lib/one/datastores/1/DELETEME.qcow2"

  Below are the commands I use to create the QCOW2 file before trying
 to create the image named DELETEME using oneimage. The QCOW2 file has
 been created with a backing file.

  This used to work in OpenNebula 3. I have made sure the user oneadmin
 is also in the cloud group in case it is some kind of permissions issue.

  Any ideas?

  Regards,
  Gerry



 qemu-img create -f qcow2 -o \
   backing_file=/var/lib/one/datastores/1/e1e1735dada84a7c6290001b9a244ebe \
   /var/lib/one/datastores/1/DELETEME.qcow2

 qemu-img info /var/lib/one/datastores/1/DELETEME.qcow2
 image: /var/lib/one/datastores/1/DELETEME.qcow2
 file format: qcow2
 virtual size: 50G (53687091200 bytes)
 disk size: 12K
 cluster_size: 65536
 backing file: /var/lib/one/datastores/1/e1e1735dada84a7c6290001b9a244ebe



 ls -la /var/lib/one/datastores/1/DELETEME.qcow2

 -rw-r--r-- 1 oneadmin oneadmin 197632 Sep 10 13:27
 /var/lib/one/datastores/1/DELETEME.qcow2


   oneimage create -d default --name DELETEME  --path
 /var/lib/one/datastores/1/DELETEME.qcow2 --prefix hd --type OS

 --driver qcow2 --persistent






 Below is a similar error message when using the sunstone GUI


 Tue Sep 10 14:32:48 2013 [ImM][I]: Copying /var/lib/one/datastores/1/VlabC_1.qcow2

 to repository for image 37
 Tue Sep 10 14:32:48 2013 [ReM][D]: Req:7232 UID:0 ImageAllocate result
 SUCCESS, 37
 Tue Sep 10 14:32:48 2013 [ReM][D]: Req:4064 UID:0 ImageInfo invoked, 37
 Tue Sep 10 14:32:48 2013 [ReM][D]: Req:4064 UID:0 ImageInfo result
 SUCCESS, "37..."
 Tue Sep 10 14:32:48 2013 [ImM][I]: Command execution fail:
 /var/lib/one/remotes/datastore/fs/cp [base64-encoded driver action data omitted]
[one-users] File system performance testing suite tailored to OpenNebula

2013-09-11 Thread Gerry O'Brien

Hi,

Are there any recommendations for a file system performance testing 
suite tailored to OpenNebula typical workloads? I would like to compare 
the performance of zfs v. ext4. One of the reasons for considering zfs 
is that it allows replication to a remote site using snapshot streaming. 
Normal nightly backups, using something like rsync, are not suitable for 
virtual machine images, where a single block change means the whole image 
has to be copied. The amount of change is too great.


On a related issue, does it make sense to have datastores 0 and 1 
in a single file system, so that the instantiation of non-persistent 
images does not require a copy from one file system to another? I have 
in mind the case where the original image is a qcow2 image.


Regards,
Gerry

--
Gerry O'Brien

Systems Manager
School of Computer Science and Statistics
Trinity College Dublin
Dublin 2
IRELAND

00 353 1 896 1341

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] oneimage QCOW2 problem: Error copying image in the datastore: Not allowed to copy image file

2013-09-11 Thread Gerry O'Brien

Hi Carlos,

I appreciate the security issues. I'm just wondering why 
/var/lib/one/datastores is not a safe directory by default given it is 
the default location for datastores?


Regards,
Gerry


On 11/09/2013 11:51, Carlos Martín Sánchez wrote:

Hi,

Tue Sep 10 14:32:48 2013 [ImM][E]: cp: Not allowed to copy images from

/var/lib/one/ /etc/one/ /var/lib/one/


The dir /var/lib/one is a restricted dir, and OpenNebula won't allow you to
copy images from there. Otherwise, you could copy the DB or other
authentication files. That's why it works from /datastores.

See [1] for more information.

Best regards.

[1]
http://opennebula.org/documentation:rel4.2:fs_ds#configuring_the_filesystem_datastores


--
Join us at OpenNebulaConf2013  in Berlin, 24-26
September, 2013
--
Carlos Martín, MSc
Project Engineer
OpenNebula - The Open-source Solution for Data Center Virtualization
www.OpenNebula.org | cmar...@opennebula.org |
@OpenNebula


On Tue, Sep 10, 2013 at 4:59 PM, Gerry O'Brien  wrote:


Hi,

 This seems to be a general issue not specific to QCOW2. For the moment
I've solved the issue by mounting the datastores (which are NFS exports for
a filestore) on the root partition at /datastores and created a symlink
from /var/lib/one/datastores to /datastores.

  Is this correct?

 Gerry


On 10/09/2013 14:38, Gerry O'Brien wrote:


Hi,

 I get the following error when trying to create an image from a QCOW2
file: "Error copying image in the datastore: Not allowed to copy image
file /var/lib/one/datastores/1/DELETEME.qcow2"
 Below are the commands I use to create the QCOW2 file before trying
to create the image named DELETEME using oneimage. The QCOW2 file has
been created with a backing file.

 This used to work in OpenNebula 3. I have made sure the user oneadmin
is also in the cloud group in case it is some kind of permissions issue.

 Any ideas?

 Regards,
 Gerry



qemu-img create -f qcow2 -o \
  backing_file=/var/lib/one/datastores/1/e1e1735dada84a7c6290001b9a244ebe \
  /var/lib/one/datastores/1/DELETEME.qcow2

qemu-img info /var/lib/one/datastores/1/DELETEME.qcow2
image: /var/lib/one/datastores/1/DELETEME.qcow2
file format: qcow2
virtual size: 50G (53687091200 bytes)
disk size: 12K
cluster_size: 65536
backing file: /var/lib/one/datastores/1/e1e1735dada84a7c6290001b9a244ebe



ls -la /var/lib/one/datastores/1/DELETEME.qcow2
-rw-r--r-- 1 oneadmin oneadmin 197632 Sep 10 13:27
/var/lib/one/datastores/1/DELETEME.qcow2

  oneimage create -d default --name DELETEME --path
/var/lib/one/datastores/1/DELETEME.qcow2 --prefix hd --type OS
--driver qcow2 --persistent






Below is a similar error message when using the sunstone GUI


Tue Sep 10 14:32:48 2013 [ImM][I]: Copying /var/lib/one/datastores/1/VlabC_1.qcow2
to repository for image 37
Tue Sep 10 14:32:48 2013 [ReM][D]: Req:7232 UID:0 ImageAllocate result
SUCCESS, 37
Tue Sep 10 14:32:48 2013 [ReM][D]: Req:4064 UID:0 ImageInfo invoked, 37
Tue Sep 10 14:32:48 2013 [ReM][D]: Req:4064 UID:0 ImageInfo result
SUCCESS, "37..."
Tue Sep 10 14:32:48 2013 [ImM][I]: Command execution fail:
/var/lib/one/remotes/datastore/fs/cp [base64-encoded driver action data omitted] 37
Tue Sep 10 14:32:48 2013 [ImM][E]: cp: Not allowed to copy images from
/var/lib/one/ /etc/one/ /var/lib/one/
Tue Sep 10 14:32:48 2013 [ImM][E]: Not allowed to copy image file
/var/lib/one/datastores/1/VlabC_1.qcow2
Tue Sep 10 14:32:48 2013 [ImM][I]: ExitCode: 255
Tue Sep 10 14:32:48 2013 [ImM][E]: Error copying image in the datastore:
Not allowed to copy image file /var/lib/one/datastores/1/VlabC_1.qcow2











--
Gerry O'Brien

Systems Manager
School of Computer Science and Statistics

Re: [one-users] oneimage QCOW2 problem: Error copying image in the datastore: Not allowed to copy image file

2013-09-11 Thread Carlos Martín Sánchez
Hi,

Tue Sep 10 14:32:48 2013 [ImM][E]: cp: Not allowed to copy images from
> /var/lib/one/ /etc/one/ /var/lib/one/


The dir /var/lib/one is a restricted dir, and OpenNebula won't allow you to
copy images from there. Otherwise, you could copy the DB or other
authentication files. That's why it works from /datastores.

See [1] for more information.

Best regards.

[1]
http://opennebula.org/documentation:rel4.2:fs_ds#configuring_the_filesystem_datastores


--
Join us at OpenNebulaConf2013  in Berlin, 24-26
September, 2013
--
Carlos Martín, MSc
Project Engineer
OpenNebula - The Open-source Solution for Data Center Virtualization
www.OpenNebula.org | cmar...@opennebula.org |
@OpenNebula


On Tue, Sep 10, 2013 at 4:59 PM, Gerry O'Brien  wrote:

> Hi,
>
> This seems to be a general issue not specific to QCOW2. For the moment
> I've solved the issue by mounting the datastores (which are NFS exports for
> a filestore) on the root partition at /datastores and created a symlink
> from /var/lib/one/datastores to /datastores.
>
>  Is this correct?
>
> Gerry
>
>
> On 10/09/2013 14:38, Gerry O'Brien wrote:
>
>> Hi,
>>
>> I get the following error when trying to create an image from a QCOW2
>> file: "Error copying image in the datastore: Not allowed to copy image
>> file /var/lib/one/datastores/1/DELETEME.qcow2"
>> Below are the commands I use to create the QCOW2 file before trying
>> to create the image named DELETEME using oneimage. The QCOW2 file has
>> been created with a backing file.
>>
>> This used to work in OpenNebula 3. I have made sure the user oneadmin
>> is also in the cloud group in case it is some kind of permissions issue.
>>
>> Any ideas?
>>
>> Regards,
>> Gerry
>>
>>
>>
>> qemu-img create -f qcow2 -o \
>>   backing_file=/var/lib/one/datastores/1/e1e1735dada84a7c6290001b9a244ebe \
>>   /var/lib/one/datastores/1/DELETEME.qcow2
>>
>> qemu-img info /var/lib/one/datastores/1/DELETEME.qcow2
>> image: /var/lib/one/datastores/1/DELETEME.qcow2
>> file format: qcow2
>> virtual size: 50G (53687091200 bytes)
>> disk size: 12K
>> cluster_size: 65536
>> backing file: /var/lib/one/datastores/1/e1e1735dada84a7c6290001b9a244ebe
>>
>>
>>
>> ls -la /var/lib/one/datastores/1/DELETEME.qcow2
>> -rw-r--r-- 1 oneadmin oneadmin 197632 Sep 10 13:27
>> /var/lib/one/datastores/1/DELETEME.qcow2
>>
>>  oneimage create -d default --name DELETEME --path
>> /var/lib/one/datastores/1/DELETEME.qcow2 --prefix hd --type OS
>> --driver qcow2 --persistent
>>
>>
>>
>>
>>
>>
>> Below is a similar error message when using the sunstone GUI
>>
>>
>> Tue Sep 10 14:32:48 2013 [ImM][I]: Copying /var/lib/one/datastores/1/VlabC_1.qcow2
>> to repository for image 37
>> Tue Sep 10 14:32:48 2013 [ReM][D]: Req:7232 UID:0 ImageAllocate result
>> SUCCESS, 37
>> Tue Sep 10 14:32:48 2013 [ReM][D]: Req:4064 UID:0 ImageInfo invoked, 37
>> Tue Sep 10 14:32:48 2013 [ReM][D]: Req:4064 UID:0 ImageInfo result
>> SUCCESS, "37..."
>> Tue Sep 10 14:32:48 2013 [ImM][I]: Command execution fail:
>> /var/lib/one/remotes/datastore/fs/cp [base64-encoded driver action data omitted]

Re: [one-users] Fwd: how running vms moved(not recreate) on another host on host error

2013-09-11 Thread Carlos Martín Sánchez
Hi,

What exactly do you mean by "move"? If you are referring to migration,
that's not possible: once a host goes down, the VM state is lost.
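[Archive note, not part of Carlos's reply: what OpenNebula does offer for this case is automatic *redeployment* via the fault-tolerance host hook shipped under /var/lib/one/remotes/hooks/ft. A hedged sketch of the oned.conf entry as documented for the 4.x series; verify the arguments against your version's docs before enabling.]

```
HOST_HOOK = [
    name      = "error",
    on        = "ERROR",
    command   = "ft/host_error.rb",
    arguments = "$ID -r",   # -r: delete and re-create the VMs from the failed host
    remote    = "no" ]
```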

Regards

--
Join us at OpenNebulaConf2013  in Berlin, 24-26
September, 2013
--
Carlos Martín, MSc
Project Engineer
OpenNebula - The Open-source Solution for Data Center Virtualization
www.OpenNebula.org | cmar...@opennebula.org |
@OpenNebula


On Tue, Sep 10, 2013 at 11:47 PM, Romany Nageh wrote:

> Hi,
> I am using OpenNebula 4.2. How do I handle VMs running on a specific host
> so that they move (not recreate) to another host when the host has an
> error (is down)?
>
> Please, could anyone help me?
> Thanks
>
> -- Forwarded message --
> From: "Romany Nageh" 
> Date: Sep 9, 2013 9:46 PM
> Subject: how running vms moved(not recreate) on another host on host error
> To: , "Carlos Martín Sánchez" <
> cmar...@opennebula.org>
>
> Hi,
> I am using OpenNebula 4.2.
> How do I handle VMs running on a specific host so that they move (not
> recreate) to another host when the host has an error (is down)?
>
> Please, could anyone help me?
>
>
> ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
>
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Re: Question about attach_disk on one-3.8.4

2013-09-11 Thread Carlos Martín Sánchez
Hi Sam,

As I said, that's something you need to configure in your guest. Did you
try 王根意's suggestion?

Regards

--
Join us at OpenNebulaConf2013  in Berlin, 24-26
September, 2013
--
Carlos Martín, MSc
Project Engineer
OpenNebula - The Open-source Solution for Data Center Virtualization
www.OpenNebula.org | cmar...@opennebula.org |
@OpenNebula


On Wed, Sep 11, 2013 at 8:51 AM, Sam Song  wrote:

> Hi, Carlos:
>
> Thanks for your reply.
>
> I understand that there is no need to reboot the guest. But is there a
> method to trigger the whole thing to happen automatically in the guest OS?
>
> Thanks
>
> Sam
>
>
> *From:* Carlos Martín Sánchez [mailto:cmar...@opennebula.org]
> *Sent:* September 9, 2013 17:34
> *To:* Sam Song
> *Cc:* users
> *Subject:* Re: [one-users] Question about attach_disk on one-3.8.4
>
>
> Hi,
>
> Once the disk is attached, it is up to the guest OS to detect and mount
> it. In Ubuntu, you can rescan the SCSI bus with the command:
>
> echo "- - -" > /sys/class/scsi_host/host0/scan
>
> You may need to change host0 to another id.
>
> Regards.
>
>
> 
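[Archive note, not from the original mail: since the right hostN id differs per guest, a small loop avoids the guesswork. A hedged sketch; `rescan_scsi` takes the sysfs base directory as a parameter only so it can be exercised against a fake tree.]

```shell
#!/bin/sh
# Rescan every SCSI host bus so a newly attached disk appears in the guest.
# On a real guest, run as root with no argument: rescan_scsi
rescan_scsi() {
    base="${1:-/sys/class/scsi_host}"
    for scan in "$base"/host*/scan; do
        [ -e "$scan" ] || continue   # glob did not match: no SCSI hosts
        echo "- - -" > "$scan"       # wildcard channel/target/lun rescan
    done
}
```

The "- - -" string is the kernel's wildcard for channel, target and LUN, so each host controller performs a full rescan.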
>
> --
> Join us at OpenNebulaConf2013  in Berlin,
> 24-26 September, 2013
> --
>
> Carlos Martín, MSc
> Project Engineer
> OpenNebula - The Open-source Solution for Data Center Virtualization
>
> www.OpenNebula.org | cmar...@opennebula.org | 
> @OpenNebula
> 
>
>
> On Sat, Sep 7, 2013 at 2:48 AM, Sam Song  wrote:
>
> Hi, folks:
>
> When I attach a datablock-type image to a running VM, I need to reboot the
> VM to find the new device and mount it manually.
> I want to know: how do you attach an image to a VM and make it ready to use?
> Is there a design in OpenNebula to enable mounting the attached disk
> automatically?
>
> My environment:
> Hypervisor: kvm
> Guest os: Ubuntu 12.04, centos 6.3, rhel 6.4
> One version: 3.8.4
> Ds_mad: fs
> Tm_mad:shared
>
> Thanks
>
> sam
>
> ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
>
> ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
>
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] about radius AUTH with opennebula

2013-09-11 Thread Jonathan Chen
Hi everyone,

I tried to write a RADIUS auth driver for OpenNebula.

I hope the driver helps more people.

The driver's code still needs improvement.

Shared with the OpenNebula community.


You need to install the rubygem ruby-radius-1.1 first.


authenticate
Description: Binary data


radius_auth.conf
Description: Binary data


radius_auth.rb
Description: Binary data
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] oned cannot start after upgrading to opennebula 3.6

2013-09-11 Thread Lukman Fikri
Hello, 

Previously, I already had OpenNebula 3.4.1 installed on my machine.
I wanted to upgrade it to version 3.6, so I downloaded the tarball from 
http://downloads.opennebula.org/
I extracted it to a certain user's home directory (not the oneadmin home directory),
then executed ./install.sh as the oneadmin user.
However, I cannot start the oned process now:
oneadmin@cloud1:/home/lukmanf$ one start
Could not open log file
Could not open log file
oned failed to start
scheduler failed to start
oneadmin@cloud1:/home/lukmanf$ onehost list
ONE_AUTH file not present


Could you tell me what went wrong, or what mistake may have happened?
Thank you in advance,
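[Archive note, not from the original mail: the two errors point at the usual suspects for a self-contained install: oned cannot write its log under $ONE_LOCATION/var, and the CLI cannot find the auth file ($ONE_AUTH, defaulting to ~/.one/one_auth). A hedged sketch of the checks; paths are the usual defaults, not taken from Lukman's setup.]

```shell
#!/bin/sh
# check_one_env mirrors the two failures above: an unwritable log dir
# ("Could not open log file") and a missing auth file ("ONE_AUTH file
# not present"). Both paths are passed in so the checks are testable.
check_one_env() {
    loc="$1"; auth="$2"
    if [ ! -d "$loc/var" ] || [ ! -w "$loc/var" ]; then
        echo "log dir $loc/var missing or not writable"
        return 1
    fi
    if [ ! -f "$auth" ]; then
        echo "auth file $auth not present"
        return 1
    fi
    echo "environment looks ok"
}
```

A typical call for a self-contained install would be: `check_one_env "$ONE_LOCATION" "${ONE_AUTH:-$HOME/.one/one_auth}"`.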

-
Lukman Fikri
  ___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org