[one-users] listing images and their VMs

2014-02-05 Thread João Pagaime

Hello all

I'm trying to put together a very simple Ruby script that outputs 
OpenNebula's images and their VMs


Listing OpenNebula's images works fine, but I get an 
error when trying, for each image, to list its VMs


According to the 4.2 API documentation, the Image class should have a 
vms list/array member, but I get an error when I try to use it


Here's the script; the offending statement is the img.vms call. Any help would be 
appreciated


thanks
joão

-- (adapted from the OpenNebula examples)


#!/usr/bin/env ruby

##
# Environment Configuration
##
ONE_LOCATION = ENV["ONE_LOCATION"]

if !ONE_LOCATION
    RUBY_LIB_LOCATION = "/usr/lib/one/ruby"
else
    RUBY_LIB_LOCATION = ONE_LOCATION + "/lib/ruby"
end

$: << RUBY_LIB_LOCATION

##
# Required libraries
##
require 'opennebula'

include OpenNebula

# OpenNebula credentials
CREDENTIALS = "oneadmin:(REMOVED)"
# XML_RPC endpoint where OpenNebula is listening
ENDPOINT = "http://localhost:2633/RPC2"

client = Client.new(CREDENTIALS, ENDPOINT)

img_pool = ImagePool.new(client, -1)

rc2 = img_pool.info
if OpenNebula.is_error?(rc2)
 puts rc2.message
 exit -1
end

img_pool.each do |img|
    puts "img id=#{img.id},#{img.name}"
    img.vms.each do |vm|
    end
end


---
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] listing images and their VMs

2014-02-05 Thread João Pagaime

thanks, your answer was very useful!

--joão

On 05-02-2014 11:38, Javier Fontan wrote:

img does not have a method called vms. You can get the IDs of the
VMs using the retrieve_elements method [1]:

vm_ids = img.retrieve_elements('VMS/ID')

As an example you can check the code for oneimage show [2].

[1] 
http://docs.opennebula.org/doc/4.4/oca/ruby/OpenNebula/XMLElement.html#retrieve_elements-instance_method
[2] 
https://github.com/OpenNebula/one/blob/one-4.4/src/cli/one_helper/oneimage_helper.rb#L301
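
For reference, a minimal sketch of how the loop from the original script could
apply this suggestion (the output format is illustrative; retrieve_elements
returns nil when an image has no VMs):

    img_pool.each do |img|
        puts "img id=#{img.id},#{img.name}"

        # Array of VM ID strings from the image's <VMS> element, or nil if empty
        vm_ids = img.retrieve_elements('VMS/ID') || []

        vm_ids.each do |vm_id|
            puts "    vm id=#{vm_id}"
        end
    end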








___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] File system performance testing suite tailored to OpenNebula

2013-09-11 Thread João Pagaime

Hello all,

the topic is very interesting

I wonder if anyone could answer this:

what is the penalty of using a file-system on top of a file-system? That 
is what happens when the VM disk is a regular file on the hypervisor's 
filesystem. I mean: the VM has its own file-system, and the 
hypervisor then maps that VM disk onto a regular file on another filesystem 
(the hypervisor's filesystem). Hence the file-system-on-top-of-a-file-system 
issue


Putting the question the other way around: what is the benefit of using 
a raw disk device (local disk, LVM, iSCSI, ...) as an OpenNebula datastore?


I haven't tested this, but I feel the benefit should be substantial

Anyway, simple bonnie++ tests within a VM show heavy penalties when comparing 
a test running in the VM and outside it (directly on the hypervisor). That 
isn't, of course, an OpenNebula-related performance issue, but a more 
general technology challenge


best regards,
João




On 11-09-2013 13:10, Gerry O'Brien wrote:

Hi Carlo,

  Thanks for the reply. I should really look at XFS for the 
replication and performance.


  Do you have any thoughts on my second question about qcow2 copies 
from /datastores/1 to /datastores/0 in a single filesystem?


Regards,
  Gerry


On 11/09/2013 12:53, Carlo Daffara wrote:
It's difficult to provide an indication of what a typical workload 
may be, as it depends greatly on the
I/O properties of the VMs that run inside (we found the 
internal load of OpenNebula itself to be basically negligible).
For example, if you have lots of sequential I/O heavy VMs you may get 
benefits from one kind, while transactional and random I/O VMs may be 
more suitably served by other file systems.
We tend to use fio for benchmarks (http://freecode.com/projects/fio) 
that is included in most linux distributions; it provides for 
flexible selection of read-vs-write patterns, can select different 
probability distributions and includes a few common presets (like 
file server, mail server etc.)
Selecting the backing file system for the store is thus highly 
dependent on application, features and load. For example, in 
some configurations we use BTRFS with compression (slow rotational devices, 
especially when there are several of them in parallel), in others we 
use ext4 (a good, all-around balanced choice) and in others XFS. For example, 
XFS supports filesystem replication in a way similar to that of ZFS 
(not as sophisticated, though), with excellent performance for multiple 
parallel I/O operations.
ZFS in our tests tends to be extremely slow outside of a few sweet 
spots; a fact confirmed by external benchmarks like this one:
http://www.phoronix.com/scan.php?page=article&item=zfs_linux_062&num=3
We tried it (and we continue to do so, both for the FUSE and native 
kernel versions) but for the moment the performance hit is excessive 
despite the nice feature set. BTRFS continues to improve nicely, and a 
set of patches to implement send/receive like ZFS is here: 
https://btrfs.wiki.kernel.org/index.php/Design_notes_on_Send/Receive 
but it is still marked as experimental.


I personally *love* ZFS, and the feature set is unparalleled. 
Unfortunately, the poor license choice means that it never got the 
kind of hammering and tuning that other Linux kernel filesystems get.

regards,
carlo daffara
cloudweavers

- Original message -
From: Gerry O'Brien ge...@scss.tcd.ie
To: Users OpenNebula users@lists.opennebula.org
Sent: Wednesday, 11 September 2013 13:16:52
Subject: [one-users] File system performance testing suite tailored 
to OpenNebula


Hi,

  Are there any recommendations for a file system performance 
testing

suite tailored to OpenNebula typical workloads? I would like to compare
the performance of zfs v. ext4. One of the reasons for considering zfs
is that it allows replication to a remote site using snapshot streaming.
Normal nightly backups, using something like rsync, are not suitable for
virtual machine images where a single block change means the whole image
has to be copied. The amount of change is too great.

  On a related issue, does it make sense to have datastores 0 and 1
in a single file system, so that instantiating non-persistent
images does not require a copy from one file system to another? I have
in mind the case where the original image is a qcow2 image.

  Regards,
  Gerry






___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] File system performance testing suite tailored to OpenNebula

2013-09-11 Thread João Pagaime

thanks  for pointing out the paper

I've glanced at it, and it somewhat confirmed my impressions about write 
operations (which are very relevant in transactional environments): the 
penalty on write operations doesn't seem to be negligible.


best regards,
João

On 11-09-2013 14:55, Carlo Daffara wrote:

Not a simple answer; however, this article by Le and Huang provides quite some 
detail:
https://www.usenix.org/legacy/event/fast12/tech/full_papers/Le.pdf
we ended up using ext4 and xfs mainly, with btrfs for mirrored disks or for 
very slow rotational media.
Raw is good if you are able to map disks directly and you don't change them, 
but our results find that the difference is not that great, while the 
inconvenience is major :-)
When using kvm and virtio, the actual loss in I/O performance is not very high 
for the majority of workloads. Windows is a separate issue: NTFS has very poor 
performance on small blocks for sparse writes, and this tends to increase the 
apparent inefficiency of kvm.
Actually, using the virtio device drivers the penalty is very small for most 
workloads; we tested a Windows 7 machine both natively (physical) and 
virtualized, using a simple CrystalMark test, and we found that with virtio the 
4k random write test is just 15% slower, while the sequential ones are much 
faster virtualized (thanks to the Linux native page cache).
For intensive I/O workloads we use a combination of a single SSD plus one or 
more rotational disks, combined using EnhanceIO.
We observed an 8x increase in available IOPS for random writes (especially 
important for database servers, AD machines...) using consumer-grade 
SSDs.
cheers,
Carlo Daffara
cloudweavers


[one-users] howto show VM password to a sunstone regular user

2012-11-09 Thread João Pagaime
Hello all,

Is there some way to show the initial VM password to a regular Sunstone user?

One way would be to insert a specific template variable in the VM
template, but the XML-RPC API doesn't have an update operation.

Are there other ways of passing that information to the user? Sending it to the
VM log with a create-VM hook? Sending an email?
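
A rough sketch of the create-VM-hook idea, for illustration only (the hook
wiring, the VM_PASSWORD variable and the log path are assumptions, not a
tested recipe):

    #!/usr/bin/env ruby
    # Hypothetical hook, registered in oned.conf as something like:
    #   VM_HOOK = [ name = "passwd_to_log", on = "CREATE",
    #               command = "passwd_to_log.rb", arguments = "$TEMPLATE" ]
    # $TEMPLATE arrives as the VM's XML, base64-encoded.
    require 'base64'
    require 'rexml/document'

    xml = REXML::Document.new(Base64.decode64(ARGV[0]))

    vm_id  = xml.elements['VM/ID'].text
    # VM_PASSWORD is an assumed custom variable added to the VM template
    passwd = xml.elements['VM/TEMPLATE/VM_PASSWORD'] ||
             xml.elements['VM/USER_TEMPLATE/VM_PASSWORD']

    exit 0 if passwd.nil?

    # Append it to the VM's log so the owner can read it from the log tab
    File.open("/var/log/one/#{vm_id}.log", 'a') do |f|
        f.puts "#{Time.now} [HOOK] initial VM password: #{passwd.text}"
    end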

best regards
João
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] additional parameters for the DISK section

2012-10-18 Thread João Pagaime
thanks, that may be it (SCSI doesn't work because of lacking support
from the guest OS; IDE didn't work due to a target-identifier collision with
the cd-rom, which I've fixed; and virtio isn't supported on the old guest
OS)

--
João

On Wed, Oct 17, 2012 at 2:25 PM, André Monteiro andre.mont...@gmail.com wrote:
 Hello João,

 Be aware that some OSes don't have a libvirt version that supports SCSI, like
 RHEL and CentOS.

 --
 André Monteiro




 On Wed, Oct 17, 2012 at 2:19 PM, João Pagaime joao.paga...@gmail.com
 wrote:

 Hello Ruben

  got it to work with IDE, with 3 disks (the remaining IDE device is the
  cd-rom, and that was the problem: the cd-rom and one of the disks were
  both using the hdc target hint)

 couldn't make it work with SCSI

 thanks
 João




___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] additional parameters for the DISK section

2012-10-17 Thread João Pagaime
Hello Ruben

that's right  TARGET doesn't seem to be working (at least with SCSI)

here's a quick example:

- open-nebula template:
..
DISK=[
  DRIVER=raw,
  IMAGE_ID=82,
  READONLY=no,
  TARGET=sda ]
DISK=[
  DRIVER=raw,
  IMAGE_ID=83,
  READONLY=no,
  TARGET=sdb ]

- libvirt generated  template:
..
<disk type='file' device='disk'>
    <source file='/var/lib/one//datastores/0/931/disk.1'/>
    <target dev='sda' bus='scsi'/>
    <driver name='qemu' type='raw' cache='default'/>
</disk>
<disk type='file' device='disk'>
    <source file='/var/lib/one//datastores/0/931/disk.2'/>
    <target dev='sdb' bus='scsi'/>
    <driver name='qemu' type='raw' cache='default'/>
</disk>

-- error:
..
process exited while connecting to monitor: kvm: -device
lsi,id=scsi0,bus=pci.0,addr=0x4: Parameter 'driver' expects a driver
name


best regards,
João





On Wed, Oct 17, 2012 at 10:56 AM, Ruben S. Montero
rsmont...@opennebula.org wrote:
 Hi  João

 Yes you are right, some additional work is needed to get raw content on
 specific sections. RAW will just append the content to the end of the
 generated XML document.

 BTW the section you are adding is related to DISK layout, is TARGET not
 working for you? Something similar is added by libvirt for each disk based
 on the target (when specified)


 Best

 Ruben






 --
 Ruben S. Montero, PhD
 Project co-Lead and Chief Architect
 OpenNebula - The Open Source Solution for Data Center Virtualization
 www.OpenNebula.org | rsmont...@opennebula.org | @OpenNebula
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] additional parameters for the DISK section

2012-10-16 Thread João Pagaime
thanks for the reply

didn't say the full story, sorry...

the extra parameters that I need aren't root parameters, but instead
must be inside a disk section, on libvirt's vm description

I guess the RAW section can't be inside a DISK section

best regards,
João

On Tue, Oct 16, 2012 at 6:29 PM, André Monteiro andre.mont...@gmail.com wrote:
 Hello,

 Just put it in the RAW section:

  RAW = [
    TYPE = "kvm",
    DATA = "<alias name='ide0-1-0'/>
            <address type='drive' controller='0' bus='1' target='0' unit='0'/>" ]

 --
 André Monteiro




 On Tue, Oct 16, 2012 at 6:06 PM, João Pagaime joao.paga...@gmail.com
 wrote:

 Hello all

  I'm trying to start a VM with 2 IDE or SCSI disks on KVM. I can't use
  virtio, since this is a physical-to-virtual migration of an old
  system that doesn't support virtio.

  The problem is that KVM/QEMU gives out an error.

  After simulating the situation with virt-manager, it seems that the
  libvirt XML needs to include the following two extra parameters
  (example):
    <alias name='ide0-1-0'/>
    <address type='drive' controller='0' bus='1' target='0' unit='0'/>

  Is there some way to put this in an OpenNebula template?

 thanks, best regards,
 João


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] auth - stop showing clear password on the logs

2012-05-11 Thread João Pagaime

Hello Ruben,

thanks for the reply

You're right: configuring DEBUG_LEVEL to 0 stopped that behavior 
(showing the clear-text password in the logs)


a few issues more:

-
1- one_auth_mad.rb didn't deal well with passwords (secret) containing 
special characters like $. Surrounding the secret variable 
with the (') character seems to fix that. The code line now looks like 
this:


command << " '" << user.gsub("'", "'\\''") << "' '" <<
           password.gsub("'", "'\\''") << "' '" << secret << "'"


I don't know if there are other places in the code that could use this 
change
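
For illustration, the usual POSIX single-quote trick this relies on, as a tiny
helper (a sketch only, not the actual one_auth_mad.rb code):

    # Wrap a value in single quotes for the shell, escaping any embedded quotes
    # (the block form avoids gsub's backreference handling in the replacement).
    def shell_quote(str)
        "'" + str.gsub("'") { "'\\''" } + "'"
    end

    # shell_quote(%q[pa$s'word])  yields  'pa$s'\''word'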


-
2- In case of a wrong (LDAP) password, Sunstone gives the following 
error message:
OpenNebula is not running or there was a server exception. Please 
check the server logs.


This message is a bit confusing for LDAP users. I suggest another error 
message, like:
"Authentication failure: wrong username or password, or OpenNebula is not 
running, or there was a server exception."
Of course a better error message would be: "Authentication failure: 
(real reason)"


-
3- I would like to set  the DEBUG_LEVEL to 3 again but really don't 
want passwords going to the logs. Is this possible? Where should I tune 
the system? one_auth_mad.rb? The run shell command method, filtering 
out the problematic cases? Where can I find the run method?


Cheers,
João


On Fri, 11 May 2012 00:05:34 +0200, Ruben S. Montero wrote:

Hi

You may try to change the verbosity of the DEBUG messages in
oned.conf. DEBUG_LEVEL=0 will only output ERROR messages (those
labeled with [E]). Once you have deployed and tuned the
infrastructure

it may be a good idea to decrease the debug messages to ERROR/WARNING
level.

Cheers

Ruben

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] auth - stop showing clear password on the logs

2012-05-10 Thread João Pagaime

Hello all

could somebody show me where to change OpenNebula so that it stops showing 
clear-text passwords?


probably somewhere on the code...

it is showing clear-text passwords for some cases of Sunstone LDAP auth 
errors (as shown below)


--
Thu May 10 19:20:02 2012 [ReM][D]: UserInfo method invoked
Thu May 10 19:20:02 2012 [AuM][D]: Message received: LOG I 2 Command execution fail: /var/lib/one/remotes/auth/default/authenticate 'USER' '-' PASSWORD
Thu May 10 19:20:02 2012 [AuM][I]: Command execution fail: /var/lib/one/remotes/auth/default/authenticate 'USER' '-' PASSWORD
Thu May 10 19:20:02 2012 [AuM][D]: Message received: LOG I 2 User USER not found
Thu May 10 19:20:02 2012 [AuM][I]: User USER not found
Thu May 10 19:20:02 2012 [AuM][D]: Message received: LOG I 2 ExitCode: 255
Thu May 10 19:20:02 2012 [AuM][I]: ExitCode: 255
Thu May 10 19:20:02 2012 [AuM][D]: Message received: AUTHENTICATE FAILURE 2 -
Thu May 10 19:20:02 2012 [AuM][E]: Auth Error:
Thu May 10 19:20:02 2012 [ReM][E]: [UserInfo] User couldn't be authenticated, aborting call.
--

maybe it would be a good idea to ship the production versions without 
this behavior


cheers
João
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] question about System Datastore on Opennebula 3.4

2012-04-23 Thread João Pagaime

Hello all,

On OpenNebula 3.4, is it possible to configure the System Datastore to 
use the Shared Transfer Driver, and then to use different storage for 
different working nodes?


Example:

- on working-node A, B and C at datacenter X, /var/lib/one/datastores/0/ 
mounts by NFS on server NFS01


- on working-node D, E and F  at datacenter Y, 
/var/lib/one/datastores/0/ mounts by NFS on server NFS02


Does it seem like something that works? Does it break something? 
(besides some cases of live-migration)


Thanks anyway, best regards,
João

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] LDAP and open-nebula / Sunstone?

2012-04-02 Thread João Pagaime

Hello Opennebula users,

we're trying to set up Sunstone to use LDAP authentication on our local 
network, but it isn't working; it looks like we're kind of stuck and 
also have some doubts


It seems that the LDAP configuration is being ignored by Sunstone

we would appreciate any additional pointers...

some main questions:

- does Sunstone work with LDAP authentication?

- is it necessary to add LDAP users' passwords to the OpenNebula 
configuration?  Documentation [1] says this: The user should add its 
credentials to ... in this fashion: user_dn_or_username:user_password


- what debug information should we look for, and where? Where would we 
expect to see LDAP traffic coming out of OpenNebula?


---
more information

==
version: OpenNebula 3.2.1 on CentOS  6.2

==
  /etc/one/auth/ldap_auth.conf

# Ldap user able to query, if not set connects as anonymous
:user: 'one'
:password: '___'

# Ldap authentication method
:auth_method: :simple

# Ldap server
:host: ___
:port: 389

# base hierarchy where to search for users and groups
:base: 'dc=corp,dc=fccn,dc=pt'

# group the users need to belong to. If not set any user will do
:group: 

# field that holds the user name, if not set 'cn' will be used
:user_field: 'cn'

== /etc/one/oned.conf
...
AUTH_MAD = [
    executable = "one_auth_mad",
    arguments  = "--authz quota --authn plain,server_cipher,ssh,x509,ldap,default"
]


[1]
http://opennebula.org/documentation:rel3.2:ldap

thanks,
João

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] (thanks) Re: experiences with distributed FS?

2012-02-09 Thread João Pagaime
• I am busy investigating moosefs slowness on a separate testing cluster 
(and have so far improved bonnie++ rewrite speed from 1.5 MB/s to 
18 MB/s). I should have something I am willing to put back into use as a 
proper shared filesystem soon (I really like using the snapshot 
capabilities to deploy VMs)
• Just to clarify - moosefs is fast in almost all respects, but is slow 
when continually reading from and writing to the same block of a file. 
Unfortunately the workload on some of my VMs seems to hit this case 
fairly often, and that is what I am trying to optimise.

Lustre===
• (MM) Lustre - does not have redundancy. If you use 
file striping between nodes and one node goes offline, all data is 
unavailable.

Gluster===
• (MM) Gluster - does not support KVM virtualization. The lead 
software developer mentioned that it will be fixed in the next release 
(April).
• (HB) Our research project is currently using GlusterFS 
for our distributed NFS storage system.
• We're leveraging the distributed-replica configuration, in which every 
two servers form a replica pair, and all pairs form a distributed 
cluster.
• We do not do data striping, since achieving up-time reliability with 
striping would require too many servers.
• Furthermore, another nice feature of GlusterFS is that you can just 
install it into a VM, clone it a few times, and distribute the clones across 
VMMs. However, we utilize physical servers using RAID-1.

Sheepdog===
• (MM) Sheepdog - works only with images; it does not allow 
storing plain files on it. One image can be connected to only one VM 
at a time.

XtreemFS===
• (MM) XtreemFS - does not have support. If something is not 
working, it is your problem, even if you are ready to pay for a fix.

XtreemFS===
• ---(MB) I'm one of the developers of the distributed 
file system XtreemFS.
• Internally we also run an OpenNebula cluster and use XtreemFS as a shared 
file system there. Since XtreemFS is POSIX compatible, you can use the 
tm_nfs and just point OpenNebula to the mounted POSIX volume.
• ….. So far we haven't seen any problems with using XtreemFS in 
OpenNebula, otherwise we would have fixed it
• Regarding the performance: We did not do any measurements so far. Any 
numbers or suggestions how to benchmark different distributed file 
systems are welcome.
• In XtreemFS the metadata of a volume is stored on one metadata server. 
If you like, you can set it up replicated and then your file system will 
have no single point of failure.
• Regarding support: Although we cannot offer commercial support, we 
provide support through our mailing list and are always eager to help.

• (RW)… you are using FUSE.





On 09-02-2012 10:49, richard -rw- weinberger wrote:

On Thu, Feb 9, 2012 at 11:17 AM, Michael Berlin

Regarding the performance: We did not do any measurements so far. Any
numbers or suggestions how to benchmark different distributed file systems
are welcome.

Hmm, you are using FUSE.
Performance measurements would be really nice to have.

Writing fancy file systems using FUSE is easy. Making them fast and scalable
is a damn hard job and often impossible.




--
João Pagaime
FCCN - Área de Infra-estruturas Aplicacionais
Av. do Brasil, n.º 101 - Lisboa
Telef. +351 218440100  Fax +351 218472167
www.fccn.pt


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] multiple image repositories

2012-02-07 Thread João Pagaime

Hello Ruben,

thanks for the reply

couldn't make it work

for now I'm using tm_ssh, although it probably could be done, maybe with 
a custom TM driver and some tweaking here and there


the situation I'm trying to avoid is moving around GBs of traffic 
between the front-end and clusters of geographically distributed 
hypervisors every time a VM starts or stops (shutdown). With 2 or more 
sites there's probably no perfect place for a single image repository.
The idea was for the front-end to use image repositories near the 
hypervisors


I'll let you know the results of any other tests, if we get around to it

It's great to hear OpenNebula is planning to improve on these issues with 
multiple datastores. Thanks!


best regards,
João



On 07-02-2012 09:32, Ruben S. Montero wrote:

Hi,

Basically, this will work, as the SOURCE attribute is then used by
the rest of the OpenNebula components. Let us know if you find any
problems

This is a very timely comment, as we are now working on this feature
for 3.4. The current Image Repo will evolve into a more complete system
of datastores, so you'll be able to define multiple datastores for
your images (of multiple types: SSH, shared FS or iSCSI-LVM...)

Cheers

Ruben








--
João Pagaime
FCCN - Área de Infra-estruturas Aplicacionais
Av. do Brasil, n.º 101 - Lisboa
Telef. +351 218440100  Fax +351 218472167
www.fccn.pt

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] VM after shutdown

2012-02-07 Thread João Pagaime

Hello all,

is there some configuration available for OpenNebula (and Sunstone) to 
list persistent VMs that have been shut down, instead of having them 
disappear from the onevm list?


as it is configured now, after a VM shutdown I have to do a onevm 
create, or a similar action in Sunstone, to restart the VM. This 
doesn't feel very intuitive (the first impression is that a persistent 
VM has vanished from the system after shutdown, but, of course, that is 
not the case)


thanks anyway,
João

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] multiple image repositories

2012-02-03 Thread João Pagaime

Hello users,

I would like the OpenNebula front-end to use multiple NFS image repositories

What's the proper way of doing this?

one way seems to be (see the sketch below):
- create the image
- move it to the desired location
- do a oneimage update <image> to change the SOURCE variable to the 
new location
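
A quick way to check the result afterwards, sketched with the Ruby OCA
bindings used elsewhere in this archive (credentials and endpoint are
placeholders; the require line follows the newer OCA naming):

    require 'opennebula'
    include OpenNebula

    client   = Client.new("oneadmin:(password)", "http://localhost:2633/RPC2")
    img_pool = ImagePool.new(client, -1)
    img_pool.info

    # Print each image's current SOURCE so the new location can be verified
    img_pool.each do |img|
        puts "#{img.id} #{img.name} => #{img['SOURCE']}"
    end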


I'm using  CentOS-6.0-opennebula-3.2.0-1.x86_64

best regards,
João
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org