Re: [one-users] Shared datastore only to cluster

2013-11-28 Thread Gary S. Cuozzo
Hi, 
I believe you should be able to do this by creating a custom datastore driver. 

I have a use case which is not too different from yours. I created a 'localfs' 
datastore & transfer driver which is able to create images on a specific host, 
utilizing local disk storage (for performance reasons). While your storage is 
actually shared across hosts, it is similar to mine in that the filesystems 
only exist on the host machines and not on the front-end. 

All you should have to do is use a shared SYSTEM datastore and create a 
transfer driver which creates the proper symlinks in the system datastore 
pointing to the image on your OCFS2 filesystem. I believe the iSCSI drivers do 
something similar; you may want to take a look at them. 
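As a rough illustration of that symlink approach, here is a minimal sketch of what such a TM action could do. The function name, argument order, and paths are assumptions for illustration only, not the real OpenNebula driver interface; the stock drivers under /var/lib/one/remotes/tm are the place to check the actual conventions:

```shell
#!/bin/bash
# Sketch of a symlink-based TM action: instead of copying the image into the
# system datastore, link to it on the shared (OCFS2) filesystem, which every
# host can already see. Argument order and paths are assumptions, not the
# stock driver contract.
ln_action() {
    local src="$1"      # image path on the shared FS, e.g. /ocfs2/images/vm.qcow2
    local dst_dir="$2"  # VM dir in the system DS, e.g. /var/lib/one/datastores/0/42

    mkdir -p "$dst_dir"
    # -sf: replace any stale link left over from a previous deployment
    ln -sf "$src" "$dst_dir/disk.0"
}
```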

Hope that helps, 
gary 

- Original Message -

From: "Ruben S. Montero"  

Hi Mário 

I am not really sure I fully understand your question, so let me briefly 
summarize the use of clusters and datastores: 
* There are 2 datastore types: Images (to store VM disks) and System (to run 
the VMs from). Stopped VMs (disks and checkpoint) are stored in the 
System DS. 
* DS can be either shared among all the clusters (assigned to CLUSTER_NONE) or 
assigned to a cluster. 
So if you have a stopped guest that needs to be allocated to a different 
cluster, you should be able to do it as long as all the files are accessible to 
both hosts. Otherwise it may not work (if the VM is using persistent images, for 
example). 


Hope it is clearer now; if not, come back. 

Ruben 


On Wed, Aug 28, 2013 at 7:09 PM, Mário Reis < mario.r...@ivirgula.com > wrote: 



Hi all, 

I'm trying to build an OpenNebula infrastructure, but I guess I'm missing 
something basic about datastores. 

Currently I have 3 Ubuntu hosts virtualizing several KVM guests. I created 
another guest and installed OpenNebula 4.2 on Gentoo in this new guest, 
removed all VMs from the third host, created a cluster in the Sunstone interface, 
and added this host to it. So far so good: the host is monitored and showing 
available CPU/RAM. 

Next step was to "import" a qcow2 image from a stopped guest and use it on this 
host. These files are shared across all hosts using OCFS2 over FC LUNs 
(the OpenNebula guest doesn't actually have access to them). 

So, my problem here is: 

Can I add these image files to a datastore that only exists on the cluster 
hosts? It seems that I can't do this without placing them in the front-end 
guest. 




___ 
Users mailing list 
Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 







-- 
-- 
Join us at OpenNebulaConf2013 in Berlin, 24-26 September, 2013 
-- 
Ruben S. Montero, PhD 
Project co-Lead and Chief Architect 
OpenNebula - The Open Source Solution for Data Center Virtualization 
www.OpenNebula.org | rsmont...@opennebula.org | @OpenNebula 



Re: [one-users] datastore log info missing from oned.log in v4.2

2013-11-26 Thread Gary S. Cuozzo
Then it seems that this has changed since the 3.8.x series.  I use the log_debug 
script function to log various messages in my drivers.  I have my log level set 
to DEBUG in the conf file.  I used to see those messages, as I would expect, unless 
I changed the log level to something above DEBUG.  The messages were logged 
regardless of the outcome of the script.

If that is no longer the case, then the log facility can't be used for anything 
other than error conditions, and having DEBUG, INFO, etc. is not very useful 
anymore.

Maybe I'm missing the point of the feature, but it definitely did not work this 
way before and was something I relied on for certain things.  I'm now 
overriding the log_debug function locally in my script and writing the output 
to my own log file.  That is not as nice because now the log entries are not 
intertwined with other events which are happening in the oned.log file.
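For what it's worth, the local override mentioned above can be a few lines dropped in before the driver logic runs. A sketch, where the log path is an arbitrary choice of mine and the timestamp format just mimics oned.log:

```shell
# Local stand-in for the framework's log_debug helper: append timestamped
# entries to our own file instead of relying on oned to capture stderr.
# LOCAL_LOG is an arbitrary path picked for this sketch.
LOCAL_LOG="${LOCAL_LOG:-/var/log/one/tm_custom.log}"

log_debug() {
    echo "$(date '+%a %b %d %H:%M:%S %Y') [TM][D]: $1" >> "$LOCAL_LOG"
}
```

The obvious trade-off, as noted, is losing the interleaving with the rest of oned.log.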

IMO, any message should be written to the file based on the configured log 
level.

Let me know your thoughts.

Thanks for the reply,
gary

- Original Message -
From: "Javier Fontan" 

STDERR is only logged when the action is not successful, maybe that is
your problem.

On Sat, Nov 23, 2013 at 11:35 PM, Gary S. Cuozzo  wrote:
> Hi All,
> I finally got some time to get back to looking at this.  My original task was 
> to get my custom drivers working with ONE 4.2.  I was able to do so.  It was 
> related to a bug fix where the SRC & DST arguments to pre/postmigrate were 
> swapped.  I simply had to swap them back.  The irony in it is that I was the 
> one that suggested the pre/postmigrate feature request and reported that bug. 
>  ;)
>
> So, I'm happily able to live migrate again.
>
> Oddly though, I am unable to get any log messages to show up in oned.log 
> file.  I'm guessing some of the common script libraries have gotten 
> reorganized, so I need to dig in a bit more on this issue.
>
> Cheers,
> gary
>
> - Original Message -
> From: "Gary S. Cuozzo" 
> To: "Users OpenNebula" 
> Sent: Friday, November 8, 2013 8:53:32 AM
> Subject: Re: [one-users] datastore log info missing from oned.log in v4.2
>
> Is this a new behavior?  I used to see any logged messages up to the level of 
> the log mode in the oned.log file.  It seems like 'debug' mode should log 
> regardless of the exit code of the script.  Even if I changed the logs from 
> log_debug to log_error, still no output.  I'll try forcing exit 1 to see if 
> that causes the logs to show up.
>
> Also, I validated that when I first installed ONE 4.2, the log messages were 
> there, but now are not.  I went back to my logs from a few weeks ago and see 
> all the messages I would expect.  I did not have time to work on this, so the 
> server sat for a few weeks.  Now I'm back to working on it and not seeing the 
> messages.  Very odd.
>
> Will let you know if setting exit 1 helps.
>
> Thanks for the reply,
> gary
>
> - Original Message -
> Normally the transfers are not logged when everything goes ok, that
> is, the script exits with 0.
>
> On Fri, Nov 8, 2013 at 1:17 AM, Gary S. Cuozzo  wrote:
>> Hi All,
>> I have recently upgraded our environment from 3.8.x to 4.2 version.  Running
>> on Ubuntu 12.04LTS.  I have some custom datastore/tm drivers that I wrote
>> which worked well in 3.8.  I am trying to troubleshoot something related to
>> live migration and my log messages are not getting to /var/log/one/oned.log
>> file.  I have logging set to debug in oned.conf.  I see debug level messages
>> for other subsystems, such as [ReM] but nothing related to the datastore or
>> tm.
>>
>> Just as a sanity check, I have another server that I did a clean install on
>> and I'm seeing the same results.
>>
>> I also tested adding some log entries into the stock ONE drivers and I do
>> not see anything logged.
>>
>> Any ideas?
>>
>> Thanks in advance,
>> gary
>>
>>
>> ___
>> Users mailing list
>> Users@lists.opennebula.org
>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>
>
>
>
> --
> Javier Fontán Muiños
> Developer
> OpenNebula - The Open Source Toolkit for Data Center Virtualization
> www.OpenNebula.org | @OpenNebula | github.com/jfontan
> ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org



-- 
Javier Fontán Muiños
Developer
OpenNebula - The Open Source Toolkit for Data Center Virtualization
www.OpenNebula.org | @OpenNebula | github.com/jfontan
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] slow live migration of VM's

2013-11-25 Thread Gary S. Cuozzo
Thanks Javier.
The algorithm makes sense and the link pretty much validates what I figured was 
the process for live migration.

Maybe the VM is a lot more active than I thought and it caused a lot of dirty 
pages.  The last VM I have on the server to migrate off is much larger and 
active, so I guess I should plan for some downtime and do a cold migration 
where I pause the running system.  I think that would take far less time to 
accomplish.

Like I said in my last email, if the VM were usable during the migration, I 
really would not care how long it took.  But having the VM effectively be 
'down' makes live migration far less useful.  My main concern was whether I needed 
to tune something in ONE to make it effective.  It sounds like that is really 
not the case.

The VM's that I previously migrated must just have been very low use and did 
not take long to sync.  I will build out a test vm and do a non-live sync to 
validate the process.

Thanks again,
gary

- Original Message -

Here you have the algorithm for live migration:

http://www.linux-kvm.org/page/Migration#Algorithm_.28the_short_version.29

If your VM is doing work that changes a lot of memory, it will keep marking
dirty pages and will stay in the "Iteratively transfer all dirty pages
(pages that were written to by the guest)" step until the number of dirty
pages is small enough that you don't notice the migration.

Try stopping services changing memory in the VM before the migration.
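To put rough numbers on this: even with zero dirty pages, RAM transfer time has a hard floor of size over link rate, and every page dirtied during a pass is resent on the next one. A trivial sketch of that floor (illustrative arithmetic only):

```shell
# Lower bound on live-migration transfer time, ignoring dirty-page resends:
# seconds = RAM_MB * 8 bits / link_Mbps. A busy guest can take many times
# longer than this, or never converge at all.
migration_floor_s() {
    local ram_mb="$1" link_mbps="$2"
    echo $(( ram_mb * 8 / link_mbps ))
}
```

For the 3GB guest on a dedicated 1000Mbps link discussed in this thread, the floor is about 24 seconds, so a 10-minute migration suggests heavy page dirtying (or network contention) rather than an OpenNebula tuning problem; the 24GB guest would floor at roughly 3.3 minutes even when idle.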


On Mon, Nov 25, 2013 at 3:32 AM, Gary S. Cuozzo  wrote:
> Hi all,
> I was just moving some VM's around as a prep for some server maintenance,
> via live migration.  I moved 3 VM's which were all fairly small in size of
> 1GB - 3GB RAM and 20GB - 60GB disk.  I did the migrations 1 at a time.  I
> was surprised at how long the migrations took and also that each VM was
> pretty much unusable during the migration.  SSH sessions timed out and
> websites were not able to be accessed.  A VM with 3GB RAM allocation took
> around 10 minutes to migrate.
>
> Is this typical?  Or do I maybe have some sort of issue on my network or
> config?  We run a dedicated 1000Mbps management network for this type of
> thing, and all data storage access is via separate 4x 1000Mbps bonded
> channels.  I would have expected 3GB of RAM data to be pretty quick.
> Probably < 1 min.
>
> Are there any settings in ONE that should be tuned?  I don't seem to recall
> having an issue with migrations when I was just running virt-manager/libvirt
> directly.
>
> The last VM I have to move is a pretty important one and it has 24GB RAM
> allocated.  So I think it would be easier/quicker to just shut the VM down,
> disable the host for maintenance, and redeploy it.  I'm hoping to avoid
> that.
>
> Thanks for any advice,
> gary
>
>
>
> ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>



-- 
Javier Fontán Muiños
Developer
OpenNebula - The Open Source Toolkit for Data Center Virtualization
www.OpenNebula.org | @OpenNebula | github.com/jfontan
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] slow live migration of VM's

2013-11-25 Thread Gary S. Cuozzo
Thank you for the reply. The storage is shared storage via NFS, so no disk data 
should need to be transferred. Just the running VM. 

I was figuring just the running VM (RAM & state) needs to be synced. 

The time to transfer wouldn't be such an issue if the VM's were functional 
during that time. The main issue is that the guests are completely unusable, so 
the migration may as well not be 'live'. 

Thanks, 
gary 


- Original Message -

From: "Ionut Popovici"  
To: users@lists.opennebula.org 
Sent: Monday, November 25, 2013 2:52:51 AM 
Subject: Re: [one-users] slow live migration of VM's 

1st: 20-60GB of disk is pretty big. 
2nd: the transfer is done via SCP, which is a little slower than other transfer 
methods but safer. 
3rd: it depends on the read load on the host where the VMs are. If other hosts 
are working, keep in mind that the machines share the I/O of that datastore. On 
an idle datastore it should be around 6ms to access data on the HDD, but on a 
live datastore it may take a little more. 
My test was starting a non-persistent 50GB image on an SSD drive; it took 
around 4 minutes and was not launched via the network, just a copy via scp on 
the same drive. 
On 11/25/2013 4:32 AM, Gary S. Cuozzo wrote: 



Hi all, 
I was just moving some VM's around as a prep for some server maintenance, via 
live migration. I moved 3 VM's which were all fairly small in size of 1GB - 3GB 
RAM and 20GB - 60GB disk. I did the migrations 1 at a time. I was surprised at 
how long the migrations took and also that each VM was pretty much unusable 
during the migration. SSH sessions timed out and websites were not able to be 
accessed. A VM with 3GB RAM allocation took around 10 minutes to migrate. 

Is this typical? Or do I maybe have some sort of issue on my network or config? 
We run a dedicated 1000Mbps management network for this type of thing, and all 
data storage access is via separate 4x 1000Mbps bonded channels. I would have 
expected 3GB of RAM data to be pretty quick. Probably < 1 min. 

Are there any settings in ONE that should be tuned? I don't seem to recall 
having an issue with migrations when I was just running virt-manager/libvirt 
directly. 

The last VM I have to move is a pretty important one and it has 24GB RAM 
allocated. So I think it would be easier/quicker to just shut the VM down, 
disable the host for maintenance, and redeploy it. I'm hoping to avoid that. 

Thanks for any advice, 
gary 




___
Users mailing list 
Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 






[one-users] datastore capacity stats

2013-11-24 Thread Gary S. Cuozzo
Hi All, 
I just added a monitor script for my custom datastore driver. When viewing the 
datastore in sunstone, if I click on the datastore to view the details, I see 
the correct information: 
Total   14.1TB 
Used    142GB 
Free    14.1TB 

But the capacity column in the summary line indicates: 
0MB / 14.1TB (0%) 

I would expect to see the used indicate 142GB and whatever the % is. Any idea 
why that does not populate correctly? 
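In case it helps anyone hitting the same thing: the summary column is populated from the size values the monitor action prints, so the first thing I would check is that the script emits the used figure as well as total/free. A sketch of such a monitor action follows; the TOTAL_MB/USED_MB/FREE_MB names are my reading of what the 4.x stock drivers print, so verify against the stock datastore scripts for your version:

```shell
# Sketch of a datastore monitor action: report sizes (in MB) of the
# filesystem backing the datastore directory. The variable names follow
# my reading of the 4.x stock drivers; double-check against your version.
monitor_ds() {
    local dir="$1"
    # df -P -m: portable layout with 1MB blocks -> stable columns to parse
    df -P -m "$dir" | awk 'NR==2 {
        printf "TOTAL_MB=%d\nUSED_MB=%d\nFREE_MB=%d\n", $2, $3, $4
    }'
}
```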

Cheers, 
gary 

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] slow live migration of VM's

2013-11-24 Thread Gary S. Cuozzo
Hi all, 
I was just moving some VM's around as a prep for some server maintenance, via 
live migration. I moved 3 VM's which were all fairly small in size of 1GB - 3GB 
RAM and 20GB - 60GB disk. I did the migrations 1 at a time. I was surprised at 
how long the migrations took and also that each VM was pretty much unusable 
during the migration. SSH sessions timed out and websites were not able to be 
accessed. A VM with 3GB RAM allocation took around 10 minutes to migrate. 

Is this typical? Or do I maybe have some sort of issue on my network or config? 
We run a dedicated 1000Mbps management network for this type of thing, and all 
data storage access is via separate 4x 1000Mbps bonded channels. I would have 
expected 3GB of RAM data to be pretty quick. Probably < 1 min. 

Are there any settings in ONE that should be tuned? I don't seem to recall 
having an issue with migrations when I was just running virt-manager/libvirt 
directly. 

The last VM I have to move is a pretty important one and it has 24GB RAM 
allocated. So I think it would be easier/quicker to just shut the VM down, 
disable the host for maintenance, and redeploy it. I'm hoping to avoid that. 

Thanks for any advice, 
gary 


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] datastore log info missing from oned.log in v4.2

2013-11-23 Thread Gary S. Cuozzo
Hi All,
I finally got some time to get back to looking at this.  My original task was 
to get my custom drivers working with ONE 4.2.  I was able to do so.  It was 
related to a bug fix where the SRC & DST arguments to pre/postmigrate were 
swapped.  I simply had to swap them back.  The irony in it is that I was the 
one that suggested the pre/postmigrate feature request and reported that bug.  
;)

So, I'm happily able to live migrate again.

Oddly though, I am unable to get any log messages to show up in oned.log file.  
I'm guessing some of the common script libraries have gotten reorganized, so I 
need to dig in a bit more on this issue.

Cheers,
gary

- Original Message -----
From: "Gary S. Cuozzo" 
To: "Users OpenNebula" 
Sent: Friday, November 8, 2013 8:53:32 AM
Subject: Re: [one-users] datastore log info missing from oned.log in v4.2

Is this a new behavior?  I used to see any logged messages up to the level of 
the log mode in the oned.log file.  It seems like 'debug' mode should log 
regardless of the exit code of the script.  Even if I changed the logs from 
log_debug to log_error, still no output.  I'll try forcing exit 1 to see if 
that causes the logs to show up.

Also, I validated that when I first installed ONE 4.2, the log messages were 
there, but now are not.  I went back to my logs from a few weeks ago and see 
all the messages I would expect.  I did not have time to work on this, so the 
server sat for a few weeks.  Now I'm back to working on it and not seeing the 
messages.  Very odd.

Will let you know if setting exit 1 helps.

Thanks for the reply,
gary

- Original Message -
Normally the transfers are not logged when everything goes ok, that
is, the script exits with 0.

On Fri, Nov 8, 2013 at 1:17 AM, Gary S. Cuozzo  wrote:
> Hi All,
> I have recently upgraded our environment from 3.8.x to 4.2 version.  Running
> on Ubuntu 12.04LTS.  I have some custom datastore/tm drivers that I wrote
> which worked well in 3.8.  I am trying to troubleshoot something related to
> live migration and my log messages are not getting to /var/log/one/oned.log
> file.  I have logging set to debug in oned.conf.  I see debug level messages
> for other subsystems, such as [ReM] but nothing related to the datastore or
> tm.
>
> Just as a sanity check, I have another server that I did a clean install on
> and I'm seeing the same results.
>
> I also tested adding some log entries into the stock ONE drivers and I do
> not see anything logged.
>
> Any ideas?
>
> Thanks in advance,
> gary
>
>
> ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>



-- 
Javier Fontán Muiños
Developer
OpenNebula - The Open Source Toolkit for Data Center Virtualization
www.OpenNebula.org | @OpenNebula | github.com/jfontan
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] datastore log info missing from oned.log in v4.2

2013-11-08 Thread Gary S. Cuozzo
Is this a new behavior?  I used to see any logged messages up to the level of 
the log mode in the oned.log file.  It seems like 'debug' mode should log 
regardless of the exit code of the script.  Even if I changed the logs from 
log_debug to log_error, still no output.  I'll try forcing exit 1 to see if 
that causes the logs to show up.

Also, I validated that when I first installed ONE 4.2, the log messages were 
there, but now are not.  I went back to my logs from a few weeks ago and see 
all the messages I would expect.  I did not have time to work on this, so the 
server sat for a few weeks.  Now I'm back to working on it and not seeing the 
messages.  Very odd.

Will let you know if setting exit 1 helps.

Thanks for the reply,
gary

- Original Message -
Normally the transfers are not logged when everything goes ok, that
is, the script exits with 0.

On Fri, Nov 8, 2013 at 1:17 AM, Gary S. Cuozzo  wrote:
> Hi All,
> I have recently upgraded our environment from 3.8.x to 4.2 version.  Running
> on Ubuntu 12.04LTS.  I have some custom datastore/tm drivers that I wrote
> which worked well in 3.8.  I am trying to troubleshoot something related to
> live migration and my log messages are not getting to /var/log/one/oned.log
> file.  I have logging set to debug in oned.conf.  I see debug level messages
> for other subsystems, such as [ReM] but nothing related to the datastore or
> tm.
>
> Just as a sanity check, I have another server that I did a clean install on
> and I'm seeing the same results.
>
> I also tested adding some log entries into the stock ONE drivers and I do
> not see anything logged.
>
> Any ideas?
>
> Thanks in advance,
> gary
>
>
> ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>



-- 
Javier Fontán Muiños
Developer
OpenNebula - The Open Source Toolkit for Data Center Virtualization
www.OpenNebula.org | @OpenNebula | github.com/jfontan
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] datastore log info missing from oned.log in v4.2

2013-11-07 Thread Gary S. Cuozzo
Hi All, 
I have recently upgraded our environment from 3.8.x to 4.2 version. Running on 
Ubuntu 12.04LTS. I have some custom datastore/tm drivers that I wrote which 
worked well in 3.8. I am trying to troubleshoot something related to live 
migration and my log messages are not getting to /var/log/one/oned.log file. I 
have logging set to debug in oned.conf. I see debug level messages for other 
subsystems, such as [ReM] but nothing related to the datastore or tm. 

Just as a sanity check, I have another server that I did a clean install on and 
I'm seeing the same results. 

I also tested adding some log entries into the stock ONE drivers and I do not 
see anything logged. 

Any ideas? 

Thanks in advance, 
gary 

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] default ownership for VM instances

2013-04-02 Thread Gary S. Cuozzo
Hi Carlos,
Thank you for the quick reply. Yes, a hook should work great for this. I'm not 
sure why I didn't think of that before!
Thanks again,
gary

- Original Message -



Hi,


I think this can be easily done with a hook [1]. You will need to trigger it 
each time a new VM is created:


VM_HOOK = [
name = "default_chmod",
on = "CREATE",
command = "default_chmod.rb",
arguments = "$ID $TEMPLATE" ]


And then create a small script (default_chmod.rb) that looks for the default 
uid and gid inside the VM template (second argument; it will be the XML, base64 
encoded) and executes 'onevm chmod'.
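As a dry-run shell stand-in for the default_chmod.rb idea: decode the base64 template, look for owner attributes, and print the onevm command that would be run. The DEFAULT_UNAME/DEFAULT_GNAME attribute names, the key="value" matching, and printing instead of executing are all assumptions for this sketch; the real hook argument is XML, so a production version should parse the XML properly (and note that ownership changes use onevm chown, while onevm chmod sets permission bits):

```shell
#!/bin/bash
# Dry-run stand-in for the default_chmod.rb hook sketched above.
# Assumed custom template attributes: DEFAULT_UNAME / DEFAULT_GNAME.
# Args: $1 = VM ID, $2 = base64-encoded VM template ($ID $TEMPLATE).
default_chown_cmd() {
    local vmid="$1" tmpl owner group
    tmpl=$(echo "$2" | base64 -d)
    # Naive key="value" extraction; a real hook should parse the XML instead.
    owner=$(echo "$tmpl" | sed -n 's/.*DEFAULT_UNAME="\([^"]*\)".*/\1/p' | head -1)
    group=$(echo "$tmpl" | sed -n 's/.*DEFAULT_GNAME="\([^"]*\)".*/\1/p' | head -1)
    [ -n "$owner" ] || return 1
    # Print the command rather than run it, so the sketch has no side effects
    echo "onevm chown $vmid $owner${group:+ $group}"
}
```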





Let me know if this works for you.


Regards


[1] http://opennebula.org/documentation:rel3.8:hooks



--
Carlos Martín, MSc
Project Engineer
OpenNebula - The Open-source Solution for Data Center Virtualization
www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula


On Tue, Apr 2, 2013 at 4:40 PM, Gary S. Cuozzo < g...@isgsoftware.net > wrote:




Hello all,
Is there a way to have a VM user/group ownership (and potentially ACL's) be set 
based on values in the template?

Here is my use case:
Currently, all of our VM's are one of a kind. Each VM is based on a unique 
template and uses persistent images. It is the virtual equivalent of colocated 
servers where each customer has their own dedicated and customized servers. We 
basically manage the images & templates, but want to give users access to their 
vm for start/stop/vnc.

What I have right now is: I create a user/group for each customer. Then I set the 
ownership of the resources to be their user & group. When we instantiate the 
VM, we have to remember to set the ownership accordingly or it will not show up 
as a resource when they log in to Sunstone.

It would be ideal if there was a way to specify, in the template, the default 
user and group which should own the VM and the ACL's. It would also be nice if 
the default name of the VM could be set (Though I think there may already be a 
feature being added for this). This way, if we had to stop & recreate the VM 
instance (such as for adding more resources), we could do it without having to 
remember to set permissions manually.

Let me know what you think.

Cheers,
gary


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org





___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] default ownership for VM instances

2013-04-02 Thread Gary S. Cuozzo
Hello all, 
Is there a way to have a VM user/group ownership (and potentially ACL's) be set 
based on values in the template? 

Here is my use case: 
Currently, all of our VM's are one of a kind. Each VM is based on a unique 
template and uses persistent images. It is the virtual equivalent of colocated 
servers where each customer has their own dedicated and customized servers. We 
basically manage the images & templates, but want to give users access to their 
vm for start/stop/vnc. 

What I have right now is: I create a user/group for each customer. Then I set the 
ownership of the resources to be their user & group. When we instantiate the 
VM, we have to remember to set the ownership accordingly or it will not show up 
as a resource when they log in to Sunstone. 

It would be ideal if there was a way to specify, in the template, the default 
user and group which should own the VM and the ACL's. It would also be nice if 
the default name of the VM could be set (Though I think there may already be a 
feature being added for this). This way, if we had to stop & recreate the VM 
instance (such as for adding more resources), we could do it without having to 
remember to set permissions manually. 

Let me know what you think. 

Cheers, 
gary 

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] grant access to sunstone vnc for user's vm

2013-04-02 Thread Gary S. Cuozzo
Sorry for the delay, I have created new feature request here: 
http://dev.opennebula.org/issues/1857 

Let me know if it needs any further clarification. 
Thanks! 
gary 

- Original Message -


Hi Gary, 


thanks for reporting your progress with this issue, I was trying to figure out 
how to help you but didn't know how, so I'm glad you figured it out :) 


By the way, the functionality you were looking for is an interesting one. Would 
you be interested in creating a feature request in dev.opennebula.org detailing 
what you would expect by restricting the config options for regular users, etc.? 


cheers, 
Jaime 



On Thu, Mar 28, 2013 at 4:41 PM, Gary S. Cuozzo < g...@isgsoftware.net > wrote: 




Ok, I figured it out. I traced through the code on the sunstone server and also 
did some debugging on the frontend javascript. I found that when I login as the 
user which doesn't work, the code would blow up when trying to get some 
configuration settings. I then remembered that I had turned off config tab in 
sunstone-plugins.yaml because I wanted to strip the interface down for regular 
users to limit what they can do. 

So, I turned that module back on and it's working now. Obviously that is a 
required module. :) 

Thanks, 
gary 








Did a bit more poking around. Tried all sorts of things with users, groups, 
acl's, no luck & no errors. Gonna see if I can take a look at the code and gain 
any knowledge there. I'm assuming it has to be something with my setup as I 
would expect lots of folks have many users accessing guests via websockets/vnc. 

Thanks, 
gary 






Ok, I looked at this a bit more. I killed the websocket server and started a 
new one via a shell so I can see the output. When I login as my main admin 
user, I can click on a vm and view the vnc console. I see this logged by 
websocket to stdout: 

1: 74.94.xxx.xx: SSL/TLS (wss://) WebSocket connection 
1: 74.94.xxx.xx: Version hybi-13, base64: 'True' 
1: 74.94.xxx.xx: Path: '/?token=nafe4fgtywxlqjnrf68m' 
1: connecting to: vmhost:6146 

I then use a separate browser session and login as my regular user. When I 
click the vnc icon, nothing is logged by the websocket server. Sunstone GUI 
reports a startvnc request is made. I see in the sunstone.log file this entry: 

Thu Mar 28 11:08:13 2013 [I]: 127.0.0.1 - - [28/Mar/2013 11:08:13] "POST 
/vm/245/startvnc HTTP/1.0" 200 48 0.0483 

Is there a way I can find out more specifically what happens during the 
/vm/245/startvnc request? 

Thanks, 
gary 








I am trying to give another user access to the VNC console to their VM in 
sunstone. Here is what I tried: 
* give the user a login using their email address as username & a password 
* create a unique group and add the new user to the group 
* set user/group ownership for image/template/vm to the new user & group 
* set permissions for user & group to "use" for each image/template/vm 

When I try to login as the new user, I can see the resources in sunstone. The 
"vnc access" icon for the VM is enabled. I can click it, but nothing at all 
happens. No window pops up, nothing in the log, no error. 

Can somebody let me know if I'm doing something wrong and what the proper 
process to give access to a user? 

I'm using ONE 3.8.1. 

Thanks in advance, 
gary 





___ 
Users mailing list 
Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 








-- 
Jaime Melis 
Project Engineer 
OpenNebula - The Open Source Toolkit for Cloud Computing 
www.OpenNebula.org | jme...@opennebula.org 
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] grant access to sunstone vnc for user's vm

2013-03-28 Thread Gary S. Cuozzo
Ok, I figured it out. I traced through the code on the sunstone server and also 
did some debugging on the frontend javascript. I found that when I login as the 
user which doesn't work, the code would blow up when trying to get some 
configuration settings. I then remembered that I had turned off config tab in 
sunstone-plugins.yaml because I wanted to strip the interface down for regular 
users to limit what they can do. 

So, I turned that module back on and it's working now. Obviously that is a 
required module. :) 

Thanks, 
gary 


- Original Message -



Did a bit more poking around. Tried all sorts of things with users, groups, 
acl's, no luck & no errors. Gonna see if I can take a look at the code and gain 
any knowledge there. I'm assuming it has to be something with my setup as I 
would expect lots of folks have many users accessing guests via websockets/vnc. 

Thanks, 
gary 


- Original Message -



Ok, I looked at this a bit more. I killed the websocket server and started a 
new one via a shell so I can see the output. When I login as my main admin 
user, I can click on a vm and view the vnc console. I see this logged by 
websocket to stdout: 

1: 74.94.xxx.xx: SSL/TLS (wss://) WebSocket connection 
1: 74.94.xxx.xx: Version hybi-13, base64: 'True' 
1: 74.94.xxx.xx: Path: '/?token=nafe4fgtywxlqjnrf68m' 
1: connecting to: vmhost:6146 

I then use a separate browser session and login as my regular user. When I 
click the vnc icon, nothing is logged by the websocket server. Sunstone GUI 
reports a startvnc request is made. I see in the sunstone.log file this entry: 

Thu Mar 28 11:08:13 2013 [I]: 127.0.0.1 - - [28/Mar/2013 11:08:13] "POST 
/vm/245/startvnc HTTP/1.0" 200 48 0.0483 

Is there a way I can find out more specifically what happens during the 
/vm/245/startvnc request? 

Thanks, 
gary 

I am trying to give another user access to the VNC console to their VM in 
sunstone. Here is what I tried: 
* give the user a login using their email address as username & a password 
* create a unique group and add the new user to the group 
* set user/group ownership for image/template/vm to the new user & group 
* set permissions for user & group to "use" for each image/template/vm 

When I try to login as the new user, I can see the resources in sunstone. The 
"vnc access" icon for the VM is enabled. I can click it, but nothing at all 
happens. No window pops up, nothing in the log, no error. 

Can somebody let me know if I'm doing something wrong and what the proper 
process to give access to a user? 

I'm using ONE 3.8.1. 

Thanks in advance, 
gary 





___ 
Users mailing list 
Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 


___ 
Users mailing list 
Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] grant access to sunstone vnc for user's vm

2013-03-28 Thread Gary S. Cuozzo
Did a bit more poking around. Tried all sorts of things with users, groups, 
acl's, no luck & no errors. Gonna see if I can take a look at the code and gain 
any knowledge there. I'm assuming it has to be something with my setup as I 
would expect lots of folks have many users accessing guests via websockets/vnc. 

Thanks, 
gary 


- Original Message -



Ok, I looked at this a bit more. I killed the websocket server and started a 
new one via a shell so I can see the output. When I login as my main admin 
user, I can click on a vm and view the vnc console. I see this logged by 
websocket to stdout: 

1: 74.94.xxx.xx: SSL/TLS (wss://) WebSocket connection 
1: 74.94.xxx.xx: Version hybi-13, base64: 'True' 
1: 74.94.xxx.xx: Path: '/?token=nafe4fgtywxlqjnrf68m' 
1: connecting to: vmhost:6146 

I then use a separate browser session and login as my regular user. When I 
click the vnc icon, nothing is logged by the websocket server. Sunstone GUI 
reports a startvnc request is made. I see in the sunstone.log file this entry: 

Thu Mar 28 11:08:13 2013 [I]: 127.0.0.1 - - [28/Mar/2013 11:08:13] "POST 
/vm/245/startvnc HTTP/1.0" 200 48 0.0483 

Is there a way I can find out more specifically what happens during the 
/vm/245/startvnc request? 

Thanks, 
gary 


I am trying to give another user access to the VNC console to their VM in 
sunstone. Here is what I tried: 
* give the user a login using their email address as username & a password 
* create a unique group and add the new user to the group 
* set user/group ownership for image/template/vm to the new user & group 
* set permissions for user & group to "use" for each image/template/vm 

When I try to login as the new user, I can see the resources in sunstone. The 
"vnc access" icon for the VM is enabled. I can click it, but nothing at all 
happens. No window pops up, nothing in the log, no error. 

Can somebody let me know if I'm doing something wrong and what the proper 
process to give access to a user? 

I'm using ONE 3.8.1. 

Thanks in advance, 
gary 





___ 
Users mailing list 
Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] grant access to sunstone vnc for user's vm

2013-03-28 Thread Gary S. Cuozzo
Ok, I looked at this a bit more. I killed the websocket server and started a 
new one via a shell so I can see the output. When I login as my main admin 
user, I can click on a vm and view the vnc console. I see this logged by 
websocket to stdout: 

1: 74.94.xxx.xx: SSL/TLS (wss://) WebSocket connection 
1: 74.94.xxx.xx: Version hybi-13, base64: 'True' 
1: 74.94.xxx.xx: Path: '/?token=nafe4fgtywxlqjnrf68m' 
1: connecting to: vmhost:6146 

I then use a separate browser session and login as my regular user. When I 
click the vnc icon, nothing is logged by the websocket server. Sunstone GUI 
reports a startvnc request is made. I see in the sunstone.log file this entry: 

Thu Mar 28 11:08:13 2013 [I]: 127.0.0.1 - - [28/Mar/2013 11:08:13] "POST 
/vm/245/startvnc HTTP/1.0" 200 48 0.0483 

Is there a way I can find out more specifically what happens during the 
/vm/245/startvnc request? 

Thanks, 
gary 

I am trying to give another user access to the VNC console to their VM in 
sunstone. Here is what I tried: 
* give the user a login using their email address as username & a password 
* create a unique group and add the new user to the group 
* set user/group ownership for image/template/vm to the new user & group 
* set permissions for user & group to "use" for each image/template/vm 

When I try to login as the new user, I can see the resources in sunstone. The 
"vnc access" icon for the VM is enabled. I can click it, but nothing at all 
happens. No window pops up, nothing in the log, no error. 

Can somebody let me know if I'm doing something wrong and what the proper 
process to give access to a user? 

I'm using ONE 3.8.1. 

Thanks in advance, 
gary 




___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Leftover checkpoint files after resume

2013-03-28 Thread Gary S. Cuozzo
That's excellent! Thank you for the reply. 

Hi Gary 

1. Can I safely remove them? I would think so, but want to double-check before 
I hose something up. 





Yes, you can remove them. Once the VM is running again the file is no longer 
used. If you need to save the VM again, a new checkpoint is generated. 




2. Should they have been deleted once the vm was successfully resumed? 





At least this should be configurable in the driver file. Currently when you 
save a VM we move the previous checkpoint files to checkpoint. in case you 
need them if something goes wrong. See [1]; I've filed an issue to take a look 
at this [2] 


[1] 
https://github.com/OpenNebula/one/blob/master/src/vmm_mad/remotes/kvm/save#L25 


[2] http://dev.opennebula.org/issues/1841 
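In the meantime, a manual cleanup works. A cautious sketch (the datastore path assumes the default self-contained layout under /var/lib/one and system datastore ID 0; adjust both for your install):

```shell
# List leftover checkpoint files before deleting anything.
DS_PATH=/var/lib/one/datastores/0
[ -d "$DS_PATH" ] && find "$DS_PATH" -name 'checkpoint*' -ls \
                  || echo "datastore path not found: $DS_PATH"
# Once the affected VMs are RUNNING again, the same find with -delete removes them:
# find "$DS_PATH" -name 'checkpoint*' -delete
```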




Thanks for the feedback! 


Ruben 


-- 
Ruben S. Montero, PhD 
Project co-Lead and Chief Architect 
OpenNebula - The Open Source Solution for Data Center Virtualization 
www.OpenNebula.org | rsmont...@opennebula.org | @OpenNebula 
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] grant access to sunstone vnc for user's vm

2013-03-27 Thread Gary S. Cuozzo
Hello, 
I am trying to give another user access to the VNC console to their VM in 
sunstone. Here is what I tried: 
* give the user a login using their email address as username & a password 
* create a unique group and add the new user to the group 
* set user/group ownership for image/template/vm to the new user & group 
* set permissions for user & group to "use" for each image/template/vm 

When I try to login as the new user, I can see the resources in sunstone. The 
"vnc access" icon for the VM is enabled. I can click it, but nothing at all 
happens. No window pops up, nothing in the log, no error. 

Can somebody let me know if I'm doing something wrong and what the proper 
process to give access to a user? 

I'm using ONE 3.8.1. 

Thanks in advance, 
gary 

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] Leftover checkpoint files after resume

2013-03-27 Thread Gary S. Cuozzo
Hi all, 
Last night I had to do some maintenance on one of our storage servers which 
required it to be shut down & rebooted. So, I ran a report of any VM's which 
used that server as their backing store and suspended those VM's. I did my 
maintenance and resumed the vm's. It all went great. But, I noticed that all 
the checkpoint files are still in the datastore taking up around 60GB of space. 
I have a few questions: 
1. Can I safely remove them? I would think so, but want to double-check before 
I hose something up. 
2. Should they have been deleted once the vm was successfully resumed? 

I'm running ONE 3.8.1. I found some bug reports related to checkpoint files, 
but they were from older releases and seemed to be closed. 

Thanks for any info. 
gary 

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] VM creation error -failed to create domain

2013-03-08 Thread Gary S. Cuozzo
Ok, so it seems that kvm modules are loaded & ok. You also have permission 
denied errors in the log. Double check that the oneadmin user is in the correct 
group and also that the libvirt configs are set to the correct groups. 

- Original Message -

From: "prachi de"  
To: "Gary S. Cuozzo"  
Sent: Friday, March 8, 2013 12:34:01 AM 
Subject: Re: [one-users] VM creation error -failed to create domain 

Hi Gary, 


Thanks for the help 


I tried the commands you specified in your mail and the output is as follows 



lsmod |grep kvm 
kvm_intel 127736 0 
kvm 365556 1 kvm_intel 
I also tried modprobe kvm. I have even tried the following: 
1] disabled AppArmor and restarted it again 
2] restarted the libvirt service 
3] installed qemu 

The same error still persists. 


I tried checking dmesg but was unable to find the problem in the log. 
Please help. 



On Thu, Mar 7, 2013 at 7:16 PM, Gary S. Cuozzo < g...@isgsoftware.net > wrote: 




Hi Prachi, 

I am stuck on a problem while creating a virtual machine using opennebula-3.6.0 
on an Ubuntu-12.04 desktop machine with i686 architecture. 


I installed KVM on another machine with the same configuration; the two machines 
are connected to each other over the LAN. 


While creating the virtual machine, the following error is reported in /var/oned.log: 





Thu Mar 7 12:47:51 2013 [ReM][D]: VirtualMachineDeploy method invoked 
Thu Mar 7 12:47:52 2013 [DiM][D]: Deploying VM 19 
Thu Mar 7 12:47:52 2013 [TM][D]: Message received: TRANSFER SUCCESS 19 - 


Thu Mar 7 12:47:52 2013 [ReM][D]: UserPoolInfo method invoked 
Thu Mar 7 12:47:52 2013 [VMM][D]: Message received: LOG I 19 ExitCode: 0 


Thu Mar 7 12:47:52 2013 [VMM][D]: Message received: LOG I 19 Successfully 
execute network driver operation: pre. 


Thu Mar 7 12:47:53 2013 [VMM][D]: Message received: LOG I 19 Command execution 
fail: cat << EOT | /var/tmp/one/vmm/kvm/deploy 
/srv/cloud/one/var//datastores/0/19/deploy$ 


Thu Mar 7 12:47:53 2013 [VMM][D]: Message received: LOG I 19 error: Failed to 
create domain from /srv/cloud/one/var//datastores/0/19/deployment.0 


Thu Mar 7 12:47:53 2013 [VMM][D]: Message received: LOG I 19 error: internal 
error process exited while connecting to monitor: Could not access KVM kernel 
module: Permi$ 


Thu Mar 7 12:47:53 2013 [VMM][D]: Message received: LOG I 19 failed to 
initialize KVM: Permission denied 


Thu Mar 7 12:47:53 2013 [VMM][D]: Message received: LOG I 19 No accelerator 
found! 


Please help. 


It looks like you either don't have virtualization turned on in the bios of the 
computer, or the kvm module is not loaded. You can check if the module is 
loaded by: 
lsmod | grep kvm 

If the module is not loaded, you can try: 
modprobe kvm 

Then check to see if it loaded. If not, you should be able to see why via 
dmesg or another log file. 

Hope that helps, 
gary 

-- 


Prachi D. 




___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] VM creation error -failed to create domain

2013-03-07 Thread Gary S. Cuozzo
Hi Prachi, 





I am stuck on a problem while creating a virtual machine using opennebula-3.6.0 
on an Ubuntu-12.04 desktop machine with i686 architecture. 


I installed KVM on another machine with the same configuration; the two machines 
are connected to each other over the LAN. 


While creating the virtual machine, the following error is reported in /var/oned.log: 





Thu Mar 7 12:47:51 2013 [ReM][D]: VirtualMachineDeploy method invoked 
Thu Mar 7 12:47:52 2013 [DiM][D]: Deploying VM 19 
Thu Mar 7 12:47:52 2013 [TM][D]: Message received: TRANSFER SUCCESS 19 - 


Thu Mar 7 12:47:52 2013 [ReM][D]: UserPoolInfo method invoked 
Thu Mar 7 12:47:52 2013 [VMM][D]: Message received: LOG I 19 ExitCode: 0 


Thu Mar 7 12:47:52 2013 [VMM][D]: Message received: LOG I 19 Successfully 
execute network driver operation: pre. 


Thu Mar 7 12:47:53 2013 [VMM][D]: Message received: LOG I 19 Command execution 
fail: cat << EOT | /var/tmp/one/vmm/kvm/deploy 
/srv/cloud/one/var//datastores/0/19/deploy$ 


Thu Mar 7 12:47:53 2013 [VMM][D]: Message received: LOG I 19 error: Failed to 
create domain from /srv/cloud/one/var//datastores/0/19/deployment.0 


Thu Mar 7 12:47:53 2013 [VMM][D]: Message received: LOG I 19 error: internal 
error process exited while connecting to monitor: Could not access KVM kernel 
module: Permi$ 


Thu Mar 7 12:47:53 2013 [VMM][D]: Message received: LOG I 19 failed to 
initialize KVM: Permission denied 


Thu Mar 7 12:47:53 2013 [VMM][D]: Message received: LOG I 19 No accelerator 
found! 


Please help. 


It looks like you either don't have virtualization turned on in the bios of the 
computer, or the kvm module is not loaded. You can check if the module is 
loaded by: 
lsmod | grep kvm 

If the module is not loaded, you can try: 
modprobe kvm 

Then check to see if it loaded. If not, you should be able to see why via 
dmesg or another log file. 
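Putting the checks together (a sketch; kvm_intel is shown, use kvm_amd on AMD CPUs, and run the modprobe as root):

```shell
# 1. Does the CPU expose hardware virtualization at all?
grep -Eq '(vmx|svm)' /proc/cpuinfo && echo "CPU supports hardware virtualization" \
  || echo "no vmx/svm flag: enable virtualization in the BIOS"
# 2. Is the module loaded? If not: modprobe kvm_intel (or kvm_amd)
lsmod | grep kvm || echo "kvm module not loaded"
# 3. Can oneadmin reach the device node? (group membership matters here)
ls -l /dev/kvm 2>/dev/null || echo "no /dev/kvm: module missing, or check group access"
```

Step 3 is what the later "Permission denied" error in this thread points at: the module can be loaded but oneadmin still needs access to /dev/kvm via the kvm/libvirt group.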

Hope that helps, 
gary 

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] question about best way to assign IP's to various users

2013-02-27 Thread Gary S. Cuozzo
Hi Simon,
Yes! This is a perfect solution for my use case. This is how we currently do it 
(adding the specific IP to the template), and having the IP also be "reserved" and 
shown to the user even when the VM is not running would be awesome. One area where 
it's lacking right now is that if the VM gets shut down, the IP appears to be 
unused.

The workflow you specified is spot-on for how we'd like to work as we generally 
set the template up for our users and just let them use it.

Thanks very much. I'd love to see this implemented.
gary


- Original Message -

Sent: Wednesday, February 27, 2013 11:45:07 AM
Subject: Re: [one-users] question about best way to assign IP's to various users


I was thinking of adding an "IP reservation" feature that would mark IP as 
reserved for a given user. Then that user could have template that specifies 
which IP they want for each VM (when creating the VM / template). That would 
allow for VMs to be destroyed without the IP being returned to the available 
pool.


- Create a global VNET with all your IP.
- Mark IPs as reserved for a given user
- User create template /launch VM with their IP, IP changes to "in use" by the 
VM.
- When reserved IPs are released (by the user), they turn in "reserved" state 
for that user (instead of available for other users)
- As the Admin, you can add (or remove) IP reservation if the user needs more IP


Would this solve your issue?


Simon



On Wed, Feb 27, 2013 at 11:03 AM, Gary S. Cuozzo < g...@isgsoftware.net > wrote:




Thank you for the feedback.

It seems like with this approach, the users will get IP's assigned by ONE as 
they use IP's up to their quota. While the number of IP's they can use is 
important (quota), our use case is a bit different in that all public IP's are 
pre-assigned and static for any VM. The VM's are mostly web & email servers & 
other app servers. So they require properly configured forward & reverse DNS 
and generally don't change once they are established. We may allocate more IP's 
for a user to have, upon request, but they are always predetermined for each VM.

I thought by giving each user their own virtual network, I could control 
specifically which IP's their VM's could use and account for it globally in the 
"master" virtual network by putting a hold on them once they are assigned to a 
user's network.

I think as long as ONE doesn't care if the same IP could be part of 2 different 
virtual networks, this would work well for us. It would only be officially used 
by one network at any time.

Thanks again,
gary



Sent: Wednesday, February 27, 2013 5:48:55 AM
Subject: Re: [one-users] question about best way to assign IP's to various users



Hi,


Here is how I would do it:


Create a VNet as oneadmin, and grant your users permission to USE it. This can 
be done moving the vnet to the user's group (onevnet chgrp), changing the 
permissions (onevnet chmod), or using ACL rules (oneacl). See [1] for more 
information about all this.


Now you have a way to see which IPs are used, and by whom. To limit how many 
IPs can your users take from the vnet, set the NETWORK quota [2].


Note that you need to set the quota for each user or group individually, but 
the batchquota command will make this easier. In the upcoming 4.0 version you 
will be able to set the default quota, that will apply to everyone.


I hope this fits your scenario.
Regards


[1] http://opennebula.org/documentation:rel3.8:auth_overview
[2] http://opennebula.org/documentation:rel3.8:quota_auth

--
Carlos Martín, MSc
Project Engineer
OpenNebula - The Open-source Solution for Data Center Virtualization
www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula


On Wed, Feb 27, 2013 at 8:45 AM, Gary S. Cuozzo < g...@isgsoftware.net > wrote:




Hello users,
I am trying to figure out a good way to manage the assignment of IP addresses to 
various users. We have a /22 of public IP addresses and I want to be able to 
give various users access to their IP's that we've allocated. I would also like 
to be able to see a global view of IP's in use.

What I was thinking is to have a master network defined and use it to simply 
"hold" IP's as they are assigned so that it's easy to just click and see what's 
used. Then, I would create each user their own networks which have each IP they 
have been allowed to use. If I assign them additional IP's, I would add them to 
their specific network and then mark them as "hold" in the master.
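The bookkeeping for that dual-network scheme can be done from the CLI. A sketch using the ONE 3.x commands (the vnet IDs and address are examples; run on the front-end as oneadmin):

```shell
command -v onevnet >/dev/null 2>&1 || { echo "OpenNebula CLI not available here"; exit 0; }
onevnet addleases 7 203.0.113.25   # hand the IP to the user's fixed vnet (ID 7)
onevnet hold 0 203.0.113.25        # mark it as taken in the "master" vnet (ID 0)
onevnet list                       # the leases-in-use counts give the global view
```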

Is this method ok, or am I off base? Is there a better way to accomplish what 
I'm looking for?

Thanks for any ideas,
gary


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org

Re: [one-users] question about best way to assign IP's to various users

2013-02-27 Thread Gary S. Cuozzo
Thank you for the feedback.

It seems like with this approach, the users will get IP's assigned by ONE as 
they use IP's up to their quota. While the number of IP's they can use is 
important (quota), our use case is a bit different in that all public IP's are 
pre-assigned and static for any VM. The VM's are mostly web & email servers & 
other app servers. So they require properly configured forward & reverse DNS 
and generally don't change once they are established. We may allocate more IP's 
for a user to have, upon request, but they are always predetermined for each VM.

I thought by giving each user their own virtual network, I could control 
specifically which IP's their VM's could use and account for it globally in the 
"master" virtual network by putting a hold on them once they are assigned to a 
user's network.

I think as long as ONE doesn't care if the same IP could be part of 2 different 
virtual networks, this would work well for us. It would only be officially used 
by one network at any time.

Thanks again,
gary

- Original Message -

Sent: Wednesday, February 27, 2013 5:48:55 AM
Subject: Re: [one-users] question about best way to assign IP's to various users

Hi,


Here is how I would do it:


Create a VNet as oneadmin, and grant your users permission to USE it. This can 
be done moving the vnet to the user's group (onevnet chgrp), changing the 
permissions (onevnet chmod), or using ACL rules (oneacl). See [1] for more 
information about all this.


Now you have a way to see which IPs are used, and by whom. To limit how many 
IPs can your users take from the vnet, set the NETWORK quota [2].


Note that you need to set the quota for each user or group individually, but 
the batchquota command will make this easier. In the upcoming 4.0 version you 
will be able to set the default quota, that will apply to everyone.


I hope this fits your scenario.
Regards


[1] http://opennebula.org/documentation:rel3.8:auth_overview
[2] http://opennebula.org/documentation:rel3.8:quota_auth

--
Carlos Martín, MSc
Project Engineer
OpenNebula - The Open-source Solution for Data Center Virtualization
www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula


On Wed, Feb 27, 2013 at 8:45 AM, Gary S. Cuozzo < g...@isgsoftware.net > wrote:




Hello users,
I am trying to figure out a good way to manage the assignment of IP addresses to 
various users. We have a /22 of public IP addresses and I want to be able to 
give various users access to their IP's that we've allocated. I would also like 
to be able to see a global view of IP's in use.

What I was thinking is to have a master network defined and use it to simply 
"hold" IP's as they are assigned so that it's easy to just click and see what's 
used. Then, I would create each user their own networks which have each IP they 
have been allowed to use. If I assign them additional IP's, I would add them to 
their specific network and then mark them as "hold" in the master.

Is this method ok, or am I off base? Is there a better way to accomplish what 
I'm looking for?

Thanks for any ideas,
gary


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] question about best way to assign IP's to various users

2013-02-26 Thread Gary S. Cuozzo
Hello users, 
I am trying to figure out a good way to manage the assignment of IP addresses to 
various users. We have a /22 of public IP addresses and I want to be able to 
give various users access to their IP's that we've allocated. I would also like 
to be able to see a global view of IP's in use. 

What I was thinking is to have a master network defined and use it to simply 
"hold" IP's as they are assigned so that it's easy to just click and see what's 
used. Then, I would create each user their own networks which have each IP they 
have been allowed to use. If I assign them additional IP's, I would add them to 
their specific network and then mark them as "hold" in the master. 

Is this method ok, or am I off base? Is there a better way to accomplish what 
I'm looking for? 

Thanks for any ideas, 
gary 

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] VMs going into suspend

2013-02-25 Thread Gary S. Cuozzo
Hi Manish,
"No space left on device" indicates your disk is full.  I had this same issue 
when one of my storage servers filled up due to a bunch of huge snapshots 
taking up space unexpectedly.  I think you need to free up or add some space.

Hope that helps,
gary


- Original Message -
From: "Manish Sapariya" 
To: users@lists.opennebula.org
Sent: Thursday, February 21, 2013 1:49:31 AM
Subject: [one-users] VMs going into suspend

Hi,
I am running OpenNebula 2.9.80.
I have two nodes one with CentOS 5.6 and one with CentOS 6.3.

On the CentOS 6.3 host my all VMs frequently go suspended state
and I am failing to recover them.

When I looked at the libvirtd log it says

block I/O error in device 'drive-virtio-disk0': No space left on device

Google did not help me to get to conclusive solution.

Has anybody ran into similar problem?
Should I revert my node back to CentOS 5.6 and older version of
libvirt?

Thanks for any help.
-- 
Regards,
Manish
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] range specification in firewall port list

2012-12-29 Thread Gary S. Cuozzo
Sorry for the noise, I just noticed it on the firewall page. I missed it 
before. 

Just in case somebody comes across this post: you specify a range by using a ':' 
to separate the low and high ends of the range. 
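In template form it ends up looking something like this (a sketch from memory; the network name and port numbers are examples, the WHITE_PORTS_TCP attribute is the one from the ONE firewall feature):

```
NIC = [
  NETWORK         = "public",
  WHITE_PORTS_TCP = "22,80,8000:8199"   # 8000:8199 opens the 200-port range
]
```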

Thanks, 
gary 


- Original Message -

From: "Gary S. Cuozzo"  
To: "Users OpenNebula"  
Sent: Saturday, December 29, 2012 6:19:48 PM 
Subject: [one-users] range specification in firewall port list 


Hello, 
Is there a way to specify port ranges for the port whitelist in the network 
config? I would like to open a range of 200 ports for a vm. 

Thanks, 
gary 

___ 
Users mailing list 
Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] range specification in firewall port list

2012-12-29 Thread Gary S. Cuozzo
Hello, 
Is there a way to specify port ranges for the port whitelist in the network 
config? I would like to open a range of 200 ports for a vm. 

Thanks, 
gary 
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Access to DS for image in TM

2012-12-28 Thread Gary S. Cuozzo
Ruben, 
Thank you for the info. I was able to make some slight modifications to my TM 
driver. I tested my failover process and it works great. Simply change a few 
parameters in my DS template & redeploy the VM's on a different storage server. 

Looking again at the docs for storage subsystem integration, unless I am 
misinterpreting them it does not appear that the parameters are described 
correctly. For example, what I am seeing when I log arguments to the ln script 
is that arg3 is the VMID and arg4 is the DSID for the image. It appears the 
docs may need some updating. 

Thanks again, 
gary 


- Original Message -

From: "Gary S. Cuozzo"  
To: "Users OpenNebula"  
Sent: Thursday, December 27, 2012 9:13:39 AM 
Subject: Re: [one-users] Access to DS for image in TM 


Hello Ruben, 
I'm glad I asked. I must have missed something in the docs. I thought it was 
the system datastore of the remote host that was sent in. If it is the ID of 
the image DS, then I will be all set. 

Thanks! 
gary 


- Original Message -

From: "Ruben S. Montero"  
To: "Gary S. Cuozzo"  
Cc: "Users OpenNebula"  
Sent: Thursday, December 27, 2012 4:01:40 AM 
Subject: Re: [one-users] Access to DS for image in TM 


Hi Gary, 


The last parameter of the TM scripts is the DS_ID of the image, you could get 
the information by simply onedatastore show $DS_ID, and probably xpath.rb to 
get the SAN_HOST if you are working in a shell script. 
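A sketch of what that lookup could look like inside a TM script (the xpath.rb path is the stock remotes location and its stdin behavior is assumed from the shipped drivers; SAN_HOST is the custom attribute from this thread, so verify both against your install):

```shell
# TM scripts receive the image datastore ID as their last argument.
command -v onedatastore >/dev/null 2>&1 || { echo "run this on the front-end"; exit 0; }
eval "DS_ID=\${$#}"
XPATH=/var/lib/one/remotes/datastore/xpath.rb   # stock helper; reads XML on stdin
SAN_HOST=$(onedatastore show -x "$DS_ID" | $XPATH /DATASTORE/TEMPLATE/SAN_HOST)
echo "image datastore $DS_ID lives on: $SAN_HOST"
```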


Note that DS_ID refers to the Datastore of the image, so for CLONE, LN ... 
would make sense, other operations may use the System DS as the DS_ID (e.g. 
DELETE). 


Is this good enough? 


Cheers 


Ruben 



On Wed, Dec 26, 2012 at 2:59 PM, Gary S. Cuozzo < g...@isgsoftware.net > wrote: 




Hello, 
Sorry this is sort of long, hope it makes sense... 

I have created customized DS & TM drivers for our NFS based storage servers. 
Each storage server is snap-shotted and replicated at least every hour to 
another server. I have a use case that I would like to be able to remap all 
images for a particular server quickly in the event of a failure, such as a 
hardware issue. 

In my DS driver, I have a parameter for SAN_HOST which is used to determine 
which storage server will be used to deploy the image. For my failure case, I 
would like to simply be able to edit the SAN_HOST parameter for the particular 
DS which failed and redeploy the vm's, having the images automatically go to 
the backup server and not have to remap them all. This would take the recovery 
process from several hours of manually remapping things to probably < 1 hour. 

The TM drivers don't get any information passed to them about the DS of the 
particular image they are working with. What I would like to do is be able to 
access the DS parameters for an image and use them to remap certain aspects of 
the image so the remapping will work. 

The obvious way I've thought of is to just use the image SOURCE and iterate 
through the image list in order to get a match, then go look up the associated 
DS. 

Is there any better way that I may be missing? My way seems a bit kludgy but I 
didn't think of any other way. 

Thanks in advance, 
gary 


___ 
Users mailing list 
Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 


-- 
Ruben S. Montero, PhD 
Project co-Lead and Chief Architect 
OpenNebula - The Open Source Solution for Data Center Virtualization 
www.OpenNebula.org | rsmont...@opennebula.org | @OpenNebula 

___ 
Users mailing list 
Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Access to DS for image in TM

2012-12-27 Thread Gary S. Cuozzo
Hello Ruben, 
I'm glad I asked. I must have missed something in the docs. I thought it was 
the system datastore of the remote host that was sent in. If it is the ID of 
the image DS, then I will be all set. 

Thanks! 
gary 


- Original Message -

From: "Ruben S. Montero"  
To: "Gary S. Cuozzo"  
Cc: "Users OpenNebula"  
Sent: Thursday, December 27, 2012 4:01:40 AM 
Subject: Re: [one-users] Access to DS for image in TM 


Hi Gary, 


The last parameter of the TM scripts is the DS_ID of the image, you could get 
the information by simply onedatastore show $DS_ID, and probably xpath.rb to 
get the SAN_HOST if you are working in a shell script. 


Note that DS_ID refers to the Datastore of the image, so for CLONE, LN ... 
would make sense, other operations may use the System DS as the DS_ID (e.g. 
DELETE). 


Is this good enough? 


Cheers 


Ruben 



On Wed, Dec 26, 2012 at 2:59 PM, Gary S. Cuozzo < g...@isgsoftware.net > wrote: 




Hello, 
Sorry this is sort of long, hope it makes sense... 

I have created customized DS & TM drivers for our NFS based storage servers. 
Each storage server is snap-shotted and replicated at least every hour to 
another server. I have a use case that I would like to be able to remap all 
images for a particular server quickly in the event of a failure, such as a 
hardware issue. 

In my DS driver, I have a parameter for SAN_HOST which is used to determine 
which storage server will be used to deploy the image. For my failure case, I 
would like to simply be able to edit the SAN_HOST parameter for the particular 
DS which failed and redeploy the vm's, having the images automatically go to 
the backup server and not have to remap them all. This would take the recovery 
process from several hours of manually remapping things to probably < 1 hour. 

The TM drivers don't get any information passed to them about the DS of the 
particular image they are working with. What I would like to do is be able to 
access the DS parameters for an image and use them to remap certain aspects of 
the image so the remapping will work. 

The obvious way I've thought of is to just use the image SOURCE and iterate 
through the image list in order to get a match, then go look up the associated 
DS. 

Is there any better way that I may be missing? My way seems a bit kludgy but I 
didn't think of any other way. 

Thanks in advance, 
gary 


___ 
Users mailing list 
Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 







-- 
Ruben S. Montero, PhD 
Project co-Lead and Chief Architect 
OpenNebula - The Open Source Solution for Data Center Virtualization 
www.OpenNebula.org | rsmont...@opennebula.org | @OpenNebula 


[one-users] Access to DS for image in TM

2012-12-26 Thread Gary S. Cuozzo
Hello, 
Sorry this is sort of long, hope it makes sense... 

I have created customized DS & TM drivers for our NFS based storage servers. 
Each storage server is snap-shotted and replicated at least every hour to 
another server. I have a use case that I would like to be able to remap all 
images for a particular server quickly in the event of a failure, such as a 
hardware issue. 

In my DS driver, I have a parameter for SAN_HOST which is used to determine 
which storage server will be used to deploy the image. For my failure case, I 
would like to simply be able to edit the SAN_HOST parameter for the particular 
DS which failed and redeploy the vm's, having the images automatically go to 
the backup server and not have to remap them all. This would take the recovery 
process from several hours of manually remapping things to probably < 1 hour. 

The TM drivers don't get any information passed to them about the DS of the 
particular image they are working with. What I would like to do is be able to 
access the DS parameters for an image and use them to remap certain aspects of 
the image so the remapping will work. 

The obvious way I've thought of is to just use the image SOURCE and iterate 
through the image list in order to get a match, then go look up the associated 
DS. 

Is there any better way that I may be missing? My way seems a bit kludgy but I 
didn't think of any other way. 

Thanks in advance, 
gary 



Re: [one-users] Two System Datafiles in the same cluster

2012-12-19 Thread Gary S. Cuozzo
Hi Oriol,
My drivers won't help you much as they are very specific to our ZFS based 
environment.  I think it should be pretty straight forward for you to adapt the 
ONE shared driver to support your needs.  That is what I used as a base for my 
drivers.

gary

- Original Message -
From: "Oriol Martí" 
To: users@lists.opennebula.org
Sent: Wednesday, December 19, 2012 3:20:31 AM
Subject: Re: [one-users] Two System Datafiles in the same cluster

OK, thanks Ruben,
this is just what I was asking for. I think it would be a very good 
improvement in the future to have more than one system datastore in one 
cluster and to choose where you want to deploy the VM. For now, if I have two 
different system datastores I will have to duplicate all my images and 
partition my public vnet, because (if I'm not wrong) one image datastore and 
one vnet can only be associated with one cluster. No?
Gary, is it possible to have your drivers? I think maybe that will be my 
solution for now.

Thanks to all,

;)riol

On 12/18/2012 11:12 PM, Ruben S. Montero wrote:
> Yes and simply to add to Gary's: there can be just one system DS per
> cluster. It you need to use a system DS based on local disks (e.g.
> TM_MAD=ssh) and other based on NFS (e.g. TM_MAD=shared) you need to
> split the hosts in different clusters and associated a different
> SYSTEM_DS to each one.
>
> Then allocate VMs to each cluster with REQUIREMENTS="CLUSTER=fast_ds" ...
>
> Hope it helps
>
> Ruben
>
> On Tue, Dec 18, 2012 at 4:28 PM, Gary S. Cuozzo  wrote:
>> Hi Oriol,
>> ONE already works the way you indicate you need it to.  When you create the 
>> image, you select the DS that is appropriate.  The system DS is just used to 
>> coordinate the deployment of the VM's.
>> gary
>>
>> - Original Message -
>> From: "Oriol Martí" 
>> To: "Gary S. Cuozzo" 
>> Cc: "Users OpenNebula" 
>> Sent: Tuesday, December 18, 2012 9:35:43 AM
>> Subject: Re: [one-users] Two System Datafiles in the same cluster
>>
>> Hi Gary,
>>
>> this could be a solution. For me the best option would be to specify the
>> system datastore in which I want to put the VM.
>> Does anybody know if it is possible to send a parameter to indicate this,
>> or will it be a new feature?
>> Gary, is it possible to have your driver? Maybe it will be my solution
>>
>> Thanks,
>>
>> ;)riol
>>
>> On 12/18/2012 03:09 PM, Gary S. Cuozzo wrote:
>>> Hi Oriol,
>>> I have deployed my own DS & TM drivers for my NFS & LocalFS datastores.  
>>> What they do is utilize mount points outside of the ONE directory and just 
>>> symlink from the system datastore into the appropriate filesystem.  For 
>>> example:
>>> /export/local/image1.img  (local disk mounted here)
>>> /export/san1/image2.img  (NFS mounted from SAN1)
>>> /export/san2/image3.img  (NFS mounted from SAN2)
>>>
>>> The drivers symlink into the system datastore to each of those images.
>>>
>>> This way, I am able to select the best of both worlds depending on the 
>>> application.  Shared storage with live migrations, etc.  Or, local storage 
>>> with high performance disk I/O.
>>>
>>> Thanks,
>>> gary
>>>
>>> - Original Message -
>>> From: "Oriol Martí" 
>>> To: "Gary S. Cuozzo" 
>>> Cc: "Users OpenNebula" 
>>> Sent: Tuesday, December 18, 2012 4:09:32 AM
>>> Subject: Re: [one-users] Two System Datafiles in the same cluster
>>>
>>> Hi Gary,
>>>
>>> I've read the docs, but I think I'm not understanding something, when
>>> you create vm's all the images are copied in the
>>> /var/lib/one/datastores/0, then all your virtual machines are running
>>> from an NFS system datastore and it's impossible to run the virtual
>>> machines from local storage
>>> Have you tried to create virtual machines from different datastores and
>>> see if the disk speed is really different?
>>>
>>> On 12/17/2012 06:37 PM, Gary S. Cuozzo wrote:
>>>> I do exactly what you reference below, use NFS for shared storage and also 
>>>> localfs for high performance local storage.  But, I only have 1 system 
>>>> datastore.  When you use shared storage such as NFS or iSCSI, the system 
>>>> datastore also has to be shared.  I think the system datastore is 
>>>> typically datastore ID 0.  That's what it is in my system as it's created 
>>>> by ONE.
>>>

Re: [one-users] New contextualization packages

2012-12-18 Thread Gary S. Cuozzo
See inline comments...

> Thanks for the feedback, I think we can add some of those
> modifications to future versions of the packages. Some commends
> inline.
> 
> On Mon, Dec 17, 2012 at 3:15 AM, Gary S. Cuozzo
>  wrote:
> > Thank you for these packages.  They work great and are an excellent
> > base.  I used both deb & rpm.
> >
> > I customized them to add some features:
> > * setting entries in /etc/hosts file
> 
> I also think this is very interesting, alongside hostname setting.
> Are
> you setting the hole file or do you create that one programatically?
> For example, automatically setting localhost plus eth0 hostname/ip
> and
> adding some extra hosts.

I am actually cat'ing the file through grep -v to strip out anything already 
there and then appending the appropriate entry based on the context variables.
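
Roughly like this (host name and IPs are made up, and the demo edits a temp 
file rather than the real /etc/hosts):

```shell
#!/bin/sh
# Sketch: drop any stale entry for the host, then append a fresh one
# built from the context variables.

update_hosts() {
    hosts_file="$1" ip="$2" name="$3"
    grep -v " $name\$" "$hosts_file" > "$hosts_file.tmp"
    echo "$ip $name" >> "$hosts_file.tmp"
    mv "$hosts_file.tmp" "$hosts_file"
}

# demonstration on a throwaway file
f=$(mktemp)
printf '127.0.0.1 localhost\n10.0.0.9 web01\n' > "$f"
update_hosts "$f" 10.0.0.21 web01   # web01 moved to a new address
cat "$f"
```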



> > * modified for support of resolvconf package in ubuntu 12.04
> 
> We did not support resolvconf package for two reasons:
>   * From the documentation it seems that resolvconf package use is
> only encouraged for complex dns configurations. Editing
> /etc/resolv.conf is still the way to go for normal dns
> configurations.
>   * Using resolvconf package will make the scripts incompatible with
> other distros and I wanted it to be as distro agnostic as possible


Totally agree.  I don't care for resolvconf myself, but it is the default in 
newer Ubuntu releases so I thought I would support it.  I have a different 
script for Ubuntu than I do for CentOS.  


> > * added HOSTNAME context variable
> > * added DOMAINNAME variable
> > * added optional ADMIN_USERNAME, ADMIN_PASSWORD, ADMIN_SSH_PUB_KEY
> > to support creation of an admin account only if it doesn't already
> > exist.
> 
> These parameters also seem very useful to me. Do you already have
> scripts in place to do this?

Yes.  Creating a user account with a password differs between CentOS and 
Ubuntu, so I have different scripts for each.  CentOS supports the passwd 
command reading the password from stdin; Ubuntu only supports passing in an 
already-encrypted password.  I'm happy to share them if you let me know how 
best to do so.  They are quick & dirty, but you can easily clean them up to 
make them acceptable for inclusion with ONE.
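
A rough sketch of the distro branch (it only prints the commands it would 
run; the account name and crypt invocation are placeholders, not the actual 
scripts):

```shell
#!/bin/sh
# Sketch: pick the right user-creation recipe per distro.  Commands are
# printed rather than executed; values are illustrative.

admin_user_cmds() {
    user="$1" pass="$2"
    if [ -f /etc/redhat-release ]; then
        # CentOS: passwd can read the clear-text password from stdin
        echo "useradd -m $user && echo '$pass' | passwd --stdin $user"
    else
        # Ubuntu: useradd -p wants an already-encrypted password
        echo "useradd -m -p \"\$(openssl passwd -1 '$pass')\" $user"
    fi
}

admin_user_cmds admin secret123
```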



> > I find when I clone a VM, on the first boot, it still has some of
> > the old "template" vm settings because the context happens too
> > late in the boot process.  I plan to add a feature where the last
> > context script will reboot the system if it was the first time
> > booting.  This lets things like the hostname take effect.
> 
> I have been also thinking a lot about this. I decided to make the
> scripts start at that time because we may use the contextualization
> to
> start some services that need the machine to be fully configured. One
> possible solution is having two set of configuration scripts, one
> that
> starts early in the boot process (for network, hostname and such) and
> another list for late configuration or service startup scripts. What
> do you think about it?

I think this is a great idea.  Honestly I don't know enough about the details 
of the upstart stuff, but I think having an easy way to make scripts that hook 
into various points in the boot process would be quite nice.  Even if it were 
as simple as having some sub-directories under the one-context.d directory that 
were related to the stages of boot.  Sort of context-based rc#.d directories.
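
A minimal sketch of that staged layout (the early.d/late.d directory names 
are made up, not part of the current packages):

```shell
#!/bin/sh
# Sketch: run context scripts from per-stage sub-directories in a fixed
# order, like rc#.d but keyed to boot stages.

run_stage() {
    stage_dir="$1"
    [ -d "$stage_dir" ] || return 0
    for script in "$stage_dir"/*; do
        if [ -x "$script" ]; then
            "$script"
        fi
    done
}

# demonstration with a throwaway one-context.d-style tree
ctx=$(mktemp -d)
mkdir -p "$ctx/early.d" "$ctx/late.d"
printf '#!/bin/sh\necho early:network\n' > "$ctx/early.d/00-net"
printf '#!/bin/sh\necho late:services\n' > "$ctx/late.d/99-svc"
chmod +x "$ctx/early.d/00-net" "$ctx/late.d/99-svc"

for stage in early.d late.d; do
    run_stage "$ctx/$stage"
done
```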


Thanks,
gary



> 
> Cheers
> 
> >
> > Thanks again,
> > gary
> >
> > - Original Message -
> > From: "Javier Fontan" 
> > To: users@lists.opennebula.org
> > Sent: Thursday, October 18, 2012 5:21:49 AM
> > Subject: [one-users] New contextualization packages
> >
> > Hello,
> >
> > We have prepared new contextualization packages and made them more
> > modular so you can also add your own tweaks. Here you have an
> > introductory post with the new features and usage:
> >
> > http://blog.opennebula.org/?p=3584
> >
> > The links to download the packages are in the documentation (
> > http://opennebula.org/documentation:rel3.8:context_overview ).
> >
> > I hope you will find it useful.
> >
> > Cheers
> >
> >
> > --
> > Javier Fontán Muiños
> > Project Engineer
> > OpenNebula - The Open Source Toolkit for Data Center Virtualization
> > www.OpenNebula.org | jfon...@opennebula.org | @OpenNebula
> > ___
> > Users mailing list
> > Users@lists.opennebula.org
> > http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
> > 

Re: [one-users] Two System Datafiles in the same cluster

2012-12-18 Thread Gary S. Cuozzo
Hi Oriol,
ONE already works the way you indicate you need it to.  When you create the 
image, you select the DS that is appropriate.  The system DS is just used to 
coordinate the deployment of the VM's.
gary

- Original Message -
From: "Oriol Martí" 
To: "Gary S. Cuozzo" 
Cc: "Users OpenNebula" 
Sent: Tuesday, December 18, 2012 9:35:43 AM
Subject: Re: [one-users] Two System Datafiles in the same cluster

Hi Gary,

this could be a solution. For me the best option would be to specify the 
system datastore in which I want to put the VM.
Does anybody know if it is possible to send a parameter to indicate this, 
or will it be a new feature?
Gary, is it possible to have your driver? Maybe it will be my solution

Thanks,

;)riol

On 12/18/2012 03:09 PM, Gary S. Cuozzo wrote:
> Hi Oriol,
> I have deployed my own DS & TM drivers for my NFS & LocalFS datastores.  What 
> they do is utilize mount points outside of the ONE directory and just symlink 
> from the system datastore into the appropriate filesystem.  For example:
> /export/local/image1.img  (local disk mounted here)
> /export/san1/image2.img  (NFS mounted from SAN1)
> /export/san2/image3.img  (NFS mounted from SAN2)
>
> The drivers symlink into the system datastore to each of those images.
>
> This way, I am able to select the best of both worlds depending on the 
> application.  Shared storage with live migrations, etc.  Or, local storage 
> with high performance disk I/O.
>
> Thanks,
> gary
>
> - Original Message -
> From: "Oriol Martí" 
> To: "Gary S. Cuozzo" 
> Cc: "Users OpenNebula" 
> Sent: Tuesday, December 18, 2012 4:09:32 AM
> Subject: Re: [one-users] Two System Datafiles in the same cluster
>
> Hi Gary,
>
> I've read the docs, but I think I'm not understanding something, when
> you create vm's all the images are copied in the
> /var/lib/one/datastores/0, then all your virtual machines are running
> from an NFS system datastore and it's impossible to run the virtual
> machines from local storage
> Have you tried to create virtual machines from different datastores and
> see if the disk speed is really different?
>
> On 12/17/2012 06:37 PM, Gary S. Cuozzo wrote:
>> I do exactly what you reference below, use NFS for shared storage and also 
>> localfs for high performance local storage.  But, I only have 1 system 
>> datastore.  When you use shared storage such as NFS or iSCSI, the system 
>> datastore also has to be shared.  I think the system datastore is typically 
>> datastore ID 0.  That's what it is in my system as it's created by ONE.
>>
>> In your case, you will need to share the system DS to each host in the 
>> cluster.  For example, I have /var/lib/one/datastores/0 NFS exported from 
>> ONE controller to each vm host and mounted there in the same place.
>>
>> Then, you just use each other DS as you wish and ONE handles copying, 
>> linking, etc as needed.
>>
>> Hope that makes some sense.  Read up on the storage docs, they are very good 
>> as describing all this.
>>
>> gary
>>
>>
>> - Original Message -
>> From: "Oriol Martí" 
>> To: "Gary S. Cuozzo" 
>> Cc: "Users OpenNebula" 
>> Sent: Monday, December 17, 2012 12:17:47 PM
>> Subject: Re: [one-users] Two System Datafiles in the same cluster
>>
>> Hi Gary,
>>
>> thank you for your fast response,
>> my idea is to have an NFS and a local datastore, which are system
>> datastores, to store the images of the running vms. Then you can create
>> vms with faster disk (local) or slower disk (NFS). I've tried to have
>> two systems datastores in the same cluster and it's possible.
>> From what I understood, when you create a vm, the image is copied from
>> the image datastore to the default system datastore, which you define in
>> the cluster as SYSTEM_DS. I don't know if its possible to create a vm in
>> a different datastore than the cluster's default.
>>
>> I don't know if I'm missing something about the copy process or
>> something. Basically I would like to decide, when I create a vm, to
>> give it faster or slower disk depending on what I need. In my point of
>> view I think that if you use image datastores faster, you only will take
>> advantage of the speed when is copying, one time copied it's going to be
>> slower if its copied in a slower NFS datastore.
>>
>> Best regards,
>>
>>
>> On 12/17/2012 04:42 PM, Gary S. Cuozzo wrote:
>>> Hello Oriol,
>>> I don't think you can have more than one system datastore, but I could be 
>>> wrong about that.

Re: [one-users] Two System Datafiles in the same cluster

2012-12-18 Thread Gary S. Cuozzo
Hi Oriol,
I have deployed my own DS & TM drivers for my NFS & LocalFS datastores.  What 
they do is utilize mount points outside of the ONE directory and just symlink 
from the system datastore into the appropriate filesystem.  For example:
/export/local/image1.img  (local disk mounted here)
/export/san1/image2.img  (NFS mounted from SAN1)
/export/san2/image3.img  (NFS mounted from SAN2)

The drivers symlink into the system datastore to each of those images.

This way, I am able to select the best of both worlds depending on the 
application.  Shared storage with live migrations, etc.  Or, local storage with 
high performance disk I/O.
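
A stripped-down sketch of that ln-style operation (paths are illustrative, 
and the demo below runs against a throwaway directory tree rather than a real 
datastore):

```shell
#!/bin/sh
# Sketch of the symlink trick: the image lives on an external mount
# point and the system datastore only holds a link to it.

ln_image() {
    src="$1"   # e.g. /export/san1/image2.img
    dst="$2"   # e.g. /var/lib/one/datastores/0/42/disk.0
    mkdir -p "$(dirname "$dst")"
    ln -sf "$src" "$dst"
}

# demonstration against a temporary tree
tmp=$(mktemp -d)
mkdir -p "$tmp/export/san1"
: > "$tmp/export/san1/image2.img"
ln_image "$tmp/export/san1/image2.img" "$tmp/datastores/0/42/disk.0"
readlink "$tmp/datastores/0/42/disk.0"
```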

Thanks,
gary

- Original Message -
From: "Oriol Martí" 
To: "Gary S. Cuozzo" 
Cc: "Users OpenNebula" 
Sent: Tuesday, December 18, 2012 4:09:32 AM
Subject: Re: [one-users] Two System Datafiles in the same cluster

Hi Gary,

I've read the docs, but I think I'm not understanding something: when 
you create VMs, all the images are copied into /var/lib/one/datastores/0, 
so all your virtual machines run from an NFS system datastore and it's 
impossible to run them from local storage. 
Have you tried creating virtual machines from different datastores to 
see if the disk speed is really different?

On 12/17/2012 06:37 PM, Gary S. Cuozzo wrote:
> I do exactly what you reference below, use NFS for shared storage and also 
> localfs for high performance local storage.  But, I only have 1 system 
> datastore.  When you use shared storage such as NFS or iSCSI, the system 
> datastore also has to be shared.  I think the system datastore is typically 
> datastore ID 0.  That's what it is in my system as it's created by ONE.
>
> In your case, you will need to share the system DS to each host in the 
> cluster.  For example, I have /var/lib/one/datastores/0 NFS exported from ONE 
> controller to each vm host and mounted there in the same place.
>
> Then, you just use each other DS as you wish and ONE handles copying, 
> linking, etc as needed.
>
> Hope that makes some sense.  Read up on the storage docs, they are very good 
> as describing all this.
>
> gary
>
>
> - Original Message -
> From: "Oriol Martí" 
> To: "Gary S. Cuozzo" 
> Cc: "Users OpenNebula" 
> Sent: Monday, December 17, 2012 12:17:47 PM
> Subject: Re: [one-users] Two System Datafiles in the same cluster
>
> Hi Gary,
>
> thank you for your fast response,
> my idea is to have an NFS and a local datastore, which are system
> datastores, to store the images of the running vms. Then you can create
> vms with faster disk (local) or slower disk (NFS). I've tried to have
> two systems datastores in the same cluster and it's possible.
>   From what I understood, when you create a vm, the image is copied from
> the image datastore to the default system datastore, which you define in
> the cluster as SYSTEM_DS. I don't know if its possible to create a vm in
> a different datastore than the cluster's default.
>
> I don't know if I'm missing something about the copy process or
> something. Basically I would like to decide, when I create a vm, to
> give it faster or slower disk depending on what I need. In my point of
> view I think that if you use image datastores faster, you only will take
> advantage of the speed when is copying, one time copied it's going to be
> slower if its copied in a slower NFS datastore.
>
> Best regards,
>
>
> On 12/17/2012 04:42 PM, Gary S. Cuozzo wrote:
>> Hello Oriol,
>> I don't think you can have more than one system datastore, but I could be 
>> wrong about that.  But, unless I misunderstand your goal, I don't think you 
>> need to have more than one.  You should simply be able to define 2 
>> datastores and associate each with the cluster.  I have a similar setup to 
>> what you describe, I have a cluster that has 2 different NFS datastores, a 
>> local file datastore, and an iSCSI datastore.  I can use images from any/all 
>> datastores in a single vm (though I actually don't).
>>
>> Hope that helps,
>> gary
>>
>>
>> - Original Message -
>> From: "Oriol Martí" 
>> To: users@lists.opennebula.org
>> Sent: Monday, December 17, 2012 9:53:43 AM
>> Subject: [one-users] Two System Datafiles in the same cluster
>>
>> Hi,
>>
>> somebody knows if it is possible to have two system datastores in a cluster 
>> and create VM in the same host but in different datastores?
> I've my default SYSTEM_DS="101" in the cluster, I've tried...

Re: [one-users] Two System Datafiles in the same cluster

2012-12-17 Thread Gary S. Cuozzo
I do exactly what you reference below, use NFS for shared storage and also 
localfs for high performance local storage.  But, I only have 1 system 
datastore.  When you use shared storage such as NFS or iSCSI, the system 
datastore also has to be shared.  I think the system datastore is typically 
datastore ID 0.  That's what it is in my system as it's created by ONE.

In your case, you will need to share the system DS to each host in the cluster. 
 For example, I have /var/lib/one/datastores/0 NFS exported from ONE controller 
to each vm host and mounted there in the same place.
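
For reference, the arrangement looks something like this in config form -- 
host names and options here are illustrative, not taken from my setup:

```
# On the ONE front-end, /etc/exports:
/var/lib/one/datastores/0  vmhost1(rw,sync,no_subtree_check)  vmhost2(rw,sync,no_subtree_check)

# On each vm host, /etc/fstab (mounted at the same path):
frontend:/var/lib/one/datastores/0  /var/lib/one/datastores/0  nfs  defaults  0  0
```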

Then, you just use each other DS as you wish and ONE handles copying, linking, 
etc as needed.

Hope that makes some sense.  Read up on the storage docs, they are very good as 
describing all this.

gary


- Original Message -
From: "Oriol Martí" 
To: "Gary S. Cuozzo" 
Cc: "Users OpenNebula" 
Sent: Monday, December 17, 2012 12:17:47 PM
Subject: Re: [one-users] Two System Datafiles in the same cluster

Hi Gary,

thank you for your fast response,
my idea is to have an NFS and a local datastore, which are system 
datastores, to store the images of the running vms. Then you can create 
vms with faster disk (local) or slower disk (NFS). I've tried to have 
two systems datastores in the same cluster and it's possible.
 From what I understood, when you create a VM, the image is copied from 
the image datastore to the default system datastore, which you define in 
the cluster as SYSTEM_DS. I don't know if it's possible to create a VM in 
a datastore other than the cluster's default.

I don't know if I'm missing something about the copy process. Basically 
I would like to decide, when I create a VM, whether to give it a faster 
or slower disk depending on what I need. From my point of view, if you 
only use faster image datastores, you only benefit from the speed during 
the copy; once copied, the VM will be slower if its disk ends up in a 
slower NFS datastore.

Best regards,


On 12/17/2012 04:42 PM, Gary S. Cuozzo wrote:
> Hello Oriol,
> I don't think you can have more than one system datastore, but I could be 
> wrong about that.  But, unless I misunderstand your goal, I don't think you 
> need to have more than one.  You should simply be able to define 2 datastores 
> and associate each with the cluster.  I have a similar setup to what you 
> describe, I have a cluster that has 2 different NFS datastores, a local file 
> datastore, and an iSCSI datastore.  I can use images from any/all datastores 
> in a single vm (though I actually don't).
>
> Hope that helps,
> gary
>
>
> - Original Message -
> From: "Oriol Martí" 
> To: users@lists.opennebula.org
> Sent: Monday, December 17, 2012 9:53:43 AM
> Subject: [one-users] Two System Datafiles in the same cluster
>
> Hi,
>
> somebody knows if it is possible to have two system datastores in a cluster 
> and create VM in the same host but in different datastores?
> I've my default SYSTEM_DS="101" in the cluster, I've tried to add to the VM 
> template to change the datastore in which the VM is created:
> REQUIREMENTS= "DATASTORE_ID = \"103\""
> (I don't know if this is correct)
>
> But the VM does not boot, and is in pending state. Somebody know if is 
> possible or I have to create two clusters one with the datastore 101 and 
> another one with the datastore 103?
>
> My idea is to have one cluster with one NFS datastore and another local-file 
> datastore.
>
> Thanks!
>


-- 

..
 Oriol Martí Bonvehí
 CESCA - Centre de Supercomputació de Catalunya
 Administrador de Sistemes
 Gran Capità, 2-4 (Edifici Nexus) · 08034 Barcelona
 T. 93 551 6212 · F. 93 205 6979 · oma...@cesca.cat
..



Re: [one-users] Two System Datafiles in the same cluster

2012-12-17 Thread Gary S. Cuozzo
Hello Oriol,
I don't think you can have more than one system datastore, but I could be wrong 
about that.  But, unless I misunderstand your goal, I don't think you need to 
have more than one.  You should simply be able to define 2 datastores and 
associate each with the cluster.  I have a similar setup to what you describe, 
I have a cluster that has 2 different NFS datastores, a local file datastore, 
and an iSCSI datastore.  I can use images from any/all datastores in a single 
vm (though I actually don't).

Hope that helps,
gary


- Original Message -
From: "Oriol Martí" 
To: users@lists.opennebula.org
Sent: Monday, December 17, 2012 9:53:43 AM
Subject: [one-users] Two System Datafiles in the same cluster

Hi,

Does somebody know if it is possible to have two system datastores in a cluster 
and create VMs on the same host but in different datastores?
I have my default SYSTEM_DS="101" in the cluster. I've tried adding this to the 
VM template to change the datastore in which the VM is created:
REQUIREMENTS= "DATASTORE_ID = \"103\""
(I don't know if this is correct)

But the VM does not boot and stays in the pending state. Does somebody know if 
this is possible, or do I have to create two clusters, one with datastore 101 
and another one with datastore 103?

My idea is to have one cluster with one NFS datastore and another local-file 
datastore.
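
For what it's worth, the workaround suggested elsewhere in this thread is one 
cluster per system datastore, steering VMs with a cluster requirement. A 
hypothetical VM template fragment (the cluster name is made up):

```
# Place the VM in the cluster whose SYSTEM_DS is the fast local one
REQUIREMENTS = "CLUSTER = \"fast_local\""
```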

Thanks!


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] sunstone bug

2012-12-17 Thread Gary S. Cuozzo
I'm running 3.8.1
gary

- Original Message -
From: "Daniel Molina" 
To: "Simon Boulet" , "Gary S. Cuozzo" 

Cc: "Users OpenNebula" 
Sent: Monday, December 17, 2012 6:10:02 AM
Subject: Re: [one-users] sunstone bug

Hi Gary and Simon,

On 16 December 2012 20:56, Simon Boulet  wrote:
> Similar issue when deleting images from Sunstone, the confirmation message
> is showing: "This will delete the selected VMs from the database Do you want
> to proceed?"
>
> Should we open a bug for this?
>
> Simon
>
>
> On Sat, Dec 15, 2012 at 2:41 PM, Gary S. Cuozzo 
> wrote:
>>
>> Hello,
>> When deleting a virtual network, the warning message says:
>>   This will initiate the shutdown process in the selected VMs
>>   Do you want to proceed?


What OpenNebula version are you using?

Cheers

-- 
Daniel Molina
Project Engineer
OpenNebula - The Open Source Solution for Data Center Virtualization
www.OpenNebula.org | dmol...@opennebula.org | @OpenNebula


Re: [one-users] New contextualization packages

2012-12-16 Thread Gary S. Cuozzo
Thank you for these packages.  They work great and are an excellent base.  I 
used both deb & rpm.

I customized them to add some features:
* setting entries in /etc/hosts file
* modified for support of resolvconf package in ubuntu 12.04
* added HOSTNAME context variable
* added DOMAINNAME variable
* added optional ADMIN_USERNAME, ADMIN_PASSWORD, ADMIN_SSH_PUB_KEY to support 
creation of an admin account only if it doesn't already exist.

I find when I clone a VM, on the first boot, it still has some of the old 
"template" vm settings because the context happens too late in the boot 
process.  I plan to add a feature where the last context script will reboot the 
system if it was the first time booting.  This lets things like the hostname 
take effect.

Thanks again,
gary

- Original Message -
From: "Javier Fontan" 
To: users@lists.opennebula.org
Sent: Thursday, October 18, 2012 5:21:49 AM
Subject: [one-users] New contextualization packages

Hello,

We have prepared new contextualization packages and made them more
modular so you can also add your own tweaks. Here you have an
introductory post with the new features and usage:

http://blog.opennebula.org/?p=3584

The links to download the packages are in the documentation (
http://opennebula.org/documentation:rel3.8:context_overview ).

I hope you will find it useful.

Cheers


-- 
Javier Fontán Muiños
Project Engineer
OpenNebula - The Open Source Toolkit for Data Center Virtualization
www.OpenNebula.org | jfon...@opennebula.org | @OpenNebula
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Sunstone Bandwidth

2012-12-16 Thread Gary S. Cuozzo
I see this also, using Ubuntu 12.04 with KVM, but if I hit refresh it goes 
back to reasonable numbers. 

- Original Message -

From: "Rodolfo Conte Brufatto" 
To: "André Monteiro" 
Cc: "users" 
Sent: Sunday, December 16, 2012 5:37:45 PM
Subject: Re: [one-users] Sunstone Bandwidth

Same here, using Ubuntu 12.04 LTS


On Sat, Dec 15, 2012 at 10:16 AM, André Monteiro < andre.mont...@gmail.com > 
wrote:



Hello,

I have this problem also, using KVM with SL6.3.


--
André Monteiro







On Sat, Dec 15, 2012 at 12:08 PM, Ricardo Duarte < rjt...@hotmail.com > wrote:





Hi,

I have this problem too, with KVM and CentOS 6.3, using the default monitoring 
intervals.


Regards,
Ricardo



> Date: Fri, 14 Dec 2012 15:54:59 +0100
> From: jfon...@opennebula.org
> To: shank15...@gmail.com
> CC: users@lists.opennebula.org
> Subject: Re: [one-users] Sunstone Bandwidth


>
> Those values seem pretty high, indeed. What is the hypervisor you are using?
>
> I've just made a test with kvm. Set the monitoring interval to 30 and
> fired up a download in a vm limited to 10Kb/s. The values are pretty
> similar to the 10Kb/s limit.
>
>
> On Thu, Dec 13, 2012 at 12:28 AM, Shankhadeep Shome
> < shank15...@gmail.com > wrote:
> > The bandwidth reported in Sunstone can't be right. I see 15GB/s dl and 
> > 27GB/s upload at the moment and this is usually the case, sometimes larger
> > numbers or smaller too. In any case doesn't seem even close to accurate.
> > Anybody else see this behavior? I'm using the dummy net driver.
> >
> > Shankhadeep
> >
> > ___
> > Users mailing list
> > Users@lists.opennebula.org
> > http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
> >
>
>
>
> --
> Javier Fontán Muiños
> Project Engineer
> OpenNebula - The Open Source Toolkit for Data Center Virtualization
> www.OpenNebula.org | jfon...@opennebula.org | @OpenNebula
> ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org













--
Have you tried turning it off and on again?

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org



[one-users] contextualization with interface aliases

2012-12-16 Thread Gary S. Cuozzo
Hello, 
I am contextualizing vms for both ubuntu 12.04 and centos 6.3 and would like to 
support multiple IP addresses per interface via interface aliasing. For 
example: 
eth0 - 192.168.1.1 
eth0:1 - 192.168.1.2 
etc. 

I'm currently using the context packages from ONE. Is there a way to specify 
multiple IP's for an interface? If so, is it done via the network definition in 
the vm template? or in the context variables? 
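
As a starting point, here is a sketch of expanding an address list into alias 
commands. Note the multi-IP input is just function arguments -- as far as I 
know the stock context packages define no such variable, so the context 
plumbing is the missing piece:

```shell
#!/bin/sh
# Sketch: print the ip(8) commands that would configure eth0 plus
# aliases eth0:1, eth0:2, ... for a list of addresses.  The /24 prefix
# is an assumption for the example.

print_alias_cmds() {
    iface="$1"; shift
    n=0
    for ip in "$@"; do
        if [ "$n" -eq 0 ]; then
            echo "ip addr add $ip/24 dev $iface"
        else
            echo "ip addr add $ip/24 dev $iface label $iface:$n"
        fi
        n=$((n + 1))
    done
}

print_alias_cmds eth0 192.168.1.1 192.168.1.2
```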

Thanks, 
gary 


[one-users] sunstone bug

2012-12-15 Thread Gary S. Cuozzo
Hello, 
When deleting a virtual network, the warning dialog incorrectly shows the VM 
shutdown message: 

"This will initiate the shutdown process in the selected VMs. Do you want to 
proceed?" 
Thanks, 
gary 
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] sunstone gui suggestion #2

2012-12-14 Thread Gary S. Cuozzo
Hello, 
When you instantiate a template via the templates tab, you end up with vm's 
which are just named according to the vmid. When you create a new vm using the 
virtual machines tab, you are able to provide a name to the vm and select a 
template. 

For instantiating via the templates tab, I think it would be nice to have a 
feature where the vm could get named according to the template name with an 
ordinal or some other index to avoid name clashes when multiple vm's are 
instantiated from a single template. 

For example, a template named "http_server" could get instantiated as 
"http-1", "http-2", etc. 

Let me know what you think, 
gary 
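
In the meantime a rough CLI workaround is a loop that instantiates the template
with indexed names. A dry-run sketch; it assumes `onetemplate instantiate`
accepts `--name` on your release, and the `echo` is left in so nothing is
actually created:

```shell
#!/bin/sh
# Sketch: indexed VM names derived from a base name, as suggested above.
# The `echo` makes this a dry run; drop it to actually instantiate.
name_for() { printf '%s-%d\n' "$1" "$2"; }

for i in 1 2 3; do
    echo onetemplate instantiate http_server --name "$(name_for http "$i")"
done
```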

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] deployment of certain vm's to different hosts

2012-12-14 Thread Gary S. Cuozzo
Hi Ruben,
Yes.  #3 never actually occurred to me, but would also be a nice feature.

Thanks,
gary

- Original Message -
From: "Ruben S. Montero" 
To: "Shankhadeep Shome" 
Cc: "Gary S. Cuozzo" , "Users OpenNebula" 
, "Carlos Jiménez" , 
"Jota" 
Sent: Friday, December 14, 2012 7:31:20 AM
Subject: Re: [one-users] deployment of certain vm's to different hosts

Hi,

Let me try to sum up your requirement guys:

1.- Affinity needs an easy way to be expressed; the proposed raw VM IDs
are hard to use.
2.- Scheduling could be defined per VM group (e.g. collapse the group
in a host, spread the group, none)
3.- VM groups link VMs, one could perform simultaneous operations on
the VMs (migrate the group, stop the group...)

Is this what you are asking for?

Ruben



On Wed, Dec 12, 2012 at 11:52 PM, Shankhadeep Shome
 wrote:
> sorry meant one or more, not two
>
>
> On Wed, Dec 12, 2012 at 5:50 PM, Shankhadeep Shome 
> wrote:
>>
>> Yea I meant you need the cluster mechanism to be extended to general group
>> scheduling, basically you can pick and choose what you want to exclude and
>> include in a group and run the scheduler against that. Currently (group
>> scheduling)  defines a group as two mutually exclusive clusters with
>> different hosts and storage.
>>
>>
>> On Wed, Dec 12, 2012 at 9:17 AM, Gary S. Cuozzo 
>> wrote:
>>>
>>> I could do it via a separate cluster, but it would leave my hosts very
>>> under utilized.  I think I would also have to create and manage additional
>>> datastores and other resources for the new cluster.  My preference would be
>>> to be able to configure the scheduler.
>>>
>>> 
>>> From: "Shankhadeep Shome" 
>>>
>>> To: "Gary S. Cuozzo" 
>>> Sent: Wednesday, December 12, 2012 8:46:55 AM
>>>
>>> Subject: Re: [one-users] deployment of certain vm's to different hosts
>>>
>>> I think what you want is the extension of the cluster mechanism that
>>> already exists in opennebula to be extended so that a cluster is a
>>> scheduling domain, not simply a set of {hosts, networks, storage}
>>>
>>>
>>> On Tue, Dec 11, 2012 at 4:59 PM, Gary S. Cuozzo 
>>> wrote:
>>>>
>>>> Oh nice.  So will the "REQUIREMENTS" be able to be based on any
>>>> attribute of the template?  For example, could I have an attribute called
>>>> "VM_GROUP" in the template and have a value such as "HTTP_1" in various
>>>> templates.  Then, each vm with VM_GROUP HTTP_1 would get deployed to
>>>> different hosts?
>>>>
>>>> Just trying to make sure I'm interpreting the description correctly.
>>>> The example shows things based on vm id's, but I think it would be easier 
>>>> to
>>>> just be able to make any sort of generic group(s) to base things on.
>>>> gary
>>>>
>>>> 
>>>> From: "Jaime Melis" 
>>>> To: "Gary S. Cuozzo" 
>>>> Cc: "Users OpenNebula" 
>>>> Sent: Tuesday, December 11, 2012 4:41:50 PM
>>>>
>>>> Subject: Re: [one-users] deployment of certain vm's to different hosts
>>>>
>>>> Hi,
>>>>
>>>> Yes, you guys really hit the spot! take a look at this:
>>>> http://dev.opennebula.org/issues/1675
>>>>
>>>> Basically, for the next OpenNebula release there will be a default probe
>>>> with all the running vms, so you can create a REQUIREMENTS directive just
>>>> like Mark described.
>>>>
>>>> cheers,
>>>> Jaime
>>>>
>>>>
>>>> On Tue, Dec 11, 2012 at 10:36 PM, Gary S. Cuozzo 
>>>> wrote:
>>>>>
>>>>> Interesting.  You can get a list of guests on a host using "virsh
>>>>> list", which would simplify your idea.  Hopefully I'll have some time to
>>>>> mess around with ideas this week.  I just wanted to make sure there was no
>>>>> feature already in ONE that I didn't know about.
>>>>>
>>>>> Let me know if you go further with your ideas and I'll do the same.
>>>>>
>>>>> Cheers,
>>>>> gary
>>>>>
>>>>> 
>>>>> From: "Mark Wagner" 
>>>>> To: users@lists.openne

Re: [one-users] TM question about rescheduling a vm

2012-12-12 Thread Gary S. Cuozzo
1.  I swear I checked sched.conf, but LIVE_RESCHEDS was 0.  Sorry about 
that.  I tested again and it seemed to work fine, other than I think I found a 
bug in my TM scripts.  The VM I tried to migrate has 3 disks, so the pre/post 
scripts got called 3x.  Since my scripts iterate through all disks on the first 
call, I have to add a bit of logic to not fail the next time through just 
because the filesystem has already been unmounted or mounted.

2.  Yep.  I have modified the system datastore pre/post scripts to call into my 
custom TM's.  The TM's then go through the template, check for disks they 
manage, and handle setup/teardown appropriately.  It works very well.

Thanks,
gary
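
For reference, the scheduler knob mentioned below lives in sched.conf; a
fragment (path and key per the 3.8-era docs, so verify against your release):

```
# /etc/one/sched.conf (fragment)
# 0 = rescheduled VMs are cold-migrated (save/restore)
# 1 = rescheduled VMs are live-migrated
LIVE_RESCHEDS = 1
```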


- Original Message -
From: "Ruben S. Montero" 
To: "Gary S. Cuozzo" 
Cc: "Users OpenNebula" 
Sent: Wednesday, December 12, 2012 4:44:46 PM
Subject: Re: [one-users] TM question about rescheduling a vm

Hi,

There are two things:

1.- Check that LIVE_RESCHEDS in sched.conf is set to 1, as pointed by Carlos.

2.- The pre/post migrate scripts are those of the system datastore,
you have to copy the ZFS scripts to the system datastore directory.
This may seem odd, but note that you are operating over objects in the
system datastore and it may contain disks from different image
datastores, so in general the pre/post scripts need to deal with those
situations...


Cheers

Ruben

On Wed, Dec 12, 2012 at 4:51 PM, Gary S. Cuozzo  wrote:
> Odd, that's not the behavior I'm seeing.  When I issue a "onevm resched
> " I am seeing the VM get suspended, a checkpoint file created, then the
> scheduler attempts to migrate the VM.  The pre/postmigrate scripts never get
> called for my TM driver, my NFS mount points don't get created on the target
> host, and the migration fails.  I then have to delete the VM and reschedule
> from scratch.
>
> I just verified this using a VM with a single, persistent, disk in my NFS
> datastore.  The flow I see in the logs is:
> 1.  VM gets flagged for reschedule
> 2.  VM gets saved:  "Successfully execute virtualization driver operation:
> save."
> 3.  VM gets cleaned:  "Successfully execute network driver operation:
> clean."
> 4.  ONE looks like it stages the VM on the target host:  "Successfully
> execute network driver operation: pre."
> 5.  VM fails to restore:  "Command execution fail:
> /var/tmp/one/vmm/kvm/restore /var/lib/one//datastores/0/72/checkpoint"
> "Command "virsh --connect qemu:///system restore
> /var/lib/one//datastores/0/72/checkpoint" failed: error: Failed to restore
> domain from /var/lib/one//datastores/0/72/checkpoint"
> "error: Unable to allow access for disk path
> /var/lib/one//datastores/0/72/disk.0: No such file or directory"
>
> At this point, the VM is in a failed state and I have to resubmit it.
>
> I am able to live migrate this VM just fine and expected that rescheduling
> it should also have done a live migration.  For some reason, it is doing a
> plain migration.
>
> Either way, is there a reason the TM pre/post migrate script is not getting
> called as it would for a live migration?  It seems like I either have
> something misconfigured, or there is a bug.  Either way I would expect the
> pre/postmigrate scripts to be called.
>
>
> Thanks for any help,
> gary
>
> 
> From: "Carlos Martín Sánchez" 
> To: "Gary S. Cuozzo" 
> Cc: users@lists.opennebula.org
> Sent: Wednesday, December 12, 2012 10:07:12 AM
> Subject: Re: [one-users] TM question about rescheduling a vm
>
>
> Hi,
>
> There is no 'resched' driver action, the command just marks the VM to be
> rescheduled. The scheduler then migrates or live-migrates these VMs
> depending on the LIVE_RESCHEDS attribute of sched.conf [1]
>
> Regards
>
> [1] http://opennebula.org/documentation:rel3.8:schg
> --
> Carlos Martín, MSc
> Project Engineer
> OpenNebula - The Open-source Solution for Data Center Virtualization
> www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula
>
>
>
> On Tue, Dec 11, 2012 at 8:32 PM, Gary S. Cuozzo 
> wrote:
>>
>> Hello,
>> I have developed my own TM driver for a NFS based datastore using ZFS.  It
>> works well so far, except in the case of using the "onevm resched "
>> command.
>>
>> The failure happens because my TM relies on pre/postmigrate scripts to
>> mount/umount ZFS datastores on the hosts as vm's move around.  In the case
>> of the resched command, the pre/postmigrate scripts don't get called, so the
>> vm cannot start on the target host because the disk images are not
>> available.
>>
>> In my use

Re: [one-users] attach/detach disks

2012-12-12 Thread Gary S. Cuozzo
Best I can tell what I'm seeing could be the same issue.
Thanks,
gary

- Original Message -
From: "Ruben S. Montero" 
To: "Gary S. Cuozzo" 
Cc: "Users OpenNebula" 
Sent: Wednesday, December 12, 2012 4:39:19 PM
Subject: Re: [one-users] attach/detach disks

I guess this is a known issue, already fixed for the next release

Could you check if you are being hit by this one?:

http://dev.opennebula.org/issues/1635

Cheers

Ruben

On Wed, Dec 12, 2012 at 10:25 PM, Gary S. Cuozzo  wrote:
> Hello,
> I just started trying out attaching & detaching disks to running vm's.  I am
> able to both attach and detach volatile disks.  If I create an image in my
> datastore and attach it, all is fine.  When I try to detach it, I get an
> error:
>   VirtualMachineDetach result FAILURE [VirtualMachineDetach] Could not
> detach disk with DISK_ID 2, it does not exist.
>
> Then I end up with a dangling symlink on the host and future attaches fail.
>
> Any ideas what may be wrong?
>
> Thanks,
> gary
>
>
> ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>



-- 
Ruben S. Montero, PhD
Project co-Lead and Chief Architect
OpenNebula - The Open Source Solution for Data Center Virtualization
www.OpenNebula.org | rsmont...@opennebula.org | @OpenNebula
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] attach/detach disks

2012-12-12 Thread Gary S. Cuozzo
Hello, 
I just started trying out attaching & detaching disks to running vm's. I am 
able to both attach and detach volatile disks. If I create an image in my 
datastore and attach it, all is fine. When I try to detach it, I get an error: 
VirtualMachineDetach result FAILURE [VirtualMachineDetach] Could not detach 
disk with DISK_ID 2, it does not exist. 

Then I end up with a dangling symlink on the host and future attaches fail. 

Any ideas what may be wrong? 

Thanks, 
gary 

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] TM question about rescheduling a vm

2012-12-12 Thread Gary S. Cuozzo
Odd, that's not the behavior I'm seeing. When I issue a "onevm resched " I 
am seeing the VM get suspended, a checkpoint file created, then the scheduler 
attempts to migrate the VM. The pre/postmigrate scripts never get called for my 
TM driver, my NFS mount points don't get created on the target host, and the 
migration fails. I then have to delete the VM and reschedule from scratch.

I just verified this using a VM with a single, persistent, disk in my NFS 
datastore. The flow I see in the logs is:
1. VM gets flagged for reschedule
2. VM gets saved: "Successfully execute virtualization driver operation: save."
3. VM gets cleaned: "Successfully execute network driver operation: clean."
4. ONE looks like it stages the VM on the target host: "Successfully execute 
network driver operation: pre."
5. VM fails to restore: "Command execution fail: /var/tmp/one/vmm/kvm/restore 
/var/lib/one//datastores/0/72/checkpoint"
"Command "virsh --connect qemu:///system restore 
/var/lib/one//datastores/0/72/checkpoint" failed: error: Failed to restore 
domain from /var/lib/one//datastores/0/72/checkpoint"
"error: Unable to allow access for disk path 
/var/lib/one//datastores/0/72/disk.0: No such file or directory"

At this point, the VM is in a failed state and I have to resubmit it.

I am able to live migrate this VM just fine and expected that rescheduling it 
should also have done a live migration. For some reason, it is doing a plain 
migration.

Either way, is there a reason the TM pre/post migrate script is not getting 
called as it would for a live migration? It seems like I either have something 
misconfigured, or there is a bug. Either way I would expect the pre/postmigrate 
scripts to be called.

Thanks for any help,
gary

- Original Message -

From: "Carlos Martín Sánchez" 
To: "Gary S. Cuozzo" 
Cc: users@lists.opennebula.org
Sent: Wednesday, December 12, 2012 10:07:12 AM
Subject: Re: [one-users] TM question about rescheduling a vm

Hi,


There is no 'resched' driver action, the command just marks the VM to be 
rescheduled. The scheduler then migrates or live-migrates these VMs depending 
on the LIVE_RESCHEDS attribute of sched.conf [1]


Regards


[1] http://opennebula.org/documentation:rel3.8:schg

--
Carlos Martín, MSc
Project Engineer
OpenNebula - The Open-source Solution for Data Center Virtualization
www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula



On Tue, Dec 11, 2012 at 8:32 PM, Gary S. Cuozzo < g...@isgsoftware.net > wrote:




Hello,
I have developed my own TM driver for a NFS based datastore using ZFS. It works 
well so far, except in the case of using the "onevm resched " command.

The failure happens because my TM relies on pre/postmigrate scripts to 
mount/umount ZFS datastores on the hosts as vm's move around. In the case of 
the resched command, the pre/postmigrate scripts don't get called, so the vm 
cannot start on the target host because the disk images are not available. 

In my use case, there isn't much difference between the live migration and the 
saveas/checkpoint way the resched works. Can the pre/postmigrate scripts be 
called for the resched? If not, is there some other way I can get them called 
so I can setup/teardown my nfs mounts?

Thanks for any help,
gary


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org





___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] deployment of certain vm's to different hosts

2012-12-12 Thread Gary S. Cuozzo
I could do it via a separate cluster, but it would leave my hosts very under 
utilized. I think I would also have to create and manage additional datastores 
and other resources for the new cluster. My preference would be to be able to 
configure the scheduler. 

- Original Message -

From: "Shankhadeep Shome"  
To: "Gary S. Cuozzo"  
Sent: Wednesday, December 12, 2012 8:46:55 AM 
Subject: Re: [one-users] deployment of certain vm's to different hosts 

I think what you want is the extension of the cluster mechanism that already 
exists in opennebula to be extended so that a cluster is a scheduling domain, 
not simply a set of {hosts, networks, storage} 



On Tue, Dec 11, 2012 at 4:59 PM, Gary S. Cuozzo < g...@isgsoftware.net > wrote: 




Oh nice. So will the "REQUIREMENTS" be able to be based on any attribute of the 
template? For example, could I have an attribute called "VM_GROUP" in the 
template and have a value such as "HTTP_1" in various templates. Then, each vm 
with VM_GROUP HTTP_1 would get deployed to different hosts? 

Just trying to make sure I'm interpreting the description correctly. The 
example shows things based on vm id's, but I think it would be easier to just 
be able to make any sort of generic group(s) to base things on. 
gary 



From: "Jaime Melis" < jme...@opennebula.org > 
To: "Gary S. Cuozzo" < g...@isgsoftware.net > 
Cc: "Users OpenNebula" < users@lists.opennebula.org > 
Sent: Tuesday, December 11, 2012 4:41:50 PM 


Subject: Re: [one-users] deployment of certain vm's to different hosts 

Hi, 


Yes, you guys really hit the spot! take a look at this: 
http://dev.opennebula.org/issues/1675 



Basically, for the next OpenNebula release there will be a default probe with 
all the running vms, so you can create a REQUIREMENTS directive just like Mark 
described. 


cheers, 
Jaime 



On Tue, Dec 11, 2012 at 10:36 PM, Gary S. Cuozzo < g...@isgsoftware.net > 
wrote: 




Interesting. You can get a list of guests on a host using "virsh list", which 
would simplify your idea. Hopefully I'll have some time to mess around with 
ideas this week. I just wanted to make sure there was no feature already in ONE 
that I didn't know about. 

Let me know if you go further with your ideas and I'll do the same. 

Cheers, 
gary 



From: "Mark Wagner" < mwag...@intelius.com > 
To: users@lists.opennebula.org 
Sent: Tuesday, December 11, 2012 4:18:29 PM 
Subject: Re: [one-users] deployment of certain vm's to different hosts 



I'm interested in this idea as well. Without modifying the ONE code here is how 
it could be accomplished by extending the IM MAD and using the "REQUIREMENTS" 
section of a VM template to exclude a host that is already running a guest of 
the same type. 

How to go about it is the somewhat kludgey part. The IM MAD scripts are run on 
the host; as far as I can tell from the host it is impossible to easily see the 
names of the guests running on it. The best idea I could think of would be to 
ssh back to the FE and run "onevm list | grep $HOST" and process that. 

Next, how do you group the guests? Assuming the guests have the naming scheme 
of NNN you could group them by removing the NNN. Another way to do it 
would be to do a onevm show on each guest but that may be too resource 
intensive. 

Putting this all together, here is the IM MAD script: 

#!/bin/sh 

VM_HOST=`hostname` 
FE=onefe 

GUEST_TYPES="`ssh $FE onevm list --list NAME,HOST | grep $VM_HOST | awk '{ 
print $1 }' | sed 's/[0-9]*$//g' | sort -u`" 
GUEST_TYPES=":`echo $GUEST_TYPES | sed 's/ /:/g'`:" 

echo "GUEST_TYPES=\"$GUEST_TYPES\"" 

For example type "ntp" you'd put in the template 

REQUIREMENTS = "GUEST_TYPES != \"*:ntp:*\"" 

(I haven't tested this at all). 

On 12/11/2012 10:47 AM, Gary S. Cuozzo wrote: 



Hello, 
We have certain vm's which are related and provide redundancy and/or failover 
to each other. Ideally, we'd like to avoid the situation where these vm's run 
on the same host machine. Is there a way to configure the scheduler so it would 
know to rule out a host if another related vm was already running there? 

If there is no feature in ONE already, what I'm thinking is having an attribute 
that can be set in the vm template which could be used to group vm's together. 
Then, the scheduler can just check to see if a vm that is part of the same 
group(s) is already running and rule that host out if there is. It could also 
optionally override the attribute if it could not find a 'unique' host to 
deploy on (for example, multiple hosts are unavailable and there are not enough 
resources to deploy to uniqu

Re: [one-users] deployment of certain vm's to different hosts

2012-12-11 Thread Gary S. Cuozzo
Oh nice. So will the "REQUIREMENTS" be able to be based on any attribute of the 
template? For example, could I have an attribute called "VM_GROUP" in the 
template and have a value such as "HTTP_1" in various templates. Then, each vm 
with VM_GROUP HTTP_1 would get deployed to different hosts? 

Just trying to make sure I'm interpreting the description correctly. The 
example shows things based on vm id's, but I think it would be easier to just 
be able to make any sort of generic group(s) to base things on. 
gary 

- Original Message -----

From: "Jaime Melis"  
To: "Gary S. Cuozzo"  
Cc: "Users OpenNebula"  
Sent: Tuesday, December 11, 2012 4:41:50 PM 
Subject: Re: [one-users] deployment of certain vm's to different hosts 

Hi, 


Yes, you guys really hit the spot! take a look at this: 
http://dev.opennebula.org/issues/1675 



Basically, for the next OpenNebula release there will be a default probe with 
all the running vms, so you can create a REQUIREMENTS directive just like Mark 
described. 


cheers, 
Jaime 



On Tue, Dec 11, 2012 at 10:36 PM, Gary S. Cuozzo < g...@isgsoftware.net > 
wrote: 




Interesting. You can get a list of guests on a host using "virsh list", which 
would simplify your idea. Hopefully I'll have some time to mess around with 
ideas this week. I just wanted to make sure there was no feature already in ONE 
that I didn't know about. 

Let me know if you go further with your ideas and I'll do the same. 

Cheers, 
gary 



From: "Mark Wagner" < mwag...@intelius.com > 
To: users@lists.opennebula.org 
Sent: Tuesday, December 11, 2012 4:18:29 PM 
Subject: Re: [one-users] deployment of certain vm's to different hosts 



I'm interested in this idea as well. Without modifying the ONE code here is how 
it could be accomplished by extending the IM MAD and using the "REQUIREMENTS" 
section of a VM template to exclude a host that is already running a guest of 
the same type. 

How to go about it is the somewhat kludgey part. The IM MAD scripts are run on 
the host; as far as I can tell from the host it is impossible to easily see the 
names of the guests running on it. The best idea I could think of would be to 
ssh back to the FE and run "onevm list | grep $HOST" and process that. 

Next, how do you group the guests? Assuming the guests have the naming scheme 
of NNN you could group them by removing the NNN. Another way to do it 
would be to do a onevm show on each guest but that may be too resource 
intensive. 

Putting this all together, here is the IM MAD script: 

#!/bin/sh 

VM_HOST=`hostname` 
FE=onefe 

GUEST_TYPES="`ssh $FE onevm list --list NAME,HOST | grep $VM_HOST | awk '{ 
print $1 }' | sed 's/[0-9]*$//g' | sort -u`" 
GUEST_TYPES=":`echo $GUEST_TYPES | sed 's/ /:/g'`:" 

echo "GUEST_TYPES=\"$GUEST_TYPES\"" 

For example type "ntp" you'd put in the template 

REQUIREMENTS = "GUEST_TYPES != \"*:ntp:*\"" 

(I haven't tested this at all). 

On 12/11/2012 10:47 AM, Gary S. Cuozzo wrote: 



Hello, 
We have certain vm's which are related and provide redundancy and/or failover 
to each other. Ideally, we'd like to avoid the situation where these vm's run 
on the same host machine. Is there a way to configure the scheduler so it would 
know to rule out a host if another related vm was already running there? 

If there is no feature in ONE already, what I'm thinking is having an attribute 
that can be set in the vm template which could be used to group vm's together. 
Then, the scheduler can just check to see if a vm that is part of the same 
group(s) is already running and rule that host out if there is. It could also 
optionally override the attribute if it could not find a 'unique' host to 
deploy on (for example, multiple hosts are unavailable and there are not enough 
resources to deploy to unique hosts). 

Please let me know if a feature already exists or if my idea is worth pursuing. 

Cheers, 
gary 



___
Users mailing list Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 


-- 
Mark Wagner | mwag...@intelius.com System Administrator | Intelius Inc. 
___ 
Users mailing list 
Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 


___ 
Users mailing list 
Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 







-- 
Jaime Melis 
Project Engineer 
OpenNebula - The Open Source Toolkit for Cloud Computing 
www.OpenNebula.org | jme...@opennebula.org 

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] deployment of certain vm's to different hosts

2012-12-11 Thread Gary S. Cuozzo
Interesting. You can get a list of guests on a host using "virsh list", which 
would simplify your idea. Hopefully I'll have some time to mess around with 
ideas this week. I just wanted to make sure there was no feature already in ONE 
that I didn't know about. 

Let me know if you go further with your ideas and I'll do the same. 

Cheers, 
gary 

- Original Message -

From: "Mark Wagner"  
To: users@lists.opennebula.org 
Sent: Tuesday, December 11, 2012 4:18:29 PM 
Subject: Re: [one-users] deployment of certain vm's to different hosts 

I'm interested in this idea as well. Without modifying the ONE code here is how 
it could be accomplished by extending the IM MAD and using the "REQUIREMENTS" 
section of a VM template to exclude a host that is already running a guest of 
the same type. 

How to go about it is the somewhat kludgey part. The IM MAD scripts are run on 
the host; as far as I can tell from the host it is impossible to easily see the 
names of the guests running on it. The best idea I could think of would be to 
ssh back to the FE and run "onevm list | grep $HOST" and process that. 

Next, how do you group the guests? Assuming the guests have the naming scheme 
of NNN you could group them by removing the NNN. Another way to do it 
would be to do a onevm show on each guest but that may be too resource 
intensive. 

Putting this all together, here is the IM MAD script: 

#!/bin/sh 

VM_HOST=`hostname` 
FE=onefe 

GUEST_TYPES="`ssh $FE onevm list --list NAME,HOST | grep $VM_HOST | awk '{ 
print $1 }' | sed 's/[0-9]*$//g' | sort -u`" 
GUEST_TYPES=":`echo $GUEST_TYPES | sed 's/ /:/g'`:" 

echo "GUEST_TYPES=\"$GUEST_TYPES\"" 

For example type "ntp" you'd put in the template 

REQUIREMENTS = "GUEST_TYPES != \"*:ntp:*\"" 

(I haven't tested this at all). 

On 12/11/2012 10:47 AM, Gary S. Cuozzo wrote: 



Hello, 
We have certain vm's which are related and provide redundancy and/or failover 
to each other. Ideally, we'd like to avoid the situation where these vm's run 
on the same host machine. Is there a way to configure the scheduler so it would 
know to rule out a host if another related vm was already running there? 

If there is no feature in ONE already, what I'm thinking is having an attribute 
that can be set in the vm template which could be used to group vm's together. 
Then, the scheduler can just check to see if a vm that is part of the same 
group(s) is already running and rule that host out if there is. It could also 
optionally override the attribute if it could not find a 'unique' host to 
deploy on (for example, multiple hosts are unavailable and there are not enough 
resources to deploy to unique hosts). 

Please let me know if a feature already exists or if my idea is worth pursuing. 

Cheers, 
gary 



___
Users mailing list Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 


-- 
Mark Wagner | mwag...@intelius.com System Administrator | Intelius Inc. 
___ 
Users mailing list 
Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
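
The name-to-type reduction in Mark's probe above can be factored into a
function and checked locally, without the ssh round-trip to the front-end. A
sketch (the guest names fed in below are hypothetical):

```shell
#!/bin/sh
# Sketch: reduce guest names to ':'-delimited unique types, as in the
# GUEST_TYPES probe quoted above: strip trailing digits, merge duplicates,
# and wrap in ':' so a REQUIREMENTS pattern like "*:ntp:*" can match it.
guest_types() {
    # stdin: one guest name per line, e.g. "ntp01"
    types=$(sed 's/[0-9]*$//' | sort -u | tr '\n' ':')
    printf ':%s\n' "$types"
}

printf 'ntp01\nntp02\nhttp1\n' | guest_types   # → :http:ntp:
```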


[one-users] TM question about rescheduling a vm

2012-12-11 Thread Gary S. Cuozzo
Hello, 
I have developed my own TM driver for a NFS based datastore using ZFS. It works 
well so far, except in the case of using the "onevm resched " command. 

The failure happens because my TM relies on pre/postmigrate scripts to 
mount/umount ZFS datastores on the hosts as vm's move around. In the case of 
the resched command, the pre/postmigrate scripts don't get called, so the vm 
cannot start on the target host because the disk images are not available. 

In my use case, there isn't much difference between the live migration and the 
saveas/checkpoint way the resched works. Can the pre/postmigrate scripts be 
called for the resched? If not, is there some other way I can get them called 
so I can setup/teardown my nfs mounts? 

Thanks for any help, 
gary 
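
For anyone hitting the same gap, the shape of such a premigrate hook is roughly
the following. A dry-run sketch: the argument order, paths, and host names are
assumptions, since the real TM hook arguments vary by ONE version.

```shell
#!/bin/sh
# Sketch of a premigrate-style hook for an NFS-backed TM driver: build the
# command that ensures the datastore export is mounted on the destination
# host before the VM is restored there. The command is printed rather than
# executed so the logic can be inspected; a real hook would run it directly.
premigrate_cmd() {
    dst="$1"          # destination hypervisor
    export_path="$2"  # e.g. nfs01:/tank/one/ds100 (hypothetical)
    mnt="$3"          # e.g. /var/lib/one/datastores/100
    printf "ssh %s \"mkdir -p %s && mountpoint -q %s || mount -t nfs %s %s\"\n" \
        "$dst" "$mnt" "$mnt" "$export_path" "$mnt"
}

premigrate_cmd host02 nfs01:/tank/one/ds100 /var/lib/one/datastores/100
```

A matching postmigrate hook would unmount the export on the source host once
the migration completes.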

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] deployment of certain vm's to different hosts

2012-12-11 Thread Gary S. Cuozzo
Hello, 
We have certain vm's which are related and provide redundancy and/or failover 
to each other. Ideally, we'd like to avoid the situation where these vm's run 
on the same host machine. Is there a way to configure the scheduler so it would 
know to rule out a host if another related vm was already running there? 

If there is no feature in ONE already, what I'm thinking is having an attribute 
that can be set in the vm template which could be used to group vm's together. 
Then, the scheduler can just check to see if a vm that is part of the same 
group(s) is already running and rule that host out if there is. It could also 
optionally override the attribute if it could not find a 'unique' host to 
deploy on (for example, multiple hosts are unavailable and there are not enough 
resource to deploy to unique hosts). 

Please let me know if a feature already exists or if my idea is worth pursuing. 

Cheers, 
gary 

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] vm instantiation failed

2012-12-07 Thread Gary S. Cuozzo
Hello,
From the log: "Host key verification failed"
Make sure you ssh as oneadmin user to the host first so that you can accept the 
key and save it on the initial attempt.
Hope that helps,
gary
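
One way to seed the keys for every hypervisor in one pass is with ssh-keyscan.
A dry-run sketch: the host names are placeholders, and the `echo` keeps it from
touching known_hosts.

```shell
#!/bin/sh
# Sketch: pre-accept host keys for the oneadmin account so scp/ssh in the TM
# scripts never stops on "Host key verification failed". Host names are
# placeholders; drop the `echo` inside the function to actually append.
keyscan_cmds() {
    for h in "$@"; do
        echo "ssh-keyscan -H $h >> ~/.ssh/known_hosts"
    done
}

# Run as oneadmin on the front-end:
keyscan_cmds xenhost1 xenhost2 opennebula-host
```

Manually ssh-ing to each host as oneadmin once, as suggested above, achieves
the same thing interactively.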

- Original Message -

From: "Marc Reilly" 
To: "Carlos Martín Sánchez" 
Cc: users@lists.opennebula.org
Sent: Friday, December 7, 2012 7:02:56 AM
Subject: Re: [one-users] vm instantiation failed



Hi
Thanks for the reply, I have changed the datastore to ssh. I have tried the 
ttyl image provided by Opennebula and a template/image of my own but still no 
joy. It is still returning the same error:

Fri Dec 7 08:32:08 2012 [DiM][I]: New VM state is ACTIVE.
Fri Dec 7 08:32:08 2012 [LCM][I]: New VM state is PROLOG.
Fri Dec 7 08:32:08 2012 [VM][I]: Virtual Machine has no context
Fri Dec 7 08:32:10 2012 [TM][I]: Command execution fail: 
/var/lib/one/remotes/tm/ssh/clone 
opennebula-host:/var/lib/one/datastores/1/deb07f5dc7b760c79f83a7e27f689331 
xenhost2:/var/lib/one/datastores/0/11/disk.0
Fri Dec 7 08:32:10 2012 [TM][I]: clone: Cloning 
opennebula-host:/var/lib/one/datastores/1/deb07f5dc7b760c79f83a7e27f689331 in 
/var/lib/one/datastores/0/11/disk.0
Fri Dec 7 08:32:10 2012 [TM][E]: clone: Command "scp -r 
opennebula-host:/var/lib/one/datastores/1/deb07f5dc7b760c79f83a7e27f689331 
xenhost2:/var/lib/one/datastores/0/11/disk.0" failed: Host key verification 
failed.
Fri Dec 7 08:32:10 2012 [TM][E]: Error copying 
opennebula-host:/var/lib/one/datastores/1/deb07f5dc7b760c79f83a7e27f689331 to 
xenhost2:/var/lib/one/datastores/0/11/disk.0
Fri Dec 7 08:32:10 2012 [TM][I]: ExitCode: 1
Fri Dec 7 08:32:10 2012 [TM][E]: Error executing image transfer script: Error 
copying 
opennebula-host:/var/lib/one/datastores/1/deb07f5dc7b760c79f83a7e27f689331 to 
xenhost2:/var/lib/one/datastores/0/11/disk.0
Fri Dec 7 08:32:10 2012 [DiM][I]: New VM state is FAILED

Could it be an error with my Xen configuration?
Thanks
Marc

From: Carlos Martín Sánchez [mailto:cmar...@opennebula.org]
Sent: 07 December 2012 10:41
To: Marc Reilly
Cc: users@lists.opennebula.org
Subject: Re: [one-users] vm instantiation failed

Hi,



Are you sure you are using tm_ssh in all your datastores? It doesn't look like it:



Command execution fail: /var/lib/one/remotes/tm/shared/



Regards
--
Carlos Martín, MSc
Project Engineer
OpenNebula - The Open-source Solution for Data Center Virtualization

www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula




On Thu, Dec 6, 2012 at 10:48 PM, Marc Reilly < marc.rei...@student.ncirl.ie > 
wrote:
Command execution fail: /var/lib/one/remotes/tm/shared/clone

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org



[one-users] sunstone gui suggestion

2012-12-06 Thread Gary S. Cuozzo
Hello, 
As I've been using the Sunstone GUI more and also dealing with more VMs, 
images, etc. in my testing, I have what I think would be a nice usability 
improvement, especially for smaller monitors like on my laptop. On resource 
screens, such as images/templates/vms, when you click an item to view the 
details you end up with a vertical scroll bar in the upper half of the screen. 
But the scroll bar scrolls the entire upper half, including the column heading 
and the action buttons. I think it would be more usable if the scrollbar only 
scrolled the table rows and not the column headings and action buttons. 

The same approach could help on the bottom half of the screen where you could 
scroll the screen for a long log file, but still have the tabs to navigate to 
other info. 

I find myself doing a lot of scrolling back & forth and I think this would 
help. What do you think? 

gary 



Re: [one-users] VM in unknown state never gets updated

2012-12-06 Thread Gary S. Cuozzo
I suppose exiting with 0 would work.  It would be nice though to be able to 
know the outcome of the post-migrate.  In my case it means that I have either 
NFS mounts or iSCSI connections to clean up manually.  Would a compromise work 
where the script can return a non-zero exit code but ONE would still view the 
migration as "successful" because the failure happened after the actual 
hypervisor migration step?
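That compromise could look roughly like this inside a postmigrate script. A sketch only: the function names and the simulated cleanup failure are illustrative, and the always-zero return is the behavior under discussion here, not what ONE 3.8 does today:

```shell
# Run the post-migrate cleanup (e.g. NFS umounts, iSCSI logout), log any
# failure so it can be fixed by hand, but still return 0 so the
# migration itself is not marked as failed.
postmigrate_cleanup() {
    # hypothetical cleanup steps would go here; simulate a failure:
    false
}

run_postmigrate() {
    if postmigrate_cleanup; then
        echo "postmigrate: cleanup ok"
    else
        echo "postmigrate: cleanup FAILED, manual intervention needed"
    fi
    return 0    # the hypervisor migration already succeeded
}

run_postmigrate
```
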

- Original Message -
From: "Ruben S. Montero" 
To: "Gary S. Cuozzo" 
Cc: users@lists.opennebula.org
Sent: Thursday, December 6, 2012 2:09:46 PM
Subject: Re: [one-users] VM in unknown state never gets updated

Hi

I think this is the expected behavior; the problem here is the drivers
"lying" to OpenNebula. This is what happens:

1.- Users issue a live migration. The live migration fails (driver exits 1)
2.- OpenNebula detects a failed migration and assumes the VM is
RUNNING in the original host
3.- OpenNebula polls the VM and cannot find it, so it moves it to the
unknown state
4.- OpenNebula keeps polling the VM and looking for it in the initial host

There is no way for OpenNebula to know that the VM has left the
host. Probably the only way is to change step 2 and assume that the live
migration succeeded. However, in a normal (failure) situation, when the
hypervisor fails the live migration, the VM keeps running in the
original host.

So what happens if post-migrate fails? I'd say that given that the VM
is running on the new host it does not make sense to roll back the
operation, and I think we should always exit 0 from the post-migrate
script...

What do you think?

Ruben


On Thu, Dec 6, 2012 at 4:19 AM, Gary S. Cuozzo  wrote:
> As a note, I manually migrated the VM back to the original host and ONE did
> pick up on it and put it back to the running state so that I can manage it
> again.  I think it would be good though if ONE could find it on any host and
> not just the host it originally ran on.
> gary
>
> 
> From: "Gary S. Cuozzo" 
> To: users@lists.opennebula.org
> Sent: Wednesday, December 5, 2012 10:04:34 PM
> Subject: [one-users] VM in unknown state never gets updated
>
>
> Hello,
> I'm testing my TM driver and using the pre/postmigrate scripts in 3.8.1.
> Due to a bug in the postmigrate script, I have a VM that successfully
> migrated from one host to the other, but since the postmigrate returned
> non-zero status, the migration was considered to be failed by ONE and the VM
> was put into the UNKNOWN state.  I figured ONE would eventually poll the
> hosts and realize that the VM was running and update the state of the VM as
> well as the host it's running on.  It never picked up on it though.
>
> It seems like ONE should be able to recover from this since it should be
> able to poll the hosts and find out that it is running on one of them.  Is
> there some way to tell ONE to update the state?  Or tell it to scan for it?
>
> Since it's just a test VM I can just delete & recreate it, but I think this
> is a use case that ONE should be able to handle.
>
> Any thoughts?
>
> Cheers,
> gary
>
>
>
>
>



-- 
Ruben S. Montero, PhD
Project co-Lead and Chief Architect
OpenNebula - The Open Source Solution for Data Center Virtualization
www.OpenNebula.org | rsmont...@opennebula.org | @OpenNebula


Re: [one-users] VM in unknown state never gets updated

2012-12-05 Thread Gary S. Cuozzo
As a note, I manually migrated the VM back to the original host and ONE did 
pick up on it and put it back to the running state so that I can manage it 
again. I think it would be good though if ONE could find it on any host and not 
just the host it originally ran on. 
gary 

- Original Message -

From: "Gary S. Cuozzo"  
To: users@lists.opennebula.org 
Sent: Wednesday, December 5, 2012 10:04:34 PM 
Subject: [one-users] VM in unknown state never gets updated 


Hello, 
I'm testing my TM driver and using the pre/postmigrate scripts in 3.8.1. Due to 
a bug in the postmigrate script, I have a VM that successfully migrated from 
one host to the other, but since the postmigrate returned non-zero status, the 
migration was considered to be failed by ONE and the VM was put into the UNKNOWN 
state. I figured ONE would eventually poll the hosts and realize that the VM 
was running and update the state of the VM as well as the host it's running on. 
It never picked up on it though. 

It seems like ONE should be able to recover from this since it should be able 
to poll the hosts and find out that it is running on one of them. Is there some 
way to tell ONE to update the state? Or tell it to scan for it? 

Since it's just a test VM I can just delete & recreate it, but I think this is 
a use case that ONE should be able to handle. 

Any thoughts? 

Cheers, 
gary 





[one-users] VM in unknown state never gets updated

2012-12-05 Thread Gary S. Cuozzo
Hello, 
I'm testing my TM driver and using the pre/postmigrate scripts in 3.8.1. Due to 
a bug in the postmigrate script, I have a VM that successfully migrated from 
one host to the other, but since the postmigrate returned non-zero status, the 
migration was considered to be failed by ONE and the VM was put into the UNKNOWN 
state. I figured ONE would eventually poll the hosts and realize that the VM 
was running and update the state of the VM as well as the host it's running on. 
It never picked up on it though. 

It seems like ONE should be able to recover from this since it should be able 
to poll the hosts and find out that it is running on one of them. Is there some 
way to tell ONE to update the state? Or tell it to scan for it? 

Since it's just a test VM I can just delete & recreate it, but I think this is 
a use case that ONE should be able to handle. 

Any thoughts? 

Cheers, 
gary 



Re: [one-users] iSCSI recipe

2012-12-04 Thread Gary S. Cuozzo
It does appear that the SRC & DEST arguments are reversed.

- Original Message -
From: "Ruben S. Montero" 
To: "Gary S. Cuozzo" 
Cc: users@lists.opennebula.org
Sent: Monday, December 3, 2012 5:35:47 AM
Subject: Re: [one-users] iSCSI recipe

Hi

Yes the pre-post scripts are those of the system datastore. In fact,
you are dealing with disk images inside the system datastore. I agree
this can be misleading, but consider that in general you may have images
from different datastores that may need different things.

So the pre/post scripts are those of the system datastore, but you are
not tied to the system datastore type (i.e., iSCSI logic can fit in the
shared pre/post scripts). Also note that you should use the VM
template to iterate through the disks and perform the pre/post
operations on them.

About the SRC/DST arguments, you mean the src and dst hosts, right?
Is that a bug that we need to address?

Cheers and thank you for your great feedback!

Ruben



On Sun, Dec 2, 2012 at 3:55 PM, Gary S. Cuozzo  wrote:
> Thanks Laurent, I'll take the same route.  Also thanks for the SRC & DST tip.
>
> - Original Message -----
> From: "Laurent Grawet" 
> To: "Gary S. Cuozzo" 
> Cc: "Ruben S. Montero" , "christopher barry" 
> , users@lists.opennebula.org
> Sent: Sunday, December 2, 2012 5:53:01 AM
> Subject: Re: [one-users] iSCSI recipe
>
> Hi Gary,
>
> On 02/12/12 07:44, Gary S. Cuozzo wrote:
>> Hi Ruben,
>> I'm finally back to work on my opennebula project.  I just upgraded to 3.8.1 
>> and starting to modify my drivers to use the new pre/postmigrate scripts.  I 
>> see they only get called for the TM of the system datastore ("shared" in my 
>> case).  Is there a reason that the pre/post scripts can't be called on the 
>> TM for the driver of the particular disk that is being migrated?  I have 2 
>> different drivers, a ZFS-iSCSI and a ZFS-NFS driver that both need to 
>> perform tasks before & after migration.  If one called the pre/post script 
>> for the specific TM, I could put all the logic together for each driver.
>>
>> I guess what I could do for now is modify the shared driver to check the TM 
>> of the disk and then call the appropriate driver script from there.
>>
>> What do you think?
>>
>> Thanks for adding the feature.
>> gary
>
> There is a workaround for this. I have put the pre/post scripts inside
> my driver directory. I call them from the pre/post scripts of the system
> datastore. Then I check TM_MAD inside my scripts using DS_ID.
> One thing I have noticed, SRC and DST args are swapped.
>
> Cheers,
>
> Laurent
>
>>
>>
>> - Original Message -
>> From: "Ruben S. Montero" 
>> To: "Laurent Grawet" 
>> Cc: "Gary S. Cuozzo" , "christopher barry" 
>> , users@lists.opennebula.org
>> Sent: Wednesday, October 31, 2012 6:41:44 PM
>> Subject: Re: [one-users] iSCSI recipe
>>
>> Hi
>>
>> Please note that the post/pre-migrate scripts are those from the
>> system datastore. Probably, they should be put in shared or ssh dir
>> and not in iscsi.
>>
>>
>> The description of the arguments are in here:
>> http://opennebula.org/documentation:rel3.8:sd
>>
>> If you need to iterate over the disks in the template, you may need
>> to use Ruby or Python for this script. Or, if you assume that there
>> are no more than, say, 5 disks, you can get the first 5 disks using
>> VM/TEMPLATE/DISK[DISK_ID=0] and so on and work on them if they are
>> defined
>>
>>
>> Cheers
>>
>>
>> Ruben
>>
>>
>>> On Fri, Aug 17, 2012 at 9:43 AM, Gary S. Cuozzo 
>>> wrote:
>>>>> Excellent, thanks.
>>>>>
>>>>> 
>>>>> From: "Ruben S. Montero" 
>>>>> To: "Gary S. Cuozzo" 
>>>>> Cc: users@lists.opennebula.org
>>>>> Sent: Thursday, August 16, 2012 7:19:23 PM
>>>>>
>>>>> Subject: Re: [one-users] iSCSI recipe
>>>>>
>>>>> Ok
>>>>>
>>>>> Now I see the point. We are doing something similar for
>>>>> attaching/detaching:
>>>>> a specific script is executed before and after the hypervisor. So I guess
>>>>> this could be added. I've filed an issue for this and plan it for future
>>>>> versions
>>>>>
>>>>> http://dev.opennebula.org/issues/1419
>>>>>

Re: [one-users] iSCSI recipe

2012-12-02 Thread Gary S. Cuozzo
Thanks Laurent, I'll take the same route.  Also thanks for the SRC & DST tip.

- Original Message -
From: "Laurent Grawet" 
To: "Gary S. Cuozzo" 
Cc: "Ruben S. Montero" , "christopher barry" 
, users@lists.opennebula.org
Sent: Sunday, December 2, 2012 5:53:01 AM
Subject: Re: [one-users] iSCSI recipe

Hi Gary,

On 02/12/12 07:44, Gary S. Cuozzo wrote:
> Hi Ruben,
> I'm finally back to work on my opennebula project.  I just upgraded to 3.8.1 
> and starting to modify my drivers to use the new pre/postmigrate scripts.  I 
> see they only get called for the TM of the system datastore ("shared" in my 
> case).  Is there a reason that the pre/post scripts can't be called on the TM 
> for the driver of the particular disk that is being migrated?  I have 2 
> different drivers, a ZFS-iSCSI and a ZFS-NFS driver that both need to perform 
> tasks before & after migration.  If one called the pre/post script for the 
> specific TM, I could put all the logic together for each driver.
>
> I guess what I could do for now is modify the shared driver to check the TM 
> of the disk and then call the appropriate driver script from there.
>
> What do you think?
>
> Thanks for adding the feature.
> gary

There is a workaround for this. I have put the pre/post scripts inside
my driver directory. I call them from the pre/post scripts of the system
datastore. Then I check TM_MAD inside my scripts using DS_ID.
One thing I have noticed, SRC and DST args are swapped.
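The dispatch Laurent describes might be sketched like this. The directory layout, script names, and `TM_BASE` default are assumptions for illustration; a real hook would forward the original SRC/DST/VM_ID/DS_ID arguments unchanged:

```shell
# Called from the system datastore's pre/postmigrate: look up the hook
# in the per-TM driver directory and run it if present.
TM_BASE=${TM_BASE:-/var/lib/one/remotes/tm}

dispatch_hook() {
    hook_name=$1 tm_mad=$2; shift 2
    hook="$TM_BASE/$tm_mad/$hook_name"
    if [ -x "$hook" ]; then
        "$hook" "$@"                 # forward the original arguments
    else
        echo "no $hook_name for TM '$tm_mad', skipping"
    fi
}

# e.g. dispatch_hook premigrate zfs_iscsi "$SRC" "$DST" "$VM_ID" "$DS_ID"
```
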

Cheers,

Laurent

>
>
> - Original Message -
> From: "Ruben S. Montero" 
> To: "Laurent Grawet" 
> Cc: "Gary S. Cuozzo" , "christopher barry" 
> , users@lists.opennebula.org
> Sent: Wednesday, October 31, 2012 6:41:44 PM
> Subject: Re: [one-users] iSCSI recipe
>
> Hi
>
> Please note that the post/pre-migrate scripts are those from the
> system datastore. Probably, they should be put in shared or ssh dir
> and not in iscsi.
>
>
> The description of the arguments are in here:
> http://opennebula.org/documentation:rel3.8:sd
>
> If you need to iterate over the disks in the template, you may need
> to use Ruby or Python for this script. Or, if you assume that there
> are no more than, say, 5 disks, you can get the first 5 disks using
> VM/TEMPLATE/DISK[DISK_ID=0] and so on and work on them if they are
> defined
>
>
> Cheers
>
>
> Ruben
>
>
>> On Fri, Aug 17, 2012 at 9:43 AM, Gary S. Cuozzo 
>> wrote:
>>>> Excellent, thanks.
>>>>
>>>> 
>>>> From: "Ruben S. Montero" 
>>>> To: "Gary S. Cuozzo" 
>>>> Cc: users@lists.opennebula.org
>>>> Sent: Thursday, August 16, 2012 7:19:23 PM
>>>>
>>>> Subject: Re: [one-users] iSCSI recipe
>>>>
>>>> Ok
>>>>
>>>> Now I see the point. We are doing something similar for
>>>> attaching/detaching:
>>>> a specific script is executed before and after the hypervisor. So I guess
>>>> this could be added. I've filed an issue for this and plan it for future
>>>> versions
>>>>
>>>> http://dev.opennebula.org/issues/1419
>>>>
>>>> Meanwhile, as you probably had already found out, you can modify the 
>>>> migrate
>>>> script to embed those "TM scripts".
>>>>
>>>> Cheers
>>>>
>>>> Ruben
>>>>
>>>> On Thu, Aug 16, 2012 at 5:40 PM, Gary S. Cuozzo 
>>>> wrote:
>>>>> Is there a reason that ONE can't call a TM script with the source host,
>>>>> dest host, vm template, etc. when the migrate command is executed?
>>>>>
>>>>> While ONE can't orchestrate the migration, it can at least notify the TM
>>>>> of the migration in advance.  That would allow me to grab the info I need
>>>>> from the template and fire off the login or rescan on the dest host.  ONE
>>>>> has got to know this info in order to even notify libvirt/KVM to do the
>>>>> migration.  So just call a TM script before issuing the libvirt command 
>>>>> and
>>>>> abort if the script returns error.
>>>>>
>>>>> Logout is not as critical to me.  Worst case I have some hosts with iSCSI
>>>>> sessions which are not in use.  The ideal scenario would be to be able to
>>>>> have some sort of TM script that gets called after ONE detects successful
>>>>> migration, b

Re: [one-users] ZFS datastore driver

2012-12-01 Thread Gary S. Cuozzo
Very nice. I do something similar in my ZFS drivers. I have an optional "CLONE" 
attribute in the image definition that I can set to the dataset snapshot that I 
want to use as the source for my image. I have my drivers setup as shared, but 
the idea is the same. 

My driver is setup to organize the images by customer & server in a hierarchy 
of ZFS datasets. This way, I can snapshot a particular server's images (volumes 
and/or files) and do rollbacks without affecting other images. 
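The snapshot/clone layout described above might translate to commands like these. Dataset names are hypothetical, and the function only assembles the zfs command lines so the workflow is visible without a real ZFS pool:

```shell
# Build the zfs commands for the golden-image clone workflow: one
# snapshot of the golden dataset, then a cheap clone per VM disk.
zfs_clone_cmds() {
    golden=$1 target=$2
    echo "zfs snapshot ${golden}@base"
    echo "zfs clone ${golden}@base ${target}"
}

zfs_clone_cmds tank/images/golden tank/customers/acme/web01-disk0
```

A per-server rollback would then be a plain `zfs rollback` on that server's dataset, leaving other customers' images untouched.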

gary 

- Original Message -

From: "Rogier Mars"  
To: users@lists.opennebula.org 
Sent: Wednesday, November 7, 2012 9:08:51 PM 
Subject: [one-users] ZFS datastore driver 

Hi, 


I've created a first version of a datastore driver that supports ZFS snapshots 
to enable instant cloning of a "golden" image. I've used the ideas from this 
blog: 
http://mperedim.wordpress.com/2010/09/26/opennebula-zfs-and-xen-%E2%80%93-part-2-instant-cloning/
 


The driver that is discussed in this article does not work with ONE 3.6 so I 
decided to update and rewrite it. I use this driver to allow instant cloning of 
large persistent images. I create a golden image, upload it to the datastore 
and create a "golden" snapshot. This golden snapshot is the basis for the 
clones that are created for this image. In my setup I use the ssh transfer 
driver to copy the images to the host after they are cloned. This also takes 
some time, but for performance reasons I prefer this above the shared driver. 
When the VM is stopped the image is transferred back to the ZFS server and 
stored in its own location. 


I think this driver will also function with the shared transfer driver, 
enabling almost instant startup of VMs, even with large disks. 


A few disclaimers: 
- I don't use this driver in production yet, and I would not advice anyone to 
do this :) 
- I'm not a developer, so please feel free to comment on my coding skills :) 


More info and the tar with the zfs-datastore driver are available below: 
http://rogierm.redbee.nl/blog/2012/11/08/opennebula-with-zfs-datastore/ 


Cheers, 


Rogier 







Re: [one-users] iSCSI recipe

2012-12-01 Thread Gary S. Cuozzo
Hi Ruben,
I'm finally back to work on my OpenNebula project.  I just upgraded to 3.8.1 
and am starting to modify my drivers to use the new pre/postmigrate scripts.  I 
see they only get called for the TM of the system datastore ("shared" in my 
case).  Is there a reason that the pre/post scripts can't be called on the TM 
for the driver of the particular disk that is being migrated?  I have 2 
different drivers, a ZFS-iSCSI and a ZFS-NFS driver that both need to perform 
tasks before & after migration.  If one called the pre/post script for the 
specific TM, I could put all the logic together for each driver.

I guess what I could do for now is modify the shared driver to check the TM of 
the disk and then call the appropriate driver script from there.

What do you think?

Thanks for adding the feature.
gary


- Original Message -
From: "Ruben S. Montero" 
To: "Laurent Grawet" 
Cc: "Gary S. Cuozzo" , "christopher barry" 
, users@lists.opennebula.org
Sent: Wednesday, October 31, 2012 6:41:44 PM
Subject: Re: [one-users] iSCSI recipe

Hi

Please note that the post/pre-migrate scripts are those from the
system datastore. Probably, they should be put in shared or ssh dir
and not in iscsi.


The description of the arguments are in here:
http://opennebula.org/documentation:rel3.8:sd

If you need to iterate over the disks in the template, you may need
to use Ruby or Python for this script. Or, if you assume that there
are no more than, say, 5 disks, you can get the first 5 disks using
VM/TEMPLATE/DISK[DISK_ID=0] and so on and work on them if they are
defined....


Cheers


Ruben


>
> On Fri, Aug 17, 2012 at 9:43 AM, Gary S. Cuozzo 
> wrote:
>>> Excellent, thanks.
>>>
>>> 
>>> From: "Ruben S. Montero" 
>>> To: "Gary S. Cuozzo" 
>>> Cc: users@lists.opennebula.org
>>> Sent: Thursday, August 16, 2012 7:19:23 PM
>>>
>>> Subject: Re: [one-users] iSCSI recipe
>>>
>>> Ok
>>>
>>> Now I see the point. We are doing something similar for attaching/detaching:
>>> a specific script is executed before and after the hypervisor. So I guess
>>> this could be added. I've filed an issue for this and plan it for future
>>> versions
>>>
>>> http://dev.opennebula.org/issues/1419
>>>
>>> Meanwhile, as you probably had already found out, you can modify the migrate
>>> script to embed those "TM scripts".
>>>
>>> Cheers
>>>
>>> Ruben
>>>
>>> On Thu, Aug 16, 2012 at 5:40 PM, Gary S. Cuozzo 
>>> wrote:
>>>> Is there a reason that ONE can't call a TM script with the source host,
>>>> dest host, vm template, etc. when the migrate command is executed?
>>>>
>>>> While ONE can't orchestrate the migration, it can at least notify the TM
>>>> of the migration in advance.  That would allow me to grab the info I need
>>>> from the template and fire off the login or rescan on the dest host.  ONE
>>>> has got to know this info in order to even notify libvirt/KVM to do the
>>>> migration.  So just call a TM script before issuing the libvirt command and
>>>> abort if the script returns error.
>>>>
>>>> Logout is not as critical to me.  Worst case I have some hosts with iSCSI
>>>> sessions which are not in use.  The ideal scenario would be to be able to
>>>> have some sort of TM script that gets called after ONE detects successful
>>>> migration, but I could work around that on my own if it were not the case.
>>>>
>>>> 
>>>> From: "Ruben S. Montero" 
>>>> To: "Gary S. Cuozzo" 
>>>> Cc: users@lists.opennebula.org
>>>> Sent: Thursday, August 16, 2012 11:28:27 AM
>>>>
>>>> Subject: Re: [one-users] iSCSI recipe
>>>>
>>>> Note that OpenNebula cannot orchestrate the two libvirt/KVM hypervisors
>>>> doing the migration. When the first host is ready to stop running the VM
>>>> (memory has been sent to the second hypervisor, etc..) it executes the hook
>>>> (logout from the iSCSI server) and just right before starting the VM in the
>>>> second host the hook (login in the iSCSI server) is run. There is nothing 
>>>> we
>>>> can do to hook in that process.
>>>>
>>>> We could however, include such scripts as part of the TM, distribute them
>>>> in the hosts, and put them in the right places...
>>>>

Re: [one-users] sunstone vnc issue after upgrade to 3.8.1

2012-12-01 Thread Gary S. Cuozzo
Ugh, had to clear the browser cache and shift-reload sunstone even though I had 
restarted the browser a few times. 

Hope this saves somebody some time. 

Cheers, 
gary 

- Original Message -

From: "Gary S. Cuozzo"  
To: users@lists.opennebula.org 
Sent: Sunday, December 2, 2012 1:06:37 AM 
Subject: Re: [one-users] sunstone vnc issue after upgrade to 3.8.1 


Ok, seems like some sort of firefox issue as I tried chrome and it's working. 

- Original Message -----

From: "Gary S. Cuozzo"  
To: users@lists.opennebula.org 
Sent: Sunday, December 2, 2012 12:13:15 AM 
Subject: [one-users] sunstone vnc issue after upgrade to 3.8.1 


Hello all, 
I just upgraded from 3.6.0 --> 3.8.1. All looks fine except for VNC via 
sunstone. When I click to initiate a VNC session, the window just says "Must 
set host and port". There are no errors in any of the log files. I've gone 
through the VNC checklist for troubleshooting. I ran the install script several 
times. 

Anybody seen this before? 

Thanks in advance for any ideas, 
gary 







Re: [one-users] sunstone vnc issue after upgrade to 3.8.1

2012-12-01 Thread Gary S. Cuozzo
Ok, seems like some sort of firefox issue as I tried chrome and it's working. 

- Original Message -

From: "Gary S. Cuozzo"  
To: users@lists.opennebula.org 
Sent: Sunday, December 2, 2012 12:13:15 AM 
Subject: [one-users] sunstone vnc issue after upgrade to 3.8.1 


Hello all, 
I just upgraded from 3.6.0 --> 3.8.1. All looks fine except for VNC via 
sunstone. When I click to initiate a VNC session, the window just says "Must 
set host and port". There are no errors in any of the log files. I've gone 
through the VNC checklist for troubleshooting. I ran the install script several 
times. 

Anybody seen this before? 

Thanks in advance for any ideas, 
gary 





[one-users] sunstone vnc issue after upgrade to 3.8.1

2012-12-01 Thread Gary S. Cuozzo
Hello all, 
I just upgraded from 3.6.0 --> 3.8.1. All looks fine except for VNC via 
sunstone. When I click to initiate a VNC session, the window just says "Must 
set host and port". There are no errors in any of the log files. I've gone 
through the VNC checklist for troubleshooting. I ran the install script several 
times. 

Anybody seen this before? 

Thanks in advance for any ideas, 
gary 



Re: [one-users] disk access via virt-manager

2012-08-30 Thread Gary S. Cuozzo
Hello,
I believe I had this same issue in the past and it was related to qemu.conf 
needing the following settings:
user="oneadmin"
group="oneadmin"
dynamic_ownership=0

Hope that helps,
gary

- Original Message -
From: "christopher barry" 
To: users@lists.opennebula.org
Sent: Tuesday, August 28, 2012 5:29:19 PM
Subject: [one-users] disk access via virt-manager

trying to run virt-manager on the FE, I get the following:

Error starting domain: internal error process exited while connecting to
monitor: qemu: could not open disk image /dev/onevg/bubble_data_001:
Permission denied

if I:
# chown oneadmin:oneadmin /dev/onevg/bubble_data_001
it works, but reverts immediately after shutdown.

I have oneadmin in the disk group

# egrep 'oneadmin|libvirt|kvm|qemu' /etc/{passwd,group}
/etc/passwd:libvirt-qemu:x:114:123:Libvirt
Qemu,,,:/var/lib/libvirt:/bin/false
/etc/passwd:oneadmin:x:1000:1000::/var/lib/one:/bin/bash
/etc/group:adm:x:4:oneadmin
/etc/group:disk:x:6:oneadmin,libvirt-qemu
/etc/group:sudo:x:27:oneadmin
/etc/group:kvm:x:123:oneadmin
/etc/group:libvirt:x:124:oneadmin,libvirt-qemu
/etc/group:oneadmin:x:1000:


perms on my disks are:
# ls -la /dev/dm-*
brw-rw 1 root disk 254, 0 Aug 28 15:54 /dev/dm-0
brw-rw 1 root disk 254, 1 Aug 28 15:54 /dev/dm-1
brw-rw 1 root disk 254, 2 Aug 28 15:54 /dev/dm-2
brw-rw 1 root disk 254, 3 Aug 28 15:54 /dev/dm-3
brw-rw 1 root disk 254, 4 Aug 28 15:55 /dev/dm-4
brw-rw 1 root disk 254, 5 Aug 28 15:54 /dev/dm-5


Any thoughts on this problem?

Thanks,
-C



Re: [one-users] Error "Stale NFS file handle"

2012-08-30 Thread Gary S. Cuozzo
Hello,
You can try the command 'exportfs -f' on the server to flush things. You may 
also want to try remounting the share on the clients. You may be able to do 
'mount -o remount '. Or you can reboot the client if it's reporting that 
the share is in use.

Hope that helps,
gary

- Original Message -

From: "Carlos Jiménez" 
To: users@lists.opennebula.org
Sent: Thursday, August 30, 2012 4:46:35 AM
Subject: Re: [one-users] Error "Stale NFS file handle"

Hello,

I've configured NFS exports with "fsid=20". So, export options are now:
rw,no_root_squash,fsid=20,sync
After that change I've executed (server side) "exportfs -rav".

The clients were already using NFS v3, so I suppose it is not necessary to 
force it, is it?
Kernel version: 2.4.27-3-386 (server side)
I've shared/exported to the full network 192.168.180.0/24.
The exported directory is /opt/virtual/one and in /opt/virtual it is mounted 
/dev/cciss/c0d0p4 (with default options), which is a 256GB (8GB used) partition 
over a 3 disks RAID5.

We are using an old OS due to the SCSI drivers, not supported on new kernel 
version.


Once added "fsid=20" option, I've tried to create a VM and again the same error.

Thu Aug 30 10:03:58 2012 [DiM][I]: New VM state is ACTIVE.
Thu Aug 30 10:03:59 2012 [LCM][I]: New VM state is PROLOG.
Thu Aug 30 10:03:59 2012 [VM][I]: Virtual Machine has no context
Thu Aug 30 10:25:33 2012 [TM][I]: Command execution fail: 
/var/lib/one/remotes/tm/shared/clone 
frontend:/var/lib/one/datastores/1/5c0455eb494fd43b5c7c576e0c642fbe 
Host2:/var/lib/one//datastores/0/32/disk.0 32 1
Thu Aug 30 10:25:33 2012 [TM][I]: clone: Cloning 
../../1/5c0455eb494fd43b5c7c576e0c642fbe in 
Host2:/var/lib/one//datastores/0/32/disk.0
Thu Aug 30 10:25:33 2012 [TM][E]: clone: Command "cd 
/var/lib/one/datastores/0/32; cp -r ../../1/5c0455eb494fd43b5c7c576e0c642fbe 
/var/lib/one/datastores/0/32/disk.0" failed: cp: reading 
`../../1/5c0455eb494fd43b5c7c576e0c642fbe': Stale NFS file handle
Thu Aug 30 10:25:33 2012 [TM][E]: Error copying 
frontend:/var/lib/one/datastores/1/5c0455eb494fd43b5c7c576e0c642fbe to 
Host2:/var/lib/one//datastores/0/32/disk.0
Thu Aug 30 10:25:33 2012 [TM][I]: ExitCode: 1
Thu Aug 30 10:25:33 2012 [TM][E]: Error executing image transfer script: Error 
copying frontend:/var/lib/one/datastores/1/5c0455eb494fd43b5c7c576e0c642fbe to 
Host2:/var/lib/one//datastores/0/32/disk.0
Thu Aug 30 10:25:34 2012 [DiM][I]: New VM state is FAILED

Also, I've tried copying a file at a client machine from the NFS Server (to 
/dev/null) and it outputs the same stale error:
[root@host1 1]# cp -v 5c0455eb494fd43b5c7c576e0c642fbe /dev/null
cp: overwrite `/dev/null'? y
`5c0455eb494fd43b5c7c576e0c642fbe' -> `/dev/null'
cp: reading `5c0455eb494fd43b5c7c576e0c642fbe': Stale NFS file handle


Any idea?
Thank you for your help.


Carlos.



On 08/29/2012 08:33 PM, Matthew Patton wrote:


All of your NFS exports should have 'fsid=' in your parameters.
I would surmise the NFS client is trying to use NFS v4 and the server doesn't 
understand it or doesn't handle it gracefully.
Force the server to only export as NFS v3 or v2 and likewise force your clients 
to only use v2 or v3.

Doing a loopback copy is pointless. Manually do a copy at a client machine from 
the NFS server and see how long it takes. (you can copy it to /dev/null if you 
like).

Once that is confirmed working, then you can see what's going on with NFS v4.








Re: [one-users] Doubt about user oneadmin

2012-08-24 Thread Gary S. Cuozzo
Yes, I believe it will matter.

On my systems, I modified the Ubuntu packages prior to installing them and I 
specified the UID/GID for the oneadmin user to be 1/1 so they would be 
consistent on all machines.

You can manually edit the uid/gid on the host machines to match, then just run 
a 'find' command to change the ownership of files to match.
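A sketch of that cleanup, demonstrated on a scratch directory so it is safe to run as written (the real target would be /var/lib/one, and the usermod/groupmod steps need root):

```shell
# Illustrative only: UID values are assumptions. On a real host, first
# align the ids as root, e.g.:
#   usermod -u 1001 oneadmin; groupmod -g 1001 oneadmin
# then re-own files still carrying the old numeric uid. Here we use a
# scratch dir and our own uid so the find/chown pair actually runs.
SCRATCH=$(mktemp -d)
touch "$SCRATCH/deployment.0"
OLD_UID=$(id -u)   # stands in for the old oneadmin uid
find "$SCRATCH" -xdev -uid "$OLD_UID" -exec chown -h "$(id -un):$(id -gn)" {} +
rm -rf "$SCRATCH"
```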

gary


- Original Message -
From: "任英杰" 
To: users@lists.opennebula.org
Sent: Thursday, August 23, 2012 1:52:22 AM
Subject: [one-users] Doubt about user oneadmin

hi,
I'm wondering about the configuration of user oneadmin.
According to  the official guide, oneadmin on hosts should have the same 
uid and gid.
I installed the OpenNebula package from Ubuntu's sources; the uid/gid are 
different on the front-end and hosts.
Does it matter?
Thanks a lot.
Jeff
---
Confidentiality Notice: The information contained in this e-mail and any 
accompanying attachment(s) 
is intended only for the use of the intended recipient and may be confidential 
and/or privileged of 
Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader of 
this communication is 
not the intended recipient, unauthorized use, forwarding, printing,  storing, 
disclosure or copying 
is strictly prohibited, and may be unlawful.If you have received this 
communication in error,please 
immediately notify the sender by return e-mail, and delete the original message 
and all copies from 
your system. Thank you. 
---
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] hypervisor:/var/lib/one

2012-08-17 Thread Gary S. Cuozzo
Hi,
On my FE, I only share out (via NFS) the DS's which are shared.  So, I create 
the DS as shared, get the ID, then NFS export /var/lib/one/datastores/.

I think if all your DS's were going to be shared, then you can do the whole 
datastores directory and it would not matter.  I was doing them on a more 
granular level because I have multiple clusters and some have shared and some 
don't.
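As a concrete sketch of the per-datastore export (datastore ID, subnet, and options are assumptions; a scratch copy of /etc/exports is used here so the snippet is safe to run as-is):

```shell
# Export a single shared datastore (ID 100 here) from the front-end.
EXPORTS=$(mktemp)   # on a real front-end this would be /etc/exports
cat >> "$EXPORTS" <<'EOF'
/var/lib/one/datastores/100 192.168.0.0/24(rw,sync,no_subtree_check,fsid=100)
EOF
grep fsid "$EXPORTS"
# then, as root on the real front-end: exportfs -ra
rm -f "$EXPORTS"
```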

Hope that helps,
gary


- Original Message -
From: "christopher barry" 
To: Users@lists.opennebula.org
Sent: Friday, August 17, 2012 3:52:02 PM
Subject: [one-users] hypervisor:/var/lib/one

What directories from fe:/var/lib/one need to be exposed in
hypervisor:/var/lib/one for shared DS?

I thought just ./datastores, but not sure.


Regards,
-C

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] iSCSI recipe

2012-08-17 Thread Gary S. Cuozzo
Excellent, thanks. 

- Original Message -

From: "Ruben S. Montero"  
To: "Gary S. Cuozzo"  
Cc: users@lists.opennebula.org 
Sent: Thursday, August 16, 2012 7:19:23 PM 
Subject: Re: [one-users] iSCSI recipe 

Ok 


Now I see the point. We are doing something similar for attaching/detaching: a 
specific script is executed before and after the hypervisor operation. So I guess 
this could be added. I've filed an issue for this and plan it for future versions 


http://dev.opennebula.org/issues/1419 


Meanwhile, as you have probably already found out, you can modify the migrate 
script to embed those "TM scripts". 


Cheers 


Ruben 


On Thu, Aug 16, 2012 at 5:40 PM, Gary S. Cuozzo < g...@isgsoftware.net > wrote: 




Is there a reason that ONE can't call a TM script with the source host, dest 
host, vm template, etc. when the migrate command is executed? 

While ONE can't orchestrate the migration, it can at least notify the TM of the 
migration in advance. That would allow me to grab the info I need from the 
template and fire off the login or rescan on the dest host. ONE has got to know 
this info in order to even notify libvirt/KVM to do the migration. So just call 
a TM script before issuing the libvirt command and abort if the script returns 
error. 

Logout is not as critical to me. Worst case, I have some hosts with iSCSI 
sessions which are not in use. The ideal scenario would be to be able to have 
some sort of TM script that gets called after ONE detects successful migration, 
but I could work around that on my own if it were not the case. 




From: "Ruben S. Montero" < rsmont...@opennebula.org > 
To: "Gary S. Cuozzo" < g...@isgsoftware.net > 
Cc: users@lists.opennebula.org 
Sent: Thursday, August 16, 2012 11:28:27 AM 


Subject: Re: [one-users] iSCSI recipe 

Note that OpenNebula cannot orchestrate the two libvirt/KVM hypervisors doing 
the migration. When the first host is ready to stop running the VM (memory has 
been sent to the second hypervisor, etc..) it executes the hook (logout from 
the iSCSI server) and just right before starting the VM in the second host the 
hook (login in the iSCSI server) is run. There is nothing we can do to hook in 
that process. 


We could however, include such scripts as part of the TM, distribute them in 
the hosts, and put them in the right places... 




Cheers 











On Thu, Aug 16, 2012 at 1:48 PM, Gary S. Cuozzo < g...@isgsoftware.net > wrote: 




Thanks. While I understand that the login/logout needs to happen on the host 
systems, I think that if ONE had it's own hook or, better IMO, a TM script that 
could be called before/after migration, the functionality could be contained 
within the drivers themselves. Then I could just take my code (which is now 
sprinkled across several host machines) and put it in the TM where I think it 
better fits. My DS/TM already make remote calls via SSH to the SAN & vm hosts, 
so this would still fit the model pretty nicely. 

I seem to recall that if the same events used for non-persistent images were 
also called for persistent ones, I would be able to accomplish what I want. 

Let me know your thoughts. 

Thanks, 
gary 




From: "Ruben S. Montero" < rsmont...@opennebula.org > 
To: "Gary S. Cuozzo" < g...@isgsoftware.net > 
Cc: users@lists.opennebula.org 
Sent: Thursday, August 16, 2012 5:55:57 AM 
Subject: Re: [one-users] iSCSI recipe 

Hi 


For live migrations the supported procedure is the libvirt hook; note that live 
migrations require close synchronization between the image movements, memory 
movements and the hypervisors. OpenNebula cannot perform the login/logout of 
the iSCSI sessions there. Cold migrations are handled by OpenNebula as part of 
the save and restore commands. 


Cheers 


Ruben 


On Sun, Aug 12, 2012 at 4:41 AM, Gary S. Cuozzo < g...@isgsoftware.net > wrote: 




Hello, 







* Each host needs to attach to all iSCSI targets that are needed by 
guests running on the host. It's not entirely clear to me if ONE handles 
all that or not (assuming it does). 



ONE handles this by login/logout in an iSCSI session as needed 




My iSCSI setup uses a target for each virtual server, and multiple LUN's per 
target (if the server has multiple disks). I developed a custom driver for it. 
I don't know if I have a shortcoming in my driver, or maybe a config issue on 
ONE, but I found that ONE did NOT handle the login/logout on the host machines. 
It was fine for the initial setup & deployment of the vm, but live migrations 
did not cause my driver to initiate a login on the receiving host and a logout 
on the transferring host. 

I ended up having to write a libvirt hook which scanned for new targets to log 
them in. It also rescanned all current targets to check if they had new LUN's 
attached to them. It works well and my vm's mig

Re: [one-users] iSCSI recipe

2012-08-16 Thread Gary S. Cuozzo
Is there a reason that ONE can't call a TM script with the source host, dest 
host, vm template, etc. when the migrate command is executed? 

While ONE can't orchestrate the migration, it can at least notify the TM of the 
migration in advance. That would allow me to grab the info I need from the 
template and fire off the login or rescan on the dest host. ONE has got to know 
this info in order to even notify libvirt/KVM to do the migration. So just call 
a TM script before issuing the libvirt command and abort if the script returns 
error. 

Logout is not as critical to me. Worst case, I have some hosts with iSCSI 
sessions which are not in use. The ideal scenario would be to be able to have 
some sort of TM script that gets called after ONE detects successful migration, 
but I could work around that on my own if it were not the case. 

- Original Message -

From: "Ruben S. Montero"  
To: "Gary S. Cuozzo"  
Cc: users@lists.opennebula.org 
Sent: Thursday, August 16, 2012 11:28:27 AM 
Subject: Re: [one-users] iSCSI recipe 

Note that OpenNebula cannot orchestrate the two libvirt/KVM hypervisors doing 
the migration. When the first host is ready to stop running the VM (memory has 
been sent to the second hypervisor, etc..) it executes the hook (logout from 
the iSCSI server) and just right before starting the VM in the second host the 
hook (login in the iSCSI server) is run. There is nothing we can do to hook in 
that process. 


We could however, include such scripts as part of the TM, distribute them in 
the hosts, and put them in the right places... 




Cheers 











On Thu, Aug 16, 2012 at 1:48 PM, Gary S. Cuozzo < g...@isgsoftware.net > wrote: 




Thanks. While I understand that the login/logout needs to happen on the host 
systems, I think that if ONE had it's own hook or, better IMO, a TM script that 
could be called before/after migration, the functionality could be contained 
within the drivers themselves. Then I could just take my code (which is now 
sprinkled across several host machines) and put it in the TM where I think it 
better fits. My DS/TM already make remote calls via SSH to the SAN & vm hosts, 
so this would still fit the model pretty nicely. 

I seem to recall that if the same events used for non-persistent images were 
also called for persistent ones, I would be able to accomplish what I want. 

Let me know your thoughts. 

Thanks, 
gary 




From: "Ruben S. Montero" < rsmont...@opennebula.org > 
To: "Gary S. Cuozzo" < g...@isgsoftware.net > 
Cc: users@lists.opennebula.org 
Sent: Thursday, August 16, 2012 5:55:57 AM 
Subject: Re: [one-users] iSCSI recipe 

Hi 


For live migrations the supported procedure is the libvirt hook; note that live 
migrations require close synchronization between the image movements, memory 
movements and the hypervisors. OpenNebula cannot perform the login/logout of 
the iSCSI sessions there. Cold migrations are handled by OpenNebula as part of 
the save and restore commands. 


Cheers 


Ruben 


On Sun, Aug 12, 2012 at 4:41 AM, Gary S. Cuozzo < g...@isgsoftware.net > wrote: 




Hello, 







* Each host needs to attach to all iSCSI targets that are needed by 
guests running on the host. It's not entirely clear to me if ONE handles 
all that or not (assuming it does). 



ONE handles this by login/logout in an iSCSI session as needed 




My iSCSI setup uses a target for each virtual server, and multiple LUN's per 
target (if the server has multiple disks). I developed a custom driver for it. 
I don't know if I have a shortcoming in my driver, or maybe a config issue on 
ONE, but I found that ONE did NOT handle the login/logout on the host machines. 
It was fine for the initial setup & deployment of the vm, but live migrations 
did not cause my driver to initiate a login on the receiving host and a logout 
on the transferring host. 

I ended up having to write a libvirt hook which scanned for new targets to log 
them in. It also rescanned all current targets to check if they had new LUN's 
attached to them. It works well and my vm's migrate just fine, but I would 
prefer to have all this handled within ONE. 

What is the script that ONE calls in order to initiate the logins/logouts as 
vm's migrate? 

Cheers, 
gary 



___ 
Users mailing list 
Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 


-- 
Ruben S. Montero, PhD 
Project co-Lead and Chief Architect 
OpenNebula - The Open Source Solution for Data Center Virtualization 
www.OpenNebula.org | rsmont...@opennebula.org | @OpenNebula 





___ 
Users mailing list 
Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 







-- 
Ruben S. Montero, PhD 
Project co-Lead and Chief Architect 
OpenNebula -

Re: [one-users] Fwd: Template parameters arguments to hook script?

2012-08-16 Thread Gary S. Cuozzo
Hi,
I believe the $TEMPLATE is actually a base64 encoded version of the entire XML 
document with all the variables in it.  That is how the DS/TM stuff works.  If 
you look in any of the DS or TM scripts, you can copy/paste the xpath related 
lines to access any variables you want.

If you want to see the exact structure, you can get the base64 encoded stream 
from the oned.log and run it through the base64 command to see the xml which is 
encoded.

Something like:  echo "encoded stream here" | base64 -d
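Put together, a driver-side sketch of the decode-and-extract step (the sample XML and element names here are illustrative, not the exact oned schema; the stock drivers use a bundled xpath.rb helper instead of sed):

```shell
# Simulate the base64-encoded template a DS/TM driver receives as an
# argument, decode it, and pull one field out.
DRV_ACTION=$(printf '<IMAGE><SOURCE>/var/lib/one/img.qcow2</SOURCE></IMAGE>' | base64 -w0)
XML=$(printf '%s' "$DRV_ACTION" | base64 -d)
SRC=$(printf '%s' "$XML" | sed -n 's:.*<SOURCE>\(.*\)</SOURCE>.*:\1:p')
echo "image source: $SRC"
```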

Hope that helps.  I have not used hooks, but I have used this to develop custom 
DS/TM drivers.
gary


- Original Message -
From: "reeuwijk" 
To: "opennebula users" 
Sent: Thursday, August 16, 2012 10:12:54 AM
Subject: [one-users] Fwd: Template parameters arguments to hook script?

Hi,

I want to pass some parameters from OpenNebula templates to a hook script.

In OpenNebula 2.2 I could use something like 

CONTEXT = [ 
:
   SERVER = "bla"
:
]

and then in oned.conf I could use a hook script with something like:

VM_HOOK = [
   arguments = " … $CONTEXT[SERVER] …"
]

and the hook script would be invoked with the value I defined in CONTEXT.

Unfortunately, it seems that OpenNebula 3.6 does not support this any longer. 
Instead of the value defined in CONTEXT, the expression '$CONTEXT' is replaced 
with the empty string, so the hook script of the example always gets the string 
[SERVER].
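Given that 3.6 documents only $VMID and $TEMPLATE, one workaround is to pass the whole template to the hook and decode it there (a sketch; the hook name and path are assumptions):

```
VM_HOOK = [
    name      = "ctx_server",
    on        = "RUNNING",
    command   = "/var/lib/one/hooks/ctx_server.sh",
    arguments = "$VMID $TEMPLATE" ]
```

The hook would then base64-decode its second argument and extract CONTEXT/SERVER itself, as described elsewhere in this thread.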

From the documentation it is not entirely clear to me whether the mechanism I 
used for 2.2 is supported at all. The webpage at 
http://opennebula.org/documentation:rel3.6:oned_conf only mentions access to 
$VMID and $TEMPLATE and nothing else, so I may have been using an undocumented 
feature of OpenNebula 2.2.

However, limiting access to only those two variables seems overly restrictive 
to me, so for now I am assuming more is possible. But exactly what is allowed, 
and what is not?

-- 
Kees van Reeuwijk



___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Storage architecture question

2012-08-16 Thread Gary S. Cuozzo
Hi Jaime,
Yes, I thought of contributing the drivers already and would be happy to do so. 
My scripts are sort of "brute forced" and not as elegant as the ONE drivers, 
but I'm sure they can be cleaned up pretty easily. My goal at the time was to 
test out a theory. I'm sure there are improvements for error checking & other 
things that can be made.

Let me know the best way to contribute. If somebody from ONE were to review the 
scripts and suggest changes, I'm happy to work with them to make them better.

Thanks,
gary


- Original Message -

From: "Jaime Melis" 
To: "Gary S. Cuozzo" 
Cc: users@lists.opennebula.org
Sent: Thursday, August 16, 2012 7:23:41 AM
Subject: Re: [one-users] Storage architecture question

Hi Gary,


sounds great, could you contribute the code at some point?


cheers,
Jaime


On Mon, Aug 6, 2012 at 2:52 PM, Gary S. Cuozzo < g...@isgsoftware.net > wrote:




Here's an update on this thread...

So I did originally setup a "local" datastore by using NFS from the host to the 
controller as a shared DS. It worked fine and performed as expected. I was able 
to get the speed of local storage without having bloated capacity requirements 
and delays in deployment due to having large images be copied across the 
network. I also tested rebooting the controller node and the host node and the 
NFS share would come back up reliably.

Having to setup the NFS share was the main drawback. I figured I could get 
around that by writing a custom DS/TS driver. That is the route I took. Over 
the weekend I created a 'localfs' set of drivers and they seem to work great 
and don't require me to setup the NFS share. This seems to give me exactly what 
I was looking for.

The only requirement for my driver is that I create the datastore first so that 
I can get the ID. Then, I mount my local disks (I use an LVM volume) at the 
correct mount point according to the DS ID.
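A sketch of that sequence (the datastore template name and VG/LV names are assumptions; on a box without OpenNebula installed the snippet falls back to a canned "ID: 104" line so it stays runnable end to end):

```shell
# Create the datastore, parse the new ID from the CLI output, then
# mount local storage at the path OpenNebula expects for that ID.
OUT=$(onedatastore create localfs.ds 2>/dev/null || echo "ID: 104")
ID=$(printf '%s' "$OUT" | awk '{print $2}')
echo "datastore id: $ID"
# then, as root on the host:
#   mkdir -p /var/lib/one/datastores/$ID
#   mount /dev/vg_local/lv_images /var/lib/one/datastores/$ID
```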

I use a DS template variable to specify the hostname of the host so that I can 
use the driver on multiple hosts. The images get created as 
hostname:/path/to/image.

So far so good.

Just FYI,
gary




From: "Ruben S. Montero" < rsmont...@opennebula.org >

To: "Gary S. Cuozzo" < g...@isgsoftware.net >
Sent: Thursday, August 2, 2012 12:46:28 PM

Subject: Re: [one-users] Storage architecture question

OK, keep us updated with the performance of your final solution




Good Luck


Ruben


On Thu, Aug 2, 2012 at 5:58 PM, Gary S. Cuozzo < g...@isgsoftware.net > wrote:




Hi Ruben,
Thank you for the reply & information.

I had previously thought about a similar setup to what you have outlined. One 
drawback I was trying to work around is that I will be duplicating storage 
requirement as I will have to store images on both my SAN/NAS and the local 
system. As my SAN/NAS is fully mirrored, I will actually incur a fairly 
substantial increase in the overhead per GB. I was trying to avoid that.

My other main concern was time to copy the image files to/from the hosts. A few 
of the images will be TB's and the copies will take a long time I think, even 
over GB links.

I do intend to create a separate cluster and datastore for these few hosts. 
I'll just have to try out a few different setups and see what works best. 

You guys have given me some good ideas, thank you.

gary



From: "Ruben S. Montero" < rsmont...@opennebula.org >

To: "Gary S. Cuozzo" < g...@isgsoftware.net >
Cc: users@lists.opennebula.org
Sent: Tuesday, July 31, 2012 5:44:39 PM


Subject: Re: [one-users] Storage architecture question

Hi,


You can set the Images Datastore of type FS to use the shared TM and your fast 
NAS, but the system datastore to use the ssh drivers. This will copy the images 
from the Datastore which uses NFS to the local storage area.


Note 1: This will only work for non-persistent images. Persistent images will 
be linked and thus used directly from the NAS server.


Note 2: you cannot mix both setups in the same cluster. That is, if your system 
datastore is shared, the ssh transfer will copy the images to the NFS volume on 
the remote host.


You can:


1.- Create a dedicated cluster for these images that uses ssh as the system 
datastore (so the hosts do not mount the NFS export). You just need to add a 
few hosts and the datastore of the high-demanding I/O images. The 3.6 scheduler 
will only use the resources of that cluster for those VMs [2]


2.- Modify the clone/mv/mvds scripts from a shared TM so they copy some of your 
images to a local disk and link them to the path expected by OpenNebula.




Cheers


Ruben


[1] 
http://opennebula.org/documentation:rel3.6:system_ds#using_the_ssh_transfer_driver


[2] http://opennebula.org/documentation:rel3.6:cluster_guide




On Mon, Jul 30, 2012 at 7:36 PM, Gary S. Cuozzo < g...@isgsoftware.net > wrote:


Hi Tino,
Thanks for the rep

Re: [one-users] iSCSI recipe

2012-08-16 Thread Gary S. Cuozzo
Thanks. While I understand that the login/logout needs to happen on the host 
systems, I think that if ONE had it's own hook or, better IMO, a TM script that 
could be called before/after migration, the functionality could be contained 
within the drivers themselves. Then I could just take my code (which is now 
sprinkled across several host machines) and put it in the TM where I think it 
better fits. My DS/TM already make remote calls via SSH to the SAN & vm hosts, 
so this would still fit the model pretty nicely. 

I seem to recall that if the same events used for non-persistent images were 
also called for persistent ones, I would be able to accomplish what I want. 

Let me know your thoughts. 

Thanks, 
gary 


- Original Message -

From: "Ruben S. Montero"  
To: "Gary S. Cuozzo"  
Cc: users@lists.opennebula.org 
Sent: Thursday, August 16, 2012 5:55:57 AM 
Subject: Re: [one-users] iSCSI recipe 

Hi 


For live migrations the supported procedure is the libvirt hook; note that live 
migrations require close synchronization between the image movements, memory 
movements and the hypervisors. OpenNebula cannot perform the login/logout of 
the iSCSI sessions there. Cold migrations are handled by OpenNebula as part of 
the save and restore commands. 


Cheers 


Ruben 


On Sun, Aug 12, 2012 at 4:41 AM, Gary S. Cuozzo < g...@isgsoftware.net > wrote: 




Hello, 







* Each host needs to attach to all iSCSI targets that are needed by 
guests running on the host. It's not entirely clear to me if ONE handles 
all that or not (assuming it does). 



ONE handles this by login/logout in an iSCSI session as needed 




My iSCSI setup uses a target for each virtual server, and multiple LUN's per 
target (if the server has multiple disks). I developed a custom driver for it. 
I don't know if I have a shortcoming in my driver, or maybe a config issue on 
ONE, but I found that ONE did NOT handle the login/logout on the host machines. 
It was fine for the initial setup & deployment of the vm, but live migrations 
did not cause my driver to initiate a login on the receiving host and a logout 
on the transferring host. 

I ended up having to write a libvirt hook which scanned for new targets to log 
them in. It also rescanned all current targets to check if they had new LUN's 
attached to them. It works well and my vm's migrate just fine, but I would 
prefer to have all this handled within ONE. 

What is the script that ONE calls in order to initiate the logins/logouts as 
vm's migrate? 

Cheers, 
gary 



___ 
Users mailing list 
Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 


-- 
Ruben S. Montero, PhD 
Project co-Lead and Chief Architect 
OpenNebula - The Open Source Solution for Data Center Virtualization 
www.OpenNebula.org | rsmont...@opennebula.org | @OpenNebula 




___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] iSCSI recipe

2012-08-11 Thread Gary S. Cuozzo
Hello, 







* Each host needs to attach to all iSCSI targets that are needed by 
guests running on the host. It's not entirely clear to me if ONE handles 
all that or not (assuming it does). 



ONE handles this by login/logout in an iSCSI session as needed 




My iSCSI setup uses a target for each virtual server, and multiple LUN's per 
target (if the server has multiple disks). I developed a custom driver for it. 
I don't know if I have a shortcoming in my driver, or maybe a config issue on 
ONE, but I found that ONE did NOT handle the login/logout on the host machines. 
It was fine for the initial setup & deployment of the vm, but live migrations 
did not cause my driver to initiate a login on the receiving host and a logout 
on the transferring host. 

I ended up having to write a libvirt hook which scanned for new targets to log 
them in. It also rescanned all current targets to check if they had new LUN's 
attached to them. It works well and my vm's migrate just fine, but I would 
prefer to have all this handled within ONE. 

What is the script that ONE calls in order to initiate the logins/logouts as 
vm's migrate? 

Cheers, 
gary 


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Storage architecture question

2012-08-06 Thread Gary S. Cuozzo
Here's an update on this thread...

So I did originally setup a "local" datastore by using NFS from the host to the 
controller as a shared DS. It worked fine and performed as expected. I was able 
to get the speed of local storage without having bloated capacity requirements 
and delays in deployment due to having large images be copied across the 
network. I also tested rebooting the controller node and the host node and the 
NFS share would come back up reliably.

Having to setup the NFS share was the main drawback. I figured I could get 
around that by writing a custom DS/TM driver. That is the route I took. Over 
the weekend I created a 'localfs' set of drivers and they seem to work great 
and don't require me to setup the NFS share. This seems to give me exactly what 
I was looking for.

The only requirement for my driver is that I create the datastore first so that 
I can get the ID. Then, I mount my local disks (I use an LVM volume) at the 
correct mount point according to the DS ID.

I use a DS template variable to specify the hostname of the host so that I can 
use the driver on multiple hosts. The images get created as 
hostname:/path/to/image.

So far so good.

Just FYI,
gary


- Original Message -

From: "Ruben S. Montero" 
To: "Gary S. Cuozzo" 
Sent: Thursday, August 2, 2012 12:46:28 PM
Subject: Re: [one-users] Storage architecture question

OK, keep us updated with the performance of your final solution




Good Luck


Ruben


On Thu, Aug 2, 2012 at 5:58 PM, Gary S. Cuozzo < g...@isgsoftware.net > wrote:




Hi Ruben,
Thank you for the reply & information.

I had previously thought about a similar setup to what you have outlined. One 
drawback I was trying to work around is that I will be duplicating storage 
requirement as I will have to store images on both my SAN/NAS and the local 
system. As my SAN/NAS is fully mirrored, I will actually incur a fairly 
substantial increase in the overhead per GB. I was trying to avoid that.

My other main concern was time to copy the image files to/from the hosts. A few 
of the images will be TB's and the copies will take a long time I think, even 
over GB links.

I do intend to create a separate cluster and datastore for these few hosts. 
I'll just have to try out a few different setups and see what works best. 

You guys have given me some good ideas, thank you.

gary



From: "Ruben S. Montero" < rsmont...@opennebula.org >
To: "Gary S. Cuozzo" < g...@isgsoftware.net >
Cc: users@lists.opennebula.org
Sent: Tuesday, July 31, 2012 5:44:39 PM
Subject: Re: [one-users] Storage architecture question

Hi,


You can set the Images Datastore of type FS to use the shared TM and your fast 
NAS, but the system datastore to use the ssh drivers. This will copy the images 
from the Datastore which uses NFS to the local storage area.


Note 1: This will only work for non-persistent images. Persistent images will 
be linked and thus used directly from the NAS server.


Note 2: you cannot mix both setups in the same cluster. That is, if your system 
datastore is shared, the ssh transfer will copy the images to the NFS volume on 
the remote host.


You can:


1.- Create a dedicated cluster for these images that uses ssh as the system 
datastore (so the hosts do not mount the NFS export). You just need to add a 
few hosts and the datastore of the high-demanding I/O images. The 3.6 scheduler 
will only use the resources of that cluster for those VMs [2]


2.- Modify the clone/mv/mvds scripts from a shared TM so they copy some of your 
images to a local disk and link them to the path expected by OpenNebula.




Cheers


Ruben


[1] 
http://opennebula.org/documentation:rel3.6:system_ds#using_the_ssh_transfer_driver


[2] http://opennebula.org/documentation:rel3.6:cluster_guide




On Mon, Jul 30, 2012 at 7:36 PM, Gary S. Cuozzo < g...@isgsoftware.net > wrote:


Hi Tino,
Thanks for the reply.

Yes, I think you understand correctly. My goal is to be able to utilize storage 
which is local to a particular vm host node without incurring the overhead of 
duplicated storage on the controller node and transfer time from controller to 
vm host.

I do understand that the images will only be accessible from the particular vm 
host which they reside on, but that is ok as it would be the trade-off for 
local disk performance. I have a great iSCSI/NFS SAN which is used for shared 
storage, but it will never be able to offer the same level of performance as 
local storage. So I'm looking to be able to have that local option for the few 
cases it's required for I/O intensive applications.

I have not actually had the chance to try it out yet, but I think it will give 
me what I'm looking for.

Thanks again,
gary




- Original Message -
From: "Tino Vazquez" < tin...@opennebula.org >
To: "Gary S. Cuozzo" < g...@isgsoftware.net >

[one-users] rename a datastore

2012-08-06 Thread Gary S. Cuozzo
Hello, 
Is there a way to rename a datastore? I didn't see a way via sunstone or CLI. I 
thought I could just change it in the database but wanted to make sure I won't 
cause anything to break. 

Thanks, 
gary 

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Use of DS driver stat script

2012-08-06 Thread Gary S. Cuozzo
Hello,
That's odd as I see no issues so far.

I have 2 DS & related TM drivers which currently work fine.  One is an iSCSI 
driver which I have tested pretty comprehensively.  It supports image 
create/delete/clone, live migration, etc.  I originally wrote it on v3.6.  It 
does not have a stat script.

The other driver is a custom one for local storage.  This one I wrote over the 
weekend on v3.6.  That is when I noticed the addition of the stat script in the 
docs.  The driver seems to function fine.  Image create/delete/clone is 
working, I have deployed VM's.  As it is a local storage driver, it will not be 
able to do live migrations.  I have not seen the stat script get called yet.

I do only deal with persistent images.  Could that be related to the reason I 
have not seen a need for the stat script?  I also do all the interaction so far 
through sunstone, but I assume that is using the one* commands under the hood.

Thanks for the reply,
gary

- Original Message -
From: "Daniel Molina" 
To: "Gary S. Cuozzo" 
Cc: users@lists.opennebula.org
Sent: Monday, August 6, 2012 4:59:32 AM
Subject: Re: [one-users] Use of DS driver stat script

Hi Gary,

On 5 August 2012 18:40, Gary S. Cuozzo  wrote:
> Hello,
> I am developing a custom DS driver.  When is the 'stat' script called?  I
> have created an implementation of the script, but don't actually know how to
> make it be called by OpenNebula for a "real" test.
>

The 'stat' script is called when adding a new image to the datastore
(oneimage create template.one --datastore your_custom_DS) to determine
the size of the new image.
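A minimal sketch of what a 'stat' action does (print the image size in MB, rounded up, on stdout); the source path would normally be decoded from the base64 driver action, but a 2 MiB scratch file stands in for the image here so the sketch runs:

```shell
#!/bin/sh
# Stand-in for the image whose size the datastore must report.
SRC=$(mktemp)
dd if=/dev/zero of="$SRC" bs=1024 count=2048 2>/dev/null
BYTES=$(stat -c %s "$SRC")
# Size in MB, rounded up, printed on stdout for oned to consume.
echo $(( (BYTES + 1048575) / 1048576 ))
rm -f "$SRC"
```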

> I have another drive which I wrote for v3.4 and am also wondering if I need
> to add a stat script for v3.6.

You must, otherwise you won't be able to create new images.

Cheers

-- 
Daniel Molina
Project Engineer
OpenNebula - The Open Source Solution for Data Center Virtualization
www.OpenNebula.org | dmol...@opennebula.org | @OpenNebula
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] Use of DS driver stat script

2012-08-05 Thread Gary S. Cuozzo
Hello, 
I am developing a custom DS driver. When is the 'stat' script called? I have 
created an implementation of the script, but don't actually know how to make it 
be called by OpenNebula for a "real" test. 

I have another drive which I wrote for v3.4 and am also wondering if I need to 
add a stat script for v3.6. 

Thanks in advance, 
gary 

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] shutdown vm state

2012-08-02 Thread Gary S. Cuozzo
I guess I just never waited long enough. I went in and did a shutdown on one of 
my vm's and let it sit. It eventually went away and I was able to just redeploy 
the template and boot it back up. That makes much more sense. 

Thanks for the info. 

- Original Message -

From: "Ruben S. Montero"  
To: "Gary S. Cuozzo"  
Cc: users@lists.opennebula.org 
Sent: Thursday, August 2, 2012 7:45:25 PM 
Subject: Re: [one-users] shutdown vm state 

VMs in the DONE state are not shown through Sunstone or the CLI, but are kept in 
the DB for accounting purposes. You can check the history and stats of any VM 
(including those not shown, i.e. in the DONE state); you can try it with onevm 
show <id>. 


The default timeout for the shutdown operation is 5 min; it can be changed in the 
kvmrc file (you need to run onehost sync if you change the default). 
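A sketch of that change (the path and the SHUTDOWN_TIMEOUT variable name are assumptions from memory; check your own remotes/vmm/kvm/kvmrc for the actual name before running anything):

```shell
# Sketch: bump the shutdown timeout in a kvmrc-style file, then push the
# updated remotes to the hosts. Variable name and path are assumptions.
KVMRC=${KVMRC:-/var/lib/one/remotes/vmm/kvm/kvmrc}

set_shutdown_timeout() {
    # $1 = kvmrc file, $2 = new timeout in seconds
    sed -i "s/^SHUTDOWN_TIMEOUT=.*/SHUTDOWN_TIMEOUT=$2/" "$1"
}

# set_shutdown_timeout "$KVMRC" 600   # 10 minutes instead of 5
# onehost sync                        # distribute the change to the hosts
```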


You can check remotes/vmm/kvm/shutdown for specific information of the shutdown 
process. In this case we basically issue a shutdown operation and wait for the 
VM to not show up through libvirt. 


Cheers 


Ruben 



On Fri, Aug 3, 2012 at 1:29 AM, Gary S. Cuozzo < g...@isgsoftware.net > wrote: 




Ok. I've never seen any vm's hit the "DONE" state. At least not from the 
sunstone view. Maybe if I try the CLI to check the state I'd see it. My vm's 
just sit in the "shutdown" state where I just delete them. 

I have verified they do in fact shutdown properly. They do a clean shutdown via 
acpi and the vm instance (kvm) does go away on the host system. 

Thanks for the info. 



From: "Ruben S. Montero" < rsmont...@opennebula.org > 
To: "Gary S. Cuozzo" < g...@isgsoftware.net > 
Cc: users@lists.opennebula.org 
Sent: Thursday, August 2, 2012 4:12:23 PM 
Subject: Re: [one-users] shutdown vm state 

Hi, 


Maybe I am missing something. 


1.- SHUTDOWN is a transient state; the VM will end up in DONE (and so removed 
from the active list) or FAIL. So, there is no action to be taken from that 
state. You can delete it in case that somehow the process hangs. 


2.- From FAIL you can resubmit the VM preserving any resource (IP, image...). 


3.- Once the VM is DONE you can instantiate it again using the Template pool. 


4.- The VM may reach an UNKNOWN state, this happens when OpenNebula lost track 
of the VM for any reason. From that state you can start it up again, as the 
images are supposed to be already in the host. 


Hope now it is a bit clearer 


Cheers 


Ruben 



On Thu, Aug 2, 2012 at 8:39 PM, Gary S. Cuozzo < g...@isgsoftware.net > wrote: 




Yes, the shutdown functions as I would expect. I'm just questioning why it 
seems to stay in the list of VM's when all I seem to be able to do is delete 
the VM. Granted the images are still there, so I can delete the VM, then 
recreate it again from the template. But it seems like it would be more useful 
if I could do something else, such as start it up again. I thought maybe I was 
missing something in the overall lifecycle picture, but maybe not. 



From: "Ruben S. Montero" < rsmont...@opennebula.org > 
To: "Gary S. Cuozzo" < g...@isgsoftware.net > 
Cc: users@lists.opennebula.org 
Sent: Thursday, August 2, 2012 1:28:28 PM 
Subject: Re: [one-users] shutdown vm state 

Hi 


Upon receiving the ACPI signal the VM should actually shutdown and then it will 
be removed from the hypervisor active list. From there OpenNebula will move 
around any persistent or save_as image. Check your guest OS for its ACPI 
configuration. 


There is a timeout for the shutdown operation, if the VM is still alive it will 
return to RUNNING. 


Cheers 


Ruben 


On Thu, Aug 2, 2012 at 6:04 PM, Gary S. Cuozzo < g...@isgsoftware.net > wrote: 




Hello, 
I have a question about the shutdown state of a vm. Using ONE 3.6 and Sunstone 
GUI, if I click the "shutdown" button, the vm seems to do a clean shutdown via 
acpi and it stops running. The state is indicated as "SHUTDOWN". From there, 
the only thing I can seem to do is delete the VM. Any other action I attempt 
results in a wrong state error. 

What is the purpose and proper/intended use of the shutdown feature? 

Thanks, 
gary 


___ 
Users mailing list 
Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 







-- 
Ruben S. Montero, PhD 
Project co-Lead and Chief Architect 
OpenNebula - The Open Source Solution for Data Center Virtualization 
www.OpenNebula.org | rsmont...@opennebula.org | @OpenNebula 


___ 
Users mailing list 
Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 







-- 
Ruben S. Montero, PhD 
Project co-Lead and Chief Architect 
OpenNebula - The Open Source Solution for Data Center Virtualization 
www.OpenNebula.org | rsmont...@opennebula.or

Re: [one-users] shutdown vm state

2012-08-02 Thread Gary S. Cuozzo
Ok. I've never seen any vm's hit the "DONE" state. At least not from the 
sunstone view. Maybe if I try the CLI to check the state I'd see it. My vm's 
just sit in the "shutdown" state where I just delete them. 

I have verified they do in fact shutdown properly. They do a clean shutdown via 
acpi and the vm instance (kvm) does go away on the host system. 

Thanks for the info. 

- Original Message -

From: "Ruben S. Montero"  
To: "Gary S. Cuozzo"  
Cc: users@lists.opennebula.org 
Sent: Thursday, August 2, 2012 4:12:23 PM 
Subject: Re: [one-users] shutdown vm state 

Hi, 


Maybe I am missing something. 


1.- SHUTDOWN is a transient state; the VM will end up in DONE (and so removed 
from the active list) or FAIL. So, there is no action to be taken from that 
state. You can delete it in case that somehow the process hangs. 


2.- From FAIL you can resubmit the VM preserving any resource (IP, image...). 


3.- Once the VM is DONE you can instantiate it again using the Template pool. 


4.- The VM may reach an UNKNOWN state, this happens when OpenNebula lost track 
of the VM for any reason. From that state you can start it up again, as the 
images are supposed to be already in the host. 


Hope now it is a bit clearer 


Cheers 


Ruben 



On Thu, Aug 2, 2012 at 8:39 PM, Gary S. Cuozzo < g...@isgsoftware.net > wrote: 




Yes, the shutdown functions as I would expect. I'm just questioning why it 
seems to stay in the list of VM's when all I seem to be able to do is delete 
the VM. Granted the images are still there, so I can delete the VM, then 
recreate it again from the template. But it seems like it would be more useful 
if I could do something else, such as start it up again. I thought maybe I was 
missing something in the overall lifecycle picture, but maybe not. 



From: "Ruben S. Montero" < rsmont...@opennebula.org > 
To: "Gary S. Cuozzo" < g...@isgsoftware.net > 
Cc: users@lists.opennebula.org 
Sent: Thursday, August 2, 2012 1:28:28 PM 
Subject: Re: [one-users] shutdown vm state 

Hi 


Upon receiving the ACPI signal the VM should actually shutdown and then it will 
be removed from the hypervisor active list. From there OpenNebula will move 
around any persistent or save_as image. Check your guest OS for its ACPI 
configuration. 


There is a timeout for the shutdown operation, if the VM is still alive it will 
return to RUNNING. 


Cheers 


Ruben 


On Thu, Aug 2, 2012 at 6:04 PM, Gary S. Cuozzo < g...@isgsoftware.net > wrote: 




Hello, 
I have a question about the shutdown state of a vm. Using ONE 3.6 and Sunstone 
GUI, if I click the "shutdown" button, the vm seems to do a clean shutdown via 
acpi and it stops running. The state is indicated as "SHUTDOWN". From there, 
the only thing I can seem to do is delete the VM. Any other action I attempt 
results in a wrong state error. 

What is the purpose and proper/intended use of the shutdown feature? 

Thanks, 
gary 


___ 
Users mailing list 
Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 







-- 
Ruben S. Montero, PhD 
Project co-Lead and Chief Architect 
OpenNebula - The Open Source Solution for Data Center Virtualization 
www.OpenNebula.org | rsmont...@opennebula.org | @OpenNebula 


___ 
Users mailing list 
Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 







-- 
Ruben S. Montero, PhD 
Project co-Lead and Chief Architect 
OpenNebula - The Open Source Solution for Data Center Virtualization 
www.OpenNebula.org | rsmont...@opennebula.org | @OpenNebula 

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] shutdown vm state

2012-08-02 Thread Gary S. Cuozzo
Yes, the shutdown functions as I would expect. I'm just questioning why it 
seems to stay in the list of VM's when all I seem to be able to do is delete 
the VM. Granted the images are still there, so I can delete the VM, then 
recreate it again from the template. But it seems like it would be more useful 
if I could do something else, such as start it up again. I thought maybe I was 
missing something in the overall lifecycle picture, but maybe not. 

- Original Message -

From: "Ruben S. Montero"  
To: "Gary S. Cuozzo"  
Cc: users@lists.opennebula.org 
Sent: Thursday, August 2, 2012 1:28:28 PM 
Subject: Re: [one-users] shutdown vm state 

Hi 


Upon receiving the ACPI signal the VM should actually shutdown and then it will 
be removed from the hypervisor active list. From there OpenNebula will move 
around any persistent or save_as image. Check your guest OS for its ACPI 
configuration. 


There is a timeout for the shutdown operation, if the VM is still alive it will 
return to RUNNING. 


Cheers 


Ruben 


On Thu, Aug 2, 2012 at 6:04 PM, Gary S. Cuozzo < g...@isgsoftware.net > wrote: 




Hello, 
I have a question about the shutdown state of a vm. Using ONE 3.6 and Sunstone 
GUI, if I click the "shutdown" button, the vm seems to do a clean shutdown via 
acpi and it stops running. The state is indicated as "SHUTDOWN". From there, 
the only thing I can seem to do is delete the VM. Any other action I attempt 
results in a wrong state error. 

What is the purpose and proper/intended use of the shutdown feature? 

Thanks, 
gary 


___ 
Users mailing list 
Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 







-- 
Ruben S. Montero, PhD 
Project co-Lead and Chief Architect 
OpenNebula - The Open Source Solution for Data Center Virtualization 
www.OpenNebula.org | rsmont...@opennebula.org | @OpenNebula 

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] shutdown vm state

2012-08-02 Thread Gary S. Cuozzo
Hello, 
I have a question about the shutdown state of a vm. Using ONE 3.6 and Sunstone 
GUI, if I click the "shutdown" button, the vm seems to do a clean shutdown via 
acpi and it stops running. The state is indicated as "SHUTDOWN". From there, 
the only thing I can seem to do is delete the VM. Any other action I attempt 
results in a wrong state error. 

What is the purpose and proper/intended use of the shutdown feature? 

Thanks, 
gary 

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Storage architecture question

2012-07-30 Thread Gary S. Cuozzo
Hi Tino,
Thanks for the reply.

Yes, I think you understand correctly.  My goal is to be able to utilize 
storage which is local to a particular vm host node without incurring the 
overhead of duplicated storage on the controller node and transfer time from 
controller to vm host.

I do understand that the images will only be accessible from the particular vm 
host which they reside on, but that is ok as it would be the trade-off for 
local disk performance.  I have a great iSCSI/NFS SAN which is used for shared 
storage, but it will never be able to offer the same level of performance as 
local storage.  So I'm looking to be able to have that local option for the few 
cases it's required for I/O intensive applications.

I have not actually had the chance to try it out yet, but I think it will give 
me what I'm looking for.

Thanks again,
gary


- Original Message -
From: "Tino Vazquez" 
To: "Gary S. Cuozzo" 
Cc: users@lists.opennebula.org
Sent: Monday, July 30, 2012 12:40:06 PM
Subject: Re: [one-users] Storage architecture question

Dear Gary,

I am not sure I understand your desired set-up 100%, but if I grasped
it correctly, I think the problem you may find is that the images would
only be local to the node that is exporting the NFS share. Otherwise I
think it will work as you expect.

Regards,

-Tino

--
Constantino Vázquez Blanco, MSc
Project Engineer
OpenNebula - The Open-Source Solution for Data Center Virtualization
www.OpenNebula.org | @tinova79 | @OpenNebula


On Fri, Jul 27, 2012 at 11:44 PM, Gary S. Cuozzo  wrote:
> Hi Users,
> I am running ONE 3.6 and would like to be able to run a combination of
> shared storage (via iSCSI) and local storage (to take advantage of local
> disk performance for certain applications).  My question is related to the
> local storage aspect.
>
> From what I've seen, I can use a local datastore and the ssh TM to
> accomplish local storage.  The drawback that I see is that I need 2x the
> amount of disk space because I need storage for the permanent image on the
> controller node, then storage on the local host for the running image when
> it is deployed.  A secondary issue for me is that the images have to be
> transferred between the datastore and the host machine, which will take some
> time with larger images.
>
> To get around the problem, I thought I could set the datastore up as a
> shared filesystem, except the sharing would actually be from the host
> machine to the controller machine via NFS.  Is there any particular
> reason(s) that would be a bad idea?  On the surface it seems like it should
> work just fine, but I'm somewhat new to ONE and want to be sure I'm not
> going down a bad path since I plan to do this with several host machines.
>
> Thanks in advance for any advice.
>
> gary
>
>
> ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] multiple live migrations fail

2012-07-30 Thread Gary S. Cuozzo
Hello Tino,
The version of libvirt is 0.9.8-2ubuntu17.2. It looks like there is a 0.9.13 
release from July 2.

It looks like I should be able to build some packages using files here: 
  https://launchpad.net/ubuntu/+source/libvirt/0.9.13-0ubuntu4

Hopefully I can get to that soon and let you know how it goes.

Thanks for your reply,
gary


- Original Message -
From: "Tino Vazquez" 
To: "Gary S. Cuozzo" 
Cc: users@lists.opennebula.org
Sent: Monday, July 30, 2012 5:40:18 AM
Subject: Re: [one-users] mulitple live migrations fail

Dear Gary,

Which version of libvirt are you using? If possible, please update to
the latest available version on libvirt.org.

Regards,

-Tino

--
Constantino Vázquez Blanco, MSc
Project Engineer
OpenNebula - The Open-Source Solution for Data Center Virtualization
www.OpenNebula.org | @tinova79 | @OpenNebula


On Mon, Jul 30, 2012 at 5:16 AM, Gary S. Cuozzo  wrote:
> Hello,
> I am able to reliably and successfully migrate single VMs between hosts. 
> However, when I attempt to do multiple migrations of VMs, they typically will 
> fail. I have seen cases where I do 3 and 1 or 2 succeed, but typically all 
> will fail. I see errors in the vm log such as:
>
> migrate: Command "virsh --connect qemu:///system migrate --live one-5 
> qemu+ssh://vmhost2/system" failed: error: End of file while reading data: : 
> Input/output error
> error: Reconnected to the hypervisor
>
> Another error I see is:
>
> migrate: Command "virsh --connect qemu:///system migrate --live one-4 
> qemu+ssh://vmhost3./system" failed: error: Unable to wait for child process: 
> Bad file descriptor Sun Jul 29 23:13:53 2012
> Could not migrate one-4 to vmhost3
>
> I know this seems more related to libvirt than to opennebula, but I have not 
> been able to turn up any answers elsewhere and thought maybe somebody already 
> ran into this before.
>
> I'm running OpenNebula v3.6 on Ubuntu 12.04.
>
> Thanks,
> gary
> ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] multiple live migrations fail

2012-07-29 Thread Gary S. Cuozzo
Hello, 
I am able to reliably and successfully migrate single VMs between hosts. 
However, when I attempt to do multiple migrations of VMs, they typically will 
fail. I have seen cases where I do 3 and 1 or 2 succeed, but typically all will 
fail. I see errors in the vm log such as:

migrate: Command "virsh --connect qemu:///system migrate --live one-5 
qemu+ssh://vmhost2/system" failed: error: End of file while reading data: : 
Input/output error
error: Reconnected to the hypervisor 

Another error I see is: 

migrate: Command "virsh --connect qemu:///system migrate --live one-4 
qemu+ssh://vmhost3./system" failed: error: Unable to wait for child process: 
Bad file descriptor Sun Jul 29 23:13:53 2012
Could not migrate one-4 to vmhost3

I know this seems more related to libvirt than to opennebula, but I have not 
been able to turn up any answers elsewhere and thought maybe somebody already 
ran into this before. 
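One workaround while the libvirt issue is chased down is to serialize the migrations instead of firing them all at once. A rough sketch (the helper names are hypothetical; migrate_one wraps the same virsh invocation shown in the errors above, with a small retry loop around it):

```shell
# Sketch: migrate VMs one at a time with a simple retry, instead of
# launching several live migrations concurrently. Helper names are
# hypothetical; the virsh command mirrors the one in the error log.
migrate_one() {
    # $1 = libvirt domain (e.g. one-5), $2 = destination host
    virsh --connect qemu:///system migrate --live "$1" "qemu+ssh://$2/system"
}

migrate_serially() {
    dest=$1; shift
    for dom in "$@"; do
        for try in 1 2 3; do
            migrate_one "$dom" "$dest" && break   # stop retrying on success
            echo "retry $try failed for $dom" >&2
            sleep 5
        done
    done
}

# Example: migrate_serially vmhost2 one-4 one-5 one-6
```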

I'm running OpenNebula v3.6 on Ubuntu 12.04. 

Thanks, 
gary 
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] error on sunstone dashboard

2012-07-28 Thread Gary S. Cuozzo
Very nice, thank you Rogier. I wasn't using the marketplace at all, but I see 
it has some items in it now and the error is gone. 

Regards, 
gary 

- Original Message -

From: "Rogier Mars"  
To: "Gary S. Cuozzo"  
Cc: users@lists.opennebula.org 
Sent: Saturday, July 28, 2012 3:14:02 PM 
Subject: Re: [one-users] error on sunstone dashboard 

Hi Gary, 


To fix this add the following line to sunstone-server.conf and restart 
sunstone: 



:marketplace_url: https://marketplace.c12g.com/appliance 




cheers, 


rogier 



On Jul 28, 2012, at 4:22 PM, Gary S. Cuozzo wrote: 




Hello, 
Since my recent upgrade to 3.6, when I login to sunstone gui I see the 
following error displayed: 
Error connecting to server (Connection refused - connect(2)). Server: 
localhost:9292 

Everything seems to work fine. I don't see any reference to port 9292 in the 
config files and am not sure what service should be listening there. 

Is there a service that I have to start up? 

Thanks, 
gary 
___ 
Users mailing list 
Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 




___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] error on sunstone dashboard

2012-07-28 Thread Gary S. Cuozzo
Hello, 
Since my recent upgrade to 3.6, when I login to sunstone gui I see the 
following error displayed: 
Error connecting to server (Connection refused - connect(2)). Server: 
localhost:9292 

Everything seems to work fine. I don't see any reference to port 9292 in the 
config files and am not sure what service should be listening there. 

Is there a service that I have to start up? 

Thanks, 
gary 
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] Storage architecture question

2012-07-27 Thread Gary S. Cuozzo
Hi Users, 
I am running ONE 3.6 and would like to be able to run a combination of shared 
storage (via iSCSI) and local storage (to take advantage of local disk 
performance for certain applications). My question is related to the local 
storage aspect. 

From what I've seen, I can use a local datastore and the ssh TM to accomplish 
local storage. The drawback that I see is that I need 2x the amount of disk 
space because I need storage for the permanent image on the controller node, 
then storage on the local host for the running image when it is deployed. A 
secondary issue for me is that the images have to be transferred between the 
datastore and the host machine, which will take some time with larger images. 

To get around the problem, I thought I could set the datastore up as a shared 
filesystem, except the sharing would actually be from the host machine to the 
controller machine via NFS. Is there any particular reason(s) that would be a 
bad idea? On the surface it seems like it should work just fine, but I'm 
somewhat new to ONE and want to be sure I'm not going down a bad path since I 
plan to do this with several host machines. 

Thanks in advance for any advice. 

gary 

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Fwd: No space left on device ->when Create Image via Marketplace for CentOS 6.2-kvm using iSCSI (block) datastore

2012-07-27 Thread Gary S. Cuozzo
Assuming there is space left in the LVM VG, you should be able to use lvextend 
to add a bit of extra storage to that volume so the deployment can succeed. 
Something like 'lvextend -L+100M <lv_path>'. 
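A small helper along those lines, to work out how much the LV needs to grow once you know the uncompressed image size (hypothetical names; it only prints the lvextend command rather than running it, so you can eyeball it first):

```shell
# Sketch: given an LV's current size and the size the image actually needs
# (both in MB), print the lvextend command that would close the gap.
# Prints rather than executes, so nothing is changed by running this.
lvextend_cmd() {
    # $1 = LV path, $2 = current LV size in MB, $3 = required size in MB
    deficit=$(( $3 - $2 ))
    if [ "$deficit" -gt 0 ]; then
        echo "lvextend -L+${deficit}M $1"
    else
        echo "# $1 is already large enough"
    fi
}

# Example: lvextend_cmd /dev/vgNebula/lv-one-25 580 700
```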

Hope that helps, 
gary 

- Original Message -

From: "Ruben S. Montero"  
To: "Lawrence Chiong"  
Cc: Users@lists.opennebula.org 
Sent: Friday, July 27, 2012 7:45:24 AM 
Subject: Re: [one-users] Fwd: No space left on device ->when Create Image via 
Marketplace for CentOS 6.2-kvm using iSCSI (block) datastore 

Hi, 


This is a known issue when using the market place with block based datastores, 
see [1]. 


The size of the image is taken from the Content-Length header field (HTTP) of 
the marketplace URL. This size is for the compressed image, so the actual size 
of the block device is not properly configured. Also, the usage quotas will not 
be properly updated... 


Probably we will either set Content-Length to the uncompressed size, or provide 
an additional header for it. The requirement here is not to download the full 
image just to get the size: the download, uncompress, write process is 
"streamed", so we need the size beforehand. 
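To see what size OpenNebula will pick up for a given appliance, you can inspect the Content-Length header yourself. A sketch (the parsing helper is hypothetical; the curl line in the comment assumes network access to the marketplace):

```shell
# Sketch: extract the Content-Length value from HTTP response headers,
# i.e. the size OpenNebula will use when sizing the target block device.
content_length() {
    # Reads raw headers on stdin, prints the Content-Length value.
    awk 'tolower($1) == "content-length:" { gsub(/\r/, "", $2); print $2 }'
}

# Example (network access assumed):
#   curl -sI https://marketplace.c12g.com/appliance/<id>/download | content_length
```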


Cheers 


Ruben 




[1] http://dev.opennebula.org/issues/1345 


On Fri, Jul 27, 2012 at 1:04 AM, Lawrence Chiong < juni...@gmail.com > wrote: 


additional info: 

template]$ sudo lvs 
LV VG Attr LSize Origin Snap% Move Log Copy% Convert 
imgdtastore vgNebula -wi-ao 100.00g 
lv-one-17 vgNebula -wi-ao 40.00m 
lv-one-17-2 vgNebula -wi-ao 40.00m 
lv-one-17-5 vgNebula -wi-ao 40.00m 
lv-one-17-6 vgNebula -wi-ao 40.00m 
lv-one-17-7 vgNebula -wi-ao 40.00m 
lv-one-23 vgNebula -wi-ao 40.00m 
lv-one-24 vgNebula -wi-ao 40.00m 
lv-one-25 vgNebula -wi-ao 580.00m 
rootvol vgroot -wi-ao 49.97g 

There's NO error when I created an image for ttylinux-kvm via marketplace. 






Hello there, 

Need help about error found below when tried to create an image for CentOS 6.2 
x86_64 +kvm using iSCSI (block) datastore- 

Thu Jul 26 16:45:29 2012 [ImM][E]: cp: Command "eval 
/var/lib/one/remotes/datastore/iscsi/../downloader.sh --md5 
3b05125c401c2e74e8238515ffa3bd6b 
https://marketplace.c12g.com/appliance/4fc76a938fb81d351702/download - | 
ssh 192.168.0.20 sudo dd of=/dev/vgNebula/lv-one-25 bs=64k" failed: dd: writing 
`/dev/vgNebula/lv-one-25': No space left on device 

Your help is very much appreciated. 

Thank you. 


___ 
Users mailing list 
Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 







-- 
Ruben S. Montero, PhD 
Project co-Lead and Chief Architect 
OpenNebula - The Open Source Solution for Data Center Virtualization 
www.OpenNebula.org | rsmont...@opennebula.org | @OpenNebula 

___ 
Users mailing list 
Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org