Hi Jaime,
Yes, I had already thought of contributing the drivers and would be happy to do 
so. My scripts are sort of "brute forced" and not as elegant as the ONE drivers, 
but I'm sure they can be cleaned up pretty easily. My goal at the time was to 
test out a theory, and I'm sure improvements can be made for error checking and 
other things.

Let me know the best way to contribute. If somebody from ONE were to review the 
scripts and suggest changes, I'm happy to work with them to make them better.

Thanks,
gary


----- Original Message -----

From: "Jaime Melis" <jme...@opennebula.org>
To: "Gary S. Cuozzo" <g...@isgsoftware.net>
Cc: users@lists.opennebula.org
Sent: Thursday, August 16, 2012 7:23:41 AM
Subject: Re: [one-users] Storage architecture question

Hi Gary,


sounds great, could you contribute the code at some point?


cheers,
Jaime


On Mon, Aug 6, 2012 at 2:52 PM, Gary S. Cuozzo < g...@isgsoftware.net > wrote:




Here's an update on this thread...

So I did originally set up a "local" datastore by using NFS from the host to the 
controller as a shared DS. It worked fine and performed as expected. I was able 
to get the speed of local storage without the bloated capacity requirements and 
the deployment delays caused by copying large images across the network. I also 
tested rebooting the controller node and the host node, and the NFS share came 
back up reliably.
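For anyone curious what that host-to-controller NFS arrangement looks like, a 
minimal sketch follows. The hostnames, datastore ID, and paths are invented for 
illustration; the real values depend on your deployment.

```shell
# On the VM host (hypothetical datastore ID 100), add to /etc/exports:
#   /var/lib/one/datastores/100  frontend.example.com(rw,no_root_squash)
exportfs -ra

# On the front-end/controller, mount the host's export at the same path
# so ONE sees it as a shared datastore:
mount -t nfs host1.example.com:/var/lib/one/datastores/100 \
      /var/lib/one/datastores/100
```

Note the export direction is reversed from the usual shared-DS setup: the VM 
host exports its local disk and the front-end mounts it.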

Having to set up the NFS share was the main drawback. I figured I could get 
around that by writing a custom DS/TM driver, and that is the route I took. Over 
the weekend I created a 'localfs' set of drivers; they seem to work great and 
don't require me to set up the NFS share. This seems to give me exactly what I 
was looking for.

The only requirement for my driver is that I create the datastore first so that 
I can get its ID. Then I mount my local disks (I use an LVM volume) at the 
correct mount point according to the DS ID.
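As a concrete illustration of that manual step (the datastore ID, volume group, 
and device name below are hypothetical), it amounts to something like:

```shell
# After `onedatastore create` reports the new datastore's ID (say, 100),
# mount the local LVM volume at the path ONE derives from that ID:
mkdir -p /var/lib/one/datastores/100
mount /dev/vg0/localds /var/lib/one/datastores/100
```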

I use a DS template variable to specify the hostname of the host so that I can 
use the driver on multiple hosts. The images get created as 
hostname:/path/to/image.
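A datastore template for such a driver might look roughly like this. DS_MAD and 
TM_MAD are the standard template attributes; the driver name and the HOST 
variable are Gary's custom additions, so the exact names here are guesses:

```
NAME   = localfs_host1
DS_MAD = localfs
TM_MAD = localfs
HOST   = host1.example.com
```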

So far so good.

Just FYI,
gary




From: "Ruben S. Montero" < rsmont...@opennebula.org >

To: "Gary S. Cuozzo" < g...@isgsoftware.net >
Sent: Thursday, August 2, 2012 12:46:28 PM

Subject: Re: [one-users] Storage architecture question

OK, keep us updated with the performance of your final solution




Good Luck


Ruben


On Thu, Aug 2, 2012 at 5:58 PM, Gary S. Cuozzo < g...@isgsoftware.net > wrote:



Hi Ruben,
Thank you for the reply & information.

I had previously thought about a setup similar to what you have outlined. One 
drawback I was trying to work around is that I would be duplicating storage 
requirements, since I would have to store images on both my SAN/NAS and the 
local system. As my SAN/NAS is fully mirrored, I would actually incur a fairly 
substantial increase in the overhead per GB. I was trying to avoid that.

My other main concern was the time to copy the image files to/from the hosts. A 
few of the images will be multiple TB, and I think the copies will take a long 
time, even over gigabit links.

I do intend to create a separate cluster and datastore for these few hosts. 
I'll just have to try out a few different setups and see what works best. 

You guys have given me some good ideas, thank you.

gary



From: "Ruben S. Montero" < rsmont...@opennebula.org >

To: "Gary S. Cuozzo" < g...@isgsoftware.net >
Cc: users@lists.opennebula.org
Sent: Tuesday, July 31, 2012 5:44:39 PM


Subject: Re: [one-users] Storage architecture question

Hi,


You can set the Images Datastore of type FS to use the shared TM and your fast 
NAS, but set the system datastore to use the ssh drivers [1]. This will copy the 
images from the datastore that uses NFS to the local storage area.
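In template terms, that combination might look roughly like the following. The 
datastore name is invented for the example; `fs`, `shared`, and `ssh` are the 
standard 3.6 driver names:

```
# Images datastore backed by the NAS, using the shared TM
NAME   = nas_images
DS_MAD = fs
TM_MAD = shared

# System datastore switched to ssh transfers
# (applied with e.g. `onedatastore update 0`)
TM_MAD = ssh
```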


Note 1: This will only work for non-persistent images. Persistent images will 
be linked and thus used directly from the NAS server.


Note 2: you cannot mix both setups in the same cluster. That is, if your system 
datastore is shared, the ssh transfer will copy the images to the NFS volume on 
the remote host.


You can:


1.- Create a dedicated cluster for these images that uses ssh as the system 
datastore (so the hosts do not mount the NFS export) [1]. You just need to add a 
few hosts and the datastore holding the I/O-demanding images. The 3.6 scheduler 
will only use the resources of that cluster for those VMs [2].
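With the 3.6 CLI, the cluster setup in option 1 would be along these lines 
(cluster, host, and datastore names invented for the example):

```shell
onecluster create io-cluster
onecluster addhost io-cluster host1
onecluster adddatastore io-cluster local_io_ds
```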


2.- Modify the clone/mv/mvds scripts of the shared TM so they copy some of your 
images to a local disk and link them to the path expected by OpenNebula.
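The core of option 2 can be sketched in a few lines of shell. This is only an 
illustration of the copy-then-symlink idea: the real TM scripts receive 
OpenNebula's own argument convention, and the paths below are made-up 
stand-ins, not the actual datastore layout.

```shell
#!/bin/sh
# Sketch: copy an image onto fast local disk, then leave a symlink
# at the path where OpenNebula expects the VM disk to appear.

SRC=/tmp/ds_image.img          # stand-in for the image in the shared DS
LOCAL_DIR=/tmp/local_scratch   # stand-in for the fast local disk
DST=/tmp/vm_disk.0             # stand-in for the path ONE expects

# fake "datastore image" so the sketch is self-contained
echo "image-data" > "$SRC"
mkdir -p "$LOCAL_DIR"

# copy onto the local disk instead of the NFS-backed system DS...
cp "$SRC" "$LOCAL_DIR/vm_disk.0"
# ...and link it back to the expected path
ln -sf "$LOCAL_DIR/vm_disk.0" "$DST"
```

The VM then does its I/O against the local copy, while OpenNebula still finds 
the disk at the path it manages.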




Cheers


Ruben


[1] http://opennebula.org/documentation:rel3.6:system_ds#using_the_ssh_transfer_driver

[2] http://opennebula.org/documentation:rel3.6:cluster_guide




On Mon, Jul 30, 2012 at 7:36 PM, Gary S. Cuozzo < g...@isgsoftware.net > wrote:

Hi Tino,
Thanks for the reply.

Yes, I think you understand correctly. My goal is to be able to utilize storage 
local to a particular VM host node without incurring the overhead of duplicated 
storage on the controller node and the transfer time from controller to VM 
host.

I do understand that the images will only be accessible from the particular VM 
host they reside on, but that is OK; it's the trade-off for local disk 
performance. I have a great iSCSI/NFS SAN which is used for shared storage, but 
it will never offer the same level of performance as local storage. So I'm 
looking to have that local option for the few I/O-intensive applications that 
require it.

I have not actually had the chance to try it out yet, but I think it will give 
me what I'm looking for.

Thanks again,
gary




----- Original Message -----
From: "Tino Vazquez" < tin...@opennebula.org >
To: "Gary S. Cuozzo" < g...@isgsoftware.net >
Cc: users@lists.opennebula.org
Sent: Monday, July 30, 2012 12:40:06 PM
Subject: Re: [one-users] Storage architecture question

Dear Gary,

I am not sure I understand your desired set-up 100%, but if I grasped
it correctly, I think the problem you may find is that the images would
only be local to the node that is exporting the NFS share. Otherwise I
think it will work as you expect.

Regards,

-Tino

--
Constantino Vázquez Blanco, MSc
Project Engineer
OpenNebula - The Open-Source Solution for Data Center Virtualization
www.OpenNebula.org | @tinova79 | @OpenNebula


On Fri, Jul 27, 2012 at 11:44 PM, Gary S. Cuozzo < g...@isgsoftware.net > wrote:
> Hi Users,
> I am running ONE 3.6 and would like to be able to run a combination of
> shared storage (via iSCSI) and local storage (to take advantage of local 
> disk performance for certain applications). My question is related to the
> local storage aspect.
>
> From what I've seen, I can use a local datastore and the ssh TM to
> accomplish local storage. The drawback that I see is that I need 2x the
> amount of disk space because I need storage for the permanent image on the
> controller node, then storage on the local host for the running image when
> it is deployed. A secondary issue for me is that the images have to be
> transferred between the datastore and the host machine, which will take some
> time with larger images.
>
> To get around the problem, I thought I could set the datastore up as a
> shared filesystem, except the sharing would actually be from the host
> machine to the controller machine via NFS. Is there any particular
> reason(s) that would be a bad idea? On the surface it seems like it should
> work just fine, but I'm somewhat new to ONE and want to be sure I'm not
> going down a bad path since I plan to do this with several host machines.
>
> Thanks in advance for any advice.
>
> gary
>
>
> _______________________________________________
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>


--
Ruben S. Montero, PhD
Project co-Lead and Chief Architect
OpenNebula - The Open Source Solution for Data Center Virtualization
www.OpenNebula.org | rsmont...@opennebula.org | @OpenNebula



















--
Jaime Melis
Project Engineer
OpenNebula - The Open Source Toolkit for Cloud Computing
www.OpenNebula.org | jme...@opennebula.org

