I'm trying to add our high-performance cluster to our OpenNebula 3.0 system
to run VMs on the cluster. Due to security requirements on the cluster, we
have had to write several wrapper scripts for things such as virsh,
ovs-vsctl (for Open vSwitch), and brctl to ensure that someone does not
misconfigure a cluster node and crash the system.

Therefore, the current set of OpenNebula scripts in /var/remotes will not
work on the cluster. The quick-and-dirty solution we came up with is to keep
a second set of the remotes scripts that use our wrappers in
/var/remotes_cluster. Whenever the remote scripts are copied to or updated
on a cluster node host, we would use this directory instead of /var/remotes.

I've been looking through the source code, trying to figure out where I
could add a check: if the hostname belongs to our cluster, use the
remotes_cluster directory; otherwise, use the normal remotes directory. I
thought update_remotes() in either one/libs/mads/one_im_exec.rb or
one/lib/ruby/CommandManager.rb might be the right place, but I'm having
trouble with that. I added a command to that method to create a simple test
directory, just to see whether it was the correct place in the code, but I
never see the directory created on my nodes. Any ideas? Again, we're using
OpenNebula 3.0.

Thank you,

Greg Stabler
Computer Science/MS Candidate
School of Computing, McAdams 120
Clemson University, Clemson, SC 29634
gsta...@clemson.edu
_______________________________________________
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
