Hi Bill,

Thank you very much for sharing! I found your setup really interesting :)

Ruben

On Wed, Apr 10, 2013 at 3:04 PM, Campbell, Bill
<bcampb...@axcess-financial.com> wrote:
> I know this is WAY delayed, but regarding your Ceph question:
>
> OpenNebula is capable of doing all of your RBD image management 
> (creation/deletion/etc.), so you shouldn't need to manually create any images 
> on the back end, and you definitely do not need to use CephFS (not really 
> ready for production use yet anyway).
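>
> Creating the image datastore itself is just a short template.  A minimal 
> sketch, along the lines of the 4.0 Ceph datastore documentation (the names 
> here are placeholders):
>
>     $ cat ceph.ds
>     NAME      = "cephds"
>     DS_MAD    = ceph
>     TM_MAD    = ceph
>     DISK_TYPE = RBD
>     POOL_NAME = one
>     $ onedatastore create ceph.ds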
>
> In our particular setup we do the following:
>
>
> * We change the 'system' datastore to use the 'SSH' transfer manager, so we 
> don't have to worry about linking up the OpenNebula system to the Ceph 
> cluster (we use a separate 10Gb network for storage).  We have to modify the 
> premigrate/postmigrate scripts for the SSH transfer manager to ensure VM 
> deployment information is copied to each KVM host (else migrations fail).
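>
> The change itself is just an update to the system datastore's template 
> (datastore 0 by default), setting the transfer manager in the editor that 
> opens:
>
>     $ onedatastore update 0
>     TM_MAD = "ssh"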
>
> * We have a Ceph cluster created, and we copy the ceph.conf and ceph.keyring 
> to the /etc/ceph directory of every node in the cluster (minus the OpenNebula 
> system), ensuring that the oneadmin user you create has at a minimum read 
> permissions on the keyring (we change ownership to root.oneadmin and apply 
> 640 permissions).  I know a feature is on the roadmap for 4.2 to make the 
> Ceph integration more secure (no longer relying on the keyring being present 
> on each node, with access instead defined directly within each Libvirt 
> deployment file).
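>
> Roughly (the host name is just an example):
>
>     $ scp /etc/ceph/ceph.conf /etc/ceph/ceph.keyring root@kvm1:/etc/ceph/
>     $ ssh root@kvm1 'chown root.oneadmin /etc/ceph/ceph.keyring && \
>           chmod 640 /etc/ceph/ceph.keyring'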
>
> * We have a dedicated system for image management that OpenNebula connects 
> to (we've found that connecting directly to a monitor sometimes causes 
> issues when dealing with large images; nothing catastrophic, but the mon 
> daemon sometimes stops and restarts.  Still chasing that one down.), and we 
> define that dedicated system in the ceph.conf file of the DS mad.
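>
> In practice that means pointing the driver at the dedicated host in the 
> driver's config on the frontend.  The path and variable below are from our 
> setup, so treat them as an example rather than gospel:
>
>     $ cat /var/lib/one/remotes/datastore/ceph/ceph.conf
>     HOST="img-mgmt.example.com"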
>
> * A oneadmin account is created as per documentation on the OpenNebula 
> system, the KVM Hosts, and the dedicated image management system.  All SSH 
> keys are transferred to this dedicated system as well.
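>
> The SSH side is the usual passwordless-key arrangement, e.g. (host names 
> are placeholders):
>
>     $ ssh-keygen -t rsa              # as oneadmin, empty passphrase
>     $ ssh-copy-id oneadmin@kvm1      # repeat for each KVM host
>     $ ssh-copy-id oneadmin@img-mgmt  # and the image management system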
>
> * The default storage pool is 'one', so if you create a datastore without 
> defining a pool name, just create a pool named 'one' in the Ceph cluster 
> and you should be off to the races.  The nice thing about the Ceph
> implementation in OpenNebula is that you can define a pool name for each 
> datastore created, so that way when you start using larger deployments of 
> OpenNebula/Ceph, you can segment off your VMs to different Datastores/Pools 
> (for instance, if you have an array/pool with dedicated SSDs for extremely 
> fast I/O, you can have that in the same cluster, just defined in a separate 
> Datastore/Pool, with your CRUSH map configured accordingly).  Ceph is very 
> extensible.
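>
> For example (the PG count is just a starting point; size it to your 
> cluster):
>
>     $ ceph osd pool create one 128
>     $ ceph osd pool create ssd 128   # SSD-backed pool, placed via CRUSH
>
> A second datastore with POOL_NAME = ssd in its template would then put its 
> images on the fast pool.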
>
>
> We've been using Ceph with OpenNebula since 3.8.3 with our own driver and it 
> has been working very well for us.  The testing of the integration with 4.0 
> has been going well for us (we made some slight modifications that we 
> submitted to take advantage of RBD Format 2 images for copy-on-write cloning 
> of non-persistent images) and so far have only found a couple of issues 
> (which are already resolved).
>
> If you have any additional questions please don't hesitate to ask!
>
>
> ----- Original Message -----
> From: "Jaime Melis" <jme...@opennebula.org>
> To: "Jon" <three1...@gmail.com>
> Cc: "Users OpenNebula" <users@lists.opennebula.org>
> Sent: Wednesday, April 10, 2013 6:37:30 AM
> Subject: Re: [one-users] [Docs] Problem with OpenVSwitch and some questions 
> about Ceph
>
> Hi Jon
>
> sorry for the delay in answering.
>
> You can always contribute to the community wiki: http://wiki.opennebula.org/
>
> thanks!
>
> cheers,
> Jaime
>
>
> On Sun, Mar 31, 2013 at 10:11 PM, Jon <three1...@gmail.com> wrote:
>
>
>
> Hello Jaime,
>
> Thanks for the clarification, I didn't realize that install_novnc.sh being in 
> two different locations was a bug.
>
> I hadn't installed OpenNebula on anything other than a Debian / Ubuntu 
> system, so this makes sense.
>
> Is there any user-editable documentation we can contribute to? Even if not 
> the documentation itself, perhaps a wiki? I didn't see one, but that 
> doesn't mean there isn't one.
>
> Thanks,
> Jon A
>
>
> On Thu, Mar 28, 2013 at 2:56 PM, Jaime Melis <jme...@opennebula.org> wrote:
>
>
>
> 3) Documentation
>
> The fact that install_novnc.sh is being installed to two separate locations 
> is a bug that has already been fixed.
>
> With regard to the "/usr/share/opennebula" issue, I'm afraid that's because 
> of the Debian/Ubuntu packaging policy. For other distros (CentOS and openSUSE 
> for example) the path is the one that appears in the documentation.
>
> To make things easier for users, I think this should be reflected in the 
> README.debian file and in the platform notes in the documentation. So thanks 
> a lot for pointing this out.
>
> I created this feature request to track the problem:
> http://dev.opennebula.org/issues/1844
>
> cheers,
> Jaime
>
> On Thu, Mar 28, 2013 at 1:06 PM, Jon <three1...@gmail.com> wrote:
>
>
> Hello All,
>
> I've just installed OpenNebula 3.9.80 and I have to say this is
> amazing. Everything works so smoothly.
>
> Anyway, down to business.
>
> OpenVSwitch:
>
> I've installed and configured OpenVSwitch, and I'm able to manually add
> the OVS config using libvirt and then launch a VM:
>
>>> <interface type='bridge'>
>>>   <source bridge='ovsbr0'/>
>>>   <virtualport type='openvswitch'/>
>>>   <model type='virtio'/>
>>>   <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
>>> </interface>
>
> This creates the device in OVS:
>
>>> system@ovsbr0:
>>> lookups: hit:1346333 missed:46007 lost:0
>>> flows: 8
>>> port 0: ovsbr0 (internal)
>>> port 1: eth0
>>> port 2: br0 (internal)
>>> port 7: vnet0
>
>
> However, when I attempt to create a virtual network without assigning
> an IP and instantiate the template, I get this error:
>
>>> [TemplateInstantiate] Error allocating a new virtual machine. Cannot get 
>>> IP/MAC lease from virtual network 0.
>
> The template of the virtual network is:
>
>>> oneadmin@loki:~$ onevnet show testnet1
>>> VIRTUAL NETWORK 0 INFORMATION
>>> ID : 0
>>> NAME : testnet1
>>> USER : oneadmin
>>> GROUP : oneadmin
>>> CLUSTER : -
>>> TYPE : FIXED
>>> BRIDGE : ovsbr0
>>> VLAN : No
>>> USED LEASES : 0
>>>
>>> PERMISSIONS
>>> OWNER : um-
>>> GROUP : ---
>>> OTHER : ---
>>>
>>> VIRTUAL NETWORK TEMPLATE
>>>
>>>
>>> VIRTUAL MACHINES
>>>
>
> If I add an IP to the vnet, I get the following template and error 
> logs (full VM log attached; I think I've identified the relevant 
> line):
>
>>> Thu Mar 28 10:34:05 2013 [VMM][E]: post: Command "sudo /usr/bin/ovs-ofctl 
>>> add-flow ovsbr0 
>>> in_port=,dl_src=02:00:44:47:83:43,priority=40000,actions=normal" failed.
>
>>> oneadmin@loki:~$ onevnet show testnet1
>>> VIRTUAL NETWORK 0 INFORMATION
>>> ID : 0
>>> NAME : testnet1
>>> USER : oneadmin
>>> GROUP : oneadmin
>>> CLUSTER : -
>>> TYPE : FIXED
>>> BRIDGE : ovsbr0
>>> VLAN : No
>>> USED LEASES : 1
>>>
>>> PERMISSIONS
>>> OWNER : um-
>>> GROUP : ---
>>> OTHER : ---
>>>
>>> VIRTUAL NETWORK TEMPLATE
>>>
>>>
>>> USED LEASES
>>> LEASE=[ MAC="02:00:44:47:83:43", IP="192.168.0.2", 
>>> IP6_LINK="fe80::400:44ff:fe47:8343", USED="1", VID="7" ]
>>>
>>> VIRTUAL MACHINES
>>>
>>> ID USER GROUP NAME STAT UCPU UMEM HOST TIME
>>> 7 oneadmin oneadmin template-4-7 fail 0 0K 0d 00h00
>
>>> root@loki:~# cat /var/log/openvswitch/ovs-vswitchd.log
>>> Mar 28 10:34:04|00081|bridge|INFO|created port vnet1 on bridge ovsbr0
>>> Mar 28 10:34:07|00082|netdev_linux|WARN|ethtool command ETHTOOL_GSET on 
>>> network device vnet1 failed: No such device
>>> Mar 28 10:34:07|00083|netdev_linux|INFO|ioctl(SIOCGIFHWADDR) on vnet1 
>>> device failed: No such device
>>> Mar 28 10:34:07|00084|netdev|WARN|failed to get flags for network device 
>>> vnet1: No such device
>>> Mar 28 10:34:07|00085|netdev|WARN|failed to retrieve MTU for network device 
>>> vnet1: No such device
>>> Mar 28 10:34:07|00086|netdev|WARN|failed to get flags for network device 
>>> vnet1: No such device
>>> Mar 28 10:34:07|00087|bridge|INFO|destroyed port vnet1 on bridge ovsbr0
>
> I attempted to run the command myself; I never set a password for the 
> oneadmin user, but I don't think it's a permissions / sudo access problem.
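>
> (The failed command above has an empty in_port=; filling it in by hand with 
> the port number OVS reports for the vnet device, e.g. port 7 for vnet0 in 
> my dump above, would look something like:
>
>>> sudo ovs-vsctl get Interface vnet0 ofport
>>> sudo ovs-ofctl add-flow ovsbr0 "in_port=7,dl_src=02:00:44:47:83:43,priority=40000,actions=normal"
>
> but I'm not sure why OpenNebula leaves it empty.)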
>
> Not really sure where to look next. Any ideas are appreciated.
>
> CEPH:
>
> I'm trying to use a Ceph datastore backed by RBD instead of CephFS (it's 
> an option).  When I try to create a Ceph datastore with the RBD type, it 
> goes into an "Error" state, but I'm not sure where to look for relevant 
> logs; oned.log didn't seem to have anything, or maybe I'm just grepping 
> for the wrong string.
>
> As a workaround, I have been creating the directory, creating the RBD, and 
> then manually mounting it.  This works in my test environment but doesn't 
> seem very scalable.  How are others using Ceph?
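>
> For reference, my manual steps look roughly like this (the pool/image names 
> and datastore path are just my test values):
>
>>> rbd create one/testimg --size 10240
>>> sudo rbd map one/testimg
>>> sudo mkfs.ext4 /dev/rbd/one/testimg
>>> sudo mount /dev/rbd/one/testimg /var/lib/one/datastores/100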
>
> Documentation:
>
> I've noticed some errors in the documentation, namely the location of 
> the install scripts.
>
> The docs state they are in:
>>> /usr/share/one/install_gems
>>> /usr/share/one/sunstone/install_novnc.sh
>
> However, I found them in:
>>> /usr/share/opennebula/install_gems
>>> /usr/share/opennebula/install_novnc.sh
>>> /usr/share/opennebula/sunstone/install_novnc.sh
>
> Is there some repository of the documentation somewhere that we can
> contribute to?
> It's a small thing, but when I'm going through the instructions I like to 
> copy / paste.  I figured it out, but if it caused me problems, it might 
> cause problems for others too.
>
> Thanks again, I can't wait for the final release of OpenNebula!
>
> Best Regards,
> Jon A
>
>
>
>
>
> --
> Jaime Melis
> Project Engineer
> OpenNebula - The Open Source Toolkit for Cloud Computing
> www.OpenNebula.org | jme...@opennebula.org
>
>
>
>
> --
> Jaime Melis
> Project Engineer
> OpenNebula - The Open Source Toolkit for Cloud Computing
> www.OpenNebula.org | jme...@opennebula.org
>

--
Ruben S. Montero, PhD
Project co-Lead and Chief Architect
OpenNebula - The Open Source Solution for Data Center Virtualization
www.OpenNebula.org | rsmont...@opennebula.org | @OpenNebula
_______________________________________________
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
