Re: [Openstack] Installing Dashboard standalone
Hi Guillermo,

Would not modifying local_settings.py and changing OPENSTACK_HOST to reference a node other than 127.0.0.1 resolve the issue?

Cheers

David

On Thu, Dec 20, 2012 at 1:49 AM, Guillermo Alvarado guillermoalvarad...@gmail.com wrote:

BTW, I am trying to use my own version of openstack-dashboard/horizon, because I made some modifications to the GUI. My version is based on the Essex release. Can anybody help me with this?

2012/12/19 Guillermo Alvarado guillermoalvarad...@gmail.com

I installed the openstack-dashboard but I have this error in the Apache logs:

ImproperlyConfigured: Error importing middleware horizon.middleware: cannot import name users

2012/12/19 Guillermo Alvarado guillermoalvarad...@gmail.com

Hi everyone, I want to install openstack-dashboard/horizon standalone; I mean, I want to have a node for compute, a node for the controller and a node for the dashboard. How can I achieve this? Thanks in advance, Best Regards.

___
Mailing list: https://launchpad.net/~openstack
Post to     : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp
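David's suggestion amounts to editing a couple of lines in local_settings.py. A sketch of the relevant fragment, based on the Essex-era file layout; the controller address 192.0.2.10 is an illustrative assumption, not from the thread:

```python
# Hypothetical local_settings.py fragment for a standalone dashboard
# node; 192.0.2.10 stands in for the controller/keystone node
# (it was 127.0.0.1 on an all-in-one install).
OPENSTACK_HOST = "192.0.2.10"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"

print(OPENSTACK_KEYSTONE_URL)
```

After changing the host, restarting Apache picks up the new settings; the `cannot import name users` error, by contrast, usually points at a version mismatch between the modified dashboard and the installed horizon package rather than at this setting.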
Re: [Openstack] two or more NFS / gluster mounts
Hi Andrew,

Is this for glance or nova? For nova, change:

state_path = /var/lib/nova
lock_path = /var/lib/nova/tmp

in your nova.conf. For glance I'm unsure; it may be easier to just mount gluster right onto /var/lib/glance (similarly, you could do the same for /var/lib/nova).

And just my £0.02: I've had no end of problems getting gluster to play nice on small POC clusters (3-5 nodes; I've tried NFS, tried glusterfs, tried 2-replica N-distribute setups, with many a random glusterfs death), and as such I have opted for using Ceph. Ceph's RADOS can also be used with cinder, from the brief reading I've been doing into it.

Cheers

David

On Thu, Dec 20, 2012 at 1:53 PM, Andrew Holway a.hol...@syseleven.de wrote:

Hi, If I have /nfs1mount and /nfs2mount, or /nfs1mount and /glustermount, can I control where openstack puts the disk files? Thanks, Andrew
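Relocating nova's state onto a shared mount would look roughly like this in nova.conf; the /mnt/glusterfs mount point is an illustrative assumption, not from the thread:

```ini
# Illustrative Essex-era nova.conf fragment; /mnt/glusterfs/nova is an
# assumed mount point for the shared filesystem.
state_path = /mnt/glusterfs/nova
lock_path = /mnt/glusterfs/nova/tmp
```

Instance disks then land under `<state_path>/instances` by default, so that directory must exist and be writable by the nova user before nova-compute is restarted.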
Re: [Openstack] two or more NFS / gluster mounts
Hi Andrew,

An interesting idea, but I am unaware if nova supports storage affinity in any way; it does support host affinity, IIRC. As a kludge you could have, say, some nova compute nodes using your slow mount and reserve the fast-mount nodes as required, perhaps even defining separate zones for deployment?

Cheers

David

On Thu, Dec 20, 2012 at 2:53 PM, Andrew Holway a.hol...@syseleven.de wrote:

Hi David, It is for nova. I'm not sure I understand. I want to be able to say to openstack: "openstack, please install this instance (A) on this mountpoint and please install this instance (B) on this other mountpoint." I am planning on having two NFS/Gluster-based stores, a fast one and a slow one. I probably will not want to say please every time :) Thanks, Andrew
Re: [Openstack] two or more NFS / gluster mounts
I may of course be entirely wrong :) and it would be cool if this is achievable / on the roadmap. At the very least, if this is not already in discussion, I'd raise it on Launchpad as a potential feature.

On Thu, Dec 20, 2012 at 3:19 PM, Andrew Holway a.hol...@syseleven.de wrote:

Ah, shame. You can specify different storage domains in oVirt.
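For what it's worth, the zone-based kludge could be driven from the client side. This is an assumption about post-Essex novaclient syntax (zone names and IDs are illustrative placeholders, not from the thread):

```shell
# Boot onto compute nodes backed by the fast mount (zone name assumed
# to have been configured on those nodes beforehand):
nova boot --image <image-id> --flavor m1.small \
     --availability-zone fast-storage instance-a

# And onto compute nodes backed by the slow mount:
nova boot --image <image-id> --flavor m1.small \
     --availability-zone slow-storage instance-b
```

Each zone's nodes would mount only their designated store, so instance placement indirectly selects the storage tier.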
Re: [Openstack] [swift] RAID Performance Issue
Hi Zang,

As JuanFra points out, there's not much sense in using Swift on top of RAID, as Swift handles replication itself. Extending on this, RAID introduces a write penalty (http://theithollow.com/2012/03/21/understanding-raid-penalty/), which in turn leads to performance issues; refer to the link for the write penalty per configuration.

As I recall (though this was from way back in October 2010), the suggested method of deploying Swift is onto standalone XFS drives, leaving Swift to handle the replication and distribution.

Cheers

David

On Wed, Dec 19, 2012 at 9:12 AM, JuanFra Rodriguez Cardoso juanfra.rodriguez.card...@gmail.com wrote:

Hi Zang: Basically, it makes no sense to use Swift on top of RAID, because Swift already delivers a replication schema. Regards, JuanFra.

2012/12/19 Hua ZZ Zhang zhu...@cn.ibm.com

Hi, I have read the admin document of Swift and found a recommendation against using RAID 5 or 6, because Swift performance degrades quickly with it. Can anyone explain why this happens? If the RAID is done by a hardware RAID controller, will the performance issue still exist? Can anyone share this kind of experience of using RAID with Swift? Any suggestions appreciated. -Zhang Hua
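The write penalty in the linked article reduces to simple arithmetic. A quick sketch; the drive count and per-drive IOPS below are illustrative numbers, not from the thread:

```python
# Effective write IOPS = raw IOPS / write penalty, where the penalty is
# roughly 1 for RAID 0, 2 for RAID 1/10, 4 for RAID 5, 6 for RAID 6
# (each logical write costs that many physical I/Os).
PENALTY = {"raid0": 1, "raid10": 2, "raid5": 4, "raid6": 6}

def effective_write_iops(drives: int, iops_per_drive: int, level: str) -> int:
    return drives * iops_per_drive // PENALTY[level]

for level in ("raid0", "raid10", "raid5", "raid6"):
    # 12 spindles at 150 IOPS each -> 1800 raw IOPS
    print(level, effective_write_iops(12, 150, level))
```

RAID 6 here delivers a sixth of the raw write IOPS, which is why stacking Swift's own replication on top of it buys redundancy twice while paying the penalty on every object write.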
Re: [Openstack] A Step-by-Step Guide to Deploying OpenStack on CentOS Using the KVM Hypervisor and GlusterFS Distributed File System
Hi Anton,

Thanks for this; having had a quick read through, it looks great. I'd be interested to know what sort of performance you see with gluster providing a replicated file system; have you been able to do some high-I/O burn-in tests on guests?

Thanks

David

On Fri, Aug 17, 2012 at 9:16 AM, Anton Beloglazov anton.belogla...@gmail.com wrote:

Hi All, Other people from the CLOUDS Lab (http://www.cloudbus.org/) and I have just completed writing a step-by-step guide to deploying OpenStack on multiple nodes with CentOS 6.3 using KVM and GlusterFS, based on our experience. Each step is implemented as a separate shell script, which allows going slowly to understand every installation step. I thought it might be useful for some people; therefore, I'm announcing it in this mailing list.

The guide is available as a PDF: https://github.com/beloglazov/openstack-centos-kvm-glusterfs/raw/master/doc/openstack-centos-kvm-glusterfs-guide.pdf

All the shell scripts are on GitHub: https://github.com/beloglazov/openstack-centos-kvm-glusterfs

Best regards, Anton Beloglazov
Re: [Openstack] A Step-by-Step Guide to Deploying OpenStack on CentOS Using the KVM Hypervisor and GlusterFS Distributed File System
Hi Anton,

For a straight gluster vs. native comparison, sysbench (http://sysbench.sourceforge.net/docs/#fileio_mode, also available from EPEL for EL6: http://koji.fedoraproject.org/koji/buildinfo?buildID=262308) may be able to give some insight into guest I/O performance over an extended period of time. Potentially you could then look at concurrency by running sysbench on multiple guests, to gauge the degradation of performance due to concurrent I/O across nodes (if any exists); here I'd be particularly curious whether high I/O on one compute node caused, due to replication, a performance hit on another node.

Regards

David

On Fri, Aug 17, 2012 at 9:45 AM, Anton Beloglazov anton.belogla...@gmail.com wrote:

Hi David, I haven't had a chance to run any performance tests yet. What kind of tests would you suggest? Thanks, Anton
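For reference, a fileio run with sysbench 0.4 (the version in EPEL at the time) looks roughly like the following; the file size and duration are illustrative and should be tuned so the working set exceeds guest RAM:

```shell
# Prepare test files larger than guest RAM, so the page cache
# cannot hide the storage backend's behaviour:
sysbench --test=fileio --file-total-size=8G prepare

# Random read/write mix for 5 minutes, unlimited request count:
sysbench --test=fileio --file-total-size=8G --file-test-mode=rndrw \
         --max-time=300 --max-requests=0 run

# Remove the test files afterwards:
sysbench --test=fileio --file-total-size=8G cleanup
```

Running the same `run` step simultaneously inside several guests on different compute nodes would surface the cross-node replication interference David asks about.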
Re: [Openstack] Time for a UK Openstack User Group meeting ?
As someone who is a way north of London: would something like a Google+ Hangout be possible, to tie in those interested but unable to attend?

On Jul 4, 2012 4:58 PM, Day, Phil philip@hp.com wrote:

Hi All,

I'm thinking it's about time we had an OpenStack User Group meeting in the UK, and would be interested in hearing from anyone interested in attending, presenting, helping to organise, etc.

London would seem the obvious choice, but we could also host here in HP Bristol if that works for people.

Reply here or e-mail me directly (phil@hp.com), and if there's enough interest I'll pull something together.

Phil Day, Compute Tech Lead, HP Cloud Services
Re: [Openstack] Openstack and Google Compute Engine
I for one am waiting on my invite to be processed; then I will be looking at things like Aeolus to run hybrid cloud setups (I'll be contributing code to that effect). That said, remember that Google Compute Engine (as far as I am aware) is not open source, i.e. you cannot download the engine and run a private Google Compute cloud. I am, for example, using openstack to run internal private clouds with multiple end hosting providers (hybrid cloud), of which Google will be one.

I think the thing to take away is that Google are a company with vast resources and the ability to deploy datacentres on a whim (search for their shipping-container DCs); if you have that kind of budget, then I can't see any reason a similarly sized openstack deployment wouldn't be possible. Also, if you have that kind of budget, I'll take 2 DCs ... I'm not greedy ;-)

On Jun 30, 2012 9:05 PM, Simon G. semy...@gmail.com wrote:

Hello, I've heard about Google's cloud recently. What do you think about it? Will it be compatible with openstack? Or will openstack be compatible with them? Does anyone know anything about their solution? Is it purely their own technology, or were they perhaps inspired by openstack or something else? What about scalability? Their test app is really impressive: 600,000 cores. Is that even achievable in openstack? How could I create an environment with so many cores in openstack? I'm waiting for my invitation to Google Compute, but maybe someone has already tested it. Cheers,
Re: [Openstack] MySQL / MariaDB security vulnerability
Thanks Ewan,

Please note my findings on this CVE, and feel free to correct / reply with anything I have missed.

In my tests of this CVE today I found that Percona Server 55-5.5.24 is not vulnerable (http://repo.percona.com/centos/6/os/x86_64/Percona-Server-server-55-5.5.24-rel26.0.256.rhel6.x86_64.rpm), whilst MySQL 5.5.23 is (5.5.23-1 on FC17). As such it appears Percona is not vulnerable to this attack, though I am unsure from which version onward, as the RPM changelog was last updated in Feb 2011 ...

Also, in testing I found that host ACLs can mitigate this issue: to exploit it you must use a valid user@host (unless of course there are wildcards). This therefore assumes that, in a secure setup, the attack must originate from the granted host for the target user.

Cheers

David

On Mon, 2012-06-11 at 19:46 +0100, Ewan Mellor wrote:

Anyone who is using OpenStack with MySQL / MariaDB, please see this _extremely_ dangerous security vulnerability, announced on Saturday: https://community.rapid7.com/community/metasploit/blog/2012/06/11/cve-2012-2122-a-tragically-comedic-security-flaw-in-mysql

Ewan.
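For context, the flaw described in the Rapid7 write-up is that the password check's comparison result was effectively truncated to a signed char, so any non-zero result that is a multiple of 256 reads as "match"; only builds whose optimized memcmp can return values outside -128..127 were affected, which is consistent with some builds (like the Percona one above) not being vulnerable. A toy model of the truncation, not the actual MySQL source:

```python
import ctypes

def vulnerable_match(memcmp_result: int) -> bool:
    # The comparison result is squeezed through a signed char
    # (my_bool in the vulnerable builds); 0 means "password OK".
    truncated = ctypes.c_int8(memcmp_result & 0xFF).value
    return truncated == 0

# A small mismatch value is correctly rejected, but any result that is
# a multiple of 256 truncates to 0 and is accepted:
print(vulnerable_match(1))    # False
print(vulnerable_match(256))  # True
```

With a random comparison result, roughly 1 wrong password in 256 would authenticate, which is why the published exploit simply loops the login attempt.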
Re: [Openstack] question about keystone and dashboard on RHEL6
Diablo (at least for nova, glance and swift) is capable of using keystone; there is, however, a lot of manual configuration needed, something that is on my list to complete when integrating keystone into my 2011.3 deploy, per my blog post. All I can say at the moment is that this is far from trivial, and finding documentation on it can be a problem (if anyone on the list could correct me on this, it would help massively and expedite my documentation of integrating keystone with nova/glance on 2011.3).

Cheers

David

On Wed, Apr 11, 2012 at 4:16 PM, Adam Young ayo...@redhat.com wrote:

On 04/03/2012 09:25 AM, Russell Bryant wrote:

On 04/02/2012 08:44 PM, Xin Zhao wrote:

On 4/2/2012 6:35 PM, Russell Bryant wrote:

On 04/02/2012 03:09 PM, Xin Zhao wrote:

Hello, I am new to OpenStack and trying to install the Diablo release on a RHEL6 cluster. I followed the instructions here: http://fedoraproject.org/wiki/Getting_started_with_OpenStack_Nova The instructions don't mention how to install and configure Keystone and the dashboard services, so I wonder: 1) are these two services available for RHEL6 in the Diablo release? 2) do I need to go to the latest Essex release, and where are the instructions?

The dashboard, Horizon, is not included with the Diablo packages that you find in EPEL6 right now. When we update EPEL6 to Essex, which should be within the next few weeks, Horizon will be included as well.

How about keystone? The instructions here (http://fedoraproject.org/wiki/Getting_started_with_OpenStack_Nova) don't mention how to install and configure keystone, although they tell how to clean it up, which makes me think there is something missing in the earlier sections of the instructions.

Keystone is there. It has already been updated to one of the Essex RCs, actually. As EPEL6 gets updated to Essex, these instructions will become the ones you want to follow, and they include Keystone: https://fedoraproject.org/wiki/Getting_started_with_OpenStack_on_Fedora_17

I don't think that Diablo code is actually capable of using Keystone. Keystone is only required for Essex.
Re: [Openstack] KVM disk performance
Hi Martin,

1. Does /var/lib/nova/instances reside on your SSD? (Just double-checking it's not instead pointing to a normal storage device.)
2. Do you see the expected performance on the host operating system?
3. Could you please provide the complete libvirt .xml file?
4. Could you please provide the operating system and version of the guest?

Cheers

David

On Mon, Mar 26, 2012 at 1:59 PM, Martin van Wilderen - JDN BV mar...@jdn.nl wrote:

Hi List, I have a question about KVM disk performance. We are using Openstack Nova on three machines. These machines have an SSD drive with a dd write performance of about 130 MB/s. Within the instance the write performance is down to about 5 MB/s. When using the allocate trick (dd zero to the disk before newfs) we get a performance of 20 MB/s. Things I have tried but which don't give any extra results:

- Setting the disk layout from qcow2 to raw
- Setting the cache type in libvirt.xml (writeback, writethrough, none)
- Switching KSM on and off
- Testing with different guest OSes: Linux, FreeBSD, Windows

Is there someone who has some extra info I can check? Or are there more people with this issue?

Snippet from libvirt.xml:

<driver type='qcow2' cache='writeback'/>
<source file='/var/lib/nova/instances/instance-0113/disk'/>
<target dev='vda' bus='virtio'/>

Kind regards, Martin
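For comparison, a well-formed disk element per the libvirt domain XML format is shown below; the file path is taken from Martin's snippet, while the surrounding <disk> wrapper and the choice of cache mode are assumptions for illustration:

```xml
<disk type='file' device='disk'>
  <!-- cache= is an attribute of <driver>, not a separate element;
       cache='none' bypasses the host page cache (O_DIRECT) -->
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/var/lib/nova/instances/instance-0113/disk'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

With cache='none', guest writes go to the device rather than the host page cache, which tends to give more honest dd numbers than writeback when chasing a throughput gap like this one.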
Re: [Openstack] RHEL 5 6 OpenStack image archive...
We currently have OpenStack available in the EPEL testing repo; help in testing is appreciated.

Sent from my iPhone

On 2 Mar 2012, at 18:13, Edgar Magana (eperdomo) eperd...@cisco.com wrote:

Hi Marc, I ended up creating my own RHEL 6.1 image. If you want, I can share it with you. Thanks, Edgar Magana, CTO Cloud Computing

From: openstack-bounces+eperdomo=cisco@lists.launchpad.net [mailto:openstack-bounces+eperdomo=cisco@lists.launchpad.net] On Behalf Of J. Marc Edwards
Sent: Friday, March 02, 2012 8:23 AM
To: openstack@lists.launchpad.net
Subject: [Openstack] RHEL 5 6 OpenStack image archive...

Can someone tell me where these base images are located for use on my OpenStack deployment? Kind regards, Marc

-- J. Marc Edwards, Lead Architect - Semiconductor Design Portals, Nimbis Services, Inc. Skype: (919) 747-3775 Cell: (919) 345-1021 Fax: (919) 882-8602 marc.edwa...@nimbisservices.com www.nimbisservices.com
Re: [Openstack] HPC with Openstack?
It may be worth looking at RightScale: http://www.rightscale.com/products/plans-pricing/grid-edition.php

The article there is old and only cites EC2 usage, but their APIs support Rackspace Cloud, which is Nova: http://support.rightscale.com/12-Guides/RightScale_API

Cheers

David

On 2 Dec 2011, at 12:17, Sandy Walsh wrote:

I've recently had inquiries about High Performance Computing (HPC) on Openstack. As opposed to the Service Provider (SP) model, HPC is interested in fast provisioning and potentially short-lifetime instances, with precision metrics and scheduling. Real-time vs. eventually. Anyone planning on using Openstack in that way? If so, I'll direct those inquiries to this thread. Thanks in advance, Sandy