Re: [Openstack] manage personal instance from openstack
On 07/26/2013 05:10 AM, Deepak Jeswani1 wrote:
> Hi everyone, I have an instance running various applications in my environment and I want to transfer it to OpenStack. One way is to take an image of my instance, register it with the OpenStack image library, and then create an instance out of it. I am wondering whether there is a more direct way to register it with OpenStack. Please suggest a good way to transfer my instance to OpenStack.

If you have a Swift installation, it's quite easy:

1) Snapshot your instance in your VMware or VirtualBox environment
2) Convert the snapshot to a format that the hypervisor used in your OpenStack environment supports (ISO or QCOW2 is easiest for KVM)
3) Upload your converted image into Swift
4) Issue a call to Glance to register your image from Swift:

   glance image-create --disk-format=FORMAT --container-format=FORMAT --location=SWIFT_URI

The image will then appear in your tenant's list of images in Horizon or glance image-list, and you may use it to launch an instance.

All the best,
-jay

p.s. You don't necessarily need to use Swift, either... you could always just place your converted image on a web server somewhere and replace SWIFT_URI with the URI of your image.

___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
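Assuming a KVM-based OpenStack and a QCOW2 target, steps 2-4 might look like the following sketch (the file names and the "images" container are illustrative; SWIFT_URI is left as the same placeholder used above, since its exact form depends on your Glance and Swift configuration):

```shell
# 2) Convert the exported disk (e.g. a VMDK) to QCOW2 for KVM
qemu-img convert -O qcow2 myserver.vmdk myserver.qcow2

# 3) Upload the converted image into a Swift container named "images"
swift upload images myserver.qcow2

# 4) Register the image with Glance, pointing at the uploaded object
glance image-create --name=myserver \
    --disk-format=qcow2 --container-format=bare \
    --location=SWIFT_URI
```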
Re: [Openstack] Call to API very slow [Grizzly]
You will need to provide more details than old vs. new OpenStack. For example...

1) What is your network model in the old vs. new?
2) What version of OpenStack is the old?
3) Is Keystone used in old and new? If so, what drivers are used in Keystone?
4) Do you have errors in any of your log files? (usually an indication that something like a timeout or failure on RPC may be affecting performance)
5) Are you using nova-conductor in the new?
6) What database backend are you using?
7) Do a time keystone user-list on both old and new.
8) Pastebin your conf files, with passwords removed.

The more information you give, the better folks can help you.

Best,
-jay

On 07/25/2013 07:14 AM, Chu Duc Minh wrote:
> Check some more API calls (I ran the commands below from the controller node):
>
> # time quantum subnet-list (...have 4 subnets)
> real 0m0.676s  user 0m0.196s  sys 0m0.020s
>
> # time quantum router-list (...have 1 router)
> real 0m0.496s  user 0m0.164s  sys 0m0.052s
>
> # time nova list --all_tenants=1 (...have 5 instances)
> real 0m1.290s  user 0m0.308s  sys 0m0.040s
>
> Compared with my old OpenStack deployment on weaker servers, these take about three times as long.
>
> On Thu, Jul 25, 2013 at 5:43 PM, Peter Cheung mcheun...@hotmail.com wrote:
>> I am having a problem where API call speed is up and down; some calls need 0.1s, some need 3s. Thanks from Peter
>>
>> Date: Thu, 25 Jul 2013 17:41:11 +0700
>> From: chu.ducm...@gmail.com
>> To: openstack@lists.launchpad.net; openst...@lists.openstack.org
>> Subject: [Openstack] Call to API very slow [Grizzly]
>>
>> All operations in my OpenStack dashboard are very slow (compared to my old OpenStack deployment). Then I did some checks on an instance:
>>
>> $ time curl http://169.254.169.254/openstack
>> 2012-08-10 2013-04-04 latest
>> real 0m5.605s  user 0m0.004s  sys 0m0.004s
>>
>> 5 seconds for a simple API query!?
>>
>> In quantum-ns-metadata-proxy.log, I saw:
>>
>> 2013-07-25 17:17:09 DEBUG [quantum.agent.metadata.namespace_proxy] Request: GET /openstack HTTP/1.0 Accept: */* Content-Type: text/plain Host: 169.254.169.254 User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
>> 2013-07-25 17:17:14 DEBUG [quantum.agent.metadata.namespace_proxy] {'date': 'Thu, 25 Jul 2013 10:17:14 GMT', 'status': '200', 'content-length': '28', 'content-type': 'text/html; charset=UTF-8', 'content-location': u'http://169.254.169.254/openstack'}
>> 2013-07-25 17:17:14 DEBUG [quantum.agent.metadata.namespace_proxy] 2012-08-10 2013-04-04 latest
>>
>> I took a look at metadata-agent.log and saw that almost all request/response pairs finished at 17:17:09, but the last finished at 17:17:14:
>>
>> 2013-07-25 17:17:14 DEBUG [quantum.agent.metadata.agent] {'date': 'Thu, 25 Jul 2013 10:17:14 GMT', 'status': '200', 'content-length': '28', 'content-type': 'text/html; charset=UTF-8', 'content-location': u'http://172.30.1.14:8775/openstack'}
>>
>> I enabled the slow query log on MySQL, but can't find any slow query. Do you know possible problems in this situation? Thank you very much!
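When a single metadata request takes five seconds, it can help to see which phase of the request is slow. A hedged sketch using curl's standard -w write-out variables (run from inside the instance; the URL is the one from the thread):

```shell
# Break the request down into phases: name lookup, TCP connect, total
curl -s -o /dev/null \
     -w 'dns=%{time_namelookup}s connect=%{time_connect}s total=%{time_total}s\n' \
     http://169.254.169.254/openstack
```

If connect time is small but total time is large, the delay is on the server side (metadata proxy/agent), which matches the 5-second gap visible in the logs above.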
Re: [Openstack] Ansible playbooks for OpenStack
On 07/10/2013 08:09 AM, Daniel P. Berrange wrote:
> On Mon, Jul 08, 2013 at 10:02:59AM -0500, Timothy Gerla wrote:
>> Hi everyone, I wanted to share this with the group in the hopes that we could get some feedback. We've built a set of Ansible playbooks to install and configure a full OpenStack deployment based on the Red Hat packaging on CentOS 6. (Ansible is a configuration management and software deployment automation tool: http://www.ansibleworks.com/) Here are the playbooks and some basic documentation: https://github.com/ansible/ansible-redhat-openstack I'm interested to know what folks think and if anyone finds this useful. I think it might be useful for quick deployments (you can go from a group of minimally installed CentOS boxes to OpenStack in 20-30 minutes), and I think this could serve as the basis for a more sophisticated production deployment mechanism.
>
> FWIW, there is a tool called PackStack which uses Puppet to fully automate deployments of OpenStack in a matter of minutes. It is the tool that Red Hat currently recommends for deployment on Fedora, RHEL, CentOS, etc: https://wiki.openstack.org/wiki/Packstack

And for completeness, for folks using Chef, there is an OpenStack + Chef mailing list [1], an example Chef repo, and a set of OpenStack-specific cookbooks housed on Stackforge [2]. Currently everything is under very active development, with a focus on getting turnkey installation like PackStack done by the Havana release.

[1] https://groups.google.com/forum/?fromgroups=#!forum/opscode-chef-openstack
[2] https://github.com/stackforge/openstack-chef-repo and https://github.com/stackforge/cookbook-openstack-XXX where XXX in (common, compute, block-storage, image, identity, network, object-storage, metering, orchestration, ops-messaging, ops-database)

Best,
-jay
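For comparison, the PackStack path Daniel mentions is typically only a couple of commands on an RDO-enabled system (the package name and the --allinone proof-of-concept flag follow the RDO conventions of that era; check the wiki page above for current instructions):

```shell
# Single-node proof-of-concept deployment with PackStack
yum install -y openstack-packstack
packstack --allinone
```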
Re: [Openstack] Propagation of account state management changes in keystone across all the services
On 06/25/2013 01:50 PM, Balle, Susanne wrote:
> Hi, we are looking into how best to architect the propagation of account state management changes in Keystone across all the services. For example, when we delete a customer domain and/or its tenants, it is currently a multi-step process with potentially many manual tasks. This is error prone and does not scale. Ideally, we would want the state change in Keystone to dynamically propagate to all our services so they can do things like provision or de-provision internal entities. We were thinking of initially implementing some standard APIs in Keystone to propagate the state change, but are open to discussing the best architectural path forward to solving this problem (e.g. message queue, standard API, etc). Additionally we need to be able to do administrative functions like delete a project and have that propagate throughout all the services and perform the correct cleanup operations.

Hi Susanne,

This is what you are looking for: https://blueprints.launchpad.net/keystone/+spec/notifications

Please lobby to get this done in Havana. Frankly, Keystone really should get aligned with the rest of the OpenStack projects with respect to notifications and use oslo.notify.

Best,
-jay
Re: [Openstack] The OpenStack Community Welcomes Developers in All Programming Languages
On 06/12/2013 10:09 AM, Everett Toews wrote:
> The OpenStack community has been, and needs to continue to be, a welcoming community for developers in all programming languages. Naturally I'm referring to developers who are building systems on top of OpenStack and not the developers of OpenStack itself.
>
> This email is prompted by a minor incident in the #openstack IRC channel. I'm not looking to single people out, so I'll use a pretty generic description so it can't so easily be found in the IRC logs. A developer came to #openstack to ask a question about a software development kit (SDK) in another programming language. Within 1 minute he got a reply that can only be described as snarky. Undeterred, he went ahead and asked his questions. 20 minutes later a couple more snarky responses were added to it. No real help at all. It's not the lack of help that's at issue though. It's the unwelcoming attitude. I have not seen that developer in the channel since then.
>
> Like I said, a minor incident. I don't want to blow this out of proportion, but it does need to be addressed. It's one of those cases where, when you see cracks start to appear, it's best to fix them right away before they become real problems. I'm sure we've all been part of such chats about languages. When you're face-to-face, or online but know the people personally, it usually goes without saying that it's good natured. However, when you're new to a community, it's not so clear.
>
> Of course the OpenStack community is Python-centric, but the OpenStack API is not. We need developers from all of the other languages building on top of OpenStack in whatever language they need to work with. Remember, it might not even be their choice! Let's continue to be good stewards of the OpenStack API and encourage its use by all programming languages by being an inclusive and welcoming community. If you ever encounter someone looking for help with another language, you can always point them to the SDKs wiki page [1]. They should be able to find their way from there.
>
> Everett
>
> [1] https://wiki.openstack.org/wiki/SDKs

Well said, Everett.

-jay
Re: [Openstack] glance client not working
On 05/01/2013 02:28 PM, Dennis Jacobfeuerborn wrote:
> Hi, I'm currently working on setting up OpenStack using Ansible (after giving up on Puppet) and have Keystone and Glance running. The problem I now have is that the client doesn't seem to work:
>
> [root@controller1 ~]# glance index
> ID  Name  Disk Format  Container Format  Size
> 'NoneType' object has no attribute 'rfind'
>
> [root@controller1 ~]# glance image-list
> 'NoneType' object has no attribute 'rfind'
>
> [root@controller1 ~]# glance image-create --name=cirros-0.3-x86_64 --is-public=true --container-format=bare --disk-format=qcow2 cirros-0.3.0-x86_64-disk.img
> 'NoneType' object has no attribute 'rfind'
>
> Unfortunately I don't get any meaningful error, and the api and registry logs don't show anything either despite debug=True. Any ideas how I could find out what the problem is?

The problem is likely your image endpoint in the Keystone service catalog. What does keystone service-catalog show? Also, do a glance --debug image-list.
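A hedged debugging sequence for this kind of client failure (the OS_* names are the standard client environment variables; the error string suggests the client ended up with no endpoint URL to parse, though that is an inference, not something the logs above confirm):

```shell
# Make sure the client knows where and how to authenticate
env | grep ^OS_
# Expect OS_AUTH_URL, OS_USERNAME, OS_PASSWORD, OS_TENANT_NAME

# Re-run the failing command with client-side debugging to see the
# actual requests and the endpoint (if any) being used
glance --debug image-list
```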
Re: [Openstack] Gerrit Review + SSH
On 04/04/2013 02:23 PM, Jeremy Stanley wrote: On 2013-04-04 10:51:20 -0700 (-0700), Ronak Shah wrote: As OS dev cycle involves Gerrit review tool which requires ssh into the gerrit server, I was wondering if any of you guys face problems where your company/org does not allow ssh to external hosts. [...] It usually involves the uphill battle of convincing whoever manages network security/firewalls/proxies for your employer that the Internet is more than just a bunch of Web pages. Companies which exclusively limit their employees to only browsing the Web are basically cutting themselves off from innovations which rely on a myriad of other protocols. For non-technology companies that might be fine, but for a technology company that's often a sign that it's going out of business pretty soon. +1000
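Where outbound SSH is blocked but an HTTP proxy allows CONNECT, one common workaround is tunneling Gerrit's SSH port through the proxy. A sketch of an ~/.ssh/config entry (the proxy host/port, the placeholder username, and the use of corkscrew are assumptions about your environment; Gerrit's SSH port, 29418, is standard, but whether your proxy permits CONNECT to non-443 ports is site-specific):

```
# ~/.ssh/config -- tunnel Gerrit SSH through a corporate HTTP proxy
Host review.openstack.org
    Port 29418
    User <your-gerrit-username>
    ProxyCommand corkscrew proxy.example.com 3128 %h %p
```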
Re: [Openstack] CY13-Q1 Community Analysis — OpenStack vs OpenNebula vs Eucalyptus vs CloudStack
Great work again, John! One thing I might suggest is to remove the review.openstack.org domain from the git domain part at the end of the analysis, as it skews the results significantly (it's the Gerrit patch review tool's email address, presumably for merge commits.)

Best,
-jay

On 04/02/2013 05:32 AM, Qingye Jiang (John) wrote:
> Hi all, I am glad to present to you the 6th edition of my quarterly analysis on this subject. CY13-Q1 Community Analysis — OpenStack vs OpenNebula vs Eucalyptus vs CloudStack is now available for your reading at the following URL: http://www.qyjohn.net/?p=3120 In this report I have added some preliminary analysis on the GitHub activities of these 4 projects. Best regards, Qingye Jiang (John)
Re: [Openstack] Keystone Design Session - Fine Grained Access Control
On 04/02/2013 09:51 AM, Joe Savak wrote:
> I'd like to propose a design session on Fine Grained Access Control for the summit. Session info: http://summit.openstack.org/cfp/edit/99 Blueprint: https://blueprints.launchpad.net/keystone/+spec/fine-grain Details: In a large implementation, there can be many users, each having some level of access to a shared pool of resources. Not all users need that much access though, and there are cases where access must be restricted further. V3 introduces policies, and that works for restricting access to certain capabilities (only a user with the role admin or group foo can create a server in nova, etc). Policies bloat up though if they need to get down to the resource level (only joe can delete server ABC).

Once you go down this super-fine-grained route and start managing privileges in this manner, it definitely complicates things. :)

> This blueprint (which will be expanded upon) introduces the concept of a resource group in an attempt to provide highly-available, easily modifiable fine grained access control to OpenStack services.

What is the difference between a resource group and a group?

> 1. The v3 core spec doesn't allow for fine-grained access control. You can force it into policy blobs, but that isn't scalable or transparent enough.

Do your scalability concerns revolve around having a single policy BLOB for the service and the corresponding single-record, multi-writer data access pattern that would mean? If you had a policy BLOB per group, would that assuage those concerns? Meaning... a compromise between having a REST API that would essentially operate on single resources and the existing API that changes all policies in one call?

Best,
-jay
Re: [Openstack] floating ips history
On 03/25/2013 06:48 AM, Antonio Messina wrote:
> Hi all, I wonder if there is an easy way to know the instance a specific floating IP was assigned to at a specific point in time.

Unfortunately, not using the database schema as it currently stands. You could, however, search your log files (nova-network and nova-api-os-compute IIRC) for the IP address in question and determine the history from the log records.

Best,
-jay
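A minimal sketch of that log search, wrapped in a helper (the helper name and the log paths in the comment are illustrative, not an official tool; nova's actual log message format varies by release):

```shell
# ip_history IP LOGFILE...
# Print every log line mentioning the given floating IP, oldest first
# (assumes the default timestamp-prefixed log format, so a lexical
# sort orders lines chronologically).
ip_history() {
    ip="$1"; shift
    grep -hF "$ip" "$@" 2>/dev/null | sort
}

# Typical use (paths vary by distribution):
#   ip_history 172.24.4.225 /var/log/nova/nova-network.log /var/log/nova/nova-api-os-compute.log
```

Cross-referencing the matching lines with instance UUIDs in the same log records then gives the assignment history.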
Re: [Openstack] StackTach and Stacy github repos have moved ...
Thanks for the heads up, Sandy. There was some talk in the last release cycle about possibly merging some or all of the StackTach functionality with Ceilometer. Is that still on the horizon or has that idea been scuttled? Best, -jay On 03/25/2013 09:42 AM, Sandy Walsh wrote: Hi, how are you? Me? Right as rain, thanks for asking. Due to a recent reorg of the Rackspace github repo, StackTach and Stacky are now moved under the rackerlabs organization. The new coordinates are: https://github.com/rackerlabs/stacktach https://github.com/rackerlabs/stacky Please update your .git/config accordingly. Cheers -S PS A bunch of other projects have moved to this new location, couldn't hurt to have a peek and see if it affects you.
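Rather than hand-editing .git/config, an existing checkout can be repointed with git itself (run inside the old clone; only the organization part of the URL changes):

```shell
# Repoint an existing StackTach checkout at the rackerlabs organization
git remote set-url origin https://github.com/rackerlabs/stacktach
git config remote.origin.url   # verify: prints the new URL
```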
Re: [Openstack] StackTach and Stacy github repos have moved ...
All good in the hood. :) Thanks mate! On 03/25/2013 09:57 AM, Sandy Walsh wrote: On 03/25/2013 10:47 AM, Jay Pipes wrote: Thanks for the heads up, Sandy. There was some talk in the last release cycle about possibly merging some or all of the StackTach functionality with Ceilometer. Is that still on the horizon or has that idea been scuttled? Nope, we are absolutely working towards that. We just had some internal tactical stuff we had to deal with. Getting everyone freed up now. Cheers! -S Best, -jay On 03/25/2013 09:42 AM, Sandy Walsh wrote: Hi, how are you? Me? Right as rain, thanks for asking. Due to a recent reorg of the Rackspace github repo, StackTach and Stacky are now moved under the rackerlabs organization. The new coordinates are: https://github.com/rackerlabs/stacktach https://github.com/rackerlabs/stacky Please update your .git/config accordingly. Cheers -S PS A bunch of other projects have moved to this new location, couldn't hurt to have a peek and see if it affects you.
Re: [Openstack] Project quotas on multi-region
On 03/23/2013 09:31 PM, Nathanael Burton wrote: Glaucimar, Currently quotas are maintained within each nova system so there is not a global view/management/enforcement of quotas. I would love to see a discussion of centralizing things from nova like key pairs, AZs, and quotas in keystone. +100 See my summit proposal to bring availability zone and region management under Keystone: http://summit.openstack.org/cfp/details/114 -jay
Re: [Openstack] download ec2 creds fails consistently in horizon
It's actually not nova-cert that you need. It is the Keystone EC2 credentials API extension that is the problem. It only works for users with the admin role. I logged a bug on it and am working on a fix: https://bugs.launchpad.net/keystone/+bug/1136190

Best,
-jay

On 03/14/2013 10:57 AM, Wyllys Ingersoll wrote:
> I figured it out - nova-cert was not installed and running. I need to add this to my setup when EC2 is enabled; I wasn't aware of the dependency.
>
> -Wyllys
>
> On Mar 14, 2013, at 10:35 AM, Wyllys Ingersoll wyllys.ingers...@evault.com wrote:
>> I have EC2 configured correctly as far as I can tell, because I am able to view my containers using the S3 APIs and S3 tools such as CyberDuck or s3curl.pl, using EC2 credentials returned by the keystone command line tool. However, when I use the Horizon user settings interface and select "Download EC2 Credentials", nothing happens and it eventually returns yet another "System Error". According to the logs, the failure is because the call to request os-certificates is timing out. I know this is probably because some other nova service is not running, but I'm not sure which one it needs to complete this transaction. It'd be nice if there were an error message somewhere that indicated which service was not responding or what to do about it. Can someone tell me which nova service I need to have running and configured to issue os-certificates? Also, I really only want the EC2 credentials to be created and downloaded; I'm not so much interested in the X509 certificates at this point. It'd be nice if the user settings EC2 panel had more options, such as just creating and/or listing the EC2 access ID and key for a particular user rather than assuming you want/need everything all at once.
>>
>> thanks, Wyllys Ingersoll EVault
Re: [Openstack] Unauthenticated service probe for OpenStack components
Or, alternately, you can post to the endpoint root (without the version) and that should respond with a 300 Multiple Choices for most of the OpenStack service endpoints.

Best,
-jay

On 03/14/2013 04:52 PM, Dean Troyer wrote:
> On Thu, Mar 14, 2013 at 3:40 PM, Tim Bell tim.b...@cern.ch wrote:
>> Currently, curl to the service URL just gives a 401 error.
>
> Check the exit code; it should be 0, because that 401 error is generated by the API server. It's alive and responding, just not with 200 codes. If the server doesn't respond, curl's exit code is non-zero (7, I think?). We did the same thing in DevStack for checking API server responses. dt
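The distinction Dean describes is between curl's exit code (did the connection succeed at all?) and the HTTP status (what did the server say?). A small sketch of a liveness probe built on that idea (the function name and the example endpoint are illustrative):

```shell
# probe URL: print "alive" if the server answered at all (any HTTP
# status, including 401), or "down" with curl's exit code if the
# connection itself failed (e.g. 7 = connection refused).
probe() {
    if curl -s -o /dev/null "$1"; then
        echo "alive"
    else
        echo "down (curl exit $?)"
    fi
}

# Example: probe http://keystone.example.com:35357/
```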
Re: [Openstack] download ec2 creds fails consistently in horizon
On 03/15/2013 01:12 PM, Wyllys Ingersoll wrote:
> I am able to login as a non-admin user and access the containers.

Your original post talked about the "Download EC2 Credentials" link not working. That's what I was referring to. Nothing to do with Swift containers.

-jay

> In addition to missing nova-cert, I also had to change the keystoneauth settings in /etc/swift/proxy-server.conf to add Member to the operator_roles list, which I suppose is equivalent to making a Member user the equivalent of an administrator for Swift.
>
> -Wyllys
>
> On Mar 15, 2013, at 1:02 PM, Jay Pipes jaypi...@gmail.com wrote:
>> It's actually not nova-cert that you need. It is the Keystone EC2 credentials API extension that is the problem. It only works for users with the admin role. I logged a bug on it and am working on a fix: https://bugs.launchpad.net/keystone/+bug/1136190
>>
>> On 03/14/2013 10:57 AM, Wyllys Ingersoll wrote:
>>> I figured it out - nova-cert was not installed and running. I need to add this to my setup when EC2 is enabled; I wasn't aware of the dependency.
>>>
>>> On Mar 14, 2013, at 10:35 AM, Wyllys Ingersoll wyllys.ingers...@evault.com wrote:
>>>> I have EC2 configured correctly as far as I can tell, because I am able to view my containers using the S3 APIs and S3 tools such as CyberDuck or s3curl.pl, using EC2 credentials returned by the keystone command line tool. However, when I use the Horizon user settings interface and select "Download EC2 Credentials", nothing happens and it eventually returns yet another "System Error". According to the logs, the failure is because the call to request os-certificates is timing out. I know this is probably because some other nova service is not running, but I'm not sure which one it needs to complete this transaction. It'd be nice if there were an error message somewhere that indicated which service was not responding or what to do about it. Can someone tell me which nova service I need to have running and configured to issue os-certificates? Also, I really only want the EC2 credentials to be created and downloaded; I'm not so much interested in the X509 certificates at this point. It'd be nice if the user settings EC2 panel had more options, such as just creating and/or listing the EC2 access ID and key for a particular user rather than assuming you want/need everything all at once.
>>>>
>>>> thanks, Wyllys Ingersoll EVault
Re: [Openstack] Tempest for Integration testing of Openstack (FOLSOM)
On 03/12/2013 11:14 AM, Girija Sharan wrote:
> But the tests in tempest-stable-folsom/tempest/tests/network are not running in Folsom with Quantum. All other tests are running fine. Someone said that this stable-folsom release of tempest is not for testing Quantum in Folsom. Is it true? If yes, then how do I test my Quantum in Folsom deployment using Tempest?

I'm sorry, I don't know how to answer your question without seeing the errors you are getting when running Tempest.

-jay
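When asking for help like this, it is worth capturing the exact command and its failure output. A sketch (stable/folsom-era Tempest was typically run with nose; the directory path comes from the question above, and your environment may differ):

```shell
cd tempest-stable-folsom
nosetests -v tempest/tests/network 2>&1 | tee network-test-failures.log
```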
Re: [Openstack] Incredibly odd mysql permission error
On 03/08/2013 12:19 PM, Samuel Winchenbach wrote:
> Hi All, I have two nodes (test1 and test2) that I am trying to set up in a highly available configuration. During the setup process I tried running nova-manage service list on both nodes. It worked fine on test2, but fails on test1 even though I can connect to the database with the mysql client from test1. Here is a screen capture that shows the setup on the two nodes are basically identical: http://paste2.org/p/3084223

In the above paste you are doing:

mysql -unova -hmysql-ha -u root nova -p

Note you are supplying 2 -u arguments, and mysql will take the second (root).

-jay
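In other words, the second -u silently wins, so the client authenticates as root while the GRANTs were made for nova. A corrected invocation (host and database names are the ones from the paste) passes -u exactly once:

```shell
# Connect to the nova database on host mysql-ha as the nova user
mysql -unova -hmysql-ha -p nova
```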
Re: [Openstack] Incredibly odd mysql permission error
I'm stumped :( Looks like everything is set up correctly to me. What is interesting is that your nova user access works from test2, but there is no nova@test2 user in the mysql.user table. What about doing a DROP USER nova@test1; FLUSH PRIVILEGES; and then seeing if that fixes things... since the nova@10.21.0.0/255.255.0.0 user is clearly working for the access from test2. Also, I'd highly recommend removing the nova@'%' user.

Best,
-jay

On 03/08/2013 03:09 PM, Samuel Winchenbach wrote:
> http://paste2.org/p/3085807
>
> On Fri, Mar 8, 2013 at 2:46 PM, Jay Pipes jaypi...@gmail.com wrote:
>> Please paste the results of SELECT User, Host, Password FROM mysql.user when running as root... Thanks! -jay
>>
>> On 03/08/2013 02:25 PM, Samuel Winchenbach wrote:
>>> Here are my grants. I don't know if this helps, but I did verify that the password was identical for each grant: http://paste2.org/p/3085361
>>>
>>> On Fri, Mar 8, 2013 at 2:17 PM, Samuel Winchenbach swinc...@gmail.com wrote:
>>>> root@test1:/var/log# mysql -hmysql-ha -unova -p -e"SELECT User, Host, Password FROM mysql.user;"
>>>> ERROR 1142 (42000) at line 1: SELECT command denied to user 'nova'@'test1' for table 'user'
>>>>
>>>> On Fri, Mar 8, 2013 at 2:06 PM, Jay Pipes jaypi...@gmail.com wrote:
>>>>> What does this show? mysql -hmysql-ha -unova -pPASS -e"SELECT User, Host, Password FROM mysql.user"
>>>>>
>>>>> On 03/08/2013 01:46 PM, Samuel Winchenbach wrote:
>>>>>> Sorry, that must have been a copy and paste error. Here is what I actually ran: http://paste2.org/p/3084996
>>>>>>
>>>>>> On Fri, Mar 8, 2013 at 12:40 PM, Jay Pipes jaypi...@gmail.com wrote:
>>>>>>> On 03/08/2013 12:19 PM, Samuel Winchenbach wrote:
>>>>>>>> Hi All, I have two nodes (test1 and test2) that I am trying to set up in a highly available configuration. During the setup process I tried running nova-manage service list on both nodes. It worked fine on test2, but fails on test1 even though I can connect to the database with the mysql client from test1. Here is a screen capture that shows the setup on the two nodes are basically identical: http://paste2.org/p/3084223
>>>>>>>
>>>>>>> In the above paste you are doing: mysql -unova -hmysql-ha -u root nova -p
>>>>>>> Note you are supplying 2 -u arguments, and mysql will take the second (root).
>>>>>>> -jay
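Spelled out as commands, the suggested cleanup might look like this (run as a MySQL admin user; the account names are the ones discussed in the thread, and dropping them assumes the subnet-scoped nova@10.21.0.0/255.255.0.0 account should remain the only one):

```shell
mysql -u root -p <<'SQL'
-- Remove the host-specific and wildcard accounts so only the
-- subnet-scoped nova account remains
DROP USER 'nova'@'test1';
DROP USER 'nova'@'%';
FLUSH PRIVILEGES;
SQL
```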
Re: [Openstack] Incredibly odd mysql permission error
Sorry, I really can't think of anything :( On 03/08/2013 03:52 PM, Samuel Winchenbach wrote: I dropped those users and no change. I also set up general logging in mysql but it really doesn't provide any additional information. Any idea for a next step I could take? I am almost at the point of taking a tcpdump and trying to recreate the salted password. :/ Thanks for the help Sam On Fri, Mar 8, 2013 at 3:38 PM, Jay Pipes jaypi...@gmail.com wrote: I'm stumped :( Looks like everything is set up correctly to me. What is interesting is that your nova user access works from test2, but there is no nova@test2 user in the mysql.user table. What about doing a DROP USER nova@test1; FLUSH PRIVILEGES; and then seeing if that fixes things... since the nova@10.21.0.0/255.255.0.0 user is clearly working for the access from test2. Also, I'd highly recommend removing the nova@% user. Best, -jay On 03/08/2013 03:09 PM, Samuel Winchenbach wrote: http://paste2.org/p/3085807 On Fri, Mar 8, 2013 at 2:46 PM, Jay Pipes jaypi...@gmail.com wrote: Please paste the results of SELECT User, Host, Password FROM mysql.user when running as root... Thanks! -jay On 03/08/2013 02:25 PM, Samuel Winchenbach wrote: Here are my grants.
I don't know if this helps, but I did verify that the password was identical for each grant: http://paste2.org/p/3085361 On Fri, Mar 8, 2013 at 2:17 PM, Samuel Winchenbach swinc...@gmail.com wrote: root@test1:/var/log# mysql -hmysql-ha -unova -p -e"SELECT User, Host, Password FROM mysql.user;" ERROR 1142 (42000) at line 1: SELECT command denied to user 'nova'@'test1' for table 'user' On Fri, Mar 8, 2013 at 2:06 PM, Jay Pipes jaypi...@gmail.com wrote: What does this show? mysql -hmysql-ha -unova -pPASS -e"SELECT User, Host, Password FROM mysql.user" -jay On 03/08/2013 01:46 PM, Samuel Winchenbach wrote: Sorry, that must have been a copy and paste error. Here is what I actually ran: http://paste2.org/p/3084996 On Fri, Mar 8, 2013 at 12:40 PM, Jay Pipes jaypi...@gmail.com wrote: On 03/08/2013 12:19 PM, Samuel Winchenbach wrote: Hi All, I have two nodes (test1 and test2) that I am trying to set up in a highly available configuration. During the setup process I tried running nova-manage service list on both nodes. It worked fine on test2, but fails on test1 even though I can connect to the database with the mysql client from test1.
Here is a screen capture that shows the setup on the two nodes are basically identical: http://paste2.org/p/3084223 In the above paste you are doing: mysql -unova - hmysql-ha -u root nova -p
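[Editor's note] The DROP USER nova@test1 fix suggested in this thread works because of MySQL's account-matching rules: when several mysql.user rows could match an incoming connection, the server authenticates against the most specific Host value, so a stale nova@test1 row shadows the working nova@10.21.0.0/255.255.0.0 and nova@% rows. A rough Python model of that precedence (illustrative only; real MySQL sorting and netmask matching are more involved):

```python
# Crude model of MySQL account selection: among matching rows, the
# most specific Host wins (literal host beats wildcard/netmask beats '%').
def specificity(host_pattern):
    if host_pattern == '%':
        return 2          # matches anything: least specific
    if '%' in host_pattern or '/' in host_pattern:
        return 1          # wildcard or netmask entry
    return 0              # literal hostname/IP: most specific

def host_matches(pattern, client_host):
    if pattern == '%':
        return True
    return pattern == client_host  # literal comparison only in this sketch

def pick_account(rows, client_host):
    """rows: list of (host, password) entries for one user name.
    Returns the row MySQL would authenticate against, or None."""
    matches = [r for r in rows if host_matches(r[0], client_host)]
    matches.sort(key=lambda r: specificity(r[0]))
    return matches[0] if matches else None
```

With rows for nova@test1 and nova@%, a connection from test1 is always checked against the nova@test1 row, even if its password or grants are stale, which is why dropping that row lets the broader entries take over.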
Re: [Openstack] deal with booting lots of instance simultaneously
Are you using a multi_host setup? If not, as Vish suggested, that will alleviate much of the problem. Best, -jay On 02/19/2013 04:09 AM, gtt116 wrote: Hi Diego, Thanks for your reply. How many hosts do you have? I have 4 hosts. And in this bug, https://bugs.launchpad.net/nova/+bug/1094226, the N is 20. In my environment N is about 16. I found that nova-network is too busy to deal with so many RPC requests at the same time. RabbitMQ is strong enough in this scenario. On 02/19/2013 16:54, Diego Parrilla Santamaría wrote: Hi gtt, what does 'lots of instances simultaneously' mean for you? 100, 1000, 10000, more? We have launched 100 (but less than 1000) simultaneously without any issue, with Rabbit running on a multicore box with several gigs of RAM and an out-of-the-box configuration. Cheers Diego -- Diego Parrilla CEO www.stackops.com | diego.parri...@stackops.com | +34 649 94 43 29 | skype:diegoparrilla On Tue, Feb 19, 2013 at 9:35 AM, gtt116 gtt...@126.com wrote: Hi all, When creating lots of instances simultaneously, there will be lots of instances in the ERROR state, and most of them are caused by network RPC request timeouts. This result is not so graceful. I think it would be better if the scheduler kept a queue of creation requests: when it finds all the hosts are busy enough (compute_node.current_workload reaches some value), it stops casting requests to hosts temporarily, until it finds a host that is free enough. In this way, we can make sure booting lots of instances simultaneously results in active instances rather than lots of ERROR instances. This does introduce a small weak point: if the top value of current_workload is small enough, instance creation will be slow. Do you have another quick fix?
Thanks, -- best regards, gtt
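[Editor's note] The queueing idea gtt describes can be sketched as follows. This is a purely illustrative model, not nova code; the threshold value and the data shapes are assumptions. Build requests are held back while every host's current_workload is at the cap, and only cast to hosts with headroom:

```python
# Illustrative throttled dispatch: only cast a build request to a host
# whose current_workload is below MAX_WORKLOAD; everything else waits.
from collections import deque

MAX_WORKLOAD = 16  # hypothetical cap, cf. N ~= 16 in the thread

def dispatch(requests, hosts):
    """requests: iterable of request ids; hosts: dict host -> workload.
    Returns (scheduled [(request, host)], still_pending)."""
    pending = deque(requests)
    scheduled = []
    while pending:
        free = [h for h, load in hosts.items() if load < MAX_WORKLOAD]
        if not free:
            break  # all hosts busy: retry later instead of casting RPCs
        host = min(free, key=hosts.get)       # least-loaded free host
        scheduled.append((pending.popleft(), host))
        hosts[host] += 1                      # account for the new build
    return scheduled, list(pending)
```

In a real scheduler the pending queue would be retried when workload reports come back, rather than failing the instances with RPC timeouts.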
Re: [Openstack] Use a running ESXi hypervisor
No. On 02/15/2013 02:15 PM, Logan McNaughton wrote: I'm sorry if this question has been asked before: Is it possible to add an already running ESXi hypervisor (with live VMs) into OpenStack? For instance, if I start a VM on ESX and install and configure nova-compute, can the running VMs somehow be imported into OpenStack without disruption?
Re: [Openstack] [keystone] Why are we returing such a big payload in validate token?
+1000 On 01/31/2013 07:44 PM, Ali, Haneef wrote: Hi, As of now the v3 validateToken response has tokens, service catalog, users, projects, roles and domains (i.e. except for groups we are returning everything). We also discussed the possibility of 100s of endpoints. ValidateToken is supposed to be a high-frequency call, so this is going to be a huge performance impact. What is the use case for such a big payload when compared with v2? If a service needs the catalog, then the service can always ask for the catalog. Thanks Haneef
Re: [Openstack] Windows instance licensing in OpenStack
On 01/23/2013 08:41 AM, Balamurugan V G wrote: Hi, I wonder how Windows licensing would work in OpenStack. Let's say I have a Windows image which I have already activated. Now if I launch N instances of this image, what are the OS licensing implications? Will I have to license/re-activate each of the instances after they boot up? If you use XenServer or KVM, I believe so, yes. I've heard that if you use ESXi you can get some sort of license for all Windows images on a host. Not sure of the details of this, though, I've just heard it through the IT grapevine... Best, -jay p.s. If you find out any more solid info, please do post back :)
Re: [Openstack] Is nova-client thread safe ?
On 01/21/2013 01:24 PM, Day, Phil wrote: Hi Folks, Does anyone know if the nova-client python binding is written to be thread safe? We saw some odd behaviour when using it with multiple threads, and before digging deeper just thought I'd check if there were known issues, etc. The client itself, as you know, just makes HTTP calls, so it's unlikely that the client calls themselves are not thread-safe, as they don't have any state associated with them. The stuff that may not be thread-safe is wherever novaclient is saving state. The only things that do that, AFAIK, are the two cache mechanisms -- the keyring token/mgmt_URL caching [1] and the file-based UUID cache manager [2]. That's where I'd look to make thread-safe changes. Best, -jay [1] https://github.com/openstack/python-novaclient/blob/master/novaclient/client.py#L381 [2] https://github.com/openstack/python-novaclient/blob/master/novaclient/base.py#L82
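[Editor's note] Until those caches are audited, a simple way to sidestep shared-state issues is to avoid sharing a client instance between threads at all. A minimal sketch using thread-local storage (the factory callable is hypothetical; substitute however you construct your novaclient instance):

```python
# Give each thread its own client instead of locking a shared one.
import threading

_local = threading.local()

def get_client(factory):
    """Return this thread's client, building it with `factory` (a
    zero-argument callable, e.g. a lambda wrapping novaclient setup)
    on first use in each thread."""
    if not hasattr(_local, 'client'):
        _local.client = factory()
    return _local.client
```

This trades a small amount of memory (one client per thread) for not having to reason about whether the library's internal caches are safe under concurrent mutation.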
Re: [Openstack] Total Network Confusion
On 01/15/2013 05:31 AM, James Condron wrote: Jay, Guys, The VlanManager stuff looks spot on for my needs but I am a tad confused. (Perhaps Folsom addresses these; I'm just on a deadline to get a PoC running and I don't want to look like I've been wasting time building this). Assuming I configure my vlan on my switch, set my switchport to trunk and use VlanManager, do Scenarios 6 and 7 extend out to hosts *not* on OpenStack / not configured via OpenStack? Would I be able to, say, connect from my PC vlan to one of the vlans configured via OpenStack? Would this also allow me to configure bridges on OpenStack to route via their own IPs and vlans? Not quite sure, actually. I'm certainly no networking guru, sorry :( I'd imagine you *could* do this, but it would take manually modifying iptables on the individual compute nodes -- which would mess with the nova-network controller on the compute nodes IIUC... -jay Thanks, James On 14 Jan 2013, at 18:11, Jay Pipes jaypi...@gmail.com wrote: I'd recommend Folsom over Essex :) And I'd highly recommend these articles from Mirantis which really step through the networking setup in VLANManager. Read through them in the following order and I promise at the end you will have a much better understanding of networking in Nova. http://www.mirantis.com/blog/openstack-networking-flatmanager-and-flatdhcpmanager/ http://www.mirantis.com/blog/openstack-networking-single-host-flatdhcpmanager/ http://www.mirantis.com/blog/openstack-networking-vlanmanager/ http://www.mirantis.com/blog/vlanmanager-network-flow-analysis/ All the best, -jay On 01/14/2013 11:52 AM, James Condron wrote: Hi all, I've recently started playing with (and working with) OpenStack with a view to migrating our production infrastructure from ESX 4 to Essex. My issue, or at least utter idiocy, is in the network configuration.
Basically I can't work out whether in the configuration of OpenStack I have done something daft, on the network something daft, or I've not understood the technology properly. NB: I can get to the outside world from my VMs; I don't want to confuse things further. Attached is a diagram I knocked up to hopefully make this simpler, though I hope I can explain it simply with: "Given both public and private interfaces on my server being on the same network and infrastructure, how would one go about accessing VMs via their internal IP and not have to worry about a VPN or public IPs?" My corporate network works on simple vlans; I have a vlan for my production boxen, one for development, one for PCs, telephony, etc. These are pretty standard. The public, eth0 NIC on my compute node (single node setup, nothing overly fancy; pretty vanilla) is on my production vlan and everything is accessible. The second NIC, eth1, is supposedly on a vlan for this specific purpose. I am hoping to be able to access these internal IPs on their... internal IPs (for want of a better phrase). Is this possible? I'm reasonably confident this isn't a routing issue as I can ping the eth1 IP from the switch: #ping 10.12.0.1 Type escape sequence to abort. Sending 5, 100-byte ICMP Echos to 10.12.0.1, timeout is 2 seconds: !!!!! Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/8 ms But none of the ones assigned to VMs: #ping 10.12.0.4 Type escape sequence to abort. Sending 5, 100-byte ICMP Echos to 10.12.0.4, timeout is 2 seconds: ..... Success rate is 0 percent (0/5) Or for those looking at the attached diagram: vlan101 is great and works fine; what do I need to do (if at all possible) to get vlan102 listening?
Re: [Openstack] Glance, boto and image id
On 01/14/2013 06:06 AM, Antonio Messina wrote: Apparently not. This is the output of glance image-show:

+-----------------------+--------------------------------------+
| Property              | Value                                |
+-----------------------+--------------------------------------+
| Property 'kernel_id'  | e1c78f4d-eca9-4979-9ee0-54019d5f79c2 |
| Property 'ramdisk_id' | 99c8443e-c3b2-4aef-8bf8-79cc58f127a2 |
| checksum              | 2f81976cae15c16ef0010c51e3a6c163     |
| container_format      | ami                                  |
| created_at            | 2012-12-04T22:59:13                  |
| deleted               | False                                |
| disk_format           | ami                                  |
| id                    | 67b612ac-ab20-4227-92fc-adf92841ba8b |
| is_public             | True                                 |
| min_disk              | 0                                    |
| min_ram               | 0                                    |
| name                  | cirros-0.3.0-x86_64-uec              |
| owner                 | ab267870ac72450d925a437f9b7c064a     |
| protected             | False                                |
| size                  | 25165824                             |
| status                | active                               |
| updated_at            | 2012-12-04T22:59:14                  |
+-----------------------+--------------------------------------+

:( Oh well, was worth a shot. Looking at the code in https://github.com/openstack/nova/blob/master/nova/api/ec2/ec2utils.py#L70 and https://github.com/openstack/nova/blob/master/nova/api/ec2/ec2utils.py#L126 it seems that the conversion is simply done by getting an id of type integer (not a uuid-like string) and then converting it to hex form and appending it to the string 'ami-'. Question is: where does this id come from, and is there any way to show it in the horizon web interface? There is an integer key in the s3_images table that stores the map between the UUID and the AMI image id: https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/models.py#L964 Not sure this is available via Horizon... sorry. Best, -jay .a. On Sun, Jan 13, 2013 at 10:17 PM, Jay Pipes jaypi...@gmail.com wrote: The EC2-style image ID would probably be stored in the custom key/value pairs in the Glance image record for the image... so if you do a glance image-show IMAGE_UUID you should see the EC2 image ID in there... -jay On 01/11/2013 09:51 AM, Antonio Messina wrote: Hi all, I am using the boto library to access a Folsom installation, and I have a few doubts regarding image IDs.
I understand that boto uses EC2-style ids for images (something like ami-16digit number) and that the nova API converts glance IDs to EC2 ids. However, it seems that there is no way from the horizon web interface nor from euca-tools to get this mapping. How can I know the EC2 id of an image, having access only to the web interface or boto? I could use the *name* of the instance instead of the ID, but the name is not unique... .a. -- antonio.s.mess...@gmail.com GC3: Grid Computing Competence Center http://www.gc3.uzh.ch/ University of Zurich Winterthurerstrasse 190 CH-8057 Zurich Switzerland
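[Editor's note] The hex mapping described in this thread is easy to reproduce. A small sketch of the conversion, modeled on the ec2utils code linked above (the zero-padded width and the exact helper names here are assumptions, not nova's API):

```python
# Map the integer key (e.g. from nova's s3_images table) to an
# EC2-style id and back. nova's ec2utils renders the integer in hex
# behind an 'ami-' (or 'aki-'/'ari-') prefix.
def id_to_ec2_id(internal_id, template='ami-%08x'):
    return template % internal_id

def ec2_id_to_id(ec2_id):
    # e.g. 'ami-0000001a' -> 26
    return int(ec2_id.split('-')[-1], 16)
```

So if you can read the s3_images table directly, the EC2 id for integer key 26 would render as ami-0000001a; the UUID-to-integer mapping itself, though, only lives in that table.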
Re: [Openstack] Total Network Confusion
I'd recommend Folsom over Essex :) And I'd highly recommend these articles from Mirantis which really step through the networking setup in VLANManager. Read through them in the following order and I promise at the end you will have a much better understanding of networking in Nova. http://www.mirantis.com/blog/openstack-networking-flatmanager-and-flatdhcpmanager/ http://www.mirantis.com/blog/openstack-networking-single-host-flatdhcpmanager/ http://www.mirantis.com/blog/openstack-networking-vlanmanager/ http://www.mirantis.com/blog/vlanmanager-network-flow-analysis/ All the best, -jay On 01/14/2013 11:52 AM, James Condron wrote: Hi all, I've recently started playing with (and working with) OpenStack with a view to migrating our production infrastructure from ESX 4 to Essex. My issue, or at least utter idiocy, is in the network configuration. Basically I can't work out whether in the configuration of OpenStack I have done something daft, on the network something daft, or I've not understood the technology properly. NB: I can get to the outside world from my VMs; I don't want to confuse things further. Attached is a diagram I knocked up to hopefully make this simpler, though I hope I can explain it simply with: "Given both public and private interfaces on my server being on the same network and infrastructure, how would one go about accessing VMs via their internal IP and not have to worry about a VPN or public IPs?" My corporate network works on simple vlans; I have a vlan for my production boxen, one for development, one for PCs, telephony, etc. These are pretty standard. The public, eth0 NIC on my compute node (single node setup, nothing overly fancy; pretty vanilla) is on my production vlan and everything is accessible. The second NIC, eth1, is supposedly on a vlan for this specific purpose. I am hoping to be able to access these internal IPs on their... internal IPs (for want of a better phrase). Is this possible?
I'm reasonably confident this isn't a routing issue as I can ping the eth1 IP from the switch: #ping 10.12.0.1 Type escape sequence to abort. Sending 5, 100-byte ICMP Echos to 10.12.0.1, timeout is 2 seconds: !!!!! Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/8 ms But none of the ones assigned to VMs: #ping 10.12.0.4 Type escape sequence to abort. Sending 5, 100-byte ICMP Echos to 10.12.0.4, timeout is 2 seconds: ..... Success rate is 0 percent (0/5) Or for those looking at the attached diagram: vlan101 is great and works fine; what do I need to do (if at all possible) to get vlan102 listening?
Re: [Openstack] Glance, boto and image id
The EC2-style image ID would probably be stored in the custom key/value pairs in the Glance image record for the image... so if you do a glance image-show IMAGE_UUID you should see the EC2 image ID in there... -jay On 01/11/2013 09:51 AM, Antonio Messina wrote: Hi all, I am using the boto library to access a Folsom installation, and I have a few doubts regarding image IDs. I understand that boto uses EC2-style ids for images (something like ami-16digit number) and that the nova API converts glance IDs to EC2 ids. However, it seems that there is no way from the horizon web interface nor from euca-tools to get this mapping. How can I know the EC2 id of an image, having access only to the web interface or boto? I could use the *name* of the instance instead of the ID, but the name is not unique... .a. -- antonio.s.mess...@gmail.com GC3: Grid Computing Competence Center http://www.gc3.uzh.ch/ University of Zurich Winterthurerstrasse 190 CH-8057 Zurich Switzerland
Re: [Openstack] How to create vm instance to specific compute node?
Oh, nice! Thanks for the hint! -jay On 01/03/2013 08:03 AM, Day, Phil wrote: Note this is an admin-only ability by default and can oversubscribe the compute node the instance goes on. It is now controlled by a policy (create:forced_host) - so if you want to extend it to other users you can, for example, set up the policy file to control this via a Keystone role Phil -----Original Message----- From: openstack-bounces+philip.day=hp@lists.launchpad.net [mailto:openstack-bounces+philip.day=hp@lists.launchpad.net] On Behalf Of Jay Pipes Sent: 27 December 2012 22:39 To: openstack@lists.launchpad.net Subject: Re: [Openstack] How to create vm instance to specific compute node? No. Use nova boot --availability_zone=nova:hostname where nova: is your availability zone and hostname is the hostname of the compute node you wish to put the instance on. Note this is an admin-only ability by default and can oversubscribe the compute node the instance goes on. Best, -jay On 12/27/2012 02:45 PM, Rick Jones wrote: Does the convention of adding --onhost--computenodename to the instance name being created still work? rick jones
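[Editor's note] The policy hook Phil mentions can be opened up to non-admins via the policy file. A hypothetical policy.json excerpt; the exact rule key and the role name vary by release and deployment, so check your own policy file rather than copying this verbatim:

```json
{
    "compute:create:forced_host": "is_admin:True or role:deployer"
}
```

With a rule like this, any user holding the (illustrative) deployer role could also pass --availability_zone=nova:hostname to target a specific compute node.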
Re: [Openstack] Glance upload: No checksum when using swift and image swift_store_large_object_size
Bug. Please do file one. Best, -jay On 01/02/2013 09:30 AM, Robert van Leeuwen wrote: Hi, I just noticed that when using glance with a swift backend the checksum is not populated when the size is below the swift_store_large_object_size when adding an image. This results in an error message when downloading the image (and breaking nova instance creation). Looking at the glance/store/swift.py code it seems to me the checksum code is only hit when the size is above swift_store_large_object_size. Is this a bug or is there something else going on? I'm running the EPEL Folsom packages on SL6.3 Thanks, Robert
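[Editor's note] For reference, the checksum Glance records is just an MD5 folded over the uploaded chunks, which is cheap to compute regardless of whether the object is segmented. A minimal sketch of what the store should do for both large and small objects (illustrative, not the actual glance/store/swift.py code):

```python
# Stream an upload chunk by chunk, maintaining one running MD5 so the
# checksum is populated whatever the object size ends up being.
import hashlib

def checksum_chunks(chunks):
    """chunks: iterable of bytes. Returns (hex md5 digest, total size)."""
    md5 = hashlib.md5()
    size = 0
    for chunk in chunks:
        md5.update(chunk)   # fold each chunk into the running digest
        size += len(chunk)
    return md5.hexdigest(), size
```

Because the digest is updated incrementally, the same code path works whether the backend stores one object or many segments, which is exactly the property the reported bug loses below swift_store_large_object_size.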
Re: [Openstack] boot multi instances at a time issue
On 01/02/2013 12:52 AM, heut2008 wrote: Hi all, When booting multiple instances at a time, we face a hostname naming problem: right now all instances will use the same hostname provided at boot time. As a developer, I am looking for suggestions and requirements from the user side: how would you want nova to name the hostnames when booting multiple instances? When providing a hostname at boot time, is adding a number suffix OK, or is more configurable naming needed? I believe the API should be modified to make suffix/templatized names possible, as well as the ability to specify names as a list, with each name in the list of names corresponding to a server. Let's say I want to create 20 instances, with instance names instance-00 through instance-19. Of course, there isn't any way to do this right now, since the name parameter of the createServer call sets the name the same for all instances. We could add a new parameter called nameTemplate that would be filled in with some easy rules: * Replace %(launch_id)d with the launch sequence. So, the third instance booted with a nameTemplate of instance-%(launch_id)d would get the name instance-3 * Replace %(image_name)s with the name of the image. For example, if the nameTemplate was %(image_name)s-%(launch_id)d and the name of the image was UbuntuPrecise, then the third launched instance would be named UbuntuPrecise-3 * Any other sensible rules one might want to give... -jay
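[Editor's note] Those substitution rules map directly onto Python's dict-style % formatting, which is presumably why the proposal uses that syntax. An illustrative sketch of the proposed (not existing) nameTemplate expansion:

```python
# Expand a proposed nameTemplate for each instance in a launch batch.
# '%(launch_id)d' is the launch sequence number and '%(image_name)s'
# the image name; neither parameter exists in the real API, this just
# models the proposal.
def expand_name(template, launch_id, image_name=''):
    return template % {'launch_id': launch_id, 'image_name': image_name}

def names_for_launch(template, count, image_name=''):
    """Names for a batch of `count` instances, launch ids 0..count-1."""
    return [expand_name(template, i, image_name) for i in range(count)]
```

Width specifiers come along for free, so instance-%(launch_id)02d yields the zero-padded instance-00 ... instance-19 series from the example.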
Re: [Openstack] How to create vm instance to specific compute node?
On 12/30/2012 04:25 PM, Rick Jones wrote: On 12/27/2012 02:39 PM, Jay Pipes wrote: No. Pity - I'd just gotten used to that mechanism :) The one constant in OpenStack development is change, as well you know! ;) Use nova boot --availability_zone=nova:hostname where nova: is your availability zone and hostname is the hostname of the compute node you wish to put the instance on. Note this is an admin-only ability by default and can oversubscribe the compute node the instance goes on. Will it use the same /var/lib/nova/sch_hosts/id mechanism to allow mere mortals to use it like the onhost stuff did? Vish and others would know more, but AFAIK, specifying --availability_zone=nova:hostname is an admin-only operation. I believe the reasoning is that a cloud is a cloud, and hosts should be of no consequence to a cloud user -- they shouldn't need or want to know what physical machine a guest ends up on. Best, -jay
Re: [Openstack] How to create vm instance to specific compute node?
No. Use nova boot --availability_zone=nova:hostname where nova: is your availability zone and hostname is the hostname of the compute node you wish to put the instance on. Note this is an admin-only ability by default and can oversubscribe the compute node the instance goes on. Best, -jay On 12/27/2012 02:45 PM, Rick Jones wrote: Does the convention of adding --onhost--computenodename to the instance name being created still work? rick jones
Re: [Openstack] DEBUG nova.utils [-] backend
You can ignore this. On 12/12/2012 06:06 AM, Andrew Holway wrote: Hi, 2012-12-12 12:04:48 DEBUG nova.utils [-] backend module 'nova.db.sqlalchemy.migration' from '/usr/lib/python2.6/site-packages/nova/db/sqlalchemy/migration.pyc' from (pid=14756) __get_backend /usr/lib/python2.6/site-packages/nova/utils.py:494 I get this error a lot when using the command line nova tools. Anything to worry about? Thanks, Andrew
Re: [Openstack-qa-team] Right place for blueprints?
On 11/01/2012 03:45 PM, Sean Dague wrote: As we start to file QA blueprints, which of these is the right place to do them in: https://blueprints.launchpad.net/openstack-qa or No, this is the QA documentation. https://blueprints.launchpad.net/tempest or is there some 3rd place we should use? This is it. :) The first one seems more generically useful, but there's a lot of old stuff in it that all seems to be targeted to Essex. I'd like to make sure we capture all the ideas that came up in the various sessions as blueprints so we can, as a team, decide what's most important for people to tackle. Just want to make sure I'm getting those registered in the right places. There will be a follow up email tomorrow with a bunch of possible blueprints from the various sessions I participated in. I encourage others to do the same. Looking forward to it! Cheers, -jay -- Mailing list: https://launchpad.net/~openstack-qa-team Post to : openstack-qa-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack-qa-team More help : https://help.launchpad.net/ListHelp
Re: [Openstack-qa-team] Need to change mailing list server?
On 11/01/2012 04:49 PM, David Kranz wrote: There is now a full tempest run going daily and reporting failures to this list. But that won't work because jenkins and gerrit cannot be launchpad members. According to the ci folks, others have dealt with this by moving their mailing lists to lists.openstack.org. Perhaps we should do the same? We need to do something in any event. I'm good with moving the QA list to lists.openstack.org. Stefano, can you assist here? Thanks! -jay
Re: [Openstack-qa-team] Need to change mailing list server?
On 11/02/2012 11:43 AM, Stefano Maffulli wrote: Not a problem at all. All I need is: - name of the list (I assume it's: openstack...@lists.openstack.org) Yes. - one terse line to describe the list on http://lists.openstack.org All Things QA. - one or more paragraphs to describe the list on its page This list is dedicated to quality engineering and assurance efforts for OpenStack projects. We discuss strategies for test design, test configuration, and the Tempest integration test suite. - email address of the administrator(s) davidkr...@qrclab.com. Just kiddin. You can put me email in there :) jaypi...@gmail.com. Best, -jay Cheers, stef On Fri 02 Nov 2012 04:37:18 PM CET, Jay Pipes wrote: On 11/01/2012 04:49 PM, David Kranz wrote: There is now a full tempest run going daily and reporting failures to this list. But that won't work because jenkins and gerrit cannot be launchpad members. According to the ci folks, others have dealt with this by moving their mailing lists to lists.openstack.org. Perhaps we should do the same? We need to do something in any event. I'm good with moving the QA list to lists.openstack.org. Stefano, can you assist here? Thanks! -jay
Re: [Openstack] Instance provisioning taking more time for all the instances
Hi Nagaraju, apologies for the long delay in replying. Answer inline. On 09/29/2012 05:40 AM, Nagaraju Bingi wrote: Hi, We have deployed OpenStack on VMware and we are able to provision instances, but the image is not getting cached on the compute/ESX server, so for every instance provisioned the image is downloaded from Glance again. Please provide steps to cache VMDK images on compute. I've cc'd Mikal Still, who wrote the original image cache in the libvirt driver. I think work was done in Folsom to make the image cache more generic, but I'm not totally sure. Hoping Mikal has an answer to that. All the best, -jay ___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
Re: [Openstack-qa-team] S3/EC2 test based on python-boto
You should just be able to do: cd $localtempestrepodir git review -d 3064 # Make changes to files git commit -a git review I've cc'd Jim Blair to make sure I haven't forgotten anything. All the best, -jay On 10/23/2012 02:30 AM, Attila Fazekas wrote: Thank you. I wonder what the correct git/gerrit workflow is for submitting a change based on someone else's rejected/expired commit? I would like to mark the original code base correctly. Best Regards, Attila - Original Message - From: Jay Pipes jaypi...@gmail.com To: openstack-qa-team@lists.launchpad.net Cc: zul...@gmail.com Sent: Monday, October 22, 2012 5:53:20 PM Subject: Re: [Openstack-qa-team] S3/EC2 test based on python-boto On 10/22/2012 10:36 AM, Attila Fazekas wrote: Hi everyone, I am considering implementing test cases for using the EC2 and S3 APIs in tempest. I would like to know: is anybody else working on this kind of test? Hi! :) It's old, but Chuck Short (cc'd) once gave this a go and the code review is still up on Gerrit: https://review.openstack.org/#/c/3064/ You could use that as a starting point for the EC2 tests if you wanted... (hint: to download that branch to your local env, do this: git review -d 3064) Do you mind if I add python-boto as a dependency? It is used by nova and swift as well. No problem from me. Just add it in the requirements.txt file in the root dir. Any suggestions about implementation details are welcome. As much as possible, try to follow the example of existing code. Looking forward to seeing this in Tempest. Once it is, we can get rid of the devstack exercises stuff in our gate, as I believe the EC2 exercise is the only one remaining that is not covered by Tempest. All the best, -jay Regards, Attila -- Mailing list: https://launchpad.net/~openstack-qa-team Post to : openstack-qa-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack-qa-team More help : https://help.launchpad.net/ListHelp
Re: [Openstack-qa-team] S3/EC2 test based on python-boto
On 10/22/2012 10:36 AM, Attila Fazekas wrote: Hi everyone, I am considering implementing test cases for using the EC2 and S3 APIs in tempest. I would like to know: is anybody else working on this kind of test? Hi! :) It's old, but Chuck Short (cc'd) once gave this a go and the code review is still up on Gerrit: https://review.openstack.org/#/c/3064/ You could use that as a starting point for the EC2 tests if you wanted... (hint: to download that branch to your local env, do this: git review -d 3064) Do you mind if I add python-boto as a dependency? It is used by nova and swift as well. No problem from me. Just add it in the requirements.txt file in the root dir. Any suggestions about implementation details are welcome. As much as possible, try to follow the example of existing code. Looking forward to seeing this in Tempest. Once it is, we can get rid of the devstack exercises stuff in our gate, as I believe the EC2 exercise is the only one remaining that is not covered by Tempest. All the best, -jay Regards, Attila -- Mailing list: https://launchpad.net/~openstack-qa-team Post to : openstack-qa-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack-qa-team More help : https://help.launchpad.net/ListHelp
Re: [Openstack-qa-team] Moving follow-up Unconference to 1:45 today
Hi Yaniv, answers inline... On 10/22/2012 11:41 AM, Yaniv Kaul wrote: On 10/22/2012 05:33 PM, Jay Pipes wrote: Hi Sean :) Here's a quick recap: We agreed: * nosetests just isn't a good foundation for our work -- especially regarding performance/parallelism Any proposed alternatives? I am looking from time to time at https://code.google.com/p/robotframework/ and wondering if it allows faster test case development I don't like Robot Framework's behaviour of creating test cases in a tabular format: http://robotframework.googlecode.com/hg/doc/userguide/RobotFrameworkUserGuide.html?r=2.7#id377 I vastly prefer code-based test cases. The library we decided to take a look at was testtools, written in part by Robert Collins, who is a member of the CI team working on OpenStack stuff at HP: http://testtools.readthedocs.org/en/latest/index.html In the past we've also looked at PyVows: http://heynemann.github.com/pyvows/ as well as DTest: https://github.com/klmitch/dtest Basically, the issues we have with nosetests are: * It is very intrusive * It doesn't handle module and class-level fixtures properly (see https://github.com/nose-devs/nose/issues/551) * Multiprocessing plugin is total fail (https://github.com/nose-devs/nose/issues/550) Anything that makes test code cleaner, with the ability to handle fixtures cleanly, annotate dependencies between tests, and parallelize execution effectively is fine in my book. :) * We need to produce good, updated documentation on what different categories of tests are -- smoke, fuzz, positive/negative, etc -- and put this up on the wiki Perhaps a naming convention for the test case names, such as component_category_test_case? For example: compute_sanity_launchVM() compute_negative_launchVM() No, we use decorators to annotate different types of tests (grep for @attr\(type= in Tempest).
What we don't have is good basic documentation on what we agree is an acceptance/smoke test and what isn't ;) * We need to produce a template example test case for Tempest that provides excellent code examples of how to create different tests in a best practice way -- I believe David Kranz is going to work on this first? * We need to get traceability matrices done for the public APIs -- this involves making a wiki page (or even something generated) that lists the API calls and variations of those calls and whether or not they are tested in Tempest Wouldn't a code coverage report be better? If you call FunctionX(param1, param2) - and you've called it with 2 different param1 values and always the default in param2 - what does it mean, from a coverage perspective? No, unfortunately code coverage doesn't work for functional integration tests in the same way it does for unit tests, for a number of reasons: 1) Tempest executes a series of HTTP calls against public REST endpoints in OpenStack. It has no way of determining what code was run **on the server**. It only has the ability to know what Tempest itself executed, not what percentage of the total API those calls represented 2) Specs don't always exist for the APIs -- yes, I know, this isn't good. Especially problematic are some of the Compute API extensions that aren't documented well, or at all. Best, -jay Thanks, Y. * I will start the traceability matrix stuff and publish for people to go and update * Antoni from HP is going to investigate using things in testtools and testrepository for handling module and package-level fixtures and removing some of the nose-based cruft * I will be working on the YAML file that describes a test environment so that different teams that use different deployment frameworks can use a commonly-agreed format to describe their envs to the CI system * A new member of Gigi's team (really sorry, I've forgotten your name!
:( ) is going to look at the fuzz testing discussion from earlier and see about prototyping something together that would be used for negative and security/fuzz testing -- this would enable us to remove the static negative tests from Tempest's main test directories. For the record (and for the team member whose name I have forgotten, here is the relevant link: https://lists.launchpad.net/openstack-qa-team/msg00155.html and https://lists.launchpad.net/openstack-qa-team/msg00156.html) Best, -jay On 10/19/2012 02:47 PM, Sean Gallagher wrote: Daryl, I wasn't able to make the follow-up meeting. :/ Can you/ someone send out a recap? David: you had a list of items. Can you post or share that somewhere? We discussed Blueprints for managing some of the planning work. What about higher level planning docs? Useful? Do we have any? Should we? Re Google Hangout next week, I'm interested. -sean -- Mailing list: https://launchpad.net/~openstack-qa-team Post to : openstack-qa-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack-qa-team More help : https://help.launchpad.net/ListHelp
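The @attr(type=...) annotation mentioned in this thread can be sketched roughly as follows. This is a minimal stand-in using plain unittest rather than Tempest's actual helper, so the decorator body and the class/method names here are illustrative only:

```python
import unittest

def attr(**kwargs):
    """Rough stand-in for Tempest's test-annotation decorator: it stores
    the given key/value pairs on the test function so a runner can later
    select tests by attribute (e.g. only type='smoke' tests)."""
    def decorator(func):
        for key, value in kwargs.items():
            setattr(func, key, value)
        return func
    return decorator

class ServerSmokeTest(unittest.TestCase):
    @attr(type='smoke')
    def test_list_servers(self):
        # A real smoke test would hit a public REST endpoint here.
        self.assertTrue(True)

# The annotation is now visible to any attribute-based test selector:
print(ServerSmokeTest.test_list_servers.type)  # smoke
```

The point of annotating rather than relying on naming conventions is that one test can carry several orthogonal attributes (category, speed, required services) without encoding them all into the method name.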
Re: [Openstack-qa-team] Moving follow-up Unconference to 1:45 today
On 10/22/2012 12:41 PM, Yaniv Kaul wrote: Ok - although it's not very well documented - http://testtools.readthedocs.org/en/latest/py-modindex.html http://mumak.net/testtools/apidocs/ -- Mailing list: https://launchpad.net/~openstack-qa-team Post to : openstack-qa-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack-qa-team More help : https://help.launchpad.net/ListHelp
Re: [Openstack-qa-team] What is going on with test_server_*_ops?
Approved and merged. On 09/28/2012 11:51 AM, David Kranz wrote: This was the problem (trivial) https://review.openstack.org/#/c/13840/. Someone please review. I am not sure when the behavior changed. -David On 9/25/2012 10:59 AM, Dolph Mathews wrote: That generally pops up when you're bypassing authentication using --endpoint --token (no authentication == no service catalog). Is it using old command line options to specify auth attributes, which were just removed in favor of --os-username, --os-password, etc? https://github.com/openstack/python-keystoneclient/commit/641f6123624b6ac89182c303dfcb0459b28055a2 -Dolph On Tue, Sep 25, 2012 at 9:35 AM, Jay Pipes jaypi...@gmail.com mailto:jaypi...@gmail.com wrote: On 09/25/2012 09:38 AM, David Kranz wrote: I heard from some of my team members that test_server_basic_ops and test_server_advanced_ops were failing and I can reproduce it with current devstack/tempest. Looking at the code it seems that the keystone Client object does not have a service_catalog object like the error says. So why is this not failing the tempest build? Looking at the transcript of a recent successful build I don't see any evidence that this test is running but I don't know why that would be.
-David == ERROR: test suite for class 'tempest.tests.compute.test_server_basic_ops.TestServerBasicOps' -- Traceback (most recent call last): File /usr/lib/python2.7/dist-packages/nose/suite.py, line 208, in run self.setUp() File /usr/lib/python2.7/dist-packages/nose/suite.py, line 291, in setUp self.setupContext(ancestor) File /usr/lib/python2.7/dist-packages/nose/suite.py, line 314, in setupContext try_run(context, names) File /usr/lib/python2.7/dist-packages/nose/util.py, line 478, in try_run return func() File /opt/stack/tempest/tempest/test.py, line 39, in setUpClass cls.manager = cls.manager_class() File /opt/stack/tempest/tempest/manager.py, line 96, in __init__ self.image_client = self._get_image_client() File /opt/stack/tempest/tempest/manager.py, line 138, in _get_image_client endpoint = keystone.service_catalog.url_for(service_type='image', AttributeError: 'Client' object has no attribute 'service_catalog' I wouldn't be surprised if this is due to a change in python-keystoneclient. Dolph, was anything changed recently that might have produced this failure? Thanks, -jay -- Mailing list: https://launchpad.net/~openstack-qa-team Post to : openstack-qa-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack-qa-team More help : https://help.launchpad.net/ListHelp
Re: [Openstack-qa-team] What is going on with test_server_*_ops?
That's because I forgot to decorate the base SmokeTest with the @attr(type=smoke) decorator :( Feel like doing that? -jay On 09/28/2012 01:05 PM, David Kranz wrote: Thanks, Jay. But this now confirms that test_server_basic_ops is not running in the gating job. But it does run when I do 'nosetests -v tempest' in my local environment. How could this be? -David Nothing in the gate log, but this in my local: test_001_create_keypair (tempest.tests.compute.test_server_basic_ops.TestServerBasicOps) ... ok test_002_create_security_group (tempest.tests.compute.test_server_basic_ops.TestServerBasicOps) ... ok test_003_boot_instance (tempest.tests.compute.test_server_basic_ops.TestServerBasicOps) ... ok test_004_wait_on_active (tempest.tests.compute.test_server_basic_ops.TestServerBasicOps) ... ok test_005_pause_server (tempest.tests.compute.test_server_basic_ops.TestServerBasicOps) ... ok test_006_unpause_server (tempest.tests.compute.test_server_basic_ops.TestServerBasicOps) ... ok test_007_suspend_server (tempest.tests.compute.test_server_basic_ops.TestServerBasicOps) ... ok test_008_resume_server (tempest.tests.compute.test_server_basic_ops.TestServerBasicOps) ... ok test_099_terminate_instance (tempest.tests.compute.test_server_basic_ops.TestServerBasicOps) ... ok On 9/28/2012 12:12 PM, Jay Pipes wrote: Approved and merged. On 09/28/2012 11:51 AM, David Kranz wrote: This was the problem (trivial) https://review.openstack.org/#/c/13840/. Some one please review. I am not sure when the behavior changed. -David On 9/25/2012 10:59 AM, Dolph Mathews wrote: That generally pops up when you're bypassing authentication using --endpoint --token (no authentication == no service catalog). Is it using old command line options to specify auth attributes, which were just removed in favor of --os-username, --os-password, etc? 
https://github.com/openstack/python-keystoneclient/commit/641f6123624b6ac89182c303dfcb0459b28055a2 -Dolph On Tue, Sep 25, 2012 at 9:35 AM, Jay Pipes jaypi...@gmail.com mailto:jaypi...@gmail.com wrote: On 09/25/2012 09:38 AM, David Kranz wrote: I heard from some of my team members that test_server_basic_ops and test_server_advanced_ops were failing and I can reproduce it with current devstack/tempest. Looking at the code it seems that the keystone Client object does not have a service_catalog object like the error says. So why is this not failing the tempest build? Looking at the transcript of a recent successful build I don't see any evidence that this test is running but I don't know why that would be. -David == ERROR: test suite for class 'tempest.tests.compute.test_server_basic_ops.TestServerBasicOps' -- Traceback (most recent call last): File /usr/lib/python2.7/dist-packages/nose/suite.py, line 208, in run self.setUp() File /usr/lib/python2.7/dist-packages/nose/suite.py, line 291, in setUp self.setupContext(ancestor) File /usr/lib/python2.7/dist-packages/nose/suite.py, line 314, in setupContext try_run(context, names) File /usr/lib/python2.7/dist-packages/nose/util.py, line 478, in try_run return func() File /opt/stack/tempest/tempest/test.py, line 39, in setUpClass cls.manager = cls.manager_class() File /opt/stack/tempest/tempest/manager.py, line 96, in __init__ self.image_client = self._get_image_client() File /opt/stack/tempest/tempest/manager.py, line 138, in _get_image_client endpoint = keystone.service_catalog.url_for(service_type='image', AttributeError: 'Client' object has no attribute 'service_catalog' I wouldn't be surprised if this is due to a change in python-keystoneclient. Dolph, was anything changed recently that might have produced this failure?
Thanks, -jay -- Mailing list: https://launchpad.net/~openstack-qa-team Post to : openstack-qa-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack-qa-team More help : https://help.launchpad.net/ListHelp
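The failure mode in this thread -- a gate that selects tests by attribute silently skipping an undecorated class -- can be illustrated with a small sketch. The attr helper and the selection function below are simplified stand-ins for what nose's attrib-style selection does, not Tempest's real code:

```python
import unittest

def attr(**kwargs):
    # Simplified stand-in for Tempest's annotation decorator.
    def decorator(func):
        for k, v in kwargs.items():
            setattr(func, k, v)
        return func
    return decorator

class DecoratedSmokeTest(unittest.TestCase):
    @attr(type='smoke')
    def test_ping(self):
        pass

class ForgottenSmokeTest(unittest.TestCase):
    # Oops: no @attr(type='smoke') here, so an attribute-filtered gate
    # run (e.g. selecting only type=smoke) never executes this test,
    # even though a plain 'nosetests tempest' run picks it up fine.
    def test_boot(self):
        pass

def select_smoke(classes):
    """Collect test names whose method carries type='smoke'."""
    selected = []
    for cls in classes:
        for name in dir(cls):
            if name.startswith('test_'):
                if getattr(getattr(cls, name), 'type', None) == 'smoke':
                    selected.append(f"{cls.__name__}.{name}")
    return selected

print(select_smoke([DecoratedSmokeTest, ForgottenSmokeTest]))
# ['DecoratedSmokeTest.test_ping'] -- the undecorated test vanishes
```

This is exactly why the test ran locally under a plain nosetests invocation but not in the attribute-filtered gating job.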
Re: [Openstack-qa-team] Tempest Gating
On 09/07/2012 08:13 PM, Dan Smith wrote: DW We also have a different problem with running tests in parallel DW now. None of the newly designed basic/advanced ops test can be run DW in parallel given their dependency between tests. The only way I can DW think of to proceed would be to rework the nose plugin to run test DW classes in parallel, but tests within the classes in serial. I'll DW poke around this weekend to see what's possible. Daryl Running tempest tests in parallel is definitely a good goal to have. However, I believe that all the tests run in a gate are done sequentially right now, correct? Seems like if we kicked off tempest at the same time as the regular gate stuff, much of their execution would overlap. I'm not sure if the current CI stuff does any parallel jobs like that, but I think it shouldn't be too hard to do and would help with the latency-to-merge issue. Part of the problem is that the CI system runs Tempest on the same CI node as the OpenStack environment that devstack installs. If we could run multiple Tempest workers that each fired a subset of the full tempest run against a CI node, that would solve the parallel problem. But unfortunately, that's just not how Jenkins works :( -jay -- Mailing list: https://launchpad.net/~openstack-qa-team Post to : openstack-qa-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack-qa-team More help : https://help.launchpad.net/ListHelp
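Daryl's idea quoted above -- run test classes in parallel while keeping the tests within each class serial -- can be sketched with stdlib tools. This is an illustrative pattern only, not the actual nose plugin rework being discussed:

```python
import unittest
from concurrent.futures import ThreadPoolExecutor

class AddTests(unittest.TestCase):
    def test_add(self):
        self.assertEqual(1 + 1, 2)

class MulTests(unittest.TestCase):
    def test_mul(self):
        self.assertEqual(2 * 3, 6)

def run_class(cls):
    """Run every test in one class serially, inside a single worker,
    preserving any intra-class test ordering/dependencies."""
    suite = unittest.TestLoader().loadTestsFromTestCase(cls)
    result = unittest.TestResult()
    suite.run(result)
    return cls.__name__, result.testsRun, len(result.failures)

# Classes fan out across workers; methods inside each class stay ordered.
with ThreadPoolExecutor(max_workers=2) as pool:
    for name, ran, failed in pool.map(run_class, [AddTests, MulTests]):
        print(name, ran, failed)
```

The unit of parallelism is the class, so tests that depend on state set up by an earlier test in the same class still run in order, while independent classes overlap.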
Re: [Openstack-qa-team] Policy for commits
On 09/06/2012 02:21 PM, David Kranz wrote: Do we have a policy about whether bug tickets are needed for every change? I happened to see a silly coding error and would prefer to avoid the overhead of a bug ticket for such things. Silly coding errors/typos/style cleanups do not need a bug. But pretty much everything else should. Best, -jay -- Mailing list: https://launchpad.net/~openstack-qa-team Post to : openstack-qa-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack-qa-team More help : https://help.launchpad.net/ListHelp
Re: [Openstack] A plea from an OpenStack user
Ryan, thank you for your excellent and detailed comments about problems you encountered during the upgrade process. This is precisely the kind of constructive feedback that is needed and desired. Someone mentioned automated testing of upgrade paths. This is exactly what needs to happen. Hopefully the Tempest folks can work with the CI team in the G timeframe to incorporate upgrade path testing for the OpenStack components. It likely won't solve ALL the issues -- such as the poor LDAP port in Keystone Light -- but it will at least serve to highlight where the major issues are BEFORE folks run into them. It will also help identify those tricky things like the Glance issue below: Glance itself upgraded its data effectively, but failed to produce scripts to modify the Nova image database IDs at the same time. Thanks again, -jay On 08/28/2012 05:26 PM, Ryan Lane wrote: Yesterday I spent the day finally upgrading my nova infrastructure from diablo to essex. I've upgraded from bexar to cactus, and cactus to diablo, and now diablo to essex. Every single upgrade is becoming more and more difficult. It's not getting easier, at all. Here are some of the issues I ran into: 1. Glance changed from using image numbers to uuids for images. Nova's references to these weren't updated. There was no automated way to do so. I had to map the old values to the new values from glance's database then update them in nova. 2. Instance hostnames are changed every single release. In bexar and cactus it was the ec2 style id. In diablo it was changed and hardcoded to instance-ec2-style-id. In essex it is hardcoded to the instance name; the instance's ID is configurable (with a default of instance-ec2-style-id), but it only affects the name used in virsh/the filesystem.
I put a hack into diablo (thanks to Vish for that hack) to fix the naming convention as to not break our production deployment, but it only affected the hostnames in the database, instances in virsh and on the filesystem were still named instance-ec2-style-id, so I had to fix all libvirt definitions and rename a ton of files to fix this during this upgrade, since our naming convention is the ec2-style format. The hostname change still affected our deployment, though. It's hardcoded. I decided to simply switch hostnames to the instance name in production, since our hostnames are required to be unique globally; however, that changes how our puppet infrastructure works too, since the certname is by default based on fqdn (I changed this to use the ec2-style id). Small changes like this have giant rippling effects in infrastructures. 3. There used to be global groups in nova. In keystone there are no global groups. This makes performing actions on sets of instances across tenants incredibly difficult; for instance, I did an in-place ubuntu upgrade from lucid to precise on a compute node, and needed to reboot all instances on that host. There's no way to do that without database queries fed into a custom script. Also, I have to have a management user added to every single tenant and every single tenant-role. 4. Keystone's LDAP implementation in stable was broken. It returned no roles, many values were hardcoded, etc. The LDAP implementation in nova worked, and it looks like its code was simply ignored when auth was moved into keystone. My plea is for the developers to think about how their changes are going to affect production deployments when upgrade time comes. It's fine that glance changed its id structure, but the upgrade should have handled that. If a user needs to go into the database in their deployment to fix your change, it's broken. 
The constant hardcoded hostname changes are totally unacceptable; if you change something like this it *must* be configurable, and there should be a warning that the default is changing. The removal of global groups was a major usability killer for users. The removal of the global groups wasn't necessarily the problem, though. The problem is that there were no alternative management methods added. There's currently no reasonable way to manage the infrastructure. I understand that bugs will crop up when a stable branch is released, but the LDAP implementation in keystone was missing basic functionality. Keystone simply doesn't work without roles. I believe this was likely due to the fact that the LDAP backend has basically no tests and that Keystone light was rushed in for this release. It's imperative that new required services at least handle the functionality they are replacing, when released. That said, excluding the above issues, my upgrade went fairly smoothly and this release is *way* more stable and performs *way* better, so kudos to the community for that. Keep up the good work! - Ryan
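Issue 1 in Ryan's list -- remapping Nova's stored image references after Glance moved from integer IDs to UUIDs -- boils down to building an old-ID-to-UUID mapping and rewriting the references. A rough sketch of the kind of one-off fix-up script he describes, using an in-memory SQLite toy schema (the table and column names here are illustrative, not the real diablo/essex schema):

```python
import sqlite3

# Toy stand-ins for the two databases; in a real upgrade these would be
# the Glance and Nova MySQL databases.
db = sqlite3.connect(':memory:')
db.executescript("""
    CREATE TABLE glance_images (old_id INTEGER, new_uuid TEXT);
    INSERT INTO glance_images VALUES (7, 'a1b2c3d4-0000-0000-0000-000000000000');
    CREATE TABLE nova_instances (id INTEGER, image_ref TEXT);
    INSERT INTO nova_instances VALUES (1, '7');
""")

# Build the old-id -> uuid mapping from Glance's side...
mapping = dict(db.execute("SELECT old_id, new_uuid FROM glance_images"))

# ...then rewrite Nova's references in place.
for old_id, uuid in mapping.items():
    db.execute("UPDATE nova_instances SET image_ref = ? WHERE image_ref = ?",
               (uuid, str(old_id)))
db.commit()

print(db.execute("SELECT image_ref FROM nova_instances").fetchone()[0])
```

Jay's point stands: a migration script shaped like this should have shipped with the release, so no operator has to reverse-engineer it against a production database.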
Re: [Openstack] multiple interfaces for floating IPs
No, not that I'm aware of -- at least not on the same compute node... You can only specify public_interface=XXX for a single interface (or bridge) used for all floating IPs for the VMs on a compute node. Best, -jay On 08/20/2012 12:13 PM, Juris wrote: Greetings everyone, Just a quick question. Is it possible to assign floating IPs to multiple nova node interfaces? For instance if I have a server with 4 NICs and I'd like to use NIC1 for private network, NIC2 for data and management, NIC3 for one of my public IP subnets and NIC4 for another public IP subnet? Best wishes, Juris ___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
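For reference, the single-interface limitation Jay describes comes down to one nova-network flag in nova.conf; the interface name below is just an example value:

```ini
# nova.conf (nova-network): all floating IPs on this compute node are
# bound to this one device -- it cannot vary per public subnet.
public_interface=eth2
```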
Re: [Openstack-qa-team] Tempest Gating
On 08/21/2012 05:45 PM, Dan Smith wrote: In other suites, I've seen an XFAIL result used to mark tests that we know are failing right now so that they're not SKIPped like tests that are missing some component, but rather just not fatal to the task at hand. Maybe something like that would be useful in tempest? If I found a bug in Nova right now and wanted to get a test into tempest ASAP to poke it, submitting as XFAIL would (a) not break Jenkins because the test failed (as expected) and (b) raise a flag when the test started to pass to make sure that it gets un-marked as XFAIL. This would be ideal! Unfortunately, I don't know of a way to do this with nosetests/unittest(2) in Python. Very open to suggestions, though :) DW They may not be caused by the patch at hand, but servers and volumes DW going into error status definitely signal issues, whether they be in DW code or environment. I don't have access to the Tempest CI DW environment so I don't have much insight into those issues DW specifically, though there might be some additional error checking DW that we can do to get more information on what is going wrong. Yeah, I've been trying to reproduce the issues locally, as I'm happy to fix them up if I can figure out what the problem is. However, I feel like I'm flying blind a bit, without a view into the CI machine itself :) I believe Jim Blair has addressed this in followup emails... DW I'm doing what I can Dan to get your patches reviewed. The trick DW being that since there is a dependency chain between most of the DW commits, it adds a level of complexity. Jay, who's done most of the DW CI setup thus far, is out of country, so I'm trying to find other DW folks I can reach out to help stabilize the environment. Yeah, where is that slacker? :) Right now? Venice, Italy. In the last few days, Milan, Istanbul, Sofia. Next couple days, Florence then Rome, then home. Back on the 29th. So sorry to slack off so much! 
;) Best, -jay -- Mailing list: https://launchpad.net/~openstack-qa-team Post to : openstack-qa-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack-qa-team More help : https://help.launchpad.net/ListHelp
Re: [Openstack] [Nova] How common is user_data for instances?
On 08/13/2012 07:38 PM, Michael Still wrote: On 14/08/12 08:54, Jay Pipes wrote: I was *going* to create a random-data table with the same average row size as the instances table in Nova to see how long the migration would take, and then I realized something... The user_data column is already of column type MEDIUMTEXT, not TEXT: jpipes@uberbox:~$ mysql -uroot nova -e DESC instances | grep user_data user_datamediumtext YES NULL So the column can already store data up to 2^24 bytes long, or 16MB of data. So this might be a moot issue already? Do we expect user data to be more than 16MB? The bug reports truncation at 64kb. The last schema change I can see for that column is Essex version 82, which has: $ grep user_data *.py 082_essex.py:Column('user_data', Text), http://docs.sqlalchemy.org/en/latest/dialects/mysql.html says that Text is MySQL TEXT type, for text up to 2^16 characters. Am I misunderstanding something here? No, I read the exact same thing in the SQLAlchemy docs and was surprised to see the column type was MEDIUMTEXT. But I assure you it is :) Just run devstack and verify! -jay ___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
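The sizes being debated in this thread work out as follows: MySQL TEXT holds 2^16-1 bytes and MEDIUMTEXT 2^24-1, and because base64 emits 4 output bytes for every 3 input bytes, a 64 KB column holds a bit under 50 KB of raw user_data -- matching the numbers quoted above. A quick back-of-envelope check:

```python
TEXT_MAX = 2**16 - 1        # MySQL TEXT column: 65,535 bytes
MEDIUMTEXT_MAX = 2**24 - 1  # MySQL MEDIUMTEXT: ~16 MB

def max_raw_payload(column_limit):
    """Largest raw payload whose base64 encoding still fits the column
    (base64 expands every 3 input bytes into 4 output bytes)."""
    return (column_limit // 4) * 3

print(max_raw_payload(TEXT_MAX))        # 49149 -> "a bit under 50k"
print(MEDIUMTEXT_MAX // (1024 * 1024))  # 15 -> roughly 16 MB
```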
Re: [Openstack] [Nova] How common is user_data for instances?
On 08/13/2012 09:12 AM, Dan Prince wrote: - Original Message - From: Michael Still michael.st...@canonical.com To: openstack@lists.launchpad.net, openstack-operat...@lists.openstack.org Sent: Saturday, August 11, 2012 5:12:22 AM Subject: [Openstack] [Nova] How common is user_data for instances? Greetings. I'm seeking information about how common user_data is for instances in nova. Specifically for large deployments (rackspace and HP, here's looking at you). What sort of costs would be associated with changing the data type of the user_data column in the nova database? Bug 1035055 [1] requests that we allow user_data of more than 65,535 bytes per instance. Note that this size is a base64 encoded version of the data, so that's only a bit under 50k of data. This is because the data is a sqlalchemy Text column. We could convert to a LongText column, which allows 2^32 worth of data, but I want to understand the cost to operators of that change some more. Is user_data really common? Do you think people would start uploading much bigger user_data? Do you care? Nova has configurable quotas on most things so if we do increase the size of the DB column we should probably guard it in a configurable manner with quotas as well. My preference would actually be that we go the other way though and not have to store user_data in the database at all. That unfortunately may not be possible since some images obtain user_data via the metadata service which needs a way to look it up. Other methods of injecting metadata via disk injection, agents and/or config drive however might not need it to be stored in the database, right? +1 When we can, let's not hobble ourselves to the EC2 API way of doing things when we can have a more efficient and innovative solution. As a simpler solution: Would setting a reasonable limit (hopefully smaller) and returning a HTTP 400 bad request if incoming requests exceed that limit be good enough to resolve this ticket?
That way we don't have to increase the DB column at all and end users would be notified up front that user_data is too large (not silently truncated). The way I see it user_data is really for bootstrapping instances... we probably don't need it to be large enough to write an entire application, etc. Seems reasonable to me. -jay Mikal 1: https://bugs.launchpad.net/nova/+bug/1035055 ___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
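Dan's simpler suggestion -- reject oversized user_data up front with an HTTP 400 instead of growing the column -- would look something like this at the API layer. This is a sketch only: the limit constant, exception class, and function name are made up for illustration, not Nova's actual code:

```python
import base64

# Hypothetical limit: keep the base64-encoded form within the 64 KB
# TEXT column, so nothing is ever silently truncated on the way in.
MAX_USER_DATA_B64 = 65535

class BadRequest(Exception):
    """Stand-in for an HTTP 400 Bad Request response."""

def validate_user_data(user_data_b64):
    # Reject anything whose encoded form won't fit the column.
    if len(user_data_b64) > MAX_USER_DATA_B64:
        raise BadRequest("user_data exceeds %d bytes when base64-encoded"
                         % MAX_USER_DATA_B64)
    # Reject garbage early too, instead of storing it.
    try:
        base64.b64decode(user_data_b64, validate=True)
    except Exception:
        raise BadRequest("user_data is not valid base64")

validate_user_data(base64.b64encode(b"#!/bin/sh\necho hello\n"))  # ok
```

Failing fast like this gives the user a clear error at boot-request time, which is strictly better than the silent truncation the bug report describes.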
Re: [Openstack] [Nova] How common is user_data for instances?
On 08/13/2012 09:53 AM, Stephen Gran wrote: Hi, I think user_data is probably reasonably common - most people who use, eg, cloud-init will use it (we do). As the 64k limit is a MySQL limitation, and not a nova limitation, why not just say, if you want more storage, use postgres (or similar)? I have no issue with making the size guarded in the application, with a configurable limit, but the particular problem that started this off is an implementation issue rather than a code issue. Or just set the column to the LONGTEXT type and both MySQL and PostgreSQL will be just as happy. Storing the user_data in some place like the database is fairly important for making things like launch configs for autoscale groups work. I'd like to not make that harder to implement. Why is storing user_data in the database fairly important? You say above you don't want an implementation issue to be misconceived as a code issue -- and then go on to say that an implementation issue (storing user_data in a database) isn't a code issue. I don't think you can have it both ways. :) Now, I totally buy the argument that there is a large existing cloud-init userbase out there that relies on the EC2 Metadata API service living on the hard-coded 169.254.169.254 address, and we shouldn't do anything to mess up that experience. But I totally think that config-drive or disk-injection is a better way to handle this stuff -- and certainly doesn't force an implementation that has proven to be a major performance and scaling bottleneck (the EC2 Metadata service) Best, -jay Cheers, On Mon, 2012-08-13 at 09:12 -0400, Dan Prince wrote: - Original Message - From: Michael Still michael.st...@canonical.com To: openstack@lists.launchpad.net, openstack-operat...@lists.openstack.org Sent: Saturday, August 11, 2012 5:12:22 AM Subject: [Openstack] [Nova] How common is user_data for instances? Greetings. I'm seeking information about how common user_data is for instances in nova. 
Specifically for large deployments (Rackspace and HP, here's looking at you). What sort of costs would be associated with changing the data type of the user_data column in the nova database? Bug 1035055 [1] requests that we allow user_data of more than 65,535 bytes per instance. Note that this size is a base64 encoded version of the data, so that's only a bit under 50k of data. This is because the data is a sqlalchemy Text column. We could convert to a LongText column, which allows 2^32 bytes of data, but I want to understand the cost to operators of that change some more. Is user_data really common? Do you think people would start uploading much bigger user_data? Do you care? Nova has configurable quotas on most things, so if we do increase the size of the DB column we should probably guard it in a configurable manner with quotas as well. My preference would actually be that we go the other way, though, and not have to store user_data in the database at all. That unfortunately may not be possible, since some images obtain user_data via the metadata service, which needs a way to look it up. Other methods of injecting metadata via disk injection, agents and/or config drive, however, might not need it to be stored in the database, right? As a simpler solution: would setting a reasonable limit (hopefully smaller) and returning an HTTP 400 bad request if incoming requests exceed that limit be good enough to resolve this ticket? That way we don't have to increase the DB column at all, and end users would be notified up front that user_data is too large (not silently truncated). The way I see it, user_data is really for bootstrapping instances... we probably don't need it to be large enough to write an entire application, etc.
Mikal 1: https://bugs.launchpad.net/nova/+bug/1035055 ___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
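As the thread above notes, the 65,535-byte TEXT limit applies to the base64-encoded payload, so the usable raw size is smaller ("a bit under 50k"). A quick sketch of the arithmetic, standard library only (the helper name is illustrative):

```python
import base64
import math

TEXT_MAX = 2**16 - 1  # MySQL TEXT column limit: 65,535 bytes

def encoded_len(raw_len):
    """Length of the base64 encoding of raw_len bytes (4 chars per 3 raw bytes)."""
    return math.ceil(raw_len / 3) * 4

# Largest raw payload whose base64 form still fits in a TEXT column --
# the "bit under 50k" Michael mentions:
max_raw = (TEXT_MAX // 4) * 3

assert encoded_len(max_raw) <= TEXT_MAX
assert encoded_len(max_raw + 3) > TEXT_MAX
assert len(base64.b64encode(b"x" * max_raw)) == encoded_len(max_raw)
print(max_raw)  # 49149
```

So the hard ceiling on raw user_data under the TEXT column is 49,149 bytes, not 64 KB.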
Re: [Openstack] Does glance-scrubber.conf require sql_connection?
On 08/12/2012 10:12 PM, Lorin Hochstein wrote: Doc question: Does glance-scrubber require sql_connection? The Install and Deploy Guide specifies the sql_connection parameter http://docs.openstack.org/essex/openstack-compute/install/apt/content/glance-scrubber-conf-file.html, but it wasn't clear to me that the scrubber actually makes any queries against the database. It used to make direct queries against the registry database, but now it makes queries via the registry's REST API. So this option can safely be removed now. Jason, do you concur? Best, -jay
Re: [Openstack] Does glance-scrubber.conf require sql_connection?
On 08/13/2012 01:45 PM, Lorin Hochstein wrote: On Aug 13, 2012, at 11:33 AM, Jay Pipes jaypi...@gmail.com wrote: On 08/12/2012 10:12 PM, Lorin Hochstein wrote: Doc question: Does glance-scrubber require sql_connection? The Install and Deploy Guide specifies the sql_connection parameter http://docs.openstack.org/essex/openstack-compute/install/apt/content/glance-scrubber-conf-file.html, but it wasn't clear to me that the scrubber actually makes any queries against the database. It used to make direct queries against the registry database, but now it makes queries via the registry's REST API. So this option can safely be removed now. Does "now" mean "as of Essex" or "as of Folsom"? Sorry, good point, Lorin :) This behaviour (of not requiring the registry database connection) was implemented in Essex: https://bugs.launchpad.net/glance/+bug/836381 Best, -jay
Re: [Openstack] [Nova] How common is user_data for instances?
On 08/13/2012 06:02 PM, Michael Still wrote: On 14/08/12 01:24, Jay Pipes wrote: Or just set the column to the LONGTEXT type and both MySQL and PostgreSQL will be just as happy. This is what I was originally aiming at -- will large deployers be angry if I change this column to longtext? Will the migration be a significant problem for them? From the MySQL standpoint, the migration impact is negligible. It's essentially changing the row pointer size from 2 bytes to 4 bytes and rewriting data pages. For InnoDB tables, it's unlikely many rows would even be moved, as InnoDB stores a good chunk of these types of rows in its main data pages -- I think up to 4KB if I remember correctly -- so unless the user data exceeded that size, I don't think the rows would even need to move data pages... I would guess that an ALTER TABLE that changes the column from a TEXT to a LONGTEXT would likely take less than a minute for even a pretty big (millions of rows in the instances table) database. I was *going* to create a random-data table with the same average row size as the instances table in Nova to see how long the migration would take, and then I realized something... The user_data column is already of column type MEDIUMTEXT, not TEXT: jpipes@uberbox:~$ mysql -uroot nova -e "DESC instances" | grep user_data user_data mediumtext YES NULL So the column can already store data up to 2^24 bytes long, or 16MB of data. So this might be a moot issue already? Do we expect user data to be more than 16MB? -jay
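For reference, the MySQL TEXT-family capacities being compared in the thread above, written out as a sketch (capacity figures are from MySQL's documented storage limits; the "headroom" computation is just illustrative arithmetic):

```python
# MySQL TEXT-family capacities in bytes (per MySQL's storage requirements docs)
CAPACITY = {
    "TINYTEXT":   2**8  - 1,   #            255
    "TEXT":       2**16 - 1,   #         65,535
    "MEDIUMTEXT": 2**24 - 1,   #     16,777,215 -- user_data's actual type
    "LONGTEXT":   2**32 - 1,   #  4,294,967,295
}

# MEDIUMTEXT already gives 256x the TEXT ceiling the bug report hit:
headroom = CAPACITY["MEDIUMTEXT"] // CAPACITY["TEXT"]
print(headroom)  # 256
```

Which supports Jay's conclusion: unless a user_data payload approaches 16 MB, the existing column is already a non-issue.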
Re: [Openstack] [openstack-dev] [nova] Call for Help -- OpenStack API XML Support
On 08/09/2012 11:05 PM, George Reese wrote: On Aug 9, 2012, at 8:14 PM, Doug Davis d...@us.ibm.com mailto:d...@us.ibm.com wrote: Situations like this are always interesting to watch. :-) On the one hand its open-source, so if you care about something then put up the resources to make it happen. This attitude always bothers me. This isn't some Open Source labor of love. It's a commercial collaboration in which many of the contributors have a significant economic interest. To be more blunt: if I'm writing code, it's for enStratus. Patches always welcome, George. If you can't see that code you may write for enStratus might be globally useful, then you're missing the point of this open development community. And although there are many in this OpenStack community that work for a commercial entity and contribute code as such, there are many who don't -- and dismissing their contributions as some Open Source labor of love is degrading and shows the type of opinion you have towards anything that doesn't fit nicely in your everything-is-a-commercial-agenda worldview. If you care about something, then help to fix it. -jay
Re: [Openstack] jcloud has a connection pool?
On 08/10/2012 04:14 PM, chaohua wang wrote: Hi Folks, I am working on jclouds and OpenStack. We have an application (using jclouds) that connects to the HP cloud service. For each request to the HP cloud service, we create a restContext (RestContext<NovaApi, NovaAsyncApi> restContext = computeContext.unwrap();), then get a NovaApi (NovaApi novaApi = restContext.getApi();). Once the task is finished, we close the restContext, so this is not efficient, since each request creates a restContext and closes it once the job is done. Is there any way to reuse this restContext and NovaApi? Do I need to build a connection pool to reuse it? I am not sure why the engineers originally did it this way; it may be for security reasons, since on each request we ask for a token from the service. Hi Chwang, an interesting question indeed, but I think it may be better to ask over on the jClouds mailing list. Not sure there are too many experts on jClouds here on the OpenStack mailing list! :) Here is the jClouds ML, for reference: https://groups.google.com/forum/?fromgroups#!forum/jclouds Best, -jay
Re: [Openstack] Help with meta-data
On 08/08/2012 03:57 AM, Simon Walter wrote: Hi all, I've completed the excruciating Launchpad process of subscribing to a mailing list to ask for your help with having my instances access their meta-data. What was excruciating about the subscription process? However, they cannot access their meta-data:

Begin: Running /scripts/init-bottom ... done.
cloud-init start-local running: Wed, 08 Aug 2012 07:33:07 +0000. up 8.32 seconds
no instance data found in start-local
ci-info: lo: 1 127.0.0.1 255.0.0.0 .
ci-info: eth1 : 0 . . fa:16:3e:5a:f3:05
ci-info: eth0 : 1 192.168.1.205 255.255.255.0 fa:16:3e:23:d7:7c
ci-info: route-0: 0.0.0.0 192.168.1.1 0.0.0.0 eth0 UG
ci-info: route-1: 192.168.1.0 0.0.0.0 255.255.255.0 eth0 U
cloud-init start running: Wed, 08 Aug 2012 07:33:10 +0000. up 11.95 seconds
2012-08-08 07:33:54,243 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [2/120s]: url error [[Errno 113] No route to host]
snip
2012-08-08 07:35:55,308 - DataSourceEc2.py[CRITICAL]: giving up on md after 124 seconds
no instance data found in start
Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd

I can see something on the host: curl http://169.254.169.254:8775/ 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 2009-04-04 Where are you curl'ing from? The compute node or the host running the nova-ec2-metadata service? But doing something like: I get an HTTP 500 error. I think you're missing a paste above :) doing something like what? I don't know if the problem is routing or with the meta-data service. Well, it's unlikely it's an issue with the metadata service, because the metadata service is clearly responding properly to at least ONE host, as evidenced above. It's more likely a routing issue. Can you SSH into the VM in question and try pinging the EC2 metadata service URL? (http://169.254.169.254:8775/) Best, -jay Any help is appreciated. I'm running this all on one box.
Here is my nova.conf:

--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova
--lock_path=/var/lock/nova
--allow_admin_api=true
--use_deprecated_auth=false
--auth_strategy=keystone
--scheduler_driver=nova.scheduler.simple.SimpleScheduler
--s3_host=192.168.1.14
--ec2_host=192.168.1.14
--rabbit_host=192.168.1.14
--cc_host=192.168.1.14
--nova_url=http://192.168.1.14:8774/v1.1/
--routing_source_ip=192.168.1.14
--glance_api_servers=192.168.1.14:9292
--image_service=nova.image.glance.GlanceImageService
--iscsi_ip_prefix=192.168.22
--sql_connection=mysql://nova:s7ack3d@127.0.0.1/nova
--ec2_url=http://192.168.1.14:8773/services/Cloud
--keystone_ec2_url=http://192.168.1.14:5000/v2.0/ec2tokens
--api_paste_config=/etc/nova/api-paste.ini
--libvirt_type=kvm
--libvirt_use_virtio_for_bridges=true
--start_guests_on_host_boot=true
--resume_guests_state_on_host_boot=true
--vnc_enabled=true
--vncproxy_url=http://192.168.1.14:6080
--vnc_console_proxy_url=http://192.168.1.14:6080
# network specific settings
--network_manager=nova.network.manager.FlatDHCPManager
--public_interface=eth0
--flat_interface=eth1
--flat_network_bridge=br100
--fixed_range=10.0.2.0/24
--floating_range=192.168.1.30/27
--network_size=32
--flat_network_dhcp_start=10.0.2.1
--flat_injected=False
--force_dhcp_release
--iscsi_helper=tgtadm
--connection_type=libvirt
--root_helper=sudo nova-rootwrap
--verbose

I have a question about VNC as well, but this is by far more important. Thanks for your help, Simon
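Jay's suggestion in the thread above — first check whether the guest can reach the metadata address at all — can be scripted. A minimal sketch using only the standard library; host and port are whatever you are probing (e.g. 169.254.169.254 on port 80 from inside the guest, or :8775 on the host):

```python
import socket

def can_reach(host, port, timeout=2.0):
    """True if a TCP connection to host:port succeeds within `timeout`.

    A False here with an immediate error (like cloud-init's
    '[Errno 113] No route to host') points at routing, not at the
    metadata service itself.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Run this from inside the failing instance and from the compute host; if the host can connect but the guest cannot, the problem is the network path, as Jay suspects.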
Re: [Openstack] KVM live block migration: stability, future, docs
On 08/07/2012 08:57 AM, Blair Bethwaite wrote: Hi Sébastien, Thanks for responding! By the way, I have come across your blog post regarding this and should reference it for the list: http://www.sebastien-han.fr/blog/2012/07/12/openstack-block-migration/ On 7 August 2012 17:45, Sébastien Han han.sebast...@gmail.com wrote: I think it's a pretty useful feature, a good compromise. As you said, using a shared fs implies a lot of things and can dramatically decrease your performance rather than using the local fs. Agreed, scale-out distributed file-systems are hard. Consistent hashing based systems (like Gluster and Ceph) seem like the answer to many of the existing issues with systems trying to mix scalability, performance and POSIX compliance. But the key issue is how one measures performance for these systems... throughput for large synchronous reads and writes may scale linearly (up to network saturation), but random IOPS are another thing entirely. As far as I can tell, random IOPS are the primary metric of concern in the design of the nova-compute storage, whereas both capacity and throughput requirements are relatively easy to specify and simply represent hard limits that must be met to support the various instance flavours you plan to offer. It's interesting to note that RedHat does not recommend using RHS (RedHat Storage), their RHEL-based Gluster (which they own now) appliance, for live VM storage. Additionally, operations issues are much harder to handle with a DFS (even NFS), e.g., how can I put an upper limit on disk I/O for any particular instance when its ephemeral disk files are across the network and potentially striped into opaque objects across multiple storage bricks...? We at AT&T are also interested in this area, for the record, and will likely do testing in this area in the next 6-12 months. We will release any information and findings to the mailing list of course, and hopefully we can collaborate on this important area.
I tested it and I will use it for my deployment. I'll be happy to discuss more deeply with you about this feature :) Great. We have tested too. Compared to regular (non-block) live migrate, we don't see much difference in the guest - both scenarios involve a minute or two of interruption as the guest is moved (e.g. VNC and SSH sessions hang temporarily), which I find slightly surprising - is that your experience too? Why would you find this surprising? I'm just curious... I also feel a little concern about this statement: "It don't work so well, it complicates migration code, and we are building a replacement that works." I have to go further with my tests, maybe we could share some ideas, use cases, etc... I think it may be worth asking about this on the KVM lists, unless anyone here has further insights...? I grabbed the KVM 1.0 source from Ubuntu Precise and vanilla KVM 1.1.1 from Sourceforge; block migration appears to remain in place despite those (sparse) comments from the KVM meeting minutes (though I am naive to the source layout and project structure, so could have easily missed something). In any case, it seems unlikely Precise would see a forced update to the 1.1.x series. cc'd Daniel Berrange, who seems to be keyed in on upstream KVM/Qemu activity. Perhaps Daniel could shed some light. Best, -jay
Re: [Openstack] KVM live block migration: stability, future, docs
On 08/07/2012 08:23 PM, Blair Bethwaite wrote: Hi Jay, On 8 August 2012 06:13, Jay Pipes jaypi...@gmail.com wrote: Why would you find this surprising? I'm just curious... The live migration algorithm detailed here: http://www.linux-kvm.org/page/Migration, seems to me to indicate that only a brief pause should be expected. Indeed, the summary says, "Almost unnoticeable guest down time." But to the contrary, I tested live-migrate (without block migrate) last night using a guest with 8GB RAM (almost fully committed) and lost any access/contact with the guest for over 4 minutes - it was paused for the duration. Not something I'd want to do to a user's web-server on a regular basis... Sorry, from your original post, I didn't think you were referring to live migration, but rather just server migration. You had written "Compared to regular (non-block) live migrate", but I read that as "Compared to regular migrate" and thought you were referring to the server migration behaviour that Nova supports... sorry about that! Best, -jay
Re: [Openstack] KVM live block migration: stability, future, docs
On 08/07/2012 09:42 PM, Blair Bethwaite wrote: On 8 August 2012 11:33, Jay Pipes jaypi...@gmail.com wrote: Sorry, from your original post, I didn't think you were referring to live migration, but rather just server migration. You had written "Compared to regular (non-block) live migrate", but I read that as "Compared to regular migrate" and thought you were referring to the server migration behaviour that Nova supports... sorry about that! Jay, is your use of the wording "behaviour that Nova supports" there significant? I mean, you're not trying to indicate that Nova does not support _live_ migration, are you? No, I was referring to the differentiation between server migration in Nova and live migration in Nova. In other words, the difference between: $ nova migrate SERVER ... and $ nova live-migrate SERVER ... Anyway, I found this relevant and stale bug: https://bugs.launchpad.net/nova/+bug/883845. VIR_MIGRATE_LIVE remains undefined in https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py. We only just discovered the lack of this as a default option, so we'll test further, this time with VIR_MIGRATE_LIVE=1 explicitly specified in nova.conf... OK, cheers, -jay
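The VIR_MIGRATE_LIVE flag Blair mentions is one bit in libvirt's virDomainMigrateFlags enum; combining it with VIR_MIGRATE_NON_SHARED_DISK is what requests a live *block* migration. A sketch with the bit values spelled out (values are from the libvirt enum; the commented migrateToURI call is illustrative, not Nova's actual code path):

```python
# Selected virDomainMigrateFlags bit values (libvirt)
VIR_MIGRATE_LIVE            = 1    # do not pause the guest during migration
VIR_MIGRATE_PEER2PEER       = 2
VIR_MIGRATE_UNDEFINE_SOURCE = 16
VIR_MIGRATE_NON_SHARED_DISK = 64   # copy the full disk too, i.e. block migration

# Live block migration = live flag plus full-disk copy:
flags = VIR_MIGRATE_LIVE | VIR_MIGRATE_NON_SHARED_DISK

# With the python-libvirt bindings, this would look roughly like:
#   dom.migrateToURI("qemu+tcp://dest-host/system", flags, None, 0)
```

Without the LIVE bit set in the flags nova passes down, libvirt performs a paused migration, which matches the multi-minute outage Blair observed.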
Re: [Openstack] [Nova] Create multiple instances in one request
On 08/07/2012 02:27 PM, Anne Gentle wrote: It seems the log from the last Nova meeting where this was discussed is gathered together with the QA team meeting due to the meetbot not being turned off between meetings. The log is here, scroll to the bottom to read. http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-08-02-17.00.log.html Yeah, that was my fault. I forgot to type #endmeeting at the end of the weekly QA status meeting. Sorry about that. :( -jay
Re: [Openstack] keystone and ssl ?
On 08/03/2012 05:18 AM, Pierre Amadio wrote: snip https://blueprints.launchpad.net/keystone/+spec/2-way-ssl At the bottom of the blueprint, there are two "addressed by" links with a set of patches: https://review.openstack.org/1038 https://review.openstack.org/7706 But I do not find any trace of those patches in the Ubuntu package snip I also fail to find any trace of them in a git checkout of the refs/heads/stable/essex branch of keystone's git repository. I am confused. The reason is that that code and a bunch of other stuff was ripped out of Keystone late in the Essex release series with the move to Keystone Light, which was essentially a rewrite of Keystone that replaced the Keystone project that had the code in it that you refer to above. I've cc'd Joe Heck to give you some information on when SSL support might be re-added to Keystone. Best, -jay
Re: [Openstack] best practices for merging common into specific projects
On 08/02/2012 08:52 PM, Eric Windisch wrote: What do you mean by membership services? See the email today from Yun Mao. This is a proposal to have a pluggable framework for integration services that maintain memberships. This was originally designed to replace the MySQL heartbeats in Nova, although there will be a mysql-heartbeat backend by default as a drop-in replacement. There is a zookeeper backend in the works, and we've discussed the possibility of building a backend that can poll RabbitMQ's list_consumers. Ah, yes. I've urged the team to use the term ServiceGroup instead of the Zookeeper membership terminology -- as membership has other connotations in Glance and Nova -- for instance, membership in a project/tenant really has nothing to do with the concept of service groups that can monitor response/heartbeat of service daemons. Best, -jay This is useful for more than just Nova's heartbeats, however. This will largely supplant the requirement for the matchmaker to build these backends in itself, which had been my original plan (the matchmaker is already in openstack-common). As such, it had already been my intent to have a MySQL-backed matchmaker. The only thing new is that someone has actually written the code. In the first pass, the intention is to leave the matchmaker in and introduce the membership modules. Then, the matchmaker would either use the new membership modules as a backend, or even be replaced entirely. Regards, Eric Windisch
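To make the ServiceGroup terminology concrete, here is a hedged sketch of what a pluggable heartbeat-tracking backend could look like — the class and method names are hypothetical, not Nova's actual interface, and an in-memory dict stands in for the MySQL or ZooKeeper backend being discussed:

```python
import time

class InMemoryServiceGroup:
    """Hypothetical heartbeat backend: a real driver would persist the
    timestamps in MySQL, watch ephemeral ZooKeeper nodes, etc."""

    def __init__(self, down_after=60.0):
        self.down_after = down_after   # seconds without a heartbeat = down
        self._last_seen = {}

    def join(self, member, group):
        """Register a service daemon and record its first heartbeat."""
        self._last_seen[(group, member)] = time.time()

    def heartbeat(self, member, group):
        """Called periodically by the daemon to report liveness."""
        self._last_seen[(group, member)] = time.time()

    def is_up(self, member, group):
        """True if the member heartbeated within the down_after window."""
        seen = self._last_seen.get((group, member))
        return seen is not None and (time.time() - seen) < self.down_after
```

A scheduler would then ask something like `is_up("compute-1", "nova-compute")` instead of reading heartbeat rows out of the services table directly, which is what makes the backend swappable.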
Re: [Openstack] Cannot pass hint to Nova Scheduler
On 08/03/2012 09:28 AM, Heng Xu wrote: Another question is: I can get all the status of a compute node from the mysql nova database with select * from compute_node, but now I am using the JSON filter, and the only field I have had success with so far is free_ram_mb. If my hint uses free_disk_gb, then I always get an error, but the database shows my compute node has $free_disk_gb equal to 17. So I was wondering where to find exactly what kind of JSON fields can be used in the JSON filter. Thanks in advance. The nova.scheduler.host_manager.HostState class is what is checked for attributes, not the ComputeNode model. So, you need to use $free_disk_mb, not free_disk_gb. Best, -jay
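A simplified sketch of the point Jay is making: the filter resolves `$name` against attributes of a HostState-like object, not against columns of the compute_nodes table. The class below is a minimal stand-in, and the operator handling is reduced to three cases, so this is illustrative rather than Nova's actual JsonFilter:

```python
import json

class HostState:
    """Minimal stand-in for nova.scheduler.host_manager.HostState."""
    def __init__(self, free_ram_mb, free_disk_mb):
        self.free_ram_mb = free_ram_mb
        self.free_disk_mb = free_disk_mb   # note: MB, not the GB column

def host_passes(host, query):
    """Evaluate a JSON hint like '["=", "$free_disk_mb", 17408]'."""
    op, var, value = json.loads(query)
    attr = getattr(host, var.lstrip("$"))  # "$free_disk_mb" -> host.free_disk_mb
    return {"=": attr == value, "<": attr < value, ">": attr > value}[op]

# 17 GB of free disk shows up as free_disk_mb = 17408, not free_disk_gb = 17:
host = HostState(free_ram_mb=2048, free_disk_mb=17 * 1024)
print(host_passes(host, '["=", "$free_disk_mb", 17408]'))   # True
```

Using `$free_disk_gb` in a hint fails simply because no such attribute exists on the host-state object being filtered.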
Re: [Openstack] Qcow2 Details on base images
On 08/02/2012 07:47 AM, Gaurab Basu wrote: Hi Jay, Thanks for your reply, it helped me get started. I have been going through the code and some of the sparse docs that are available. This is the code file https://github.com/openstack/nova/blob/master/nova/virt/libvirt/utils.py However I am facing a new issue and require some help. I wanted to modify how openstack handles the cow layer as such and also the qcow2 format. It turns out that openstack issues the external command qemu-img. First of all, is qemu-img internal to openstack (I mean, is the code for how qemu-img is implemented in openstack or in qemu)? If it is in openstack, where is the code located? If it is outside openstack, does that mean I have to change the code in qemu and then link those binaries with openstack? QEMU is a totally separate project from Nova, yes. QEMU is written in C and has a number of executables such as qemu-img and qemu-nbd, etc. Nova calls out to these executables in subprocesses. If you want to make changes to QEMU, yes, you would want to look into the QEMU contribution process and community. Here's where to start: http://wiki.qemu.org/Documentation/GettingStartedDevelopers Best, -jay
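As Jay says, Nova's use of qemu-img is a call-out to an external binary via a subprocess, not a library binding. A minimal sketch of that pattern — the wrapper name is illustrative; Nova's real helper lives in its utils module and adds rootwrap handling:

```python
import subprocess

def execute(*cmd):
    """Run an external binary (e.g. qemu-img) and return its stdout as text,
    raising CalledProcessError on a non-zero exit -- the same basic shape
    as Nova's shell-out helper."""
    return subprocess.check_output(cmd, stderr=subprocess.STDOUT).decode()

# Nova would invoke something along the lines of:
#   execute("qemu-img", "convert", "-O", "qcow2", "disk.raw", "disk.qcow2")
```

So changing how the qcow2 layer behaves means changing QEMU itself; nothing in Nova reimplements the image format.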
Re: [Openstack] Cannot pass hint to Nova Scheduler
Sorry for top-posting, but there's not really a good place to inline comment. First, let's tackle logging in devstack... When using devstack, you noticed that it logs to the screen session by default. To make devstack ALSO log to a file, put the following in your localrc:

LOG_COLOR=False
SCREEN_LOGDIR=/opt/stack/logs

And re-run stack.sh. You will now find the various service log files in /opt/stack/logs. Second, let's handle the JSON issue... Nova isn't trying to decode a file. It's trying to JSON-decode the string you're putting on the command line: --hint query=['=','$free_ram_mb',1024] The novaclient is passing the string "['=','$free_ram_mb',1024]" to the jsonutils.loads() function, which is what is failing. You can try parsing this string yourself and see that the failure is raised the same as appears in the log:

jpipes@uberbox:~/repos/tempest$ python
Python 2.7.3 (default, Apr 20 2012, 22:39:59)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import json
>>> p = json.loads("['=','$free_ram_mb',1024]")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/json/__init__.py", line 326, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib/python2.7/json/decoder.py", line 384, in raw_decode
    raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded

The problem is the string needs to be properly formatted JSON, and single-quotes are not allowed -- you need to use double-quotes:

>>> p = json.loads('["=","$free_ram_mb",1024]')
>>> print p
[u'=', u'$free_ram_mb', 1024]

Try your command like this instead:

nova --debug boot --image 827d564a-e636-4fc4-a376-d36f7ebe1747 --flavor 1 --hint query='["=","$free_ram_mb",1024]' server1

And I think you should be fine, as the following proof shows:

jpipes@uberbox:~/repos/tempest$ echo '["=","$free_ram_mb",1024]' | python -mjson.tool
[
    "=",
    "$free_ram_mb",
    1024
]

Best, -jay

On 08/02/2012 02:09 PM, Heng Xu wrote: Hi, attached is the json_filter file, but it just came with the devstack script installation; I did not even modify it. Heng From: Pengjun Pan [panpeng...@gmail.com] Sent: Thursday, August 02, 2012 6:07 PM To: Heng Xu Cc: openstack@lists.launchpad.net Subject: Re: [Openstack] Cannot pass hint to Nova Scheduler Post your filter file. Might be a typo. PJ On Thu, Aug 2, 2012 at 1:02 PM, Heng Xu shouhengzhang...@mail.utoronto.ca wrote: Hi, I recorded the error message below:

2012-08-02 13:51:02 TRACE nova.rpc.amqp   File "/opt/stack/nova/nova/scheduler/filters/json_filter.py", line 141, in host_passes
2012-08-02 13:51:02 TRACE nova.rpc.amqp     result = self._process_filter(jsonutils.loads(query), host_state)
2012-08-02 13:51:02 TRACE nova.rpc.amqp   File "/opt/stack/nova/nova/openstack/common/jsonutils.py", line 123, in loads
2012-08-02 13:51:02 TRACE nova.rpc.amqp     return json.loads(s)
2012-08-02 13:51:02 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/json/__init__.py", line 326, in loads
2012-08-02 13:51:02 TRACE nova.rpc.amqp     return _default_decoder.decode(s)
2012-08-02 13:51:02 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
2012-08-02 13:51:02 TRACE nova.rpc.amqp     obj, end = self.raw_decode(s, idx=_w(s, 0).end())
2012-08-02 13:51:02 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/json/decoder.py", line 384, in raw_decode
2012-08-02 13:51:02 TRACE nova.rpc.amqp     raise ValueError("No JSON object could be decoded")
2012-08-02 13:51:02 TRACE nova.rpc.amqp ValueError: No JSON object could be decoded

It seems that the filter cannot find my JSON file, so although I was using the --hint functionality, whatever I typed after the hint did not get to the filter's host_passes function, so it could not locate the JSON object. Any thoughts? Thanks.
Heng From: openstack-bounces+shouhengzhang.xu=mail.utoronto...@lists.launchpad.net [openstack-bounces+shouhengzhang.xu=mail.utoronto...@lists.launchpad.net] on behalf of Heng Xu [shouhengzhang...@mail.utoronto.ca] Sent: Thursday, August 02, 2012 4:47 PM To: Pengjun Pan Cc: openstack@lists.launchpad.net Subject: Re: [Openstack] Cannot pass hint to Nova Scheduler Hi PJ, I don't know what happened; I could not find the file in my Ubuntu filesystem. I searched for it with no result, but I just used ./stack.sh to install it. Is it just me, or can the file really not be found? Any thoughts? Thank you Heng From: Pengjun Pan [panpeng...@gmail.com] Sent: Thursday, August 02, 2012 4:42 PM To: Heng Xu Cc: Joseph Suh; openstack@lists.launchpad.net Subject: Re: [Openstack] Cannot pass hint to Nova Scheduler Hi Heng, The log
Re: [Openstack] best practices for merging common into specific projects
On 08/02/2012 04:05 PM, Eric Windisch wrote: On Monday, July 23, 2012 at 12:04 PM, Doug Hellmann wrote: Sorry if this rekindles old arguments, but could someone summarize the reasons for an openstack-common PTL without voting rights? I would have defaulted to giving them a vote *especially* because the code in common is, well, common to all of the projects. So far, the PPB considered openstack-common to be driven by all PTLs, so it didn't have a specific PTL. As far as future governance is concerned (technical committee of the Foundation), openstack-common would technically be considered a supporting library (rather than a core project) -- those can have leads, but those do not get granted an automatic TC seat. OK, I can see the distinction there. I think the project needs an official leader, even if we don't call them a PTL in the sense meant for other projects. And I would expect anyone willing to take on the PTL role for common to be qualified to run for one of the open positions on the new TC, if they wanted to participate there. The scope of common is expanding. I believe it is time to seriously consider a proper PTL. Preferably, before the PTL elections. No disagreement from me. The RPC code is there now. We're talking about putting the membership services there too, for the sake of RPC, and even the low-level SQLAlchemy/MySQL access code for the sake of membership services. A wrapper around pyopenssl is likely to land there too, for the sake of RPC. These are just some of the changes that have already landed, or are expected to land within Folsom. What do you mean by membership services? Common contains essential pieces to the success of OpenStack which are currently lacking (official) leadership. Everyone's problem is nobody's problem. Consider this my +1 on assigning a PTL for common. Sure, me too. 
-jay
Re: [Openstack] python novaclient's response when extension is disabled
On 08/01/2012 10:10 AM, Jiang, Yunhong wrote: Currently, if an extension is disabled, there will be no clear information like "extension is not supported"; instead, it will return ERROR: n/a (HTTP 404), like the following output in my devstack. yjiang5@yjiang5-linux1:~/work/openstack/devstack$ nova hypervisor-list ERROR: n/a (HTTP 404) I'm not sure if this is the expected response. Per my understanding, python-novaclient should tell the user clearly that the extension is not supported. This can be achieved by checking the v2/extensions API, either at novaclient start-up, or at the time the extension call gets the HTTP 404 error. Is my understanding correct? If yes, we can create a blueprint and provide patches. I don't see anything wrong with your logic. I'd fully support you putting together patches that would produce a nice error message like "Sorry, that extension is not available" instead of the 404 error. Best, -jay
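The pre-flight check Yunhong describes — ask the server which extensions it has before using one — can be sketched as a small helper over the extensions listing. The response shape and helper name below are illustrative, not python-novaclient's actual API:

```python
def extension_enabled(extensions_doc, alias):
    """True if an extension with the given alias appears in the body
    returned by a GET on the server's /extensions resource."""
    return any(ext.get("alias") == alias
               for ext in extensions_doc.get("extensions", []))

# Example response fragment (shape illustrative):
doc = {"extensions": [{"alias": "os-hypervisors", "name": "Hypervisors"}]}

print(extension_enabled(doc, "os-hypervisors"))   # True
print(extension_enabled(doc, "os-cells"))         # False
```

A client could run this check either once at start-up or lazily after a 404, and in the False case print the friendly "extension is not supported" message instead of the bare HTTP error.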
Re: [Openstack] [glance] legacy client removal and python-glanceclient
On 08/01/2012 03:18 PM, Kevin L. Mitchell wrote:

> On Wed, 2012-08-01 at 18:37 +, Gabriel Hurley wrote:
>> As a rule of thumb, we need to start doing proper deprecation on all public interfaces, whether that's a CLI, client method signatures, APIs, etc. It's a little late for this on the old vs. new glance client/CLI (unless Brian feels the work can reasonably be done to make them compatible), but it's something we need to be really mindful of going forward.
>
> As an example of how it can be done properly, check out https://review.openstack.org/#/c/10577/ (at least, I believe I did it correctly ;)

The library interface is one thing -- and frankly, IMHO, easier to properly deprecate and progress to newer API bindings. It's a little different for the CLI interfaces, as people are typically just using the CLI tool in shell scripts (as opposed to binding to the client library's API).

For the case of shell scripts using the old glance client: the installation path for the old and new glance CLI tools is different, so in theory, if you wrote a shell script that used the old glance client and then installed the new python-glanceclient, you could simply edit your shell script to point a glance variable to the absolute path of the old glance CLI executable. Frankly, we do this in multiple places in devstack where, for example, a tool or utility has a different name or interface on RedHat vs. Debian systems.

-jay
Re: [Openstack] [glance] legacy client removal and python-glanceclient
On 08/01/2012 02:11 PM, Pete Zaitcev wrote:

> On Wed, 01 Aug 2012 01:06:10 -0400, Jay Pipes jaypi...@gmail.com wrote:
>> I don't disagree with you. At the same time, I think Brian has a good point when he compares having two versions of SQLAlchemy installed on a system: it just doesn't make much sense.
>
> But having glance(1) and super-duper-incompatible-glance-ng(1) installed together makes a lot of sense. Count the systems in the field that have both ifconfig and ip installed. Millions and millions of them. Brian's analogy with SQLAlchemy is pretty amusing, but it fails to hide the fact that the new binary is incompatible.

As mentioned in a previous post, you could always just alias glance to the absolute path of whichever specific glance CLI tool you use in your shell script. Would it be better if we named the CLI tools glance2 or something like that?

Best,
-jay
Re: [Openstack] Default reply to behavior for mailing list
On 07/31/2012 02:09 PM, Johannes Erdfelt wrote:

> On Tue, Jul 31, 2012, Bhuvaneswaran A bhu...@apache.org wrote:
>> If a subscriber replies to a mailing list message, it's sent to the author only. Each subscriber must use "Reply to All" every time to post a reply to the mailing list. Can you please configure the mailing list to set the Reply-To header to the mailing list address, openstack@lists.launchpad.net? With this setup, if a user clicks reply in his email client, the message is sent to the mailing list instead of the author.
>
> As a counter-point, I'd prefer to keep the list as it is and *not* munge Reply-To. Since Chip summarizes it better than I can, I'll link to his article 'Reply-To Munging Considered Harmful': http://www.unicom.com/pw/reply-to-harmful.html

++

Just use a decent email program (i.e. not Outlook) that gives you things like "Reply to List".

-jay
Re: [Openstack] [glance] legacy client removal and python-glanceclient
On 08/01/2012 12:49 AM, Matt Joyce wrote:

> I think we're running out of opportunities to do stuff like this. This is exactly the sort of thing that will drive George Reese into a homicidal rage. More to the point, it's exactly the sort of thing our users are going to despise us for. And that hatred will outlive any benefit. Users only remember the bad stuff. We need to soften the blow on stuff like this. Hell, we need to actively work our asses off to prevent stuff like this. You know it, but it needs to be said on list. Because people are going to complain that we didn't think.

I don't disagree with you. At the same time, I think Brian has a good point when he compares having two versions of SQLAlchemy installed on a system: it just doesn't make much sense.

One option that may alleviate some of this pain is documentation -- making a page that says things to the effect of:

"Hey, you probably got to this documentation page because you Googled for the glance error argument: invalid choice: 'index'. Let us show you what this means and what you need to change.

1) This has happened because either you installed the new version of the Glance client, or you upgraded a system like Horizon which may depend on a newer version of the Glance client.

2) The interface for the Glance client has been improved, but that means a few changes. Most notably, where you were used to doing:

  glance index

you now need to do:

  glance image-list

Where you were used to doing:

  glance add disk_format=qcow2 container_format=bare is_public=True /path/to/my.img

you now need to do:

  glance image-create --disk-format=qcow2 --container-format=bare --public /path/to/my.img

These changes were made to align the Glance client with the other OpenStack core project clients. We hope these changes will actually make querying the OpenStack Images API much more similar to the Compute, Identity and Network APIs, which follow similar client calling patterns. Thanks for your patience and understanding."
Best,
-jay
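The old-to-new command mapping in the documentation draft above could even be mechanized. A small illustrative sketch (not part of python-glanceclient; the "delete" entry is an assumed mapping added for illustration):

```python
# Illustrative sketch: translate a legacy glance CLI invocation into its
# new-style subcommand, per the mapping described above. Note that this
# only maps subcommand names; argument styles also changed (key=value
# pairs became --flags), so trailing arguments may still need editing.

OLD_TO_NEW_SUBCOMMANDS = {
    "index": "image-list",
    "add": "image-create",
    "delete": "image-delete",  # assumed mapping, for illustration
}

def suggest(old_argv):
    """Given e.g. ['glance', 'index'], return the suggested new command,
    or None if the subcommand is unknown."""
    prog, subcommand = old_argv[0], old_argv[1]
    new = OLD_TO_NEW_SUBCOMMANDS.get(subcommand)
    if new is None:
        return None
    return [prog, new] + old_argv[2:]

print(suggest(["glance", "index"]))  # ['glance', 'image-list']
```

A wrapper script using a table like this could print the suggested replacement alongside the "invalid choice" error, which is roughly what the documentation page proposes in prose.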
Re: [Openstack] Qcow2 Details on base images
On 07/28/2012 11:10 AM, Gaurab Basu wrote:

> Another thing I would like to know is whether it uses a snapshot mechanism over time.

What is it you are referring to above? Are you asking whether Nova automatically takes snapshots of images over time? If so, no, it does not. If a user requests a snapshot of a launched instance, then Nova will issue snapshot commands -- in the case of the libvirt driver, these commands would be qemu-img snapshot -c SNAPSHOT_NAME IMAGE_PATH.

> I mean, how does the copy-on-write functionality work? Does it keep the diff snapshots over time (or something else)?

Not sure here whether you are asking how QEMU's copy-on-write operations work, or whether Nova keeps the base images separate from any VM images. If you are asking about the latter, the answer is that Nova creates the virtual machine images by creating a COW image based on the base image it pulls from Glance -- after making a resized copy of the base image if it needs to do so to meet the requested image size of the VM. Snapshots that are taken of virtual machine images on a host are stored by Nova in Glance.

> And does the diff work at the file level or the block level?

AFAIK, CoW and snapshot actions with QEMU are block-level.

> What is the format that the image is converted to after it is fetched from Glance?

There may be no conversion needed at all... it depends on the format of the original base image that was stored in Glance. Conversion between raw/ISO and QCOW2 and vice versa is what you see in the code, and is what is done during migration, as Mikal mentioned below.

> I am fairly new to OpenStack. Can you point me to the specific files in the code where all these things are coded? I want to know the details of the present state.

grep for qemu-img in the nova/ directory. You'll see all the files that call qemu-img commands, and then you can go look in those files.

Best,
-jay

> Thanks again for your help.
> Regards, Gaurab
>
> On Sat, Jul 28, 2012 at 11:52 AM, Michael Still michael.st...@canonical.com wrote:
>
>> On 28/07/12 05:42, Gaurab Basu wrote:
>>> Hi, I am trying to figure out the technology that OpenStack uses when multiple VMs having the *same* base image (OS) are provisioned on a physical server. Does it use as many copies as there are VMs, or does it use the same base image and then copy-on-write? I need to understand the complete details. Can anybody share some details or point me to some place where I can find them?
>>
>> It's pretty hard to provide a complete description of what happens, because the code keeps changing. However, assuming you have copy-on-write turned on (which is the default, IIRC), and assuming that all of the instances have the same disk size, then you end up with:
>>
>> - the image as fetched from glance, with possible format conversion
>> - that image resized to the size the instance requested
>> - a copy-on-write layer for each instance that is using that sized image
>>
>> The first should be smallish, the second can be quite large, and the third will really depend on how much writing the instances are doing. Note that this all falls apart if instances are migrated, because as part of the migration the copy-on-write layer is transformed into a full disk image, which is what is shipped over to the new machine.
>>
>> Hope this helps, Mikal
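The layering Mikal describes maps onto a couple of qemu-img invocations. A sketch of the argument vectors only (these mirror the commands quoted in this thread; the exact flags Nova's libvirt driver composes vary by release):

```python
# Sketch of the qemu-img invocations discussed above. These functions only
# build the argument vectors; pass a result to subprocess.check_call() to
# actually run it. Paths and names here are illustrative.

def snapshot_cmd(snapshot_name, image_path):
    """qemu-img snapshot -c SNAPSHOT_NAME IMAGE_PATH (internal snapshot)."""
    return ["qemu-img", "snapshot", "-c", snapshot_name, image_path]

def cow_overlay_cmd(base_image, overlay_path):
    """Create a copy-on-write qcow2 layer backed by a (resized) base image,
    i.e. the per-instance third layer in Mikal's description."""
    return ["qemu-img", "create", "-f", "qcow2", "-b", base_image,
            overlay_path]

print(" ".join(cow_overlay_cmd("/var/lib/nova/_base/abc_20G",
                               "/var/lib/nova/instances/inst-1/disk")))
```

Writes from the instance land only in the overlay; the backing file stays read-only and shared among all instances built from the same sized base.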
Re: [Openstack] [Nova] proposal to provide project specific instance type
On 07/28/2012 01:10 AM, unicell wrote:

> Hi, in our use case there is a need to provide project-specific instance types, meaning that an instance type is only visible and available to certain projects. It's an idea kind of like the private image concept in Glance. Has this proposal been discussed before somewhere? Any comments? I'd be willing to file a blueprint and work on patches if this concept is workable. Thanks!

I'd love to see this functionality. It would definitely make putting the clamps down on abusive tenants a bit easier... :)

I know folks have started (finished?) work on the per-user quota functionality. This would be a natural complement to that work, IMHO.

-jay
Re: [Openstack] VM High Availability and Floating IP
On 07/24/2012 04:29 AM, Alessandro Tagliapietra wrote:

> Hi guys, I've two missing pieces in my HA OpenStack install. Currently all OpenStack services are managed by Pacemaker, and I can successfully start/stop VMs etc. when the cloud controller is down (I've only two servers atm).
>
> 1 - How can I make a VM HA? Live migration works fine, but if a host goes down, how can I restart the VM on the other host? Should I edit the 'host' column in the DB and issue a restart of the VM? Any other way?

Check out the Heat API: https://github.com/heat-api/heat/wiki/

> 2 - I've the servers hosted at Hetzner; for floating IPs we've bought failover IPs, which are assigned to each host and can be changed via the API. So I have to make sure that if a VM is on host1, the floating IP associated with the VM is routed to host1. My idea was to run a job that checks the floating IPs already associated with any VM, queries the VM info, checks which host it's running on, and if it's different from the previous check, calls the Hetzner API to switch the IP to the other server. Any other idea?

See above :)

Best,
-jay

> Thanks in advance. Best Regards
>
> -- Alessandro Tagliapietra | VISup srl, piazza 4 novembre 7, 20124 Milano, http://www.visup.it
Re: [Openstack] [nova] a proposal to change metadata API data
Thanks Matt, comments inline...

On 07/23/2012 05:25 PM, Matt Joyce wrote:

> I wish to add some data to the metadata server that can be found somewhere else. Data that a user could jump through a hoop or two to add to their instances. Esteemed personages are concerned that I would be crossing the Rubicon in terms of opening up the metadata API for wanton abuse. They are not without a right or reason to be concerned. And that is why I am going to attempt to explicitly classify a new category of data that we might wish to allow into the metadata server. If we can be clear about what we are allowing, we can avoid abuse. I want to provide a uniform (standardized?) way for instances in the OpenStack cloud to communicate back to the OpenStack APIs without having to be provided data by the users of the cloud services.

Let's be clear here... are you talking about the OpenStack Compute API, or are you talking about the OpenStack metadata service, which is merely the EC2 Metadata API? We already have the config-drive extension [1] that allows information and files to be injected into the instance and loaded as a read-only device. The information in the config-drive can include things like the Keystone URI for the cell/AZ in which an instance resides.

> I mean the OpenStack Metadata service.

Sorry, I'm still confused. The only actual stand-alone service in OpenStack for metadata is the OpenStack EC2 metadata service that runs on the fixed AWS 169.254.169.254 address. Are you referring to this, or are you referring to the /servers/SERVER_ID/metadata call in the OpenStack Compute API v2?

> The config-drive extension does not, as far as I am aware, produce a uniform path for data like this.

Absolutely correct. The community would need to come to a consensus on this uniformity, just as Amazon came to the decision to hard-code the 169.254.169.254 address.
> This API query should be the same from OpenStack deployment to OpenStack deployment, to ensure portability of instances relying on this API query to figure out where the catalog service is. By uniform I mean it has all the love, care and backwards-versioning support of a traditional API query.

Agree completely.

> The config-drive seems more intended to be user-customized, rather than being considered a community-supported API query.

Well, that may be the case, but as mentioned above, I think we could *use* config-drive along with a community consensus on a uniform place to store lookup information for a real OpenStack metadata service -- things like a private key, an info file containing the IP of the nearest metadata service, etc...

> Today the mechanism by which this is done is catastrophically difficult for a new user.

Are you specifically referring here to the calls that, say, cloud-init makes to the (assumed to be running) EC2 metadata API service at http://169.254.169.254/latest/? Or something different? Just want to make sure I'm understanding what you are referring to as difficult.

> I am referring to the whole new-user experience. Anything custom to a deployment of OpenStack is now outside of our control and is not portable.

Sure, completely agree!

> Also, a new user will not be prepared to inject user data properly.

Well, I'm actually not suggesting having the user really be involved at all with the injection of keys/information into the config-drive :) That would be done by Nova. When a user currently launches an image in OpenStack, that image connects to the EC2 metadata service automatically if cloud-init is installed in the image. I am picturing a similar scenario for this config-drive stuff -- only instead of cloud-init needing to be installed on the image, I'm suggesting Nova create a standard (uniform) config-drive (or part of a config-drive) that contains upstart/startup scripts, keys, and info for connecting to some OpenStack metadata service.
> Going further, and a bit onto an irate tangent: Horizon has a really roundabout and completely non-intuitive way of providing users with info on where API servers are. I.e., you have to generate an OpenStack credentials file, download it, look at it in a text editor, and then know what it is you are looking at. To find your tenant_name you have to guess in the dark that Horizon is referring to your tenant name as a "project".

Heh, well, you know where I stand on the whole tenant vs. project theme :) That said, it's a bit of a tangent, as you admit to above :)

> The whole thing is insane. What I am talking about here is a first step in allowing image builders to integrate into OpenStack in a uniform way across all installations (or most). And that will allow people to reduce the overall pain on new users of cloud at their pleasure. I am asking for this based on my experience trying to do this outside of
Re: [Openstack] VM High Availability and Floating IP
On 07/24/2012 12:52 PM, Alessandro Tagliapietra wrote:

> Thank you Jay, never read about that. Seems something like Scalr/Chef, which handles applications and keeps a minimum number of VMs running?

Yeah, kinda... just one more way of doing things... :)

-jay
Re: [Openstack] [nova] a proposal to change metadata API data
On 07/24/2012 12:47 PM, Martin Packman wrote:

> On 23/07/2012, Jay Pipes jaypi...@gmail.com wrote:
>> This is only due to the asinine EC2 API -- or rather the asinine implementation in EC2 that doesn't create an instance ID before the instance is launched.
>
> So, I'm curious, how do you allocate a server id in advance using the OpenStack API so you can pass it in rather than relying on an external metadata service? I've not seen anything in the documentation describing how to do that.

The OpenStack Compute API POST /servers command creates a server UUID that is passed back in the initial response and allows the user to query the status of the server throughout its launch sequence.

http://docs.openstack.org/api/openstack-compute/2/content/CreateServers.html

-jay
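From the client's point of view, the exchange looks roughly like the sketch below. The JSON body is a trimmed, illustrative response (field names follow the Compute API v2 create-server call linked above; a real payload carries more fields):

```python
# Illustrative sketch: the POST /servers response carries the server's UUID
# immediately, so a client can poll GET /servers/{id} for launch status.
# The response body below is trimmed and made up for illustration; it was
# not captured from a real deployment.
import json

create_response = """
{
  "server": {
    "id": "52415800-8b69-11e0-9b19-734f000004d2",
    "status": "BUILD",
    "adminPass": "GFf1j9aP"
  }
}
"""

server = json.loads(create_response)["server"]
server_id = server["id"]
poll_url = "/v2/servers/%s" % server_id  # poll until status becomes ACTIVE
print(server_id, server["status"])
```

The point of the thread: the UUID exists as soon as the create call returns, before the guest has even started booting.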
Re: [Openstack] [nova] a proposal to change metadata API data
On 07/24/2012 01:10 PM, Martin Packman wrote:

> On 24/07/2012, Jay Pipes jaypi...@gmail.com wrote:
>> The OpenStack Compute API POST /servers command creates a server UUID that is passed back in the initial response and allows the user to query the status of the server throughout its launch sequence.
>
> I'm not really seeing how that improves on the situation compared to the EC2 API. If a server needs to know its own id, it must either communicate with an external service or be able to use the Compute API, which means putting credentials on the instance. Or am I missing a trick?

All I am saying is that Nova knows the instance's ID at the time that a config-drive can be created and installed into the instance. You can't do that with the user-data EC2 API stuff, but you can with config-drive. Which is why I was recommending using config-drive.

Best,
-jay
Re: [Openstack] [KeyStone] Requestid, context, notification in Keystone
On 07/21/2012 02:57 AM, Joseph Heck wrote:

> Hey Nachi, if by this you mean the idea that a request ID is created at a user request action and then propagated through all relevant systems and API calls to make tracing the distributed calls easier, I'm totally in favor of the idea. Distributed tracing through the calls has been a real pain in the a... I'm afraid I haven't been watching the other projects closely enough to realize that this was getting implemented - any chance you could point out the relevant change reviews so I could see where/how the other projects have been doing this?

Hey Joe,

Here is a relevant patch for Glance: https://review.openstack.org/#/c/9545/

Best,
-jay
Re: [Openstack] Incremental Backup of Instances
On 07/22/2012 11:22 PM, Kobagana Kumar wrote:

> Hi All, I am working on *delta changes* of an instance. Can you please tell me the procedure to take *incremental backups (delta changes)* of VMs, instead of taking a snapshot of the entire instance?

The only non-commercial solution I know of for QEMU/KVM is livebackup: http://wiki.qemu.org/Features/Livebackup

But AFAIK, no work has been done on integrating this into Nova's libvirt driver. Patches always welcome :)

Best,
-jay
Re: [Openstack] High Available queues in rabbitmq
On 07/23/2012 09:02 AM, Alessandro Tagliapietra wrote:

> Hi guys, just an idea: I'm deploying OpenStack and trying to make it HA. The missing piece is RabbitMQ, which can easily be started in active/active mode, but it needs the queues to be declared with an x-ha-policy entry: http://www.rabbitmq.com/ha.html It would be nice to add a config entry to be able to declare the queues that way. If someone knows where to edit the OpenStack code -- otherwise I'll try to do that in the next weeks, maybe.

https://github.com/openstack/openstack-common/blob/master/openstack/common/rpc/impl_kombu.py

You'll need to add the config options there; the queue is declared here, with the options supplied to the ConsumerBase constructor:

https://github.com/openstack/openstack-common/blob/master/openstack/common/rpc/impl_kombu.py#L114

Best,
-jay
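A sketch of what the config-driven change might look like: a helper that folds RabbitMQ's mirrored-queue argument into the keyword arguments handed to the queue declaration. The option name (ha_queues) and helper are illustrative, not the actual impl_kombu code:

```python
# Illustrative sketch (names are not the actual impl_kombu internals):
# when an HA flag is set in the config, merge RabbitMQ's x-ha-policy
# argument into the kwargs used to declare a queue, making it a mirrored
# queue per http://www.rabbitmq.com/ha.html.

def queue_declare_kwargs(base_kwargs, ha_queues=False):
    """Return queue-declaration kwargs, adding x-ha-policy when requested."""
    kwargs = dict(base_kwargs)
    if ha_queues:
        args = dict(kwargs.get("queue_arguments") or {})
        args["x-ha-policy"] = "all"  # mirror the queue across cluster nodes
        kwargs["queue_arguments"] = args
    return kwargs

print(queue_declare_kwargs({"durable": False}, ha_queues=True))
```

In impl_kombu terms, something like this would run where ConsumerBase builds its queue options, gated on the new config entry Alessandro proposes.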
Re: [Openstack] [nova] core members
On 07/23/2012 02:31 PM, Vishvananda Ishaya wrote:

> Sean Dague: 2

I'm not nova-core, but I'd recommend Sean as a core committer. He's been active in both reviews and patches recently.

Best,
-jay
Re: [Openstack] High Available queues in rabbitmq
On 07/23/2012 02:58 PM, Eugene Kirpichov wrote:

> The only problem is, it breaks backward compatibility a bit: my patch assumes you have a flag rabbit_addresses, which should look like rmq-host1:5672,rmq-host2:5672 instead of the prior rabbit_host and rabbit_port flags. Guys, can you advise on a way to do this without being ugly and without breaking compatibility? Maybe make rabbit_host and rabbit_port ListOpts? But that sounds weird, as their names are singular. Maybe have rabbit_host, rabbit_port and also rabbit_host2, rabbit_port2 (assuming we only have clusters of 2 nodes)? Something else?

I think the standard (in Nova, at least) is to go with a single ListOpt flag that is a comma-delimited list of the URIs. We do that for the Glance API servers, for example, in the glance_api_servers flag:

https://github.com/openstack/nova/blob/master/nova/flags.py#L138

So perhaps you can add a rabbit_ha_servers ListOpt flag that, when filled, would be used instead of rabbit_host and rabbit_port. That way you won't break backwards compat?

Best,
-jay
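The fallback behavior suggested above can be sketched in plain Python. The flag name rabbit_ha_servers is jay's proposal, not an existing option; real code would declare it as a cfg ListOpt and pass its value in here:

```python
# Sketch of the suggested backwards-compatible fallback: if the (proposed,
# not yet existing) rabbit_ha_servers list option is set, use it; otherwise
# fall back to the legacy rabbit_host/rabbit_port pair.

def rabbit_endpoints(rabbit_ha_servers, rabbit_host, rabbit_port):
    """Return a list of (host, port) tuples to try connecting to."""
    if rabbit_ha_servers:
        endpoints = []
        for entry in rabbit_ha_servers:  # e.g. "rmq-host1:5672"
            host, _, port = entry.partition(":")
            endpoints.append((host, int(port or rabbit_port)))
        return endpoints
    # Legacy single-broker configuration
    return [(rabbit_host, rabbit_port)]

print(rabbit_endpoints(["rmq-host1:5672", "rmq-host2:5672"],
                       "localhost", 5672))
print(rabbit_endpoints([], "localhost", 5672))
```

Deployments that never set the new flag keep exactly the old single host/port behavior, which is the backwards-compat property Eugene is after.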
Re: [Openstack] [nova] a proposal to change metadata API data
On 07/21/2012 09:00 PM, Matt Joyce wrote:

> Preamble: Until now, all data that is made available by the metadata server has been data that cannot be found anywhere else at the time it may be needed. In short, an instance can't be passed its instance id before its instance id has been allocated, so a user cannot pass it to an instance that is being started up. So whether a user wants to jump through a few hoops or not to pass their instance its own instance id... they simply cannot without the metadata API being there to provide it at creation time.

This is only due to the asinine EC2 API -- or rather the asinine implementation in EC2 that doesn't create an instance ID before the instance is launched.

> This means that the metadata server holds an uneasy place as a necessary clearing house (evil?) of data that just doesn't have another place to be. It's not secure, it's not authenticated, and it's a little scary that it exists at all.

Agreed. I wish people didn't use the EC2 API at all, since it's a complete bag of fail and a beautiful example of a terribly thought-out API. That said, the OpenStack Compute API v2 has its share of pockmarks, to be sure. But... unfortunately, if you're going to use the EC2 API, this hard-coded 169.254.169.254 address is what we have to deal with.

> I wish to add some data to the metadata server that can be found somewhere else. Data that a user could jump through a hoop or two to add to their instances. Esteemed personages are concerned that I would be crossing the Rubicon in terms of opening up the metadata API for wanton abuse. They are not without a right or reason to be concerned. And that is why I am going to attempt to explicitly classify a new category of data that we might wish to allow into the metadata server. If we can be clear about what we are allowing, we can avoid abuse. I want to provide a uniform (standardized?
> ) way for instances in the OpenStack cloud to communicate back to the OpenStack APIs without having to be provided data by the users of the cloud services.

Let's be clear here... are you talking about the OpenStack Compute API, or are you talking about the OpenStack metadata service, which is merely the EC2 Metadata API? We already have the config-drive extension [1] that allows information and files to be injected into the instance and loaded as a read-only device. The information in the config-drive can include things like the Keystone URI for the cell/AZ in which an instance resides.

> Today the mechanism by which this is done is catastrophically difficult for a new user.

Are you specifically referring here to the calls that, say, cloud-init makes to the (assumed to be running) EC2 metadata API service at http://169.254.169.254/latest/? Or something different? Just want to make sure I'm understanding what you are referring to as difficult.

> This uniform way for instances to interact with the OpenStack API that I want already sort of exists in the Keystone catalog service. The problem is that you need to know where the Keystone server is in the world to access it. That of course changes from deployment to deployment, especially with the way SSL endpoints are being handled.

This can be done using config-drive and the OpenStack community coming up with a standard file or tool that would be injected into the config drive. This would be similar to the calls currently executed by cloud-init that are hard-coded to look for 169.254.169.254. Would that work?

> But the metadata API server is generally known, as it uses a default IP address value that can be found on any Amazon-compatible deployment. In fact, to my knowledge, it is the only known way to query OpenStack for data relevant to interacting with it without user interaction. And that's the key to this whole thing. I want to direct users, or automation baked into instances, to the Keystone API and catalog service.
> And the only way I know how to do that is the metadata service.

As mentioned above, the config-drive extension was built for just this purpose, IIRC. Chris Macgown, who wrote the original extension, cc'd, should be able to comment on this further.

> This API data can be classified as being first and foremost OpenStack infrastructure related. Additionally, it is not available anywhere else without a user providing it. And finally, it is a catalog service. I'd love some more input on whether this makes sense, or can be improved upon as an idea and formalized as a rule for using the metadata API without abusing it.

Well, we know we can't change the EC2 Metadata API, since we don't own or have any control over the Amazon APIs. We can, however, come up with an OpenStack-centric tool using config-drive -- a tool that would query a Keystone endpoint for a local OpenStack Compute API endpoint and then use the existing OpenStack Compute API calls for server metadata [2]. Does that sound doable to you?

Best,
-jay

[1] Config Drive extension:
[Openstack] [DIABLO] EC2 Metadata API service slow? Try this patch.
Hey all,

A few deployers of Diablo, including Wikipedia, were experiencing very slow response times from the EC2 metadata service in Nova. Yesterday and today I tracked the bug down to a problem in the way the database queries for the metadata results were being generated.

I'm not all that keen to submit the code to the stable/diablo branch, for two reasons: I don't really want to spend the time to write test cases for the code, and Essex and beyond do NOT have this problem. But if you are experiencing very long response times from the metadata service in a Diablo deployment, you can check out the patch that fixes this issue here:

https://github.com/jaypipes/nova/commit/8becb58127ce7a10e81c68cba1ea5469db5b17d1

Ryan Lane at Wikipedia reports that the patch does indeed reduce metadata response times for an individual guest -- such as when cloud-init makes calls to the metadata service -- in his case from 11-15 seconds per call down to 0.1 second. Administrative calls to get metadata for all instances also drop pretty dramatically, due to the removal of now-unnecessary joins in the database queries in question.

Cheers,
-jay
Re: [Openstack] Distributed quota manager concept
On 07/17/2012 06:08 PM, Everett Toews wrote:

> Setting aside any SQL/NoSQL religious debate, or even the "best tool for the job" argument, I think you'd find this to be a hard sell to the operations crowd. Nobody is going to want to have all of their OpenStack data in a SQL DB (which they may have already gone through the trouble of making HA) but then have just the quota data in a NoSQL DB. I would urge you to consider starting with SQL and then making NoSQL an option if there is demand for it.

I think both SQL and NoSQL solutions are warranted. Kevin, you could take the approach that Keystone does and have SQL and KVS drivers...

Best,
-jay
Re: [Openstack] Possible Glance Bug?
Try (old glance client): glance index deleted=True to see image records that are marked deleted, or: glance index deleted=None to see ALL image records. The new glance client -- python-glanceclient -- does not yet support filtering for deleted image records, but it should be able to do: glance image-list --deleted=True once https://bugs.launchpad.net/python-glanceclient/+bug/1026301 is completed. The scrubber does not delete image records in the registry; it only destroys the disk images from backend storage for any image in 'pending_delete' status. Best, -jay On 07/18/2012 02:45 PM, Daneyon Hansen wrote: All, I'm questioning whether I have come across a bug in Glance. The image ending in 39d is a snapshot. It does not show up under glance index or in the Horizon GUI, but appears in the database as active with the deleted bit set:

mysql> select id, name, status, deleted_at, deleted from images where name='proxy';
+--------------------------------------+------+---------+---------------------+---------+
| id                                   | name | status  | deleted_at          | deleted |
+--------------------------------------+------+---------+---------------------+---------+
| 03dbbcf0-2a11-435d-ba67-4de75276ba20 | name | deleted | 2012-07-06 20:37:33 |       1 |
| 54366544-d758-4896-917a-7558866dc39d | name | active  | 2012-07-06 15:56:09 |       1 |
| 7bf7e523-f21e-40b0-80cf-421490868e56 | name | active  | NULL                |       0 |
| 891bb288-fb85-4807-9485-ece6a377bb3d | name | deleted | 2012-07-06 21:01:38 |       1 |
+--------------------------------------+------+---------+---------------------+---------+

I tried using glance-scrubber but the image was not deleted. It appears the delete operation may not be completing, leaving some db records for the image marked as deleted and others indicating it is not. Whatever Glance is looking at, those records are showing it as not deleted. Any feedback would be appreciated.
--Daneyon Hansen
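[Editor's illustration] The row Daneyon flagged is the inconsistent one: status 'active' with the deleted bit set. Finding such rows is a single SELECT over the images table; sketched here against sqlite for a self-contained demo, with the IDs shortened:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE images (id TEXT, name TEXT, status TEXT, deleted INTEGER)")
db.executemany("INSERT INTO images VALUES (?, ?, ?, ?)", [
    ("03dbbcf0", "name", "deleted", 1),
    ("54366544", "name", "active", 1),   # inconsistent: active but deleted=1
    ("7bf7e523", "name", "active", 0),
    ("891bb288", "name", "deleted", 1),
])

# Rows where the deleted flag and the status column disagree.
bad = db.execute(
    "SELECT id FROM images WHERE deleted = 1 AND status != 'deleted'"
).fetchall()
print(bad)  # -> [('54366544',)]
```

The same WHERE clause run against the real MySQL registry database would enumerate every half-deleted image record worth investigating.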
Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom
On 07/17/2012 05:47 AM, Thomas, Duncan wrote: Jay Pipes on 16 July 2012 18:31 wrote: On 07/16/2012 09:55 AM, David Kranz wrote: Sure, although in this *particular* case the Cinder project is a bit-for-bit copy of nova-volumes. In fact, the only things really of concern are: * Providing a migration script for the database tables currently in the Nova database to the Cinder database * Ensuring that Keystone's service catalog exposes the volume endpoint along with the compute endpoint so that volume API calls are routed to the right endpoint (and there's nothing preventing a simple URL rewrite redirect for the existing /volumes calls in the Compute API to be routed directly to the new Volumes endpoint, which has the same API) Plus stand up a new rabbit HA server. Why? This has nothing to do with the nova-volumes-to-Cinder move... Plus stand up a new HA database server. This is up to you. Technically, you could use the same database server as Nova and point your Cinder service to it (exactly like you do for nova-volumes, which queries the Nova database for volume information). You could gradually move the data to another database server over time, but it's not required at the time you transition to Cinder. Plus understand the new availability constraints of the nova-cinder interface point You can act like Cinder is just nova-volumes and not really change anything in your environment at all. Wherever you are installing the nova-volume daemon now, you would be installing Cinder. You can decide to understand the availability constraints at some future point. And there are bug fixes and correctness fixes slowly going into Cinder, so it is not a bit-for-bit copy any longer... This whole conversation has been about whether to continue applying patches to **both** nova-volumes and Cinder. We are trying to avoid doing that for an extended period of time because it is a pain. Look, software changes over time. It's not a static thing.
We're trying to discuss the transition from an internal nova-volumes to an external volume service, but let's not go overboard in making the transition out to be more than it actually is... Best, -jay
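[Editor's illustration] The "simple URL rewrite redirect" Jay describes -- sending /volumes calls that arrive at the Compute API on to the new volume endpoint -- can be sketched as a plain function. This is an illustration of the idea only, not actual Nova middleware; the host names and the v1/v2 version prefixes are assumptions:

```python
VOLUME_ENDPOINT = "http://volume-host:8776"  # assumed Cinder endpoint


def rewrite_volume_url(compute_url):
    """Redirect /volumes requests on the Compute API to the volume service.

    Compute API paths look like /v2/<tenant_id>/volumes[/...]; everything
    from /volumes onward is preserved, only the service prefix changes.
    """
    prefix, sep, rest = compute_url.partition("/volumes")
    if not sep:
        return compute_url  # not a volume call; leave it alone
    tenant_part = prefix.rsplit("/", 1)[-1]  # keep the tenant scoping
    return "%s/v1/%s/volumes%s" % (VOLUME_ENDPOINT, tenant_part, rest)


url = rewrite_volume_url("http://compute-host:8774/v2/tenant1/volumes/abc")
print(url)  # -> http://volume-host:8776/v1/tenant1/volumes/abc
```

Because the two services expose the same volumes API, a rewrite of this shape is all the compatibility shim a deployer would need during the transition.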
Re: [Openstack] [Quantum] Network, Subnet and Port names
On 07/17/2012 01:27 AM, Dan Wendlandt wrote: Hi Gary, this is an example of when I wish openstack APIs had a style-guide to try to ensure some consistency across projects. Yeah, we actually discussed this a long time ago on the PPB and, IIRC, the decision was made to not have some strict API committee or dictator and instead rely on PTLs to socialize the API with the community and other PTLs and find common ground where possible. For those new to the conversation, the original topic of discussion is whether names for API objects should be forced to be unique (presumably within a tenant?) or allowed to be duplicated. The general feeling from the meeting was that since UUIDs are unique, the API itself would not enforce name uniqueness. That also led to the point that names should then be optional, since they are really for informational/display purposes only. Personally, I tend to think that "description" tends to imply a sentence ("private network for tenant1"), rather than a simple name ("tenant1-net"). There's also the fact that other openstack services like nova and glance use the term "name" with the similar (I believe) model that a name need not be unique. Yes, I'm in the "name is merely a label" camp. Not unique, not mandatory. Would be curious to hear what others think. The only thing I'm quite sure about is that there would be value in creating some notion of openstack API consistency best practices to give a more cohesive feel to APIs across different projects in the openstack family. There could be some value there, sure. Best, -jay Dan On Mon, Jul 16, 2012 at 10:05 PM, Gary Kotton gkot...@redhat.com wrote: Hi, If the name is intended to be a description, then how about the idea of calling the field "description" instead? This is far more descriptive and does not lead the user to think that this should be unique.
Thanks, Gary -- ~~~ Dan Wendlandt Nicira, Inc: www.nicira.com twitter: danwendlandt ~~~
Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom
On 07/16/2012 09:55 AM, David Kranz wrote: An excellent idea. I believe that if the below message had been sent in April, the tenor of the discussion would have been much different. I think a main source of angst around this was that there was no mention at the Folsom summit of nova-volume being simply removed immediately, except perhaps inside the session devoted to this subject which many could not attend. Again, this was proposed, not decided. Vish and JohnG sent out the mailing list post to gather feedback. Stepping up a level, it is hard for a project to move from a developer-centric (no real customers) way of doing things to one driven by having real enterprise users/customers. I know this from past experience. At a certain point, we will have to live with APIs or code organizations that are sub-optimal because it is just too much of a burden on real users/operators to change them. Indeed (see: the EC2 APIs). Obviously some members of the community believe this tipping point was the Essex release. It is also inevitable that development will slow down by some measures as the cost of regressions rises and what George Reese called "technical debt" has to be repaid. Sure, although in this *particular* case the Cinder project is a bit-for-bit copy of nova-volumes.
In fact, the only things really of concern are: * Providing a migration script for the database tables currently in the Nova database to the Cinder database * Ensuring that Keystone's service catalog exposes the volume endpoint along with the compute endpoint so that volume API calls are routed to the right endpoint (and there's nothing preventing a simple URL rewrite redirect for the existing /volumes calls in the Compute API to be routed directly to the new Volumes endpoint, which has the same API) IMHO, it's not at all like the Keystone Light rework that was: * done in private with little community involvement * changed the way the API behaved Going forward, and this may be controversial, I think these kinds of issues would be best addressed by following these measures: 1. Require each blueprint that involves an API change or significant operational incompatibility to include a significant justification of why it is necessary, what the impact would be, and a plan for No disagreement here at all; though I will point out in the case of Cinder, there are no API changes. deprecation/migration. This justification should assume that the remedy will have to be applied to a large, running OpenStack system in its many possible variations, without having to shut down the system for some unknown amount of time. Yep, also agreed here too. And I think John's done a good job of highlighting this in http://wiki.openstack.org/Cinder 2. Require such blueprints to be approved by a technical committee that includes a significant representation of users/operators. The tradeoffs can be difficult and need to be discussed. Meh... I'm not sure this would actually prove useful. Frankly, we discussed this issue at the PPB meeting last week and the outcome of it was: make sure the mailing list is notified with a request for feedback on migration issues and that you work with the devstack folks to ensure smooth testing migration.
IMHO, it's the domain of the PTLs of the projects that are affected that should gather feedback and find consensus. The PPB/technical committee should advise, but I believe this is the purview of the PTLs to make the final decision on... 3. The technical committee should declare that the bar for incompatible changes is high, and getting higher. Again, in the case of Cinder, this isn't really an incompatible change in the sense that Keystone Light was. Nevertheless, the bar for incompatible changes SHOULD be high. Whether the technical committee is responsible for being the final arbiter for this or whether the affected projects' PTLs should be is a question for debate. Some might argue that this is too much of a burden and takes authority away from PTLs, but I think the statement of stability to the community (and others) would more than compensate for that. Sure, that's a good point, too. I don't think the technical committee should be involved as anything more than a group to bounce competing ideas off of -- the PTLs should be the final decision-makers. But I see your point and respect it. Best, -jay -David On 7/16/2012 8:04 AM, Sean Dague wrote: On 07/12/2012 05:40 PM, Vishvananda Ishaya wrote: Excellent points. Let me make the following proposal: 1) Leave the code in nova-volume for now. 2) Document and test a clear migration path to cinder. 3) Take the working example upgrade to the operators list and ask them for opinions. 4) Decide based on their feedback whether it is acceptable to cut the nova-volume code out for folsom. +1 -Sean
Re: [Openstack] Heat application for incubation
cc'ing the PPB mailing list... On 07/16/2012 06:25 PM, Steven Dake wrote: Dear members of the Project Policy Board: After four months of development on the Heat project[1], the developers voted[2] to apply for incubation. The developers feel Heat provides a feature-rich user experience and is stable enough for more general evaluation by the OpenStack community. We did watch the Ceilometer incubation proposal, and noted the fact that the PPB would like projects to spend more time in initial development before requesting incubation. Still, we feel our code is in great shape and ready for evaluation. As a result, we ask that the PPB take up our application a few weeks after the Grizzly summit has completed, providing more time for community evaluation. In the meantime, the developers would be happy to answer any questions the community or Project Policy Board may have for us. Our pending application can be viewed at: http://wiki.openstack.org/Heat I will take a look at the proposal later on. However, I might point out that one of the things we asked Cinder to come up with was an easy public demo of their project. Perhaps you might want to work with Monty Taylor's team to put up an example of Heat working over the OpenStack API on the stackforge.org domain? You could use the existing CI machines, I'm sure, for this kind of thing... Anyway, food for thought. :) Best, -jay
Re: [Openstack] Capacity based scheduling: What updated free_ram_mb in Folsom
Hi Phil, The nova.db.api.compute_node_update() call is what the individual virt drivers call to update the compute node stats. Grep for that and you'll see where the compute node data gets set. Best, -jay On 07/13/2012 09:38 AM, Day, Phil wrote: Hi Folks, I was reviewing a code change to add generic retries for build failures ( https://review.openstack.org/#/c/9540/2 ), and wanted to be sure that it wouldn't invalidate the capacity accounting used by the scheduler. However, I've been sitting here for a while working through the Folsom scheduler code trying to understand how the capacity based scheduling now works, and I'm sure I'm missing something obvious but I just can't work out where the free_ram_mb value in the compute_node table gets updated. I can see the database api method to update the values, compute_node_utilization_update(), but it doesn't look as if anything in the code ever calls it. From when I last looked at this / various discussions here and at the design summits, I thought the approach was that: - The scheduler would make a call (rather than a cast) to the compute manager, which would then do some verification work, update the DB table whilst in the context of that call, and then start a thread to complete the spawn. The need to go all the way to the compute node as a call was to avoid race conditions from multiple schedulers. (The change I'm looking at is part of a blueprint to avoid such a race, so maybe I imagined the change from cast to call?) - On a delete, the capacity_notifier (which had to be configured into the list_notifier) would detect the delete message and decrement the database values. But now I look through the code it looks as if the scheduler is still doing a cast (scheduler/driver), and although I can see the database api call to update the values, compute_node_utilization_update(), it doesn't look as if anything in the code ever calls it.
The ram_filter scheduler seems to use the free_ram_mb value, and that value seems to come from the host_manager in the scheduler, which is read from the database, but I can't for the life of me work out where these values are updated in the database. The capacity_notifier, which used to decrement values on a VM deletion only (according to the comments, the increment was done in the scheduler), seems to have now disappeared altogether in the move of the notifier to openstack/common? So I'm sure I'm missing some other even more cunning plan on how to keep the values current, but I can't for the life of me work out what it is -- can someone fill me in, please? Thanks, Phil
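[Editor's illustration] Jay's answer is that the virt drivers periodically report host stats and the manager persists them via nova.db.api.compute_node_update() -- free_ram_mb is recomputed from the hypervisor's view rather than incremented/decremented per instance event. A heavily simplified sketch of that shape (made-up field names and stand-in functions, not the actual Nova code):

```python
# A toy stand-in for the compute_nodes table.
compute_nodes = {}


def compute_node_update(node_id, values):
    """Stand-in for nova.db.api.compute_node_update(): upsert stats."""
    compute_nodes.setdefault(node_id, {}).update(values)


def report_host_stats(node_id, total_ram_mb, used_ram_mb):
    """What a virt driver's periodic stats report boils down to:
    recompute free_ram_mb from the hypervisor's current view and
    persist it, instead of adjusting a counter on every run/delete."""
    compute_node_update(node_id, {
        "memory_mb": total_ram_mb,
        "free_ram_mb": total_ram_mb - used_ram_mb,
    })


report_host_stats("node1", total_ram_mb=16384, used_ram_mb=4096)
print(compute_nodes["node1"]["free_ram_mb"])  # -> 12288
```

Recomputing from the source of truth each period is what made the old increment/decrement capacity_notifier path (and compute_node_utilization_update()) unnecessary.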
Re: [Openstack] [keystone] Rate limit middleware
On 07/11/2012 07:28 PM, Rafael Durán Castañeda wrote: Thank you guys for the info, I didn't know about some of the projects. However, writing my own in-house stuff is not what I was considering, but rather adding a middleware into Keystone -- nothing fancy, but extensible so it covers at least the most basic use cases, pretty much like the nova middleware. So, would you like to see something like that in keystone or not? I think that's what Kevin was trying to say you didn't need to do, since Turnstile can already do that for you :) You simply insert the Turnstile Python WSGI middleware into the Paste deploy pipeline of Keystone, and then you get rate limiting in Keystone. You'd just add this into the Keystone paste.ini file:

[filter:turnstile]
paste.filter_factory = turnstile.middleware:turnstile_filter
redis.host = <your Redis database host name or IP>

And then insert the turnstile middleware in the Keystone pipeline, like so:

[pipeline:public_api]
pipeline = stats_monitoring url_normalize token_auth admin_token_auth xml_body json_body debug ec2_extension turnstile public_service

The pipeline value should be a single line, of course... And then configure Turnstile to your needs. See: http://code.activestate.com/pypm/turnstile/ If you wanted to do some custom stuff, check out the custom Nova Turnstile middleware for an example: http://code.activestate.com/pypm/nova-limits/ All the best, -jay
Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom
On 07/12/2012 10:36 AM, Thomas, Duncan wrote: We've got volumes in production, and while I'd be more comfortable with option 2 for the reasons you list below, plus the fact that cinder is fundamentally new code with totally new HA and reliability work needing to be done (particularly for the API endpoint), it sounds like the majority is strongly favouring option 1... Actually, I believe Cinder is essentially a bit-for-bit copy of nova-volumes. John G, is that correct? It's this similarity that really makes option 1 feasible. If the codebases (and API) were radically different, removal like this would be much more difficult IMHO. Best, -jay
Re: [Openstack] [keystone] Rate limit middleware
On 07/12/2012 12:26 PM, Rafael Durán Castañeda wrote: Unless I'm missing something, nova_limits is not applicable to Keystone, since it takes the tenant_id from 'nova.context', which obviously is not available for Keystone; though adapting/extending it to Keystone should be trivial and probably is the way to go. Sure, though I'm pointing out that this could/should be an external project (like nova_limits) and not something to be proposed for merging into Keystone core... Best, -jay
Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom
On 07/12/2012 12:32 PM, George Reese wrote: This community just doesn't give a rat's ass about compatibility, does it? a) Please don't be inappropriate on the mailing list b) Vish sent the email below to the mailing list *precisely because* he cares about compatibility. He wants to discuss the options with the community and come up with a reasonable action plan with the Cinder PTL, John Griffith, for the move. Now, would you care to be constructive with your criticism? Thanks, -jay On Jul 11, 2012, at 10:26 AM, Vishvananda Ishaya wrote: Hello Everyone, Now that the PPB has decided to promote Cinder to core for the Folsom release, we need to decide what happens to the existing Nova Volume code. As far as I can see it there are two basic strategies. I'm going to give an overview of each here:

Option 1 -- Remove Nova Volume
==============================
Process
-------
* Remove all nova-volume code from the nova project
* Leave the existing nova-volume database upgrades and tables in place for Folsom to allow for migration
* Provide a simple script in cinder to copy data from the nova database to the cinder database (The schema for the tables in cinder are equivalent to the current nova tables)
* Work with package maintainers to provide a package based upgrade from nova-volume packages to cinder packages
* Remove the db tables immediately after Folsom

Disadvantages
-------------
* Forces deployments to go through the process of migrating to cinder if they want to use volumes in the Folsom release

Option 2 -- Deprecate Nova Volume
=================================
Process
-------
* Mark the nova-volume code deprecated but leave it in the project for the folsom release
* Provide a migration path at folsom
* Backport bugfixes to nova-volume throughout the G-cycle
* Provide a second migration path at G
* Package maintainers can decide when to migrate to cinder

Disadvantages
-------------
* Extra maintenance effort
* More confusion about storage in openstack
* More complicated upgrade paths need to be supported

Personally I think Option 1 is a much more manageable
strategy because the volume code doesn't get a whole lot of attention. I want to keep things simple and clean with one deployment strategy. My opinion is that if we choose option 2 we will be sacrificing significant feature development in G in order to continue to maintain nova-volume for another release. But we really need to know if this is going to cause major pain to existing deployments out there. If it causes a bad experience for deployers we need to take our medicine and go with option 2. Keep in mind that it shouldn't make any difference to end users whether cinder or nova-volume is being used. The current nova-client can use either one. Vish -- George Reese - Chief Technology Officer, enStratus e: george.re...@enstratus.com Skype: nspollution t: @GeorgeReese p: +1.207.956.0217 enStratus: Enterprise Cloud Management - @enStratus - http://www.enstratus.com To schedule a meeting with me: http://tungle.me/GeorgeReese
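[Editor's illustration] Option 1's "simple script to copy data from the nova database to the cinder database" works precisely because the two schemas are equivalent: the copy is essentially a table-to-table INSERT ... SELECT. A rough sketch of the idea using sqlite's ATTACH (the real deployments were typically MySQL, and the real volumes table has many more columns than shown here):

```python
import sqlite3

nova = sqlite3.connect(":memory:")
# Attach a second (empty) database to stand in for the cinder DB.
nova.execute("ATTACH ':memory:' AS cinder")

# A drastically trimmed-down volumes schema, identical on both sides.
for schema in ("main", "cinder"):
    nova.execute(
        "CREATE TABLE %s.volumes (id TEXT PRIMARY KEY, size INTEGER, "
        "status TEXT)" % schema)

nova.executemany("INSERT INTO main.volumes VALUES (?, ?, ?)",
                 [("vol-1", 10, "available"), ("vol-2", 20, "in-use")])

# Because the schemas match, migration is a straight copy.
nova.execute("INSERT INTO cinder.volumes SELECT * FROM main.volumes")

count = nova.execute("SELECT COUNT(*) FROM cinder.volumes").fetchone()[0]
print(count)  # -> 2
```

This is also why "leave the existing nova-volume tables in place for Folsom" is cheap: the source tables stay readable until the copy is verified, and dropping them afterwards is a one-way cleanup step.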
Re: [Openstack] [CHEF] Clarification on osops/chef-repo/roles/nova-compute.rb
On 07/11/2012 12:00 PM, Monty Taylor wrote: snip Let me know if there are any things that people are wanting related to any of these projects from the OpenStack CI infrastructure. Foodcritic/jsonlint seem pretty easy - deployments onto bare nodes using the chef stuff similar to our devstack-based installs might take a little more work and would need to be planned for. :) Yes, agreed, but this is REALLY what we need to be testing. It's totally cool to test for cookbook style and for JSON and Ruby correctness, but we need to test whether a set of roles/cookbooks in the repo can be successfully deployed into a bare-metal cluster since that is the whole point ;) Alternately, deploying into a virtualized cluster would also be fine... I look forward to working with you guys to set this up. -jay
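[Editor's illustration] The jsonlint half of the easy checks Monty mentions is straightforward to reproduce: Chef role files can be plain JSON, so a gate job only needs to try parsing every .json file under the repo. A minimal self-contained sketch (the roles/ layout and file contents here are hypothetical):

```python
import json
import os
import tempfile


def lint_json_tree(root):
    """Return a list of (path, error) pairs for unparsable .json files."""
    errors = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith(".json"):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path) as fh:
                    json.load(fh)
            except ValueError as exc:
                errors.append((path, str(exc)))
    return errors


# Example: a repo with one good and one broken role file.
repo = tempfile.mkdtemp()
os.makedirs(os.path.join(repo, "roles"))
with open(os.path.join(repo, "roles", "good.json"), "w") as fh:
    fh.write('{"name": "nova-compute"}')
with open(os.path.join(repo, "roles", "bad.json"), "w") as fh:
    fh.write('{"name": }')

print(len(lint_json_tree(repo)))  # -> 1
```

Style linting (foodcritic) and actual bare-metal deployment tests are, as the thread notes, separate and much heavier jobs; this only covers syntactic validity of the JSON role data.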