Re: [Openstack-doc-core] Change in openstack/openstack-manuals[master]: Adding Fedora/RHEL/Centos instructions.
So currently you can't filter xi:includes in that way. You'd instead put the attribute on the contents of the included file. Btw., I know I used arch=ubuntu in my example, but I realized I was being silly. DocBook has an os attribute, so os=ubuntu makes more sense. I think how filtering works will be clearer when you see a working example.

David

On 05/21/2012 08:27 PM, Lorin Hochstein wrote:

I've never done either, so I have no direct experience here. That said, I like the small-bucket approach because it seems like there are many cases where the differences across versions are small, and having large buckets would incur a lot of duplication. If there are sections with large differences across distributions, we can just write separate documents and conditionally include them, assuming this works:

    <command arch="rhel;centos;fedora"><xi:include href="fedora-foo.xml"/></command>
    <command arch="debian;ubuntu"><xi:include href="ubuntu-foo.xml"/></command>

Take care,
Lorin
--
Lorin Hochstein
Lead Architect - Cloud Services
Nimbis Services, Inc.
https://www.nimbisservices.com/

On May 21, 2012, at 10:18 AM, Anne Gentle wrote:

So, yes, what is the best solution here? I can see it working another way, but there may be maintenance tradeoffs.

Large buckets: Only chapter-level inclusion to indicate which distro. Each chapter contains normal markup according to our conventions. There would be two book files, one for ubuntu/deb, one for rhel/centos/fedora, with conditional includes only on the xi:include code in the book file.

Small buckets: Keep the same chapter files we have now, but mark up inside the files with <command arch="rhel;centos;fedora"> on each command.

I've maintained doc sets both ways, so for me either way is reasonable. But I worry some about adding more markup within files that we have to explain and understand ourselves. Thoughts?
Anne

On Mon, May 21, 2012 at 8:58 AM, David Cramer david.cra...@rackspace.com wrote:

On 05/20/2012 11:56 AM, Lorin Hochstein (Code Review) wrote: Lorin Hochstein has posted comments on this change.

Change subject: Adding Fedora/RHEL/Centos instructions.

Patch Set 4: Looks good to me, but someone else must approve (1 inline comment)

This looks like a good way to start. Ultimately, I think it would be really cool if we could use XML to mark up distribution-specific content and generate a separate manual for each distribution. For example, something like:

    <distro><ubuntu>apt-get install foo</ubuntu><fedora>yum install foo</fedora></distro>

I'd suggest using attributes for that kind of thing. Depending on what you want to achieve you could do:

    <command arch="rhel">yum install foo</command>
    <command arch="ubuntu">apt-get install foo</command>

Then create different versions of the guide by filtering out one or the other (by adding <profile.arch>rhel</profile.arch> or <profile.arch>ubuntu</profile.arch> to your pom). If you have a more complex situation, you can even do things like:

    <command arch="rhel;centos;fedora">yum install foo</command>
    <command arch="ubuntu;deb">apt-get install foo</command>

And in the pom things like <profile.arch>rhel;deb</profile.arch>. Alternatively, you could do something like:

    <para arch="rhel">Blah de blah.</para>
    <para arch="ubuntu">Ipsum lorem.</para>

and then, based on the attribute, have the XSLTs put an icon off to the side (or use some other mechanism) indicating that this information applies to rhel, ubuntu, or whatever. Those are just some examples to get discussion started. Figure out your needs and we can tweak the XSLTs to make it happen.

David

But I'd rather start getting this content in now. I'd also like to see a section at the beginning that discusses how well supported OpenStack is on different distributions.
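David's attribute-based profiling can be sketched as a simple filter pass over the document tree. This is only an illustration in Python with xml.etree, not the actual DocBook XSL profiling stylesheets; the `os` attribute and semicolon-separated value handling follow the examples in the thread.

```python
import xml.etree.ElementTree as ET

def profile(elem, attr, keep):
    """Remove any descendant whose profiling attribute is present and
    does not include the selected value (values are ;-separated)."""
    for child in list(elem):
        values = child.get(attr)
        if values is not None and keep not in values.split(";"):
            elem.remove(child)
        else:
            profile(child, attr, keep)
    return elem

doc = ET.fromstring(
    '<section>'
    '<command os="rhel;centos;fedora">yum install foo</command>'
    '<command os="ubuntu;deb">apt-get install foo</command>'
    '</section>'
)
# Build the "ubuntu" flavour of the guide: the rhel command is dropped.
profile(doc, "os", "ubuntu")
print(ET.tostring(doc, encoding="unicode"))
```

In the real toolchain the equivalent selection is made by the profiling stylesheets driven by the pom's `profile.*` parameters; the point here is just that filtering on attributes is a mechanical tree walk.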
In particular, there are some distributions where OpenStack is a first-class citizen (Ubuntu, Fedora) in the sense that the distribution has official packages. There are other distros where package support is provided by third parties (e.g., SLES). I have no idea what the state of OpenStack is on RHEL. Do we use official Fedora packages for that? GridDynamics packages? And are CentOS and Scientific Linux supported by being RHEL-alike, or are there people on those projects who look at OpenStack support?

File doc/src/docbkx/openstack-install/ch_assumptions.xml
Line 15: CentOS 6 + CR distributions.</para></listitem>

What does +CR refer to? Also, what about Debian, openSUSE and SLES?

--
To view, visit https://review.openstack.org/7431
To unsubscribe, visit https://review.openstack.org/settings
Gerrit-MessageType: comment
Gerrit-Change-Id:
[Openstack] is it possible to make openstack kickstart
Hi,

OpenStack already supports boot from ISO image. I am wondering whether I can use a kickstart install (or another automated installation) within OpenStack, because OpenStack's images only support one root partition (or can they be multi-partition? I just don't know), which does not always meet our needs.

What is in my mind now is that I can make a special ISO image that automatically reads a ks file I provide and starts the installation; if I want different settings, I can change that ks file manually.

Any good advice?

Regards
--
===
William Herry
williamherrych...@gmail.com
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help : https://help.launchpad.net/ListHelp
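The approach described above, one special ISO plus a swappable ks file, can be scripted so each variant is generated rather than hand-edited. A minimal sketch; the partition layout and field names here are hypothetical and not tied to any OpenStack tooling:

```python
import string

# Kickstart template with the values that vary between builds
# left as $-placeholders.
KS_TEMPLATE = string.Template("""\
install
lang en_US.UTF-8
rootpw --plaintext $root_password
clearpart --all --initlabel
part /boot --size=512 --fstype=ext4
part /     --size=$root_mb --fstype=ext4 --grow
part /data --size=$data_mb --fstype=ext4
reboot
""")

def render_ks(root_password, root_mb=8192, data_mb=4096):
    # Each call yields the ks.cfg text for one image build; write it
    # into the ISO tree before running the ISO build step.
    return KS_TEMPLATE.substitute(
        root_password=root_password, root_mb=root_mb, data_mb=data_mb)

print(render_ks("secret", data_mb=10240))
```

The generated file would then be dropped into the ISO before mkisofs (or similar) runs, so "different settings" becomes a different set of arguments rather than manual edits.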
[Openstack] Essex in production
Hi guys,

As I'm working on the Essex validation and documenting the migration from Diablo, I'd love to have some feedback here and there, which would also be useful for all of us, on putting Essex in production:
- Unexpected events
- Ease of install?
- Stability
- Erratic behaviour
etc.

There could be some case-specific bugs, but if there is something not to miss, thanks for letting us know.

Best regards,
Razique
--
Nuage Co - Razique Mahroua
razique.mahr...@gmail.com
Re: [Openstack] Essex in production
Note on Quantum with Nova: you should replace nova_ipam_lib.py and manager.py in the nova files if you're going to use that. Also, a note on packaging for Quantum: there are some bugs that require a few workarounds on Ubuntu.
Re: [Openstack] [metering] high-level design proposal
On 05/21/2012 10:52 PM, Doug Hellmann wrote: I have written up some of my thoughts on a proposed design for ceilometer in the wiki [1]. I'm sure there are missing details, but I wanted to start getting ideas into writing so they could be discussed here on the list, since I've talked about different parts with a couple of you separately. Let me know what you think, and especially if I am not clear or have left out any details. Thanks, Doug

[1] http://wiki.openstack.org/EfficientMetering/ArchitectureProposalV1

Thanks a lot for putting this together, Doug. A few questions:

* "The collector runs on one or more central management servers to monitor the message queues (for notifications and for metering data coming from the agent). Notification messages are processed and turned into metering messages and sent back out onto the message bus using the appropriate topic. Metering messages are written to the data store without modification."

- Is the reason collectors do not write directly to the database to allow db-less implementations, as Francis suggested earlier? If so, it may be useful to say it explicitly.

* "Plugins may require configuration options, so when the plugin is loaded it is asked to add options to the global flags object, and the results are made available to the plugin before it is asked to do any work."

- I am not sure where the global flags object resides and how options are populated. I think it would make sense for this to be globally controlled, and it may therefore require a simple discovery exchange on the queue to retrieve values and set defaults if they do not exist yet.

* "Metering messages are signed using the hmac module in Python's standard library. A shared secret value can be provided in the ceilometer configuration settings. The messages are signed by feeding the message key names and values into the signature generator in sorted order. Non-string values are converted to unicode and then encoded as UTF-8. The message signature is included in the message for verification by the collector."

- The signature is also kept in the database for future audit processes; it may be worth mentioning that here.
- In addition to a signature, I think we would need a sequence number embedded by the agent in each message sent, so that loss of messages, or forgery of messages, can be detected by the collector and further audit processes.

Thanks again,
Nick
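The signing scheme quoted above maps directly onto the stdlib `hmac` module, and Nick's sequence-number idea layers on top of it. The sketch below is illustrative only: the field names (`signature`, `seq`, `counter`) and the SHA-256 digest choice are assumptions, not taken from the ceilometer source.

```python
import hashlib
import hmac

SECRET = b"shared-secret"  # from ceilometer configuration, per the proposal

def sign(message):
    """Feed key names and values into the HMAC in sorted key order;
    non-string values are stringified and encoded as UTF-8."""
    digest = hmac.new(SECRET, digestmod=hashlib.sha256)
    for name in sorted(message):
        digest.update(name.encode("utf-8"))
        digest.update(str(message[name]).encode("utf-8"))
    return digest.hexdigest()

def verify(message, last_seq):
    """Collector side: check the signature, then detect gaps or
    replays via a per-agent sequence number."""
    body = {k: v for k, v in message.items() if k != "signature"}
    if not hmac.compare_digest(message["signature"], sign(body)):
        return "forged"
    if message["seq"] != last_seq + 1:
        return "gap"
    return "ok"

msg = {"counter": "instance", "volume": 1, "seq": 7}
msg["signature"] = sign(msg)
print(verify(msg, last_seq=6))  # -> ok
```

With the sequence number in place, a dropped or replayed message shows up as a gap even when its signature is valid, which is the detection property Nick is asking for.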
Re: [Openstack] SAN and Fibrechannel with OpenStack
Thank you for your answer. I am not saying that I need a clustered filesystem; with clvm I just have all LVM volumes available on all machines, but only one machine would attach to the filesystem of a specific LV at a time.

On 05/17/2012 05:07 AM, Narayan Desai wrote: I'm not sure that it would be particularly easy to make nova-volume support clustered filesystems; the current model only supports attaching a volume to a single instance at a time. Aside from that, it shouldn't be too hard to use FC as the data path instead of iSCSI. We're looking at using iSER in a similar capacity. -nld

On Wed, May 16, 2012 at 4:05 AM, Wolfgang Hennerbichler wolfgang.hennerbich...@risc-software.at wrote: dear openstack godfathers; I do plan to migrate from crappy vmware and some ibm based cloud stack to openstack and kvm. here's the thing: I am lucky enough to have decent hardware, all the compute nodes are interconnected via fibre channel. so I don't want and don't need iscsi. do you think I can make it with something like clvm? I read through the docs of openstack, but I am not really sure now if I can make clvm fly without hacking around in openstack (and nova-volume) too much, especially when it comes to live migration and so on... I realize OpenStack was not built for SAN and FC, but I would really like to hear your opinions on that.
Thanks, Wolfgang
--
DI (FH) Wolfgang Hennerbichler
Software Development
Unit Advanced Computing Technologies
RISC Software GmbH
A company of the Johannes Kepler University Linz
IT-Center Softwarepark 35
4232 Hagenberg
Austria
Phone: +43 7236 3343 245
Fax: +43 7236 3343 250
wolfgang.hennerbich...@risc-software.at
http://www.risc-software.at
Re: [Openstack] SAN and Fibrechannel with OpenStack
Hi Diego,

Thanks for your answer. I will definitely give openstack a try. Not only a try, I will try to make it work.

Wolfgang

On 05/17/2012 09:40 AM, Diego Parrilla Santamaría wrote: Hi Wolfgang, the latest versions of our distro support NFS as backend storage for instances, volumes and images. Basically a zone shares the same NFS mountpoint for instances and another mountpoint for volumes, and I guess it does not differ a lot from what you want to do with FC or iSCSI. It's on our immediate roadmap to use FC and iSCSI instead of NFS, but maybe you can give our NFS stuff a try until then ;-)

Cheers
--
Diego Parrilla, CEO
www.stackops.com | diego.parri...@stackops.com | +34 649 94 43 29 | skype:diegoparrilla
Thanks, Wolfgang
--
DI (FH) Wolfgang Hennerbichler
Software Development
Unit Advanced Computing Technologies
RISC Software GmbH
A company of the Johannes Kepler University Linz
IT-Center Softwarepark 35
4232 Hagenberg
Austria
Phone: +43 7236 3343 245
Fax: +43 7236 3343 250
wolfgang.hennerbich...@risc-software.at
http://www.risc-software.at
Re: [Openstack] [metering] high-level design proposal
If I'm understanding this correctly, the Collector is kind of like an Agent in Quantum (it sits on a machine doing stuff and passing info upstream). If you look at the approach they have now, the Quantum Agent writes directly to the DB. But in the next version they seem to be moving to having the Agent send data upstream to the Plugin in Quantum. Why not do something similar? I mean, if you have an MQ cluster in a deployment, I think it makes more sense to have one thing that handles the db stuff than to have each Collector connect to the db.

Endre.
[Openstack] centos 6 images
I am trying to put together an image for CentOS 6 that works like cloud-init on Ubuntu does. Currently I have ssh keys getting imported, but I'm having some problems getting the disk to dynamically resize to the flavor template, as well as getting the hostname set in Horizon pushed into the image. Does anyone have any howtos or suggestions on how to get this done? Is there a cloud-init for CentOS just like Ubuntu's? I would also be interested in how to do this with Debian as well.

Thanks!
jason
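Until a cloud-init package is available on the image, a rough stand-in is a boot script that pulls the EC2-style metadata nova exposes and applies it. The sketch below separates fetching from applying so the logic is testable; the metadata keys mirror the EC2 metadata service, but treat the exact paths and commands as assumptions.

```python
def plan_boot_actions(metadata):
    """Turn fetched metadata into the shell commands a boot script
    would run: install the ssh key and set the hostname."""
    actions = []
    key = metadata.get("public-keys/0/openssh-key")
    if key:
        actions.append("echo '%s' >> /root/.ssh/authorized_keys" % key)
    hostname = metadata.get("local-hostname")
    if hostname:
        actions.append("hostname %s" % hostname)
    return actions

# In a real rc.local-style script the dict would be populated from
# http://169.254.169.254/latest/meta-data/ via urllib.
sample = {
    "public-keys/0/openssh-key": "ssh-rsa AAAA... jason@host",
    "local-hostname": "test-vm",
}
for cmd in plan_boot_actions(sample):
    print(cmd)
```

Disk resizing is the part this does not cover; that typically needs a growroot-style step in the initramfs rather than anything a late boot script can do.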
Re: [Openstack] swift3 middleware split.
Sam Morrison wrote: Would also be good to start thinking about how these are packaged up and added to ubuntu/epel etc. archives. Will the people who do the deb/rpm packaging for swift also be doing these plugins? Or are they entirely separate in that sense too?

I expect the packaging teams in each distro to consider which plugins make the most sense and package them.

Regards,
--
Thierry Carrez (ttx)
Release Manager, OpenStack
[Openstack] Reminder: OpenStack Project meeting - 21:00 UTC
Hello everyone,

Our weekly project release status meeting will take place at 21:00 UTC this Tuesday in #openstack-meeting on IRC. PTLs who can't make it should name a substitute on [2]. The milestone-proposed branch for Folsom-1 should be cut a few hours after the meeting, so we'll defer missed folsom-1 goals and refine the folsom-1 targeted bug lists.

You can double-check what 21:00 UTC means for your timezone at [1]:
[1] http://www.timeanddate.com/worldclock/fixedtime.html?iso=20120522T21

See the meeting agenda, and edit the wiki to add new topics for discussion:
[2] http://wiki.openstack.org/Meetings/ProjectMeeting

Cheers,
--
Thierry Carrez (ttx)
Release Manager, OpenStack
[Openstack] Middleware packaging (was: swift3 middleware split)
On Tue, May 22, 2012 at 10:53 AM, Thierry Carrez thie...@openstack.org wrote: I expect the packaging teams in each distro to consider which plugins make the most sense and package them.

+1, it is totally up to the distros to take care of those things.

Talking about packaging and middleware, it would be nice if the packagers could split the middleware out of the main project. For example, in keystone the auth_token middleware is located in the python-keystone package for Ubuntu[1]; it would be much nicer if this were split into its own package, like python-keystone-auth-token, to avoid end-user confusion such as "why do I need to install the full keystone[2] to get Nova/Swift/Glance+KeystoneAuth working?"

I am not sure what the process is to move this forward; should I just report a bug against the Fedora/Ubuntu package and attach a patch for the .spec and debian/control there?

Thanks,
Chmouel.

[1] and this seems to be the case for RedHat as well, according to http://is.gd/MGMAZ1
[2] on the same server.
[Openstack] nova-api error on centos (devstack installation)
Hello together,

In the last days I played with the multi-node installation of devstack: http://devstack.org/guides/multinode-lab.html

I tried to use CentOS 6.2 machines for the Compute Nodes and a normal Ubuntu 12.04 machine for the Head Node. On the compute nodes I want to run n-cpu (nova-compute) and n-net (nova-network). However, after the stack.sh run there are still some API modules missing if I try to run nova-manage. If I try to install n-cpu, n-net AND n-api, the stack.sh script crashes because of the nova-api service with the following message:

CRITICAL nova [-] Could not load paste app 'ec2' from /etc/nova/api-paste.ini
TRACE nova Traceback (most recent call last):
TRACE nova   File /usr/bin/nova-api, line 51, in <module>
TRACE nova     servers.append(service.WSGIService(api))
TRACE nova   File /usr/lib/python2.6/site-packages/nova/service.py, line 326, in __init__
TRACE nova     self.app = self.loader.load_app(name)
TRACE nova   File /usr/lib/python2.6/site-packages/nova/wsgi.py, line 391, in load_app
TRACE nova     raise exception.PasteAppNotFound(name=name, path=self.config_path)
TRACE nova PasteAppNotFound: Could not load paste app 'ec2' from /etc/nova/api-paste.ini

Does anybody have an idea what is going wrong?

Cheers
Viktor
Re: [Openstack] python-swiftclient in gerrit
On Tue, May 22, 2012 at 3:38 AM, Lorin Hochstein lo...@nimbisservices.com wrote: Are you planning on making it available through PyPI once it's broken out?

Yes, I just asked monty if he can do that, and when it is done I'll send[1] the removal request from swift so other projects can use it straight away.

Chmouel.

[1] https://review.openstack.org/#/c/7659/
Re: [Openstack] [Swift] Swift3 Github pages
On Tue, May 22, 2012 at 12:55 AM, FUJITA Tomonori fujita.tomon...@lab.ntt.co.jp wrote: Thanks, before pulling the request, I would like to discuss the usage of github pages and wiki. Which do we want? Or both?

I think we should favour RST documentation as much as possible, to follow what we have in core swift. The github pages feature is just a nice landing page for users that lists most of the information about the project (i.e. latest release, documentation, tarball download, etc.).

Chmouel.
Re: [Openstack] [Swift] Swift3 Github pages
On Mon, May 21, 2012 at 10:33 PM, Oleg Gelbukh ogelb...@mirantis.com wrote: We have a feature for swift3 middleware that we'd like to propose for merge. How can we do this now that it is split into an associated project? How has the procedure changed?

I would expect this is going to be a typical Github workflow (i.e. pull requests with multiple commits) instead of an OpenStack Gerrit workflow. This is maintained by fujita, who decides what should go in or not.

Chmouel.
[Openstack] [OpenStack][Keystone][LDAP] Does LDAP driver support for validating subtree user?
Hi Folks,

I have tried the keystone backend with LDAP and Windows AD. It looks fine. I just want to clarify one point. In my test results, the LDAP driver could only validate users in the particular container (OU, CN, etc.) and does not include subtree users.

[ldap]
tree_dn = dc=taiwan,dc=com
user_tree_dn = ou=foo,dc=taiwan,dc=com

For example:
User1: cn=jeremy,ou=foo,dc=taiwan,dc=com
User2: cn=jordan,ou=bar,ou=foo,dc=taiwan,dc=com

User1 could be validated and gets a token generated by keystone. User2 could not be validated.

Is there any way to validate both User1 and User2 in the current design?

--
+Hugo Kuo+
tonyt...@gmail.com
886 935004793
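The behaviour described matches a one-level LDAP search scope: only entries whose DN sits immediately under `user_tree_dn` are found, while a subtree scope would match both users. The pure-Python check below only illustrates the difference between the two scopes; the driver itself would pass `ldap.SCOPE_SUBTREE` rather than `ldap.SCOPE_ONELEVEL` to the search call, and whether that is configurable in your Keystone version is worth checking in the source.

```python
def rdns(dn):
    # Naive DN split for illustration; real DNs can contain escaped commas.
    return [part.strip().lower() for part in dn.split(",")]

def matches(dn, base, scope):
    """Would a search rooted at `base` with the given scope find `dn`?"""
    parts, base_parts = rdns(dn), rdns(base)
    if parts[-len(base_parts):] != base_parts:
        return False  # not under the base at all
    depth = len(parts) - len(base_parts)
    return depth == 1 if scope == "onelevel" else depth >= 1

base = "ou=foo,dc=taiwan,dc=com"
user1 = "cn=jeremy,ou=foo,dc=taiwan,dc=com"
user2 = "cn=jordan,ou=bar,ou=foo,dc=taiwan,dc=com"
print(matches(user1, base, "onelevel"), matches(user2, base, "onelevel"))
print(matches(user1, base, "subtree"), matches(user2, base, "subtree"))
```

User2 sits one OU deeper than the search base, so a one-level search never sees it; that is consistent with the token behaviour reported above.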
Re: [Openstack] nova-api error on centos (devstack installation)
Verify the permissions on all the files in /etc/nova, including api-paste.ini. I've seen errors like that when nova can't read the conf file due to permissions.

Nate
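Besides permissions, it is worth confirming that the `ec2` section actually exists in the file nova is reading. A quick sketch of such a check; the section-name prefixes (`composite:`, `app:`, `pipeline:`) follow common Paste Deploy conventions, but treat the exact names in your api-paste.ini as the authority:

```python
import configparser
import os

def diagnose_paste_app(path, name):
    """Report the likely reason loading `name` from a paste config fails."""
    if not os.path.exists(path):
        return "missing file"
    if not os.access(path, os.R_OK):
        return "not readable"
    cfg = configparser.ConfigParser()
    cfg.read(path)
    candidates = ["composite:" + name, "app:" + name, "pipeline:" + name]
    if not any(section in cfg for section in candidates):
        return "no section for %r" % name
    return "ok"

print(diagnose_paste_app("/etc/nova/api-paste.ini", "ec2"))
```

If this reports "ok" but nova still raises PasteAppNotFound, the next suspect is nova reading a different config path than the one you are checking.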
Re: [Openstack] centos 6 images
Maybe this would help you: https://forums.aws.amazon.com/thread.jspa?threadID=87599

Regards
Re: [Openstack] Middleware packaging (was: swift3 middleware split)
Chmouel Boudjnah wrote: I am not sure what's the process to get this forward, should I just report a bug against Fedora/Ubuntu package and attach a patch for the .spec, debian/control in there?

Yes, that should definitely be installable without pulling in the whole thing. I would file a bug against the relevant packaging, for example for Keystone in Ubuntu: https://bugs.launchpad.net/ubuntu/+source/keystone/+filebug

That said, in this particular case we should probably first address the wider question of where the keystone/swift middleware should actually live. It looks like for the other projects this is shipped as part of the core project code, and having some consistency there would probably be good.

--
Thierry Carrez (ttx)
Release Manager, OpenStack
Re: [Openstack] [Swift] Swift3 Github pages
Chmouel Boudjnah wrote: I would expect this is going to be a typical Github workflow (i.e. pull requests with multiple commits) instead of an OpenStack Gerrit workflow. This is maintained by fujita, who decides what should go in or not.

Note that if at some point that doesn't scale, there is always the option to use stackforge instead, which is the Gerrit instance that the CI team set up for non-core OpenStack projects.

Cheers,
--
Thierry Carrez (ttx)
Release Manager, OpenStack
Re: [Openstack] Middleware packaging (was: swift3 middleware split)
On Tue, May 22, 2012 at 2:10 PM, Thierry Carrez thie...@openstack.org wrote: That said, in that particular case, we should probably first address the wider question of where the keystone/swift middleware should actually live. Looks like for the other projects this is shipped as part of the core project code, and having some consistency there would probably be good. At the last swift meeting[1] it was decided that it would be moved to swift, but I was talking about the auth_token middleware, which is shipped by keystone and used by most OpenStack projects. Chmouel. [1] http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-05-16-20.31.html ___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
[Openstack] keystone error (python setup)
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 Hi, I am trying to setup keystone with LDAP and noticing these errors. I have python-ldap installed. What else do I need? # python Python 2.7.3 (default, Apr 20 2012, 22:39:59) [GCC 4.6.3] on linux2 Type help, copyright, credits or license for more information. import ldap import keystone import keystone.identity (root): 2012-05-21 18:52:57,854 CRITICAL No module named ldap (root): 2012-05-21 18:52:58,028 CRITICAL No module named ldap (root): 2012-05-21 18:52:58,206 CRITICAL No module named ldap (root): 2012-05-21 18:52:58,376 CRITICAL No module named ldap (root): 2012-05-21 18:52:58,554 CRITICAL No module named ldap (root): 2012-05-21 18:52:58,730 CRITICAL No module named ldap (root): 2012-05-21 18:52:58,904 CRITICAL No module named ldap (root): 2012-05-21 18:52:59,079 CRITICAL No module named ldap (root): 2012-05-21 18:52:59,258 CRITICAL No module named ldap (root): 2012-05-21 18:53:41,042 CRITICAL No module named ldap - --sharif -BEGIN PGP SIGNATURE- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQEcBAEBAgAGBQJPu4oGAAoJEACffes9SivFRboH/Rdt7lC257AV7IDauHgEOOhn gGlfI5rGMmrcCyKmPXYHwtl4yvBYTU+vdJ7Y9OTGSHMQR0712EAMAGxzefJA6o3U xpqpa/eTnBI/NhrkCtolTo6deFT8TLeOLXH7D2OBwl3DQ0u9+MwRZSjv/jLhtSw6 8n7CiuMXT6ozGTGxlLrmW9BPZ6qnANtaV52qhUTncEhAnHNvxnWgfi94szWwNavV 1PLUkGpX8CpD/Q9u6GGaLUOzROmQkgAI71KDu5NW64zKRNrC42vUYhH3NCu/gi4g FQ3npfvnfETk6nPDDDFZBxDrv53ikUy3QKj7e+y/nBB3zu959tBC9+hzgS728ro= =CKJy -END PGP SIGNATURE- ___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
Re: [Openstack] centos 6 images
On 05/22/2012 04:07 AM, Jason Ford wrote: I am trying to put together an image for centos 6 that works like cloud-init on ubuntu does. Currently I have ssh keys getting imported but having some problems getting the disk to dynamically resize to the flavor template as well as the hostname set in horizon to be pushed into the image. Does anyone have any howtos or suggestions on how to get this done? Is there cloud-init for centos just like ubuntu? I would also be interested in how to do this with debian as well. Well I notice there is no cloud-init package for EPEL. I took a quick stab at it here: http://pbrady.fedorapeople.org/cloud-init-el6/ cheers, Pádraig. ___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
Re: [Openstack] [OpenStack][Keystone][LDAP] Does LDAP driver support for validating subtree user?
On 05/22/2012 07:07 AM, Kuo Hugo wrote: Hi Folks, I have tried the keystone backend with LDAP and Windows AD. It looks fine. Just want to clarify one point. From my test results, the LDAP driver can only validate users in the particular container (OU, CN, etc.) and does not include subtree users. [ldap] tree_dn = dc=taiwan,dc=com user_tree_dn = ou=foo,dc=taiwan,dc=com For example User1 : cn=jeremy,ou=foo,dc=taiwan,dc=com User2 : cn=jordan,ou=bar,ou=foo,dc=taiwan,dc=com User1 can be validated, and gets a token generated by keystone. User2 cannot be validated. Is there any way to validate both User1 and User2 in the current design? No, there is not. Queries are not done against subtrees. If this is important to you, please file a ticket: https://bugs.launchpad.net/keystone/+filebug -- +Hugo Kuo+ tonyt...@gmail.com +886 935004793 ___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
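The one-level vs. subtree distinction Hugo is hitting can be sketched in plain Python over the DNs from his example. This is a toy illustration of LDAP search scopes, not the actual keystone driver code:

```python
def in_one_level(dn, base):
    """True if dn is a direct child of base, i.e. what a one-level
    (scope=1) LDAP search matches -- the behavior described above."""
    if not dn.lower().endswith("," + base.lower()):
        return False
    rdn = dn[: -(len(base) + 1)]
    return "," not in rdn  # exactly one RDN below the base

def in_subtree(dn, base):
    """True if dn is anywhere under base, i.e. what a subtree
    search would match -- the behavior the reporter wants."""
    return dn.lower().endswith("," + base.lower())

base = "ou=foo,dc=taiwan,dc=com"
user1 = "cn=jeremy,ou=foo,dc=taiwan,dc=com"
user2 = "cn=jordan,ou=bar,ou=foo,dc=taiwan,dc=com"

# User1 is matched by both scopes; User2 only by a subtree search,
# which is why only User1 can get a token in the current design.
```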
Re: [Openstack] keystone error (python setup)
I'm not sure if you have a weird copy/paste there or not, but the line with multiple imports shouldn't work at all (it should be three separate lines): import ldap import keystone import keystone.identity If python-ldap is correctly installed, you should definitely be able to do something like: import ldap help(ldap) Or, from the command prompt: # python -c "import ldap; help(ldap)" Another caveat: LDAP requires binaries (on most systems) that can't be installed by Python-specific tools (e.g. pip) alone. -Dolph On Tue, May 22, 2012 at 7:43 AM, Sharif Islam isla...@indiana.edu wrote: Hi, I am trying to set up keystone with LDAP and noticing these errors. I have python-ldap installed. What else do I need? --sharif ___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
Re: [Openstack] [metering] high-level design proposal
On May 22, 2012, at 5:15 AM, Tom tom.gal...@hp.com wrote: On 05/21/2012 10:52 PM, Doug Hellmann wrote: I have written up some of my thoughts on a proposed design for ceilometer in the wiki [1]. I'm sure there are missing details, but I wanted to start getting ideas into writing so they could be discussed here on the list, since I've talked about different parts with a couple of you separately. Let me know what you think, and especially if I am not clear or have left out any details. Hi Doug That looks nice, but I'm wondering why you've chosen to poll over an event-driven approach? (libvirt supports events as far as I can tell). Using events only gets us some of the data we want. It isn't enough to know that the VM was launched; we also want to know about the resources it uses over time. If we can get that without polling, then we should investigate that approach. Doug Thanks Tom ___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
Re: [Openstack] Middleware packaging (was: swift3 middleware split)
Chmouel Boudjnah wrote: On Tue, May 22, 2012 at 2:10 PM, Thierry Carrez thie...@openstack.org wrote: That said, in that particular case, we should probably first address the wider question of where the keystone/swift middleware should actually live. Looks like for the other projects this is shipped as part of the core project code, and having some consistency there would probably be good. At the last swift meeting[1] it was decided to be moved to swift, but I was talking about auth_token middleware which is shipped by keystone and used by most of OpenStack projects. That one should definitely be packaged as a separate binary package (produced from the same keystone source package) so that you can pull it in without getting all keystone. -- Thierry Carrez (ttx) Release Manager, OpenStack ___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
Re: [Openstack] [metering] high-level design proposal
On Tue, May 22, 2012 at 3:40 AM, Nick Barcet nick.bar...@canonical.com wrote: On 05/21/2012 10:52 PM, Doug Hellmann wrote: I have written up some of my thoughts on a proposed design for ceilometer in the wiki [1]. I'm sure there are missing details, but I wanted to start getting ideas into writing so they could be discussed here on the list, since I've talked about different parts with a couple of you separately. Let me know what you think, and especially if I am not clear or have left out any details. Thanks, Doug [1] http://wiki.openstack.org/EfficientMetering/ArchitectureProposalV1 Thanks a lot for putting this together Doug. A few questions: * The collector runs on one or more central management servers to monitor the message queues (for notifications and for metering data coming from the agent). Notification messages are processed and turned into metering messages and sent back out onto the message bus using the appropriate topic. Metering messages are written to the data store without modification. - Is the reason collectors do not write directly to the database a way to allow db-less implementations, as Francis suggested earlier? In this case it may be useful to say it explicitly. Yes, that's right. I have updated the wiki page to be more explicit on that point. http://wiki.openstack.org/EfficientMetering/ArchitectureProposalV1?action=diff&rev2=8&rev1=7 * Plugins may require configuration options, so when the plugin is loaded it is asked to add options to the global flags object, and the results are made available to the plugin before it is asked to do any work. - I am not sure where the global flags object resides and how options are populated. I think it would make sense for this to be globally controlled, and therefore may require a simple discovery exchange on the queue to retrieve values and set defaults if it does not exist yet.
I was referring to the config object created by nova.flags (although I think that module is moving to the common library, if it hasn't already). * Metering messages are signed using the hmac module in Python's standard library. A shared secret value can be provided in the ceilometer configuration settings. The messages are signed by feeding the message key names and values into the signature generator in sorted order. Non-string values are converted to unicode and then encoded as UTF-8. The message signature is included in the message for verification by the collector. - The signature is also kept in the database for future audit processes; maybe worth mentioning it here. Yes, good point. http://wiki.openstack.org/EfficientMetering/ArchitectureProposalV1?action=diff&rev2=9&rev1=8 - In addition to a signature, I think we would need a sequence number to be embedded by the agent for each message sent, so that loss of messages, or forgery of messages, can be detected by the collector and by further audit processes. OK. We have a message id, but I assumed those would be used to eliminate duplicates, so this sounds like something different or new. It implies that the agent knows its own id (not hard) and keeps up with a sequence counter (more difficult, though not impossible). Did you have something in mind for how to implement that? Thanks again, Nick ___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
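The signing scheme described above (sorted key order, stringified values encoded as UTF-8, hmac from the standard library) might be sketched like this. The field names and shared secret below are hypothetical, not ceilometer's actual message format:

```python
import hashlib
import hmac

SECRET = b"shared-secret"  # hypothetical value from the config settings

def sign_message(message, secret=SECRET):
    """Feed key names and values into the HMAC generator in sorted
    key order; non-string values are stringified, then UTF-8 encoded."""
    digest = hmac.new(secret, digestmod=hashlib.sha256)
    for name, value in sorted(message.items()):
        if name == "signature":  # never feed an existing signature back in
            continue
        digest.update(name.encode("utf-8"))
        digest.update(str(value).encode("utf-8"))
    return digest.hexdigest()

# hypothetical metering message
msg = {"counter_name": "instance", "counter_volume": 1, "resource_id": "abc"}
msg["signature"] = sign_message(msg)

# the collector recomputes the signature to verify the message
assert hmac.compare_digest(msg["signature"], sign_message(msg))
```

Keeping the stored signature (as Nick suggests) then lets a later audit pass re-run the same verification against the database rows.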
Re: [Openstack] [metering] high-level design proposal
On Tue, May 22, 2012 at 4:05 AM, Endre Karlson endre.karl...@gmail.com wrote: If I'm understanding this correctly, the Collector is kind of like an Agent in Quantum (it sits on a machine doing stuff and passing info upstream). If you look at the approach they have now, the Quantum Agent is writing directly to the DB. But looking at the next version, they seem to be moving to having the Agent send data upstream to the Plugin in Quantum. Why not do something similar? I mean, if you have an MQ cluster in a deployment, I think it makes more sense to have one thing that handles the db stuff than having each Collector connect to the db. That was the goal, but I may have swapped the terminology around. For ceilometer, the agent runs on the compute node and writes only to the message queue. The collector runs in a central location and writes to the database. The number of collectors you need will depend on the number of messages being generated, but the architecture supports running several in parallel in a way that each instance does not need to be aware of the others. Doug Endre. 2012/5/22 Nick Barcet nick.bar...@canonical.com On 05/21/2012 10:52 PM, Doug Hellmann wrote: I have written up some of my thoughts on a proposed design for ceilometer in the wiki [1]. I'm sure there are missing details, but I wanted to start getting ideas into writing so they could be discussed here on the list, since I've talked about different parts with a couple of you separately. Let me know what you think, and especially if I am not clear or have left out any details. Thanks, Doug [1] http://wiki.openstack.org/EfficientMetering/ArchitectureProposalV1 Thanks a lot for putting this together Doug. A few questions: * The collector runs on one or more central management servers to monitor the message queues (for notifications and for metering data coming from the agent).
Notification messages are processed and turned into metering messages and sent back out onto the message bus using the appropriate topic. Metering messages are written to the data store without modification. - Is the reason behind why collectors do not write directly to the database a way to allow db less implementations as Francis suggested earlier? In this case it may be useful to say it explicitly. * Plugins may require configuration options, so when the plugin is loaded it is asked to add options to the global flags object, and the results are made available to the plugin before it is asked to do any work. - I am not sure where the global flags object resides and how option are populated. I think it would make sense for this to be globally controlled, and therefore may require for a simple discovery exchange on the queue to retrieve values and set defaults if it does not exist yet. * Metering messages are signed using the hmac module in Python's standard library. A shared secret value can be provided in the ceilometer configuration settings. The messages are signed by feeding the message key names and values into the signature generator in sorted order. Non-string values are converted to unicode and then encoded as UTF-8. The message signature is included in the message for verification by the collector. - The signature is also kept in the database for future audit processes, maybe worth mentioning it here. - In addition to a signature, I think we would need a sequence number to be embedded by the agent for each message sent, so that loss of messages, or forgery of messages, can be detected by the collector and further audit process. 
Thanks again, Nick ___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
[Openstack] adding a worker pool notion for RPC consumers
ceilometer is going to need to subscribe several worker processes to the notifications.info topic for the other services like nova, glance, and quantum. The pool of workers needs to be assured of receiving all messages, without interference from other clients listening for notifications (such as metering, audit logging, monitoring, etc.). The drivers implement create_consumer() for topic consumers by always making the topic and queue name the same. That supports the consumer patterns we have encountered so far, but does not allow for load balancing between workers as we want for the ceilometer collector. The fanout argument is used by the existing drivers to control the breadth of distribution. With fanout=True, every consumer receives a copy of the event because every consumer has its own queue. This gives us no load balancing benefit for having multiple consumers. With fanout=False only one consumer *at all* will get a copy of the event, since they all listen on the same queue. This means if metering, audit logging, and monitoring are all listening for notifications they will each see only some of the events. The way to achieve load balancing is to have the ceilometer consumers connect to the same exchange and topic using a shared, well-known queue name that is different from the name used by non-ceilometer consumers. Unfortunately, the only parameter that cannot be controlled by the caller of create_consumer() is the queue name. I have a patch ready for review [1] to add a new method, create_worker(), to the RPC Connection class, to allow a group of consumers to share a queue to manage load. The worker pool allows multiple consumers to receive a given message (by subscribing to separate queues), but it also allows several consumers to declare that they are collaborating so that only one of the subset receives a copy (by subscribing to the same queue).
That means that multiple types of consumers can be listening to notifications and each type of consumer can have a load balanced pool of workers so that messages are only processed once for that type (once for metering and once for logging, for example). Two separate implementations were discussed: Adding a queue_name argument to create_consumer() and creating a new method create_worker(). After considering both options, I chose to add a new method because it clarified the right way to combine the inputs to set up workers (fanout must always be true and the queue name must always be provided). The code is up for review, so please have a look and let me know what you think. Doug [1] https://review.openstack.org/#/c/7590/ ___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
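The queue semantics being proposed can be modeled with a small in-process toy (not the real RPC driver or any AMQP library): each distinct queue name bound to a topic gets a copy of every message, while consumers sharing a queue name are load-balanced round-robin.

```python
import itertools
from collections import defaultdict

class Topic:
    """Toy model of the semantics above: one copy per queue name,
    round-robin among the consumers sharing a queue name."""

    def __init__(self):
        self.queues = defaultdict(list)  # queue name -> list of consumers
        self._rr = {}                    # queue name -> round-robin iterator

    def subscribe(self, queue_name, consumer):
        self.queues[queue_name].append(consumer)
        # rebuild the round-robin cycle over the current worker pool
        self._rr[queue_name] = itertools.cycle(self.queues[queue_name])

    def publish(self, message):
        # each queue gets one copy; within a queue, one worker handles it
        for name in self.queues:
            next(self._rr[name])(message)

topic = Topic()
metering, logging_ = [], []
# two collaborating metering workers share the "metering" queue
topic.subscribe("metering", lambda m: metering.append(("worker-1", m)))
topic.subscribe("metering", lambda m: metering.append(("worker-2", m)))
# the audit-logging consumer uses its own queue, so it sees everything
topic.subscribe("logging", lambda m: logging_.append(m))

for i in range(4):
    topic.publish(i)
```

Every message reaches both the metering pool and the logging consumer, but each metering message is handled by exactly one worker, which is the load-balancing behavior the patch is after.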
Re: [Openstack] adding a worker pool notion for RPC consumers
Bringing my conversation with Doug back on-list... In nova.rpc with fanout=True every consumer gets a copy of the event because every consumer has its own queue. With fanout=False only one consumer *at all* will get a copy of the event, since they all listen on the same queue. The changes I made come somewhere in between that. It allows multiple consumers to receive a given message, but it also allows several consumers to declare that they are collaborating so that only one of the subset receives a copy. That means that multiple types of consumers can be listening to notifications (metering and audit logging, for example) and each type of consumer can have a load balanced pool of workers so that messages are only processed once for metering and once for logging. We can do this today with the Matchmaker. You can use a standard fanout, but make one of the hosts a DNS entry with multiple A or CNAME records for round-robin DNS, where that host will act as a pool of workers. It would be trivial to update the matchmaker to support nested lists to support this with IP addresses as well, doing round-robin or random-selection of hosts without a pool of workers. Unfortunately, doing this in the AMQP fashion of registering workers is difficult to do via the matchmaker. Not impossible, but it requires that the matchmakers have a (de)centralized datastore. This could be solved by having get_workers and/or create_consumer communicate to the matchmaker and update mysql, zookeeper, redis, etc. While I think this is a viable approach, I've avoided /requiring/ this paradigm as the alternatives of using hash maps and/or DNS are significantly less complex and easier to scale and keep available. We should consider to what degree dynamic vs static configuration is necessary, if dynamic is truly required, and how a method like get_workers should behave on a statically configured system. 
Regards, Eric Windisch ___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
Re: [Openstack] adding a worker pool notion for RPC consumers
On Tue, May 22, 2012 at 11:02 AM, Eric Windisch e...@cloudscaling.com wrote: Bringing my conversation with Doug back on-list... In nova.rpc with fanout=True every consumer gets a copy of the event because every consumer has its own queue. With fanout=False only one consumer *at all* will get a copy of the event, since they all listen on the same queue. The changes I made come somewhere in between that. It allows multiple consumers to receive a given message, but it also allows several consumers to declare that they are collaborating so that only one of the subset receives a copy. That means that multiple types of consumers can be listening to notifications (metering and audit logging, for example) and each type of consumer can have a load balanced pool of workers so that messages are only processed once for metering and once for logging. We can do this today with the Matchmaker. You can use a standard fanout, but make one of the hosts a DNS entry with multiple A or CNAME records for round-robin DNS, where that host will act as a pool of workers. It would be trivial to update the matchmaker to support nested lists to support this with IP addresses as well, doing round-robin or random-selection of hosts without a pool of workers. That sounds like a lot like a traditional load-balancing approach. Unfortunately, doing this in the AMQP fashion of registering workers is difficult to do via the matchmaker. Not impossible, but it requires that the matchmakers have a (de)centralized datastore. This could be solved by having get_workers and/or create_consumer communicate to the matchmaker and update mysql, zookeeper, redis, etc. While I think this is a viable approach, I've avoided /requiring/ this paradigm as the alternatives of using hash maps and/or DNS are significantly less complex and easier to scale and keep available.
We should consider to what degree dynamic vs static configuration is necessary, if dynamic is truly required, and how a method like get_workers should behave on a statically configured system. I wanted our ops team to be able to bring more collector service instances online when our cloud starts seeing an increase in the sorts of activity that generates metering events, without having to explicitly register the new workers in a configuration file. It sounds like having the zeromq driver (optionally?) communicate to a central registry would let it reproduce some of the features built into AMQP to achieve that sort of dynamic self-configuration. I mentioned off-list that I'm not a messaging expert, and I wasn't around when the zeromq driver work was started. Is the goal of that work to eventually permanently replace AMQP, or just to provide a compatible alternative? Doug ___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
Re: [Openstack] keystone error (python setup)
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 Thanks. I think the mail client got rid of the new line. In any case, the issue was with my LDAP setup, not python. - --sharif On 05/22/2012 09:58 AM, Dolph Mathews wrote: I'm not sure if you have a weird copy/paste there or not, but the line with multiple imports shouldn't work at all (it should be three separate lines). import ldap import keystone import keystone.identity If python-ldap is correctly installed, you should definitely be able to do something like: import ldap help(ldap) Or, from the command prompt: # python -c "import ldap; help(ldap)" Another caveat: LDAP requires binaries (on most systems) that can't be installed by Python-specific tools (e.g. pip) alone. -Dolph On Tue, May 22, 2012 at 7:43 AM, Sharif Islam isla...@indiana.edu wrote: Hi, I am trying to set up keystone with LDAP and noticing these errors. I have python-ldap installed. What else do I need? # python Python 2.7.3 (default, Apr 20 2012, 22:39:59) [GCC 4.6.3] on linux2 Type help, copyright, credits or license for more information.
import ldap import keystone import keystone.identity --sharif ___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp - -- Sharif Islam Senior Systems Analyst/Programmer FutureGrid (http://futuregrid.org) Pervasive Technology Institute, Indiana University Bloomington -BEGIN PGP SIGNATURE- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQEcBAEBAgAGBQJPu7QkAAoJEACffes9SivFFM0IALybSXMFzb9x9PUFgmlnePzi bgDiu69NbTHVewrQCzSVCCi2QWQzwTYaQvXB601QgBtccluJrxJIC/Lur6u/VVhQ YzEtV6kRrRyZtIvzqIb0x1PJvZHTUgaxUVbOP8PkeJPJbgxiEtYwzoAkXcoF0ACW WeyU6zv2fJsn6pHK6tUYd41KbOXy1jmbzC6s0Y2YGJtxDvRmqzu9nOfqFBfFD7Z4 rTEWm9Zo3rTMTH9pHIPegR3rmM14LRzRc8wT5bGoPHG4R7c3nK8Vz3KhbzObzL25 XgCcTkW1SmxJdGOE1gzWu46El2BO+aYdAp1k8b65yca7UmJiaJNec7wV77jVdys= =5rxQ -END PGP SIGNATURE- ___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
Re: [Openstack] centos 6 images
On 05/22/2012 03:39 PM, Andy Grimm wrote: On Tue, May 22, 2012 at 9:38 AM, Pádraig Brady p...@draigbrady.com wrote: On 05/22/2012 04:07 AM, Jason Ford wrote: I am trying to put together an image for centos 6 that works like cloud-init on ubuntu does. Currently I have ssh keys getting imported but having some problems getting the disk to dynamically resize to the flavor template as well as the hostname set in horizon to be pushed into the image. Does anyone have any howtos or suggestions on how to get this done? Is there cloud-init for centos just like ubuntu? I would also be interested in how to do this with debian as well. Well I notice there is no cloud-init package for EPEL. I took a quick stab at it here: http://pbrady.fedorapeople.org/cloud-init-el6/ I've already responded in IRC, but it wouldn't hurt to have a response in the mail archive. In short, the reason there isn't already a cloud-init for EL6 (or EL5, for that matter) is that upstream has been using python 2.7-only calls for a while now. In particular, a couple of calls to subprocess.check_output need to be replaced, and I think there are a few other issues as well. I don't think it's a huge amount of work to make it functional, but it hasn't been high on anyone's list. It would be cool if you have time to fix / test it, though. Ok I've fixed the check_output calls at the above URL. cheers, Pádraig. ___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
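For reference, the 2.7-only call Andy mentions is subprocess.check_output (added in Python 2.7). A minimal 2.6-compatible stand-in might look like this; it is a sketch of the kind of backport needed, not the actual cloud-init patch:

```python
import subprocess

def check_output(cmd):
    """Python 2.6-compatible replacement for subprocess.check_output:
    run cmd, return its stdout, and raise on a non-zero exit code."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    out, _ = proc.communicate()
    if proc.returncode:
        raise subprocess.CalledProcessError(proc.returncode, cmd)
    return out
```

Swapping calls like this in for check_output is the mechanical part; as Andy notes, there may be a few other 2.7-isms to chase down as well.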
[Openstack] enforce admin_required with LDAP admin user
I think my LDAP bind is working, but tenant-list and user-list give me an admin_required error. It looks like the LDAP admin user does not have any roles; is that the issue? # keystone discover Keystone found at http://localhost:5000/v2.0/ - supports version v2.0 (beta) here http://149.165.159.121:5000/v2.0/ root@i121:~# keystone service-list ++--+--+-+ | id | name | type | description | ++--+--+-+ ++--+--+-+ root@i121:~# keystone user-list No handlers could be found for logger keystoneclient.client You are not authorized to perform the requested action: admin_required (HTTP 403) root@i121:~# keystone tenant-list No handlers could be found for logger keystoneclient.client You are not authorized to perform the requested action: admin_required (HTTP 403) (keystone.common.ldap.core): 2012-05-22 11:36:02,263 DEBUG LDAP init: url=ldap://ldap.project.org (keystone.common.ldap.core): 2012-05-22 11:36:02,263 DEBUG LDAP bind: dn=uid=user,ou=People,dc=project,dc=org (keystone.common.ldap.core): 2012-05-22 11:36:02,271 DEBUG LDAP search: dn=ou=ostenants,dc=project,dc=org, scope=1, query=(&(member=uid=admin,ou=People,dc=project,dc=org)(objectClass=groupOfNames)) (root): 2012-05-22 11:36:02,425 DEBUG TOKEN_REF {'id': 'dfc4b2ecexxxd014x280d91efeecda06', 'expires': datetime.datetime(2012, 5, 23, 15, 36, 2, 274565), 'user': {'id': 'admin', 'name': 'admin'}, 'tenant': {'id': 'admin', 'name': 'admin'}, 'metadata': {}} (eventlet.wsgi.server): 2012-05-22 11:36:02,426 DEBUG 127.0.0.1 - - [22/May/2012 11:36:02] POST /v2.0/tokens HTTP/1.1 200 1762 0.166139 (keystone.policy.backends.rules): 2012-05-22 11:36:02,439 DEBUG enforce admin_required: {'tenant_id': u'admin', 'user_id': u'admin', 'roles': []} --sharif ___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
Re: [Openstack] Openstack Beginners guide for Ubuntu 12.04/Essex
Hi Atul, I have met a problem that seems to be a documentation bug; please correct me if I'm not right. On page 21, section 2.2.5.7 "Creating Endpoints", the endpoint for nova-compute is created with:

keystone endpoint-create --region myregion --service_id 1e93ee6c70f8468c88a5cb1b106753f3 --publicurl 'http://10.10.10.2:8774/v2/$(tenant_id)s' --adminurl 'http://10.10.10.2:8774/v2/$(tenant_id)s' --internalurl 'http://10.10.10.2:8774/v2/$(tenant_id)s'

The command seems to have two problems:

1. http://10.10.10.2:8774/v2/ - for nova-compute the API version should be v1.1.
2. If you run the command as printed, the shell shows the error "tenant_id: command not found", because an unquoted $(tenant_id) is treated as command substitution by the shell; the $(tenant_id)s template has to reach keystone literally.

So I think the correct command is:

keystone endpoint-create --region myregion --service_id 7ee472012dfa4f01b35507a7ef2aa9cb --publicurl 'http://10.10.10.2:8774/v1.1/$(tenant_id)s' --adminurl 'http://10.10.10.2:8774/v1.1/$(tenant_id)s' --internalurl 'http://10.10.10.2:8774/v1.1/$(tenant_id)s'

Below is the output:

+-------------+------------------------------------------+
|   Property  |                  Value                   |
+-------------+------------------------------------------+
|   adminurl  | http://10.10.10.2:8774/v1.1/(tenant_id)s |
|      id     |     a3443ac7103745e29e931d5b6c48e245     |
| internalurl | http://10.10.10.2:8774/v1.1/(tenant_id)s |
|  publicurl  | http://10.10.10.2:8774/v1.1/(tenant_id)s |
|    region   |                 myregion                 |
|  service_id |     7ee472012dfa4f01b35507a7ef2aa9cb     |
+-------------+------------------------------------------+

On Thu, May 10, 2012 at 10:33 PM, Atul Jha atul@csscorp.com wrote: Hi all, We at Csscorp have been publishing a series of beginners guides on Ubuntu/OpenStack (versions), and in continuation with that we have released the latest version of our book for Essex and Ubuntu 12.04. http://cssoss.wordpress.com/2012/05/07/openstack-beginners-guide-v3-0-for-essex-on-ubuntu-12-04-precise-pangolin/ The code can be found at https://code.launchpad.net/openstackbook We would love to see the book localized in some other languages too, say Chinese/Japanese/German, to reach as many people as possible. :) Suggestions/criticism would be highly appreciated. Cheers!!
Atul Jha Application Specialist Csscorp pvt ltd, Chennai, India http://www.csscorp.com/common/email-disclaimer.php -- Shake Chen
Re: [Openstack] adding a worker pool notion for RPC consumers
I wanted our ops team to be able to bring more collector service instances online when our cloud starts seeing an increase in the sorts of activity that generates metering events, without having to explicitly register the new workers in a configuration file. It sounds like having the zeromq driver (optionally?) communicate to a central registry would let it reproduce some of the features built into AMQP to achieve that sort of dynamic self-configuration. Understood. Supporting the dynamic case is viable, I just don't want to (blindly) do it at the expense of static configurations. Here, I think we can simply warn/error if a static configuration is in place. I'm thinking that the zeromq driver would support create_workers by passing the call into the matchmaker. Some matchmakers would support it, others would not (and would be static), logging a message. The question might be if we should create an exception and raise this as well, or not, but I'm leaning toward not. I mentioned off-list that I'm not a messaging expert, and I wasn't around when the zeromq driver work was started. Is the goal of that work to eventually permanently replace AMQP, or just to provide a compatible alternative? It is currently a compatible alternative. We do intend for this to remain compatible, and for the abstraction to be useful across all the available messaging plugins. It remains to be seen which, if any, messaging platform will be the /default/ in Nova/OpenStack long-term. Currently, RabbitMQ is the default, but Essex introduced Qpid messaging, and we'll have ZeroMQ messaging if we can get it out of review ;-) Regards, Eric Windisch
Re: [Openstack] centos 6 images
On Tue, 22 May 2012, Pádraig Brady wrote: On 05/22/2012 03:39 PM, Andy Grimm wrote: On Tue, May 22, 2012 at 9:38 AM, Pádraig Brady p...@draigbrady.com wrote: On 05/22/2012 04:07 AM, Jason Ford wrote: I am trying to put together an image for centos 6 that works like cloud-init on ubuntu does. Currently I have ssh keys getting imported but having some problems getting the disk to dynamically resize to the flavor template as well as the hostname set in horizon to be pushed into the image. Does anyone have any howtos or suggestions on how to get this done? Is there cloud-init for centos just like ubuntu? I would also be interested in how to do this with debian as well. Well I notice there is no cloud-init package for EPEL. I took a quick stab at it here: http://pbrady.fedorapeople.org/cloud-init-el6/ I've already responded in IRC, but it wouldn't hurt to have a response in the mail archive. In short, the reason there isn't already a cloud-init for EL6 (or EL5, for that matter) is that upstream has been using python 2.7-only calls for a while now. In particular, a couple of calls to subprocess.check_output need to be replaced, and I think there are a few other issues as well. I don't think it's a huge amount of work to make it functional, but it hasn't been high on anyone's list. It would be cool if you have time to fix / test it, though. It would help if you'd bring that up with upstream :) I'm interested in cloud-init working in the most places it can. I'll try to pull in the sysvinit scripts that Pádraig added and grab other changes that are there. Ok I've fixed the check_output calls at the above URL. If anyone has features / issues they'd like addressed in cloud-init, please feel free to ping me (smoser). I'll most likely ask you to open a bug at http://bugs.launchpad.net/cloud-init , and may even invite you to submit a patch.
One way or another, though, I'm interested in making cloud-init better, so comments/concerns/participation are welcome and encouraged.
Re: [Openstack] [metering] high-level design proposal
My point of concern: if an agent is being built into the compute nodes, that would best be a split-out project. Two major reasons: first and foremost, sub-projects should not be spinning up their own agents; secondly, there is a use case for agents outside of metering. If an agent is to be built, it is a not insignificant change in architecture for openstack. -Matt On Tue, May 22, 2012 at 7:30 AM, Doug Hellmann doug.hellm...@dreamhost.com wrote: On Tue, May 22, 2012 at 4:05 AM, Endre Karlson endre.karl...@gmail.com wrote: If I'm understanding this correctly, the Collector is kind of like an Agent in Quantum (it sits on a machine doing stuff and passing info upstream). If you look at the approach they have now, the Quantum Agent writes directly to the DB. But looking at the next version they seem to be moving to having the Agent send data upstream to the Plugin in Quantum. Why not do something similar? I mean, if you have a MQ cluster in a deployment I think it makes more sense to have one thing that handles the db stuff than having each Collector connect to the db. That was the goal, but I may have swapped the terminology around. For ceilometer, the agent runs on the compute node and writes only to the message queue. The collector runs in a central location and writes to the database. The number of collectors you need will depend on the number of messages being generated, but the architecture supports running several in parallel in a way that each instance does not need to be aware of the others. Doug Endre. 2012/5/22 Nick Barcet nick.bar...@canonical.com On 05/21/2012 10:52 PM, Doug Hellmann wrote: I have written up some of my thoughts on a proposed design for ceilometer in the wiki [1]. I'm sure there are missing details, but I wanted to start getting ideas into writing so they could be discussed here on the list, since I've talked about different parts with a couple of you separately.
Let me know what you think, and especially if I am not clear or have left out any details. Thanks, Doug [1] http://wiki.openstack.org/EfficientMetering/ArchitectureProposalV1 Thanks a lot for putting this together Doug. A few questions: * The collector runs on one or more central management servers to monitor the message queues (for notifications and for metering data coming from the agent). Notification messages are processed and turned into metering messages and sent back out onto the message bus using the appropriate topic. Metering messages are written to the data store without modification. - Is the reason collectors do not write directly to the database a way to allow db-less implementations, as Francis suggested earlier? In that case it may be useful to say so explicitly. * Plugins may require configuration options, so when the plugin is loaded it is asked to add options to the global flags object, and the results are made available to the plugin before it is asked to do any work. - I am not sure where the global flags object resides and how options are populated. I think it would make sense for this to be globally controlled, and it may therefore require a simple discovery exchange on the queue to retrieve values and set defaults if they do not exist yet. * Metering messages are signed using the hmac module in Python's standard library. A shared secret value can be provided in the ceilometer configuration settings. The messages are signed by feeding the message key names and values into the signature generator in sorted order. Non-string values are converted to unicode and then encoded as UTF-8. The message signature is included in the message for verification by the collector. - The signature is also kept in the database for future audit processes; it may be worth mentioning that here.
- In addition to a signature, I think we would need a sequence number to be embedded by the agent for each message sent, so that loss of messages, or forgery of messages, can be detected by the collector and further audit processes. Thanks again, Nick
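The signing scheme quoted above (key names and values fed to hmac in sorted order, non-strings converted to text and UTF-8 encoded) can be sketched as follows. This is an illustration of the described algorithm, not ceilometer's actual code; the sign_message name and the SHA-256 digest choice are assumptions:

```python
import hashlib
import hmac

def sign_message(message, secret):
    # Feed key names and values into the HMAC in sorted key order, as the
    # proposal describes; non-string values are converted to text and
    # encoded as UTF-8 before being fed to the digest.
    digest = hmac.new(secret.encode('utf-8'), digestmod=hashlib.sha256)
    for name in sorted(message):
        value = message[name]
        if not isinstance(value, str):
            value = str(value)
        digest.update(name.encode('utf-8'))
        digest.update(value.encode('utf-8'))
    return digest.hexdigest()
```

Because iteration is over sorted key names, two messages with the same contents produce the same signature regardless of dict insertion order, which is what lets the collector re-compute and verify the signature.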
Re: [Openstack] Swift performance for very small objects
Remember that when an object is written to swift, it's not written just to the object server; the container and account servers are updated as well... the container for object listings (and timestamps) and the account for overall statistics. Also, the proxy ensures a quorum for the newly written object - there will be 2/3 of the replicas written before the request is ack'd to the client. If you're trying to find ways to optimize swift for performance, especially large clusters, I'd probably focus on performance optimization of the account and container servers. A few more thoughts:

* swift is designed to scale out very well - both across machines and across disks. You effectively defeat that scaling when you use loopback devices - since you effectively force all the disk activity onto the same physical disk.
* you might want to prime your environment before your performance tests. Things like ARP caches, and DNS name resolution. Also, make sure to prime your accounts and containers, and not have them be created as part of the test.
* There are some caches in swift that run around 128K entries (in the account / container servers). You might want to run larger tests, to make sure you get those flushed once in a while.
* once you have real disks, you might want to play around with disk to zone ratios. Replicas are guaranteed to go to different zones, so the number of disk spindles in a zone will affect the overall performance of your cluster.

It will be interesting to hear more about your results! Oh... persistent connections. I believe the python httplib will auto-negotiate persistent connections, so no app level code is required (good thought though ;) On Sat, May 19, 2012 at 9:34 PM, Paulo Ricardo Motta Gomes pauloricard...@gmail.com wrote: Hello, I'm doing some experiments in a Swift cluster testbed of 9 nodes/devices and 3 zones (3 nodes in each zone). In one of my tests, I noticed that PUTs of very small objects are extremely inefficient.
- 5000 PUTs of objects with an average size of 40K - total of 195MB - took 67s (avg time per request: 0.0135s) - 5000 PUTs of objects with an average size of 190 bytes - total of 930KB - took 60s (avg time per request: 0.0123s) I plotted object size vs request time and found that there is a significant difference in request times only after 200KB. When objects are smaller than this, PUT requests have a minimum execution time of 0.01s, no matter the object size. I suppose swift is not optimized for such small objects, but I wonder what the main cause for this is, whether it's the HTTP overhead or disk writing. I checked the log of the object servers and requests are taking an average of 0.006s, whether objects are 40K or 190 bytes, which indicates part of the bottleneck could be at the disk. Currently I'm using a loopback device for storage. I thought that maybe this could be improved a bit if the proxy server maintained persistent connections to the storage nodes instead of opening a new one for each request? It would be great if you could share your thoughts on this and how the performance of this special case could be improved. Cheers, Paulo -- European Master in Distributed Computing Royal Institute of Technology - KTH Instituto Superior Técnico - IST http://paulormg.com
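The quorum behavior mentioned at the top of the reply - 2 of 3 replicas written before the PUT is acknowledged - is a simple majority rule, sketched here as an illustration rather than Swift's exact code:

```python
def quorum_size(replica_count):
    # Majority quorum: more than half of the replicas must be durably
    # written before the proxy acknowledges the PUT (2 of 3, 3 of 5, ...).
    return replica_count // 2 + 1
```

This is one reason small-object PUTs have a latency floor: each request waits on a majority of replica writes plus the container/account updates, regardless of object size.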
Re: [Openstack] centos 6 images
I will give these a shot later today and reply with feedback. Thanks for looking into this! Jason On May 22, 2012, at 11:44 AM, Pádraig Brady p...@draigbrady.com wrote: On 05/22/2012 03:39 PM, Andy Grimm wrote: On Tue, May 22, 2012 at 9:38 AM, Pádraig Brady p...@draigbrady.com wrote: On 05/22/2012 04:07 AM, Jason Ford wrote: I am trying to put together an image for centos 6 that works like cloud-init on ubuntu does. Currently I have ssh keys getting imported but having some problems getting the disk to dynamically resize to the flavor template as well as the hostname set in horizon to be pushed into the image. Does anyone have any howtos or suggestions on how to get this done? Is there cloud-init for centos just like ubuntu? I would also be interested in how to do this with debian as well. Well I notice there is no cloud-init package for EPEL. I took a quick stab at it here: http://pbrady.fedorapeople.org/cloud-init-el6/ I've already responded in IRC, but it wouldn't hurt to have a response in the mail archive. In short, the reason there isn't already a cloud-init for EL6 (or EL5, for that matter) is that upstream has been using python 2.7-only calls for a while now. In particular, a couple of calls to subprocess.check_output need to be replaced, and I think there are a few other issues as well. I don't think it's a huge amount of work to make it functional, but it hasn't been high on anyone's list. It would be cool if you have time to fix / test it, though. Ok I've fixed the check_output calls at the above URL. cheers, Pádraig.
Re: [Openstack] [metering] high-level design proposal
On 05/22/2012 03:26 PM, Doug Hellmann wrote: - In addition to a signature, I think we would need a sequence number to be embedded by the agent for each message sent, so that loss of messages, or forgery of messages, can be detected by the collector and further audit process. OK. We have a message id, but I assumed those would be used to eliminate duplicates so this sounds like something different or new. It implies that the agent knows its own id (not hard) and keeps up with a sequence counter (more difficult, though not impossible). Did you have something in mind for how to implement that? Actually, this was my intent in the original blueprint when I specified the message_id field and then, a couple of lines below: a process may verify that messages were not lost. On the implementation side, I was thinking that each agent would maintain its own sequence count, as a global instance count would be pricier. In my mind, non-repudiation was built from the message_signature + message_id, which should be unique for each agent. Nick
Re: [Openstack] centos 6 images
On 05/22/2012 05:51 PM, Scott Moser wrote: On Tue, 22 May 2012, Pádraig Brady wrote: On 05/22/2012 03:39 PM, Andy Grimm wrote: On Tue, May 22, 2012 at 9:38 AM, Pádraig Brady p...@draigbrady.com wrote: On 05/22/2012 04:07 AM, Jason Ford wrote: I am trying to put together an image for centos 6 that works like cloud-init on ubuntu does. Currently I have ssh keys getting imported but having some problems getting the disk to dynamically resize to the flavor template as well as the hostname set in horizon to be pushed into the image. Does anyone have any howtos or suggestions on how to get this done? Is there cloud-init for centos just like ubuntu? I would also be interested in how to do this with debian as well. Well I notice there is no cloud-init package for EPEL. I took a quick stab at it here: http://pbrady.fedorapeople.org/cloud-init-el6/ I've already responded in IRC, but it wouldn't hurt to have a response in the mail archive. In short, the reason there isn't already a cloud-init for EL6 (or EL5, for that matter) is that upstream has been using python 2.7-only calls for a while now. In particular, a couple of calls to subprocess.check_output need to be replaced, and I think there are a few other issues as well. I don't think it's a huge amount of work to make it functional. It would help if you'd bring that up with upstream :) I'm interested in cloud-init working in the most places it can. I'll try to pull in the sysvinit scripts that Pádraig added and grab other changes that are there. Excellent. I'll submit as much as I can upstream anyway after some more testing. cheers, Pádraig.
Re: [Openstack] centos 6 images
You might want to check out https://github.com/yahoo/Openstack-Condense It's a stripped-down/cleaned-up/... version of cloud-init that I know works on RHEL6. I tried to improve the following:

1. Code cleanliness (constants being uppercase, paths using os.path.join and so on)
2. Stripping out some of the odd handlers (byobu, right-scale and such)
3. Improving logging by a lot (so that you can debug this thing)
4. Making what handlers I left work on RH and ubuntu...

Might be useful if you want to try it. I know just from doing the above work that the cloud-init for ubuntu requires some work to get it to work on RH, but not tons. Eventually I hope that I can merge this back, but for now it's forked so that I could focus on getting it working and cleaned up, rather than pushing code through some review process via launchpad and such (ie the slow as molasses approach). On 5/22/12 10:05 AM, Jason ja...@chatinara.com wrote: I will give these a shot later today and reply with feedback. Thanks for looking into this! Jason On May 22, 2012, at 11:44 AM, Pádraig Brady p...@draigbrady.com wrote: On 05/22/2012 03:39 PM, Andy Grimm wrote: On Tue, May 22, 2012 at 9:38 AM, Pádraig Brady p...@draigbrady.com wrote: On 05/22/2012 04:07 AM, Jason Ford wrote: I am trying to put together an image for centos 6 that works like cloud-init on ubuntu does. Currently I have ssh keys getting imported but having some problems getting the disk to dynamically resize to the flavor template as well as the hostname set in horizon to be pushed into the image. Does anyone have any howtos or suggestions on how to get this done? Is there cloud-init for centos just like ubuntu? I would also be interested in how to do this with debian as well. Well I notice there is no cloud-init package for EPEL. I took a quick stab at it here: http://pbrady.fedorapeople.org/cloud-init-el6/ I've already responded in IRC, but it wouldn't hurt to have a response in the mail archive.
In short, the reason there isn't already a cloud-init for EL6 (or EL5, for that matter) is that upstream has been using python 2.7-only calls for a while now. In particular, a couple of calls to subprocess.check_output need to be replaced, and I think there are a few other issues as well. I don't think it's a huge amount of work to make it functional, but it hasn't been high on anyone's list. It would be cool if you have time to fix / test it, though. Ok I've fixed the check_output calls at the above URL. cheers, Pádraig.
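The subprocess.check_output calls mentioned above are the sticking point for EL6, whose Python 2.6 lacks that function (it was added in Python 2.7). A minimal backport sketch - an illustration, not the actual patch applied at the URL above - could look like this:

```python
import subprocess

def check_output(cmd, **kwargs):
    # Minimal stand-in for subprocess.check_output (Python 2.7+) built
    # from Popen/communicate, which are available in Python 2.6.
    process = subprocess.Popen(cmd, stdout=subprocess.PIPE, **kwargs)
    output, _ = process.communicate()
    if process.returncode:
        # Mirror check_output's behavior: non-zero exit raises.
        raise subprocess.CalledProcessError(process.returncode, cmd)
    return output
```

Replacing the 2.7-only calls with a shim like this is the kind of small compatibility fix Andy describes.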
Re: [Openstack] [metering] high-level design proposal
On Tue, May 22, 2012 at 12:53 PM, Matt Joyce matt.jo...@cloudscaling.com wrote: My point of concern: if an agent is being built into the compute nodes, that would best be a split-out project. Two major reasons: first and foremost, sub-projects should not be spinning up their own agents; secondly, there is a use case for agents outside of metering. If an agent is to be built, it is a not insignificant change in architecture for openstack. This agent will run on the compute host, next to nova-compute, not inside the VM. Is that still a concern? -Matt On Tue, May 22, 2012 at 7:30 AM, Doug Hellmann doug.hellm...@dreamhost.com wrote: On Tue, May 22, 2012 at 4:05 AM, Endre Karlson endre.karl...@gmail.com wrote: If I'm understanding this correctly, the Collector is kind of like an Agent in Quantum (it sits on a machine doing stuff and passing info upstream). If you look at the approach they have now, the Quantum Agent writes directly to the DB. But looking at the next version they seem to be moving to having the Agent send data upstream to the Plugin in Quantum. Why not do something similar? I mean, if you have a MQ cluster in a deployment I think it makes more sense to have one thing that handles the db stuff than having each Collector connect to the db. That was the goal, but I may have swapped the terminology around. For ceilometer, the agent runs on the compute node and writes only to the message queue. The collector runs in a central location and writes to the database. The number of collectors you need will depend on the number of messages being generated, but the architecture supports running several in parallel in a way that each instance does not need to be aware of the others. Doug Endre. 2012/5/22 Nick Barcet nick.bar...@canonical.com On 05/21/2012 10:52 PM, Doug Hellmann wrote: I have written up some of my thoughts on a proposed design for ceilometer in the wiki [1].
I'm sure there are missing details, but I wanted to start getting ideas into writing so they could be discussed here on the list, since I've talked about different parts with a couple of you separately. Let me know what you think, and especially if I am not clear or have left out any details. Thanks, Doug [1] http://wiki.openstack.org/EfficientMetering/ArchitectureProposalV1 Thanks a lot for putting this together Doug. A few questions: * The collector runs on one or more central management servers to monitor the message queues (for notifications and for metering data coming from the agent). Notification messages are processed and turned into metering messages and sent back out onto the message bus using the appropriate topic. Metering messages are written to the data store without modification. - Is the reason collectors do not write directly to the database a way to allow db-less implementations, as Francis suggested earlier? In that case it may be useful to say so explicitly. * Plugins may require configuration options, so when the plugin is loaded it is asked to add options to the global flags object, and the results are made available to the plugin before it is asked to do any work. - I am not sure where the global flags object resides and how options are populated. I think it would make sense for this to be globally controlled, and it may therefore require a simple discovery exchange on the queue to retrieve values and set defaults if they do not exist yet. * Metering messages are signed using the hmac module in Python's standard library. A shared secret value can be provided in the ceilometer configuration settings. The messages are signed by feeding the message key names and values into the signature generator in sorted order. Non-string values are converted to unicode and then encoded as UTF-8. The message signature is included in the message for verification by the collector.
- The signature is also kept in the database for future audit processes; it may be worth mentioning that here. - In addition to a signature, I think we would need a sequence number to be embedded by the agent for each message sent, so that loss of messages, or forgery of messages, can be detected by the collector and further audit processes. Thanks again, Nick
Re: [Openstack] [metering] high-level design proposal
On Tue, May 22, 2012 at 1:25 PM, Nick Barcet nick.bar...@canonical.com wrote: On 05/22/2012 03:26 PM, Doug Hellmann wrote: - In addition to a signature, I think we would need a sequence number to be embedded by the agent for each message sent, so that loss of messages, or forgery of messages, can be detected by the collector and further audit process. OK. We have a message id, but I assumed those would be used to eliminate duplicates so this sounds like something different or new. It implies that the agent knows its own id (not hard) and keeps up with a sequence counter (more difficult, though not impossible). Did you have something in mind for how to implement that? Actually, this was my intent in the original blueprint when I specified the message_id field and then, a couple of lines below: a process may verify that messages were not lost. On the implementation side, I was thinking that each agent would maintain its own sequence count, as a global instance count would be pricier. In my mind, non-repudiation was built from the message_signature + message_id, which should be unique for each agent. OK. That brings a couple of more specific questions to mind: Does the agent save its sequence counter through a restart? How and where? What about an upgrade? What would the down-stream consumer of the data do if it discovered there was a missing event? Who should do that detection work? Nick
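To make the detection question concrete, here is a hypothetical sketch of collector-side gap detection for the per-agent sequence counter being discussed; the message layout ('agent_id' and 'sequence' keys) is an assumption for illustration, not the blueprint's actual schema:

```python
def missing_sequence_numbers(agent_id, messages):
    # Collect the sequence numbers seen for one agent and report any
    # gaps; a gap means messages were lost (or withheld) in transit.
    seqs = sorted(m['sequence'] for m in messages if m['agent_id'] == agent_id)
    missing = []
    for earlier, later in zip(seqs, seqs[1:]):
        missing.extend(range(earlier + 1, later))
    return missing
```

Note this only answers "did we lose something?"; the questions about persisting the counter across agent restarts and upgrades remain open.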
Re: [Openstack] adding a worker pool notion for RPC consumers
On Tue, May 22, 2012 at 12:00 PM, Eric Windisch e...@cloudscaling.com wrote: I wanted our ops team to be able to bring more collector service instances online when our cloud starts seeing an increase in the sorts of activity that generates metering events, without having to explicitly register the new workers in a configuration file. It sounds like having the zeromq driver (optionally?) communicate to a central registry would let it reproduce some of the features built into AMQP to achieve that sort of dynamic self-configuration. Understood. Supporting the dynamic case is viable, I just don't want to (blindly) do it at the expense of static configurations. Here, I think we can simply warn/error if a static configuration is in place. I'm thinking that the zeromq driver would support create_workers by passing the call into the matchmaker. Some matchmakers would support it, others would not (and would be static), logging a message. The question might be if we should create an exception and raise this as well, or not, but I'm leaning toward not. If a consumer is trying to subscribe to a worker pool but the underlying implementation for the messaging system does not support those semantics, we should fail loudly and explicitly instead of configuring the consumer using other semantics that may result in subtle bugs or data corruption. I mentioned off-list that I'm not a messaging expert, and I wasn't around when the zeromq driver work was started. Is the goal of that work to eventually permanently replace AMQP, or just to provide a compatible alternative? It is currently a compatible alternative. We do intend for this to remain compatible, and for the abstraction to be useful across all the available messaging plugins. It remains to be seen which, if any, messaging platform will be the /default/ in Nova/OpenStack long-term.
Currently, RabbitMQ is the default, but Essex introduced Qpid messaging, and we'll have ZeroMQ messaging if we can get it out of review ;-) It's definitely good to have options. I'm sure we can find a way to add this feature and maintain compatibility. Doug
Re: [Openstack] adding a worker pool notion for RPC consumers
If a consumer is trying to subscribe to a worker pool but the underlying implementation for the messaging system does not support those semantics, we should fail loudly and explicitly instead of configuring the consumer using other semantics that may result in subtle bugs or data corruption. If we were doing that right now with the ZeroMQ driver, we'd be raising some ugly exceptions up without any benefit. It only consumes the 'service.host' topics. Fanout and round-robin of direct exchanges (bare topics without a dot-character) are handled by the *sender* and are thus not consumed, which I realize is 180-degrees from how this is handled in AMQP. My suggestion is that for static matchmakers, on the registration of a consumer, we do a host lookup in the matchmaker to see if that host has been pre-registered. If it is not in the map/lookup, then we raise an ugly Exception. Regards, Eric Windisch
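The suggestion in the last paragraph can be sketched as follows; StaticMatchMaker and the register signature are hypothetical names for illustration, not the actual driver API under review:

```python
class StaticMatchMaker(object):
    # Hypothetical static matchmaker: the topic-to-hosts map is fixed
    # configuration, so a consumer registering a host that was not
    # pre-registered fails loudly instead of being silently ignored.
    def __init__(self, topic_map):
        self.topic_map = topic_map  # e.g. {'scheduler': ['host1', 'host2']}

    def register(self, topic, host):
        if host not in self.topic_map.get(topic, ()):
            raise LookupError(
                'host %r not pre-registered for topic %r' % (host, topic))
        return True
```

A dynamic matchmaker would instead add the host to its registry here, which is the split between static and dynamic behavior being discussed.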
Re: [Openstack] [metering] high-level design proposal
- In addition to a signature, I think we would need a sequence number to be embedded by the agent for each message sent, so that loss of messages, or forgery of messages, can be detected by the collector and further audit process. OK. We have a message id, but I assumed those would be used to eliminate duplicates, so this sounds like something different or new. It implies that the agent knows its own id (not hard) and keeps up with a sequence counter (more difficult, though not impossible). Did you have something in mind for how to implement that? If we're submitting messages from every given node with a predictable frequency, we should be able to determine that a message was lost simply by noting a gap in the timestamps. Also, if we're sending cumulative statistics then the loss of a single message (or even a fair number of them) shouldn't impact our ability to meter too much. -James
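The timestamp-gap approach to loss detection can be sketched as a small helper (an illustration, not ceilometer code; the period and tolerance parameters are assumptions):

```python
def find_gaps(timestamps, period, tolerance=0.5):
    """Detect lost metering messages from one agent that emits a message
    every `period` seconds, by looking for oversized gaps between the
    timestamps that did arrive. Returns (prev_ts, next_ts, missed) tuples."""
    gaps = []
    for prev, cur in zip(timestamps, timestamps[1:]):
        delta = cur - prev
        if delta > period * (1 + tolerance):
            # estimate how many whole intervals went missing
            gaps.append((prev, cur, int(round(delta / float(period))) - 1))
    return gaps
```

With a 60-second period, samples at 0, 60, 180, 240 would report one missing message between 60 and 180; no sequence counter is needed as long as the emission interval is predictable.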
[Openstack] auto_assign_floating_ip in essex
Hi everybody, I've noticed that the behavior changed in essex regarding automatic assignment of floating ips: * In Diablo, as soon as the instance was spawned, the floating ip was showing in nova and horizon. * In Essex, the instance first spawns and then, later, as much as 60 seconds later, the floating IP gets attached. This is not so bad, but sometimes the floating IP never appears, neither in horizon nor nova list. The strange thing is that the IP is there, attached to the interface of the nova-network server and usable, but since it does not appear in nova or horizon, the user will never get to know it. Has anyone else noticed this problem? I had this problem with AND without multi_host. Boris
Re: [Openstack] auto_assign_floating_ip in essex
OK, I've found this here, which seems to be related (for those who experience the same problem): https://bugs.launchpad.net/nova/+bug/967166 Boris
Re: [Openstack] [metering] high-level design proposal
On Tue, May 22, 2012 at 4:35 PM, James R Penick pen...@yahoo-inc.com wrote: - In addition to a signature, I think we would need a sequence number to be embedded by the agent for each message sent, so that loss of messages, or forgery of messages, can be detected by the collector and further audit process. OK. We have a message id, but I assumed those would be used to eliminate duplicates, so this sounds like something different or new. It implies that the agent knows its own id (not hard) and keeps up with a sequence counter (more difficult, though not impossible). Did you have something in mind for how to implement that? If we're submitting messages from every given node with a predictable frequency, we should be able to determine that a message was lost simply by noting a gap in the timestamps. Also, if we're sending cumulative statistics then the loss of a single message (or even a fair number of them) shouldn't impact our ability to meter too much. I don't know if we have cumulative statistics. What does libvirt actually give us? -James
Re: [Openstack] [metering] high-level design proposal
[copying the list] On Tue, May 22, 2012 at 5:02 PM, Matt Joyce matt.jo...@cloudscaling.comwrote: If the agent is simply passively passing data up stream to the collectors I really don't care. As long as it is never accepting commands remotely. Once we do that it becomes something else. Either it's tied into an API or it's acting as an independent agent of execution. Netstack / quantum folks NEED a highly dynamic remote agent they can pass execution parameters to. And they will build one. But if we keep spinning off these types of directly addressable agents of execution we are creating a lot of new things we need to maintain and vet. If we have a single agent of execution project we can pool development efforts and focus on hardening it for all. That would be my primary concern in the agent. That makes sense. In our case we have not identified a need for the agent to receive commands. We are using the nova.service module to run the daemon right now, so although we have created a new daemon we haven't done a lot of work to invent anything new. If there is other work going on to create a single daemon to run on the compute node and if we can tap into that daemon and ensure that periodic tasks are triggered at regular (and relatively dependable) intervals then we could move the agent portion of ceilometer into the common agent. Is there a project for creating that? Or a blueprint I could subscribe to? -Matt On Tue, May 22, 2012 at 11:08 AM, Doug Hellmann doug.hellm...@dreamhost.com wrote: On Tue, May 22, 2012 at 12:53 PM, Matt Joyce matt.jo...@cloudscaling.com wrote: My point of concern. \ If an agent is being built into the compute nodes, that would best be a split out project. Two major reasons. First and foremost sub projects should not be spinning up their own agents. Secondly, there is a use case of agents outside of metering. If an agent is to be built it is a not insignificant change in architecture for openstack. 
This agent will run on the compute host, next to nova-compute, not inside the VM. Is that still a concern? -Matt On Tue, May 22, 2012 at 7:30 AM, Doug Hellmann doug.hellm...@dreamhost.com wrote: On Tue, May 22, 2012 at 4:05 AM, Endre Karlson endre.karl...@gmail.com wrote: If I'm understanding this correctly, the Collector is kind of like a Agent in Qantum (It sits on a machine doing stuff and passing info upstream). If you look at the approach they have now in Quantum Agent it's writing directly to the DB. But looking at the next version they seem to be moving to having the Agent send data upstream to the Plugin in Quantum. Why not do something similar? I mean if you have a MQ cluster in a deployment I think it makes more sense to have 1 thing that handles the db stuff then having each Collector connect to the db.. That was the goal, but I may have swapped the terminology around. For ceilometer, the agent runs on the compute node and writes only to the message queue. The collector runs in a central location and writes to the database. The number of collectors you need will depend on the number of messages being generated, but the architecture supports running several in parallel in a way that each instance does not need to be aware of the others. Doug Endre. 2012/5/22 Nick Barcet nick.bar...@canonical.com On 05/21/2012 10:52 PM, Doug Hellmann wrote: I have written up some of my thoughts on a proposed design for ceilometer in the wiki [1]. I'm sure there are missing details, but I wanted to start getting ideas into writing so they could be discussed here on the list, since I've talked about different parts with a couple of you separately. Let me know what you think, and especially if I am not clear or have left out any details. Thanks, Doug [1] http://wiki.openstack.org/EfficientMetering/ArchitectureProposalV1 Thanks a lot for putting this together Doug. 
A few questions:

* The collector runs on one or more central management servers to monitor the message queues (for notifications and for metering data coming from the agent). Notification messages are processed and turned into metering messages and sent back out onto the message bus using the appropriate topic. Metering messages are written to the data store without modification.

- Is the reason collectors do not write directly to the database a way to allow db-less implementations, as Francis suggested earlier? In this case it may be useful to say so explicitly.

* Plugins may require configuration options, so when the plugin is loaded it is asked to add options to the global flags object, and the results are made available to the plugin before it is asked to do any work.

- I am not sure where the global flags object resides and how options are populated. I think it would make sense for this to be globally controlled, and therefore may require for a simple discovery
Re: [Openstack] [metering] high-level design proposal
libvirt can pull hard CPU stats, which can be useful. For instance, it can pick out the CPU generation names. -matt
Re: [Openstack] [metering] high-level design proposal
[copying the list] On Tue, May 22, 2012 at 5:38 PM, Doug Hellmann doug.hellm...@dreamhost.com wrote: Cool. That's not what I was expecting, but I hadn't gotten far enough to dig into libvirt yet. We can definitely make cumulative messages work, and that does eliminate a lot of my concern about missing a message here or there. On Tue, May 22, 2012 at 5:36 PM, James R Penick pen...@yahoo-inc.com wrote: Libvirt statistics are all cumulative; my collection agent stores them all in a sqlite db, and my monitoring agent compares deltas to determine resource utilization and send alerts. -James
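Since libvirt counters are cumulative, an agent can recover per-interval usage by differencing successive samples, as James describes. A minimal sketch (not James's actual agent; the counter-reset handling is an assumption):

```python
def deltas(samples):
    """Turn cumulative counter samples [(timestamp, value), ...] into
    per-interval usage [(timestamp, delta), ...]. A value lower than its
    predecessor is treated as a counter reset (e.g. an instance restart)
    and skipped rather than reported as negative usage."""
    out = []
    for (_, v0), (t1, v1) in zip(samples, samples[1:]):
        if v1 >= v0:
            out.append((t1, v1 - v0))
    return out
```

This is also why losing a sample is tolerable: the next delta simply spans a longer interval and the cumulative total stays correct.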
Re: [Openstack] [metering] high-level design proposal
After experimenting with some of the implementation today, I modified the way the notification plugins list the event_types they want to see. http://wiki.openstack.org/EfficientMetering/ArchitectureProposalV1?action=diff&rev2=10&rev1=9 On Mon, May 21, 2012 at 5:52 PM, Doug Hellmann doug.hellm...@dreamhost.com wrote: I have written up some of my thoughts on a proposed design for ceilometer in the wiki [1]. I'm sure there are missing details, but I wanted to start getting ideas into writing so they could be discussed here on the list, since I've talked about different parts with a couple of you separately. Let me know what you think, and especially if I am not clear or have left out any details. Thanks, Doug [1] http://wiki.openstack.org/EfficientMetering/ArchitectureProposalV1
Re: [Openstack] [nova-compute] vm migration problem
2012/5/21 Lorin Hochstein lo...@nimbisservices.com: Has anybody ever written a script that grabs the host public key from the instance's console and updates the .ssh/config/known_hosts file accordingly, instead of throwing away host key checking? That would be a handy little thing if it was out there. Ubuntu's cloud-utils package has a cloud-run-instances utility that does this. It's not exactly in the do-one-thing-and-do-it-well sort of category, but perhaps it's just what you need. -- Soren Hansen | http://linux2go.dk/ Senior Software Engineer | http://www.cisco.com/ Ubuntu Developer | http://www.ubuntu.com/ OpenStack Developer | http://www.openstack.org/
[Openstack] keystone user-list (The action you have requested has not been implemented)
After some tweaking I got LDAP working with keystone, but there are still some issues/questions. I hope someone can shed some light. Here are my settings (using essex).

keystone.conf:

[ldap]
url=ldap://ldap.myproject.org
tree_dn=dc=myproject,dc=org
user_tree_dn=ou=People,dc=myproject,dc=org
user_objectclass=inetOrgPerson
user_id_attribute=uid
role_tree_dn=ou=Roles,dc=myproject,dc=org
role_objectclass=organizationalRole
role_id_attribute=cn
role_member_attribute=roleOccupant
tenant_tree_dn=ou=ostenants,dc=myproject,dc=org
tenant_objectclass=groupOfNames
tenant_id_attribute=cn
tenant_member_attribute=member
user=uid=ldapuser,ou=People,dc=myproject,dc=org
password=secret
backend_entities=['Tenant', 'User', 'UserRoleAssociation', 'Role']
suffix=dc=myproject,dc=org

In LDAP, I created a user called admin:

dn: uid=admin,ou=People,dc=myproject,dc=org
ufn: admin, People, myproject.org
uid: admin
cn: admin
objectClass: top
objectClass: inetOrgPerson
givenName: Admin
sn: admin

and added this user's info (OS_USERNAME, OS_TENANT_NAME and OS_PASSWORD), along with OS_AUTH_URL=http://localhost:5000/v2.0/, SERVICE_ENDPOINT=http://localhost:35357/v2.0, and SERVICE_TOKEN, to an rc file. I also created an OU called ostenants:

dn: ou=ostenants,dc=myproject,dc=org
ufn: ostenants, myproject.org
ou: ostenants
description: Tenants For OpenStack
objectClass: organizationalUnit

I have an OU called Roles but I am not using this yet for role assignment:

dn: ou=Roles,dc=myproject,dc=org
ufn: Roles, myproject.org
ou: Roles
description: Roles for OpenStack Users and Tenants
objectClass: organizationalUnit

Then I created an entry as groupOfNames called fg82 and added admin and myself to that group as members. Since I have tenant_tree_dn=ou=ostenants,dc=myproject,dc=org, my goal is to get the group fg82 as a tenant in keystone.
dn: cn=fg82,ou=ostenants,dc=myproject,dc=org
ufn: fg82, ostenants, myproject.org
objectClass: groupOfNames
cn: fg82
member: uid=admin,ou=People,dc=myproject,dc=org
member: uid=sharif,ou=People,dc=myproject,dc=org

Now, as the admin user, from the keystone server when I run this, I can see this tenant:

# keystone tenant-list
No handlers could be found for logger keystoneclient.v2_0.client
+------+------+---------+
|  id  | name | enabled |
+------+------+---------+
| fg82 |      | True    |
+------+------+---------+

but:

# keystone user-list
No handlers could be found for logger keystoneclient.client
The action you have requested has not been implemented. (HTTP 501)

I can now get details about all the users in LDAP, not just these two, which is really cool:

# keystone user-get admin
+----------+-------+
| Property | Value |
+----------+-------+
| id       | admin |
| name     | admin |
+----------+-------+

# keystone user-get sharif
+----------+--------+
| Property | Value  |
+----------+--------+
| id       | sharif |
| name     | Islam  |
+----------+--------+

(Note: using sn here might create some problems with people with the same last name.) But tenant-get only shows the tenant name:

# keystone tenant-get fg82
+----------+-------+
| Property | Value |
+----------+-------+
| id       | fg82  |
+----------+-------+

How can I get a list of all the users who are in tenant fg82? I know the message says "The action you have requested has not been implemented", but as keystone can talk to LDAP, there should be a way to retrieve the list.
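Since the tenant is a plain groupOfNames entry, the membership being asked about is readable straight from LDAP even though the Essex keystone LDAP backend doesn't implement user-list. A sketch of extracting uids from the group's member DNs (pure parsing; the python-ldap calls that would fetch the entry are shown only as comments, and the DNs are the ones from this thread):

```python
def member_uids(member_dns):
    """Extract uid values from the 'member' DNs of a groupOfNames entry,
    e.g. the attribute values on cn=fg82,ou=ostenants,dc=myproject,dc=org."""
    uids = []
    for dn in member_dns:
        rdn = dn.split(",", 1)[0]          # first RDN, e.g. "uid=admin"
        attr, _, value = rdn.partition("=")
        if attr.strip().lower() == "uid":
            uids.append(value.strip())
    return uids

# With python-ldap, the member list itself would come from something like:
#   conn = ldap.initialize("ldap://ldap.myproject.org")
#   conn.simple_bind_s("uid=ldapuser,ou=People,dc=myproject,dc=org", "secret")
#   dn, attrs = conn.search_s("cn=fg82,ou=ostenants,dc=myproject,dc=org",
#                             ldap.SCOPE_BASE, "(objectClass=groupOfNames)",
#                             ["member"])[0]
#   print(member_uids(attrs["member"]))
```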
--sharif
Sharif Islam
Senior Systems Analyst/Programmer
FutureGrid (http://www.futuregrid.org)
Pervasive Technology Institute, Indiana University Bloomington
Re: [Openstack] auto_assign_floating_ip in essex
They've already fixed that in trunk: https://bugs.launchpad.net/nova/+bug/939122 []'s -- Flavia
Re: [Openstack] upload file from a specific directory
On 05/22/2012 11:43 AM, khabou imen wrote: Hi everybody, when running this command: swift -v -V 2.0 -A http://192.168.1.5:5000/v2.0/ -U service:swift -K swiftpass upload Containera doc1.pdf the file doc1.pdf is only uploaded if it's placed in the home directory. How can I upload a file from a different directory, such as /home/imen/Desktop/img1.jpg?

Referencing arbitrary files works fine here:

$ rpm -qf $(which swift)
openstack-swift-1.4.8-2.el6.noarch
$ swift upload c1 /etc/issue
etc/issue
$ (cd /etc; swift upload c2 resolv.conf)
resolv.conf
$ mkdir t
$ cd t
$ swift download --all
c1/etc/issue
c2/resolv.conf
$ swift download c1 etc/issue
$ swift download c2 resolv.conf
$ find -ls
2770274 drwxrwxr-x 5 padraig padraig 4096 May 23 00:29 .
2775564 drwxrwxr-x 2 padraig padraig 4096 May 23 00:29 ./etc
2775584 -rw-rw-r-- 1 padraig padraig   85 Mar  8 12:31 ./etc/issue
2774374 drwxrwxr-x 2 padraig padraig 4096 May 23 00:29 ./c2
2774384 -rw-rw-r-- 1 padraig padraig   55 May 22 13:41 ./c2/resolv.conf
2774344 drwxrwxr-x 3 padraig padraig 4096 May 23 00:29 ./c1
2774354 drwxrwxr-x 2 padraig padraig 4096 May 23 00:29 ./c1/etc
2774364 -rw-rw-r-- 1 padraig padraig   85 Mar  8 12:31 ./c1/etc/issue
2776464 -rw-rw-r-- 1 padraig padraig   55 May 22 13:41 ./resolv.conf

Note the pseudo-hierarchical directories are supported through: http://docs.openstack.org/api/openstack-object-storage/1.0/content/pseudo-hierarchical-folders-directories.html cheers, Pádraig.
Re: [Openstack] [OpenStack][Keystone][LDAP] Does LDAP driver support for validating subtree user?
Thanks for your quick reply. I'll review the necessity of subtree queries; it really depends on users' demand. I did some more research on AD and LDAP structure design. I found that an enterprise may have an existing AD server with a structure as follows:

dc=foo,dc=com
|__OU-HR
| |_cn:hr-user1
| |_cn:hr-user2
| |_cn:hr-user3
|
|__OU-IT
  |_cn:it-user1
  |_cn:it-user2
  |_cn:it-user3

For such an LDAP structure, only HR or IT users could be validated. Is there any existing approach within LDAP to import users from an OU to another OU, like the diagram below?

dc=foo,dc=com
|__OU-HR
| |_cn:hr-user1
| |_cn:hr-user2
| |_cn:hr-user3
|
|__OU-IT
| |_cn:it-user1
| |_cn:it-user2
| |_cn:it-user3
|
|__OU-Keystone-Users
  |_cn:it-user1
  |_cn:hr-user1

If so, I can specify user_tree_dn as ou=OU-Keystone-Users. Any suggestions? Cheers

2012/5/22 Adam Young ayo...@redhat.com: On 05/22/2012 07:07 AM, Kuo Hugo wrote: Hi Folks, I have tried keystone backed by LDAP and Windows AD. It looks fine. Just want to clarify one point. From my test results, the LDAP driver can only validate users in the particular container (OU, CN, etc.) and does not include subtree users.

[ldap]
tree_dn = dc=taiwan,dc=com
user_tree_dn = ou=foo,dc=taiwan,dc=com

For example:
User1: cn=jeremy,ou=foo,dc=taiwan,dc=com
User2: cn=jordan,ou=bar,ou=foo,dc=taiwan,dc=com

User1 can be validated and gets the token generated by keystone. User2 cannot be validated. Is there any way to validate both User1 and User2 in the current design? No, there is not. Queries are not done against subtrees.
If this is important to you, please file a ticket: https://bugs.launchpad.net/keystone/+filebug -- +Hugo Kuo+ tonyt...@gmail.com +886 935004793
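The one-level vs. subtree distinction Adam describes can be illustrated with a small DN-scope check (a sketch, not keystone's implementation; keystone would express this as an LDAP search scope, such as python-ldap's SCOPE_ONELEVEL vs. SCOPE_SUBTREE):

```python
def dn_in_scope(dn, base_dn, subtree=False):
    """Return True if `dn` sits under `base_dn`: exactly one level below
    (the Essex behavior) or anywhere below it (subtree)."""
    dn_parts = [p.strip().lower() for p in dn.split(",")]
    base_parts = [p.strip().lower() for p in base_dn.split(",")]
    if len(dn_parts) <= len(base_parts):
        return False
    if dn_parts[-len(base_parts):] != base_parts:
        return False
    depth = len(dn_parts) - len(base_parts)
    return depth >= 1 if subtree else depth == 1
```

Run against the thread's examples, a one-level search under ou=foo,dc=taiwan,dc=com matches jeremy but not jordan, while a subtree search would match both, which is exactly the behavior Hugo observed.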
[Openstack] [OpenStack][Keystone]Does legacy_auth v1.0 exist in Keystone Essex ?
Hi folks, Does legacy_auth v1.0 exist in Keystone Essex? Several client tools, such as Cyberduck or Gladinet, still use the v1.0 authentication method. These applications look for the X-AUTH-TOKEN and X-Storage-Url headers for accessing swift. Does this method live on in Keystone Essex? -- +Hugo Kuo+ tonyt...@gmail.com +886 935004793
Re: [Openstack] centos 6 images
Joshua, Do you have some basic instructions on how to push this into an image and configure it? Any information about what you have here would be great! jason - Original Message - From: Joshua Harlow harlo...@yahoo-inc.com To: Jason ja...@chatinara.com, Pádraig Brady p...@draigbrady.com Cc: Fedora Cloud SIG cl...@lists.fedoraproject.org, Andy Grimm agr...@gmail.com, openstack openstack@lists.launchpad.net Sent: Tuesday, May 22, 2012 1:49:06 PM Subject: Re: [Openstack] centos 6 images U might want to check out, https://github.com/yahoo/Openstack-Condense Its a stripped down/cleaned up/... version of cloud-init that I know works on RHEL6. I tried to improve the following: 1. Code cleanliness (constants being uppercase, paths using os.path.join and so-on) 2. Stripping out some of the odd handlers (byobu, right-scale and such) 3. Improving logging by a lot (so that u can debug this thing) 4. Making what handlers I left work on RH and ubuntu... Might be useful if u want to try it. I know just from doing the above work that the cloud-init for ubuntu, requires some work to get it to work on RH, but not tons, eventually I hope that I can merge this back, but for now its forked so that I could focus on getting it working and cleaned up, rather than pushing code through some review process via launchpad and such (ie the slow as molasses approach). On 5/22/12 10:05 AM, Jason ja...@chatinara.com wrote: I will give these a shot later today and reply with feedback. Thanks for looking into this! Jason On May 22, 2012, at 11:44 AM, Pádraig Brady p...@draigbrady.com wrote: On 05/22/2012 03:39 PM, Andy Grimm wrote: On Tue, May 22, 2012 at 9:38 AM, Pádraig Brady p...@draigbrady.com wrote: On 05/22/2012 04:07 AM, Jason Ford wrote: I am trying to put together an image for centos 6 that works like cloud-init on ubuntu does. 
Currently I have ssh keys getting imported but having some problems getting the disk to dynamically resize to the flavor template as well as the hostname set in horizon to be pushed into the image. Does anyone have any howtos or suggestions on how to get this done? Is there cloud-init for centos just like ubuntu? I would also be interested in how to do this with debian as well. Well I notice there is no cloud-init package for EPEL. I took a quick stab at it here: http://pbrady.fedorapeople.org/cloud-init-el6/ I've already responded in IRC, but it wouldn't hurt to have a response in the mail archive. In short, the reason there isn't already a cloud-init for EL6 (or EL5, for that matter) is that upstream has been using python 2.7-only calls for a while now. In particular, a couple of calls to subprocess.check_output need to be replaced, and I think there are a few other issues as well. I don't think it's a huge amount of work to make it functional, but it hasn't been high on anyone's list. It would be cool if you have time to fix / test it, though. Ok I've fixed the check_output calls at the above URL. cheers, Pádraig.
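The subprocess.check_output portability issue mentioned above (the call only appeared in Python 2.7, while EL6 ships 2.6) is typically handled with a small fallback wrapper along these lines (a sketch, not Pádraig's actual patch):

```python
import subprocess
import sys


def check_output_compat(cmd, **kwargs):
    """Behave like subprocess.check_output (new in Python 2.7) so the
    same code can also run on Python 2.6 hosts such as EL6."""
    if hasattr(subprocess, "check_output"):
        return subprocess.check_output(cmd, **kwargs)
    # Python 2.6 path: build the same behavior from Popen
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, **kwargs)
    output, _ = proc.communicate()
    retcode = proc.poll()
    if retcode:
        raise subprocess.CalledProcessError(retcode, cmd)
    return output
```

Callers then use check_output_compat everywhere the code previously called subprocess.check_output directly.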
Re: [Openstack] [metering] high-level design proposal
A version of the code demonstrating using plugins in this way is up for review at https://review.stackforge.org/#/c/45/ On Tue, May 22, 2012 at 5:59 PM, Doug Hellmann doug.hellm...@dreamhost.com wrote: After experimenting with some of the implementation today, I modified the way the notification plugins list the event_types they want to see. http://wiki.openstack.org/EfficientMetering/ArchitectureProposalV1?action=diff&rev2=10&rev1=9 On Mon, May 21, 2012 at 5:52 PM, Doug Hellmann doug.hellm...@dreamhost.com wrote: I have written up some of my thoughts on a proposed design for ceilometer in the wiki [1]. I'm sure there are missing details, but I wanted to start getting ideas into writing so they could be discussed here on the list, since I've talked about different parts with a couple of you separately. Let me know what you think, and especially if I am not clear or have left out any details. Thanks, Doug [1] http://wiki.openstack.org/EfficientMetering/ArchitectureProposalV1
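The idea of plugins declaring the event types they want, with the collector routing notifications accordingly, might look roughly like this (hypothetical plugin and message shapes, not the actual ceilometer code under review):

```python
class InstanceCreatePlugin(object):
    """Hypothetical notification plugin: it names the event types it is
    interested in instead of being handed every message on the bus."""

    @staticmethod
    def get_event_types():
        return ["compute.instance.create.end"]

    def process_notification(self, message):
        # turn a matching notification into a metering message
        return {"counter": "instance",
                "resource_id": message["payload"]["instance_id"]}


def dispatch(plugins, message):
    """Route a notification only to the plugins that asked for its type."""
    return [p.process_notification(message)
            for p in plugins
            if message["event_type"] in p.get_event_types()]
```

Declaring event types up front lets the collector filter before invoking any plugin code, rather than every plugin inspecting every notification.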
Re: [Openstack] centos 6 images
Let me write something up that should explain this. It's not that hard. On 5/22/12 6:31 PM, Jason Ford ja...@chatinara.com wrote: Joshua, Do you have some basic instructions on how to push this into an image and configure it? Any information about what you have here would be great! jason [...]
Currently I have ssh keys getting imported, but I'm having some problems getting the disk to dynamically resize to the flavor template, as well as getting the hostname set in Horizon pushed into the image. Does anyone have any howtos or suggestions on how to get this done? Is there a cloud-init for CentOS just like Ubuntu's? I would also be interested in how to do this with Debian as well.

Well, I notice there is no cloud-init package for EPEL. I took a quick stab at it here: http://pbrady.fedorapeople.org/cloud-init-el6/

I've already responded in IRC, but it wouldn't hurt to have a response in the mail archive. In short, the reason there isn't already a cloud-init for EL6 (or EL5, for that matter) is that upstream has been using Python 2.7-only calls for a while now. In particular, a couple of calls to subprocess.check_output need to be replaced, and I think there are a few other issues as well. I don't think it's a huge amount of work to make it functional, but it hasn't been high on anyone's list. It would be cool if you have time to fix / test it, though.

Ok, I've fixed the check_output calls at the above URL.

cheers,
Pádraig.
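For context on the kind of fix involved: subprocess.check_output was added in Python 2.7, while EL6 ships Python 2.6. A replacement can be built from subprocess.Popen, which is available on much older Pythons. This is a generic sketch of that substitution, not the actual patch Pádraig applied:

```python
# Minimal stand-in for subprocess.check_output, the kind of
# replacement needed on Python 2.6 (EL6), where check_output
# does not exist.  Popen is available on Python 2.4+.
import subprocess

def check_output(cmd):
    """Run cmd and return its stdout, raising on a non-zero exit."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    out, _ = proc.communicate()
    if proc.returncode != 0:
        raise subprocess.CalledProcessError(proc.returncode, cmd)
    return out
```

Each `subprocess.check_output(cmd)` call site can then use this helper unchanged, e.g. `check_output(["hostname"])`.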
Re: [Openstack] centos 6 images
Scott,

If you need someone to test your changes, I would be happy to do it. Please just give me some basic instructions on how to put it in place and I will get it working. As for your request for comments/features, personally I would like to see the following parts done initially:

- hostname set to the instance name
- disk space resized to the flavor size
- ssh-key pull
- a random password reported for the root user (or default user), if this is possible

Thanks for looking at this.

jason

- Original Message -
From: Scott Moser smo...@ubuntu.com
To: Pádraig Brady p...@draigbrady.com
Cc: Fedora Cloud SIG cl...@lists.fedoraproject.org, Andy Grimm agr...@gmail.com, openstack openstack@lists.launchpad.net
Sent: Tuesday, May 22, 2012 12:51:42 PM
Subject: Re: [Openstack] centos 6 images

On Tue, 22 May 2012, Pádraig Brady wrote:

On 05/22/2012 03:39 PM, Andy Grimm wrote:

On Tue, May 22, 2012 at 9:38 AM, Pádraig Brady p...@draigbrady.com wrote:

On 05/22/2012 04:07 AM, Jason Ford wrote:

I am trying to put together an image for centos 6 that works like cloud-init on ubuntu does. Currently I have ssh keys getting imported but having some problems getting the disk to dynamically resize to the flavor template as well as the hostname set in horizon to be pushed into the image. Does anyone have any howtos or suggestions on how to get this done? Is there cloud-init for centos just like ubuntu? I would also be interested in how to do this with debian as well.

Well, I notice there is no cloud-init package for EPEL. I took a quick stab at it here: http://pbrady.fedorapeople.org/cloud-init-el6/

I've already responded in IRC, but it wouldn't hurt to have a response in the mail archive. In short, the reason there isn't already a cloud-init for EL6 (or EL5, for that matter) is that upstream has been using Python 2.7-only calls for a while now. In particular, a couple of calls to subprocess.check_output need to be replaced, and I think there are a few other issues as well.
I don't think it's a huge amount of work to make it functional, but it hasn't been high on anyone's list. It would be cool if you have time to fix / test it, though.

It would help if you'd bring that up with upstream :) I'm interested in cloud-init working in the most places it can. I'll try to pull in the sysvinit scripts that Pádraig added and grab the other changes that are there.

Ok, I've fixed the check_output calls at the above URL.

If anyone has features / issues they'd like addressed in cloud-init, please feel free to ping me (smoser). I'll most likely ask you to open a bug at http://bugs.launchpad.net/cloud-init , and may even invite you to submit a patch. One way or another, though, I'm interested in making cloud-init better, so comments/concerns/participation are welcome and encouraged.