Re: [openstack-dev] [qa] Moving the QA meeting time
Hey, guys. We've got a team in China that is focused mainly on test, and at least one or two of them would like to attend the meetings. 22:00 UTC is a bit early for them, so I think it would be better to alternate to get reasonable participation. The team wants to get more involved in the test effort, and we have some senior test engineers on the OpenStack project. I'm in PST, so if the meeting alternated, we could theoretically cover them all.

Thanks,
--Rocky

-----Original Message-----
From: David Kranz [mailto:dkr...@redhat.com]
Sent: Thursday, December 05, 2013 6:46 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [qa] Moving the QA meeting time

On 12/05/2013 07:16 AM, Sean Dague wrote:
On 12/05/2013 02:37 AM, Koderer, Marc wrote:

Hi all!

-----Original Message-----
From: Kenichi Oomichi [mailto:oomi...@mxs.nes.nec.co.jp]
Sent: Thursday, December 5, 2013 01:37
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [qa] Moving the QA meeting time

Hi Matthew,

Thank you for picking this up.

-----Original Message-----
From: Matthew Treinish [mailto:mtrein...@kortar.org]
Sent: Thursday, December 05, 2013 6:04 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [qa] Moving the QA meeting time

Hi everyone,

I'm looking at changing our weekly QA meeting time to make it more globally attendable. Right now the current time of 17:00 UTC doesn't really work for people who live in Asia Pacific timezones (which includes a third of the current core review team). There are 2 approaches that I can see taking here:

1. We could move the meeting time later so that it is easier for people in the Asia Pacific region to attend.
2. Or we could move to an alternating meeting time, where the meeting time changes every other week. We would keep the current slot and alternate with something more friendly for other regions.

I think trying to stick to a single meeting time would be a better call just for simplicity.
But it gets difficult to appease everyone that way, which is where the appeal of the 2nd approach comes in. Looking at the available time slots here: https://wiki.openstack.org/wiki/Meetings there are plenty of open slots before 1500 UTC, which would be early for people in the US and late for people in the Asia Pacific region. There are plenty of slots starting at 2300 UTC, which is late for people in Europe. Would something like 2200 UTC on Wed. or Thurs. work for everyone? What are people's opinions on this?

I am in JST. Is Chris in CST, and Marc in CET?

Yes, Giulio and I are in CET. And Attila too, right?

Here is the timezone difference:

15:00 UTC - 07:00 PST - 01:30 CST - 16:00 CET - 24:00 JST
22:00 UTC - 14:00 PST - 08:30 CST - 23:00 CET - 07:00 JST
23:00 UTC - 15:00 PST - 09:30 CST - 24:00 CET - 08:00 JST

I feel 22:00 would be nice.

I'd prefer to have two slots since 22 UTC is quite late. But I am OK with it if all others are fine.

The other option would be to oscillate on opposite weeks with Ceilometer - https://wiki.openstack.org/wiki/Meetings/Ceilometer - they already have a well-defined every-other-week cadence.

-Sean

Either option works for me.

-David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
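[Editorial aside: the conversion table in the thread above can be double-checked with fixed-offset timezones. The offsets below are an assumption about what the table uses; in particular the "CST" column only matches Australian Central time at UTC+10:30, not China or US Central.]

```python
from datetime import datetime, timedelta, timezone

# Assumed fixed offsets for early December (no DST changes modeled):
# PST = UTC-8, "CST" = UTC+10:30 (Australian Central daylight time,
# which is what the table's figures line up with), CET = UTC+1, JST = UTC+9.
ZONES = {
    "PST": timezone(timedelta(hours=-8)),
    "CST": timezone(timedelta(hours=10, minutes=30)),
    "CET": timezone(timedelta(hours=1)),
    "JST": timezone(timedelta(hours=9)),
}

def local_times(utc_hour):
    """Return an 'HH:MM' string per zone for the given UTC hour on 2013-12-05."""
    utc = datetime(2013, 12, 5, utc_hour, tzinfo=timezone.utc)
    return {name: utc.astimezone(tz).strftime("%H:%M")
            for name, tz in ZONES.items()}

for hour in (15, 22, 23):
    print(f"{hour:02d}:00 UTC ->", local_times(hour))
```

Note the only divergence from the table is cosmetic: midnight prints as 00:00 (next day) rather than 24:00.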
Re: [openstack-dev] [Openstack][qa][Tempest][Network] Test for external connectivity
For external connectivity beyond the network gateway, rather than pinging google.com, configuring the VM for an external DNS server and pinging it by IP address would be a good initial test of external connectivity.

--Rocky

From: Tomoe Sugihara [mailto:to...@midokura.com]
Sent: Tuesday, November 19, 2013 7:31 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Rami Vaknin
Subject: Re: [openstack-dev] [Openstack][qa][Tempest][Network] Test for external connectivity

Hi Salvatore, et al,

On Mon, Nov 18, 2013 at 9:19 PM, Salvatore Orlando sorla...@nicira.com wrote:

Hi Yair,

I had in mind doing something similar, and I also registered a tempest blueprint for it. Basically I think we can assume test machines have access to the Internet, but the devstack deployment might not be able to route packets from VMs to the Internet. Being able to ping the external network gateway, which by default is 172.24.4.225, is a valuable connectivity test IMHO (and that's your #1 item).

For items #2 and #3 I'm not sure of your intentions; so far I'm not sure we're adding any coverage to Neutron. I assume you want to check that servers such as www.google.com are reachable, but the routing from the external_gateway_ip to the final destination is beyond Neutron's control. DNS resolution might be interesting, but I think resolution of external names is also beyond Neutron's control.

Two more things to consider on external network connectivity tests:

1) SNAT can be enabled or not. In this case we need a test that can tell us the source IP of the host connecting to the public external gateway, because if SNAT kicks in, it should be an IP on the external network; otherwise it should be an IP on the internal network.
In this case we can use netcat to this aim, emulating a web server and using verbose output to print the source IP.

2) When the connection happens from a port associated with a floating IP, it is important that the SNAT happens with the floating IP address, and not with the default SNAT address. This is actually a test which would have spared us a regression in the Havana release cycle.

As far as I know from the code (I'm new to Tempest and might be missing something), test_network_basic_ops launches a single VM with a floating IP associated, and the test is performed by accessing the guest VM from the tempest host using the floating IP. So, I have some questions:

- How can we test the internal network connectivity (when the tenant networks are not accessible from the host, which I believe is the case for most of the plugins)?
- For external connectivity, how can we test connectivity without a floating IP? Should we have another VM and control that from the access VM, e.g. by ssh remote command? Or spawn specific VMs which send traffic upon boot (e.g. metadata server + user data with a cloud-init-installed image, etc.) to public and assert the traffic on the tempest host side?

Thanks,
Tomoe

Regards,
Salvatore

On 18 November 2013 13:13, Giulio Fidente gfide...@redhat.com wrote:

On 11/18/2013 11:41 AM, Yair Fried wrote:

I'm editing tempest/scenario/test_network_basic_ops.py for external connectivity as the TODO listed in its docstring. The test cases are for pinging against an external IP and URL to test connectivity and DNS respectively. Since the default deployment (devstack gate) doesn't have external connectivity, I was thinking of one or all of the following.

I think it's a nice thing to have!

2. Add fields in tempest.conf for:
* external connectivity = False/True
* external IP to test against (i.e. 8.8.8.8)

I like this option. One can easily disable it entirely OR pick a more relevant IP address if needed. Seems to me it would give the greatest flexibility.
--
Giulio Fidente
GPG KEY: 08D733BA | IRC: giulivo
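[Editorial aside: Salvatore's netcat idea above — listen on the external side and read the peer address of the incoming connection — can be sketched in a few lines of Python as a stand-in for `nc -v`; the listen address and port here are placeholders, not anything from the actual test setup.]

```python
import socket

def observe_source_ip(listen_host="0.0.0.0", listen_port=8080):
    """Accept one TCP connection and return the peer's source IP.

    Run this where the emulated web server sits on the external network,
    then have the guest VM connect to it. With SNAT enabled, the returned
    address should belong to the external network; without SNAT, it should
    be the VM's fixed (internal) address; with a floating IP associated,
    it should be the floating IP rather than the default SNAT address.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((listen_host, listen_port))
        srv.listen(1)
        conn, (src_ip, src_port) = srv.accept()
        conn.close()
        return src_ip
```

The assertion in a test would then compare the returned address against the expected network's CIDR, which is what distinguishes the SNAT, no-SNAT, and floating-IP cases.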
Re: [openstack-dev] Propose project story wiki idea
+1. I'd also love to see a tag or keyword associated with items that (affect {project x,y,z}) or (possibly affect {project x,y,z}) to highlight areas in need of collaboration between teams. There is so much going on cross-project these days that if a project team thinks a change has side effects or interaction changes beyond the internal project, they should raise a flag.

--Rocky

From: Boris Pavlovic [mailto:bpavlo...@mirantis.com]
Sent: Tuesday, November 19, 2013 9:33 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] Propose project story wiki idea

Hi stackers,

Currently what I see is a growing number of interesting projects that at least I would like to track. But reading all mailing lists and reviewing all patches in all interesting projects to get a high-level understanding of what is happening in a project now is a quite hard or even impossible task (at least for me). Especially after a 2-week vacation =)

The idea of this proposal is that every OpenStack project should have a story wiki page. That means publishing one short message every week that contains the most interesting updates from the last week and a high-level road map for the next week. So by reading this for 10-15 minutes you can see what changed in the project and get a better understanding of its high-level road map. E.g. we started doing this in Rally: https://wiki.openstack.org/wiki/Rally/Updates

I think that the best way to organize this is to have a person (or a few people) who will track all changes in the project and prepare such updates each week.

Best regards,
Boris Pavlovic
--
Mirantis Inc.
Re: [openstack-dev] How to best make User Experience a priority in every project
Anne Gentle wrote:

On Wed, Nov 20, 2013 at 9:09 AM, Thierry Carrez thie...@openstack.org wrote:

Hi everyone,

How should we proceed to make sure UX (user experience) is properly taken into account in OpenStack development? Historically it was hard for UX sessions (especially the ones that affect multiple projects, like CLI / API experience) to get session time at our design summits. This visibility issue prompted the recent request by UX-minded folks to make UX an official OpenStack program.

However, as was apparent in the Technical Committee meeting discussion about it yesterday, most of us are not convinced that establishing and blessing a separate team is the most efficient way to give UX the attention it deserves. Ideally, UX-minded folks would get active *within* existing project teams rather than form some sort of counter-power as a separate team. In the same way we want a scalability and security mindset to be present in every project, we want UX to be present in every project. It's more of an advocacy group than a program, imho.

I'm not sure "most of us" is accurate. Mostly you and Robert Collins were unconvinced. Here's my take. It's nigh-impossible with the UX resources there now (four core) for them to attend all the project meetings with an eye to UX. Docs are in a similar situation. We also want docs to be present in every project. Docs as a program makes sense, and to me, UX as a program makes sense as well. The UX program can then prioritize what to focus on with the resources they have.

+1 UX is, in SW parlance, the next layer above the mechanics of the projects. It is separate from all the projects yet informs them all. To be able to inform all projects consistently, there needs to be a place where all the project-based UX work comes together to create a consistent, overarching environment. This is what a UX program does, and this is why it works better than having each project do its own thing. You don't get an environment when you don't have someone architecting an environment. You just get a bunch of projects glued together with more code. Since the current team is so small, as Anne points out, the team, working with the TC, should decide which and how many individual projects need their attention first, and they can also prioritize what parts of the UX environment get defined/specified first.

However, as pointed out in the meeting, the UX resources now are mostly focused on Horizon. It'd be nice to have a group aiming to take in the big picture of the entire OpenStack experience. Maybe this group is the one, maybe they're not. The big picture would be:

* Dashboard experience
* CLI experience
* logging consistency
* troubleshooting consistency
* consistency across APIs, like pagination behavior

Just like QA ends up focusing on Tempest, UX might end up focusing on Dashboard, CLI and API experience. That'd be fine with me and would give measurable, trackable points.

What's more interesting is how the user committee fits into this. There's an interesting discussion already about how to get user concerns worked on by developers; is it actually through product managers? What would an Experience program look like if it were about productization?

The most efficient and effective way to get end-user concerns and issues addressed systematically is through one point of contact, not one for every project. By having one place to collect all inputs other than bugs and missing features, problem areas are spotted much sooner, and so are areas of excellence.

So my recommendation would be to encourage UX folks to get involved within projects and during project-specific weekly meetings to efficiently drive better UX there, as direct project contributors. If all the UX-minded folks need a forum to coordinate, I think [UX] ML threads and, maybe, a UX weekly meeting would be an interesting first step.

I think a weekly UX meeting and a mailing list (which is probably already their Google Plus group) would be a good way to gather more people as contributors. Then we get an idea of what contributions look like. To summarize my take -- UX is a lot like docs in that it's tough to get devs to care, and also the work should be done with an eye towards the big picture and with resources from member companies.

+1000 Devs tend to think they know how end users are going to want to use and interact with their code. Most don't care that some end users find the interface(s) confusing, opaque, inflexible or unforgiving. The best way to get a unified user experience is to have a *Program* (like Docs and QA) that gives UX legitimacy and some authority beyond just the responsibility they feel to the usability and usefulness of the projects they are unifying. Program status would also bring in more UX participants because it acknowledges that OpenStack is serious about UX and understands its importance (especially in reducing pilot error and
Re: [openstack-dev] Split of the openstack-dev list (summary so far)
Coming from QA/Ops, I agree that there are horizontal teams that need to get info from the mailing list(s) across the spectrum. I also agree with Clint's and Adrian's statements about the synergies and serendipities of having all the developers on one list. But I also understand the feeling of drowning in email.

I would like to present a solution that was employed on another development project I participated in. We are already using keywords for projects, and I've seen the use of [RFC]. In the other project, we had keywords for the stage each thread was in:

Proposed
Discussion
Decision
Request Info

These tags (no brackets but all caps) allowed those of us who needed to know the details but not follow the discussion to get the resolved decision easily. And, yes, it made filtering pretty easy. Perhaps a collection of keywords for status as well as project could help in reducing the noise for various participants. Just a humble observation.

--Rocky

From: Stefano Maffulli [mailto:stef...@openstack.org]

On 11/15/2013 02:06 AM, Thierry Carrez wrote:

Arguments in favor of splitting openstack-dev / stackforge-dev:
* People can easily filter out all non-openstack discussions
* Traffic would drop by about 25%

I'm not so convinced about this figure, as others pointed out.

* Removes confusion as to which projects are actually in openstack

Arguments in favor of keeping it the same:
* Provides a cross-pollination forum where external projects can learn
* More chaos creates more innovation

Chaos creates just chaos in this context :) I don't buy Clint's rhetoric applied to this case :)

Anyway, I've looked at my folder and it looks like 90% of the messages to openstack-dev have topics in the subject line. Filtering on the client side should be easy to do, and I'd like to have a few volunteers run an experiment over one week to see if filters can ease the pain.
I'd also like to get to an agreement that support requests sent to openstack-dev should not be answered, and instead should be gently redirected to openstack@lists. and/or ask.openstack.org. Maybe we can restart this conversation in a week and see how things are going?

/stef
--
Ask and answer questions on https://ask.openstack.org
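[Editorial aside: the client-side filtering Rocky and Stefano describe above — topic tags in brackets plus all-caps status keywords — can be sketched as a small classifier. The keyword set comes from Rocky's list; everything else (function names, the DECISION-always-wanted rule) is illustrative, not any project's actual filter.]

```python
import re

# Topic tags look like [qa][neutron]; status keywords are bare, all caps.
TAG_RE = re.compile(r"\[([^\]]+)\]")
STATUS_WORDS = ("REQUEST INFO", "DISCUSSION", "PROPOSED", "DECISION")

def classify(subject):
    """Extract (topic tags, status keyword or None) from a subject line."""
    topics = [t.lower() for t in TAG_RE.findall(subject)
              if t.lower() != "openstack-dev"]
    status = next((w for w in STATUS_WORDS if w in subject.upper()), None)
    return topics, status

def wanted(subject, follow_topics):
    """Keep a message if it matches a followed topic, or announces a DECISION
    (so people who skip the discussion still see resolved decisions)."""
    topics, status = classify(subject)
    return bool(set(topics) & follow_topics) or status == "DECISION"
```

A mail client hook would apply `wanted()` per message and file or drop accordingly; the same logic translates directly to procmail or Sieve rules.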
Re: [openstack-dev] [Neutron] Neutron Tempest code sprint - 2nd week of January, Montreal, QC, Canada
Perhaps all this sound and fury will get some entity to step up and provide Neutron project t-shirts for the sprint ;-) Then maybe the participants will *want* to wear the Neutron team shirt. And as for errors, Anita, I fully understand, sympathize, and have experience in that area.

--Rocky

From: Anita Kuno

On 11/15/2013 12:58 PM, Russell Bryant wrote:

On 11/15/2013 12:47 PM, Anita Kuno wrote:

I will also note that, while you clearly stated that Neutron is being considered for deprecation, t-shirts prevail as an issue on this thread. I consider that rather interesting to observe.

Believe me, I'd much rather be able to focus on what actually matters.

That's my suggestion. Focus on what matters (the future of Neutron) and not clothing requirements. Retract all of that from the sprint announcement and then perhaps we can do so. :-)

Okay, I can do that. Consider it retracted. Saves me having to source t-shirts anyway. /me crosses that item off the list.

I hope that focusing on the future of Neutron is the energy that permeates the code sprint.

Thanks Russell,
Anita.
Re: [openstack-dev] [qa] moratorium on new negative tests in Tempest
I agree that parametric testing, with input generators, is the way to go for the API testing, both positive and negative. I've looked at a number of frameworks in the past, and the one that until recently was highest on my list is Robot: http://code.google.com/p/robotframework/ I had looked at it in the past for doing parametric testing for APIs. It doesn't seem to have the generators, but it has a fair amount of infrastructure.

But in my search in preparation for responding to this email, I stumbled upon a test framework I had not seen before that looks promising: http://www.squashtest.org/index.php/en/squash-ta/squash-ta-overview It does the data generation separately from the test code, the setup, and the tear-down. It actually looks quite interesting, and it is open source. It might not pan out, but it's worth a look. Another page by the same group, http://www.squashtest.org/index.php/en/what-is-squash/tools-and-functionalities/squash-data, covers the data generators. I'm not sure just how much of the project is open source, but I suspect enough for our purposes. The other question is whether the licensing is acceptable for OpenStack.org.

I'm willing to jump in and help on this, as this sort of stuff is my bailiwick. A subgroup maybe? I also want to get some of the QA/Test lore written down so newbies can come up to speed sooner and we reduce some of the vagueness that causes reviews to thrash a bit. I started a blueprint: https://blueprints.launchpad.net/tempest/+spec/test-developer-documentation and, being pretty much a newbie myself, wasn't sure how to start (I have only limited access to IRC), but realized I should start an Etherpad with strawman sections and let people edit there.

Hope this is useful.
--Rocky

-----Original Message-----
From: pcrews [mailto:glee...@gmail.com]
Sent: Tuesday, November 12, 2013 2:03 PM
To: Monty Taylor; openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [qa] moratorium on new negative tests in Tempest

On 11/12/2013 12:20 PM, Monty Taylor wrote:
On 11/12/2013 02:33 PM, David Kranz wrote:
On 11/12/2013 01:36 PM, Clint Byrum wrote:

Excerpts from Sean Dague's message of 2013-11-12 10:01:06 -0800:

During the freeze phase of Havana we got a ton of new contributors coming on board to Tempest, which was super cool. However, it meant we had this new influx of negative tests (i.e. tests which push invalid parameters looking for error codes), which made us realize that human creation and review of negative tests really doesn't scale. David Kranz is working on a generative model for this now.

Are there some notes or other source material we can follow to understand this line of thinking? I don't agree or disagree with it, as I don't really understand, so it would be helpful to have the problems enumerated and the solution hypothesis stated. Thanks!

I am working on this with Marc Koderer, but we only just started and are not quite ready. But since you asked now...

The problem is that in the current implementation of negative tests, each case is represented as code in a method and targets a particular set of API arguments and expected result. In most (but not all) of these tests there is boilerplate code surrounding the real content, which is the actual arguments being passed and the value expected. That boilerplate code has to be written correctly and reviewed. The general form of the solution has to be worked out, but basically it would involve expressing these tests declaratively, perhaps in a yaml file. In order to do this we will need some kind of json schema for each API.
The main implementation work around this is defining the yaml attributes that make it easy to express the test cases, and somehow coming up with the json schema for each API. In addition, we would like to support fuzz testing, where arguments are, at least partially, randomly generated and the return values are only examined for 4xx vs. something else. This would be possible if we had json schemas. The main work is to write a generator and methods for creating bad values, including boundary conditions for types with ranges. I had thought a bit about this last year and poked around for an existing framework. I didn't find anything that seemed to make the job much easier, but if anyone knows of such a thing (python, hopefully) please let me know.

The negative tests for each API would be some combination of declaratively specified cases and auto-generated ones. With regard to the json schema, there have been various attempts at this in the past, including some ideas of how wsme/pecan will help, and it might be helpful to have more project coordination. I can see a few options:

1. Tempest keeps its own json schema data
2. Each project keeps its own json schema
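[Editorial aside: the generator idea David describes — derive invalid inputs, including boundary conditions, from a per-API schema — can be sketched roughly as below. The schema shape, the known-good body, and the helper names are invented for illustration; this is not the actual Tempest design.]

```python
import random
import string

# A hypothetical per-API parameter schema (JSON-Schema-like constraints).
SCHEMA = {
    "name": {"type": "string", "max_length": 255},
    "size": {"type": "integer", "minimum": 1, "maximum": 1024},
}

def bad_values(spec):
    """Yield values that violate the given parameter constraints."""
    if spec["type"] == "integer":
        yield spec["minimum"] - 1      # just below the range
        yield spec["maximum"] + 1      # just above the range
        yield "not-a-number"           # wrong type
    elif spec["type"] == "string":
        yield "".join(random.choice(string.ascii_letters)
                      for _ in range(spec["max_length"] + 1))  # boundary + 1
        yield 12345                    # wrong type

def negative_cases(schema):
    """Build request bodies where exactly one field is invalid.

    Each generated body, sent to the API under test, should draw a 4xx;
    anything else (5xx, 2xx) indicates a bug.
    """
    valid = {"name": "vol", "size": 1}  # a known-good body for this API
    for field, spec in schema.items():
        for bad in bad_values(spec):
            case = dict(valid)
            case[field] = bad
            yield case
```

The declarative yaml file would then only need to supply the schema and the known-good body; all the boilerplate around issuing the request and asserting on the status code lives in one shared driver.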
Re: [openstack-dev] L3 advanced features blueprint mapping to IETF and IEEE standards
From: Pedro Roque Marques [mailto:pedro.r.marq...@gmail.com]

Colin,
The nice thing about standards is that there are so many of them to choose from. For instance, take this Internet Draft: http://tools.ietf.org/html/draft-ietf-l3vpn-end-system-02 which is based on RFC 4364. It has already been implemented as a Neutron plugin via OpenContrail (http://juniper.github.io/contrail-vnc/README.html). With this implementation, each OpenStack cluster can be configured as its own Autonomous System. There is a blueprint https://blueprints.launchpad.net/neutron/+spec/neutron-bgp-mpls-vpn that is discussing adding the provisioning of the autonomous system and peering to Neutron.

Please note that the work above does interoperate with RFC 4364 using option B. Option C is possible but not that practical (as an operator you probably don't want to expose your internal topology between clusters). If you want to give it a try, you can use this devstack fork: https://github.com/dsetia/devstack. You can use it to interoperate with a standard router that implements RFC 4364 and supports MPLS over GRE. Products from Cisco/Juniper/ALU/Huawei etc. do.

I believe that the work I'm referencing implements interoperability while making very minimal changes to Neutron. It is based on the same concept of the Neutron virtual network, and it hides the BGP/MPLS functionality from the user by translating policies that establish connectivity between virtual networks into RFC 4364 concepts. Please refer to: https://blueprints.launchpad.net/neutron/+spec/policy-extensions-for-neutron

Would it make sense to have an IRC/Web meeting around interoperability between RFC 4364 and OpenStack-managed clusters? I believe that a lot of work has already been done there by multiple vendors as well as some carriers.

+1 And it should be scheduled and announced a reasonable time in advance so developers can plan to participate.

--Rocky

Pedro.
On Nov 7, 2013, at 12:35 AM, Colin McNamara co...@2cups.com wrote:

I have a couple of concerns that I don't feel I clearly communicated during the L3 advanced features session. I'd like to take this opportunity to both clearly communicate my thoughts and start a discussion around them.

Building to the edge of the autonomous system

The current state of the Neutron implementation is functionally the L2 domain and simple L3 services that are part of a larger autonomous system. The routers and switches northbound of the OpenStack networking layer handle the abstraction and integration of the components. Note, I use the term Autonomous System to describe more than the notion of a BGP AS; more broadly, it is a system that is controlled within a common framework and methodology, and that integrates with a peer system that doesn't share that same scope or method of control.

These components that compose the autonomous system boundary implement protocols and standards that map into IETF and IEEE standards. The reasoning for this is interoperability. Before vendors utilized IETF standards for interoperability at this layer, the provider experience was horrible (this was my personal experience in the late 90's).

Wednesday's discussions in the Neutron design sessions

A couple of the discussions, most notably the extension of L3 functionality, fell within the scope of starting the process of extending Neutron with functionality that will (eventually) result in the ability for an OpenStack installation to operate as its own Autonomous System. The discussions that occurred to support L3 advanced functionality (northbound boundary) and the QoS extension functionality both fell into the scope of the northbound and southbound boundaries of this system.

My comments in the session

My comments in the session, while clouded with jet lag, were specifically around two concepts that are used when integrating other types of systems:

1.
In a simple (1-8) tenant environment, integration with a northbound AS is normally done in a PE-CE model that generally centers around mapping dot1q tags into the appropriate northbound L3 segments and then handling the availability of the L2 path that traverses with port channeling, MLAG, STP, etc.

2. In a complex environment (8+ for discussion), different Carrier Supporting Carrier (CSC) methods defined in IETF RFC 4364 Section 10, type A, B or C, are used. These allow the mapping of segregated tenant networks together and synchronizing between distributed systems. This normally extends the tagging or tunneling mechanism and then allows BGP to synchronize NLRI information between AS's.

These are the standard ways of integrating between carriers, but components of these implementations are also used to integrate and scale inside of a single web-scale data center. Commonly, when you scale beyond a certain physical port boundary (1000-ish edge ports in many implementations, much larger in current implementations) the same designs
Re: [openstack-dev] Fwd: [Openstack-Dev][Compass] Announcement of the Compass Deployment project
The demo session is:
Wednesday, November 6, 1:20pm - 1:35pm in the Demo Theatre

The presentation is:
Thursday, November 7, 4:30pm - 5:10pm in Sky City Meeting Rm 4 (Marriott)

We are also trying for an unconference session to do some brainstorming with interested developers. And our schedules should be on the website, so you can find the team members at the conference. Folks to look for:
Shuo Yang
Weidong Shao
Haiying Wang

Thanks,
Rocky Grober

-----Original Message-----
From: Robert Collins [mailto:robe...@robertcollins.net]
Sent: Friday, November 01, 2013 12:53 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Fwd: [Openstack-Dev] Announcement of the Compass Deployment project

On 1 November 2013 20:41, Rochelle Grober roc...@gmail.com wrote:

A message from my associate as he wings to the Icehouse OpenStack summit (and yes, we're psyched):

Our project, code named Compass, is a RESTful-API-driven deployment platform that performs discovery of the physical machines attached to a specified set of switches. It then customizes configurations for the machines you identify and installs the systems and networks to your configuration specs. Besides presenting the technical internals and design decisions of Compass at the Icehouse summit, we will also have a demo session.

Cool - when is it? I'd like to get along.

...

We look forward to showing the community our project, receiving and incorporating feedback, brainstorming what else it could do, and integrating it into the OpenStack family. We are a part of the OpenStack community and want to support it both with core participation and with Compass.

I'm /particularly/ interested in the interaction with Neutron and network modelling - do you use Neutron for the physical switch interrogation, do you inform Neutron about the topology, and so on. Anyhow, let's make sure we can connect and see where we can collaborate!
Cheers,
Rob
--
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud
Re: [openstack-dev] Fwd: [Openstack-Dev] [Compass] Announcement of the Compass Deployment project
From: Dmitry Mescheryakov [mailto:dmescherya...@mirantis.com]

I've noticed you list "Remote install and configure a Hadoop cluster (synergy with Savanna?)" among possible use cases. Recently there was a discussion about Savanna on bare metal provisioning through Nova (see thread [1]). Nobody has tested that yet, but it was concluded that it should work without any changes in the Savanna code. So if Compass could set up bare metal provisioning with Nova, possibly Savanna will work on top of that out of the box.

The referenced thread is one of the threads that got us wondering whether Compass could be of use here. If you're going to the summit, you can brainstorm with our guys. Otherwise, we can take up this discussion after the summit. Compass should be able to build bare-metal-based installs or VM-based instances starting from bare metal. The key is to make sure the Compass design and implementation meet the needs/requirements of Savanna and other OpenStack projects.

--Rocky

Dmitry

[1] http://lists.openstack.org/pipermail/openstack-dev/2013-October/017438.html

2013/11/1 Robert Collins robe...@robertcollins.net

On 1 November 2013 20:41, Rochelle Grober roc...@gmail.com wrote:

A message from my associate as he wings to the Icehouse OpenStack summit (and yes, we're psyched):

Our project, code named Compass, is a RESTful-API-driven deployment platform that performs discovery of the physical machines attached to a specified set of switches. It then customizes configurations for the machines you identify and installs the systems and networks to your configuration specs. Besides presenting the technical internals and design decisions of Compass at the Icehouse summit, we will also have a demo session.

Cool - when is it? I'd like to get along.

...

We look forward to showing the community our project, receiving and incorporating feedback, brainstorming what else it could do, and integrating it into the OpenStack family.
We are a part of the OpenStack community and want to support it both with core participation and with Compass.

I'm /particularly/ interested in the interaction with Neutron and network modelling - do you use Neutron for the physical switch interrogation, do you inform Neutron about the topology, and so on? Anyhow, let's make sure we can connect and see where we can collaborate!

Cheers,
Rob

--
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
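The workflow described in the announcement above (point Compass at a set of switches, discover the attached machines, then push per-machine configuration) can be sketched as request payloads a REST client would build. Note that Compass's actual API was not published in this thread, so every endpoint field and parameter name below is an illustrative assumption, not the real interface.

```python
import json

# Illustrative sketch only: the real Compass REST API is not documented
# in this thread, so all field names here ("switches", "credential",
# "machine_id", "roles") are hypothetical placeholders for the workflow
# described in the announcement.

def discovery_request(switch_ips):
    """Build a body a client might POST to start switch-based discovery."""
    return json.dumps({
        "switches": [
            {"ip": ip, "credential": {"version": "2c"}}  # e.g. SNMP v2c
            for ip in switch_ips
        ]
    })

def config_request(machine_id, hostname, roles):
    """Build a body a client might POST to configure one discovered machine."""
    return json.dumps({
        "machine_id": machine_id,
        "hostname": hostname,
        "roles": roles,  # e.g. which OpenStack services to install
    })

print(discovery_request(["10.0.0.1"]))
print(config_request(42, "compute-01", ["nova-compute"]))
```

Whatever the real API looks like, the two-phase shape (discover first, then configure the machines you identify) matches the description in the announcement.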
Re: [openstack-dev] [ceilometer] [qa] Ceilometer ERRORS in normal runs
John Griffith wrote:

On Wed, Oct 23, 2013 at 8:47 AM, Sean Dague s...@dague.net wrote:

On 10/23/2013 10:40 AM, John Griffith wrote:

On Sun, Oct 20, 2013 at 7:38 AM, Sean Dague s...@dague.net wrote:

Dave Kranz has been building a system so that we can ensure that during a Tempest run services don't spew ERRORs in the logs. Eventually, we're going to gate on this, because there is nothing that Tempest does to the system that should cause any OpenStack service to ERROR or stack trace. (Errors should actually be exceptional events indicating that something is wrong with the system, not regular events.)

So I have to disagree with the approach being taken here, particularly in the case of Cinder and the negative tests that are in place. When I read this last week I assumed you actually meant that exceptions were exceptional and nothing in Tempest should cause exceptions. It turns out you apparently did mean errors. I completely disagree here: errors happen, some are recovered from, some are expected by the tests, etc. Having a policy, and especially a gate, that says NO ERROR MESSAGES in logs makes absolutely no sense to me. Something like NO TRACE/EXCEPTION MESSAGES in logs I can agree with, but this makes no sense to me. By the way, here's a perfect example: https://bugs.launchpad.net/cinder/+bug/1243485. As long as we have Tempest tests that do things like show a non-existent volume, you're going to get an error message, and quite frankly I think you should.

Ok, I guess that's where we probably need to clarify what Not Found is. Because Not Found to me seems like it should be a request logged at INFO level, not ERROR. ERROR from an admin perspective should really be something suitable for sending an alert to an administrator so they come and fix the cloud.

From my perspective as someone who has done Ops in the past, a Volume Not Found can be either info or an error. It all depends on the context.
That said, we need to be able to test ERROR conditions and ensure that they report properly as ERROR, or else the poor Ops folks will always be on the spot for not knowing that there is a problem. A volume that has gone missing is a problem; Ops would like an immediate report, and they would trigger on the ERROR statement in the log. On the other hand, if someone or something fat-fingers an input and requests something that has never existed, then that's just info.

We need to be able to test for the correctness of errors and process logs containing errors as part of the test verification. Perhaps a switch in the test that indicates the log needs post-processing, or a way to redirect the log during a specific error test, or some such? The question is: how do we keep test system logs clean of ERRORs and still test system logs for intentionally triggered ERRORs?

--Rocky

TRACE is actually a lower level of severity in our log systems than ERROR is.

Sorry, by "Trace" I was referring to unhandled stack/exception trace messages in the logs.

-Sean

--
Sean Dague
http://dague.net
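One way to reconcile the two positions above ("no ERRORs in the gate" vs. "negative tests must provoke ERRORs") is a log scan with a whitelist of patterns for intentionally triggered errors. This is only a minimal sketch of the idea being discussed, not the actual Tempest/devstack log-checking implementation; the function name and the sample log lines are made up for illustration.

```python
import re

def unexpected_errors(log_lines, whitelist):
    """Return ERROR log lines not matched by any whitelisted regex.

    Lines matching a whitelist pattern are treated as errors that the
    test suite deliberately triggered (e.g. negative tests) and are
    therefore not counted as gate failures.
    """
    allowed = [re.compile(p) for p in whitelist]
    hits = []
    for line in log_lines:
        if " ERROR " not in line:
            continue  # only ERROR-level lines are of interest here
        if any(rx.search(line) for rx in allowed):
            continue  # an error the tests are expected to provoke
        hits.append(line)
    return hits

# Hypothetical sample log: one expected negative-test error, one real one.
log = [
    "2013-10-23 10:40:01 INFO cinder.api request received",
    "2013-10-23 10:40:02 ERROR cinder.volume VolumeNotFound: vol-123",
    "2013-10-23 10:40:03 ERROR cinder.scheduler no valid host found",
]
# Whitelist the Not Found error that a "show non-existent volume" test triggers.
print(unexpected_errors(log, [r"VolumeNotFound"]))
# -> only the scheduler error remains flagged
```

A gate job could then fail only when the returned list is non-empty, which keeps logs testable for intentional ERRORs while still catching unexpected ones.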
Re: [openstack-dev] [Hyper-V] Havana status
When you do, have a beer for me. I'll be looking for what you guys come up with. And I don't think a separate project would be a second-class project. The driver guys could be so successful that all the drivers end up there and the interfaces between Nova and the drivers get *real* clean and fast.

--Rocky Grober

-----Original Message-----
From: Russell Bryant [mailto:rbry...@redhat.com]
Sent: Friday, October 11, 2013 3:59 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Hyper-V] Havana status

On 10/11/2013 05:09 PM, Alessandro Pilotti wrote:

My suggestion is to bring this discussion to HK, possibly with a few beers in front of us, and sort it out :-)

Sounds like a good plan to me!

Thanks,

--
Russell Bryant
Re: [openstack-dev] Stats on blueprint design info / creation times
+100

If one blueprint points to another, then the pointers should be present and available in both blueprints. Dependency linking, folks.

--Rocky

From: Mike Spreitzer [mailto:mspre...@us.ibm.com]
Sent: Wednesday, August 21, 2013 9:04 AM
To: Daniel P. Berrange
Cc: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Stats on blueprint design info / creation times

For the case of an item that has no significant doc of its own but is related to an extensive blueprint, how about linking to that extensive blueprint?