Re: [openstack-dev] Thoughts on ReleaseNoteImpact git commit message tag
2015-07-08 18:45 GMT+02:00 Thierry Carrez : > Soren Hansen wrote: >> Putting release notes in the git repo so that they simply will get >> merged along with the corresponding code seems like a no-brainer. What >> am I missing? > For stable branches it prevents us from just backporting master changes > using straight cherry-picks. [...] > Doesn't mean it's not an option, but it adds pain compared to the > current situation. This very thread seems to suggest that the current situation isn't exactly painless as it is. :) -- Soren Hansen __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Thoughts on ReleaseNoteImpact git commit message tag
Putting release notes in the git repo so that they simply will get merged along with the corresponding code seems like a no-brainer. What am I missing? Soren Hansen | http://linux2go.dk/ Ubuntu Developer | http://www.ubuntu.com/ OpenStack Developer | http://www.openstack.org/ 2015-07-08 12:13 GMT+02:00 Thierry Carrez : > Matt Riedemann wrote: >> [...] >> So we kicked around the idea of a ReleaseNoteImpact tag so that we can >> search for those at the end of the release in addition to UpgradeImpact. > > I like the idea -- could be a commit message tag or waiting for the wiki > update to approve (as Dan suggested), I think both would work. > > As a sidenote, we are reconsidering stable release notes with the plan > we came up with for stable branches[1] (we decided to switch to > versioning and releasing every stable branch commit). To produce valid > release notes for any of those, we need to autogenerate them and ship > them as part of the tarball. > > That means for stable branches we'll need a way to specify release notes > snippets in the git repository itself. Something like a header in commit > messages that would get added as a bullet point to the auto-generated > release notes in the tarball. > > We still need to discuss the details, but maybe whatever we come up with > for stable branches could be reusable for master branches in the future. > > [1] http://lists.openstack.org/pipermail/openstack-dev/2015-July/068400.html > > -- > Thierry Carrez (ttx) > > __ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
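A minimal sketch of how such snippets could be harvested for auto-generated release notes in a tarball — note the tag name "ReleaseNoteImpact:" and the one-snippet-per-line format are assumptions on my part, since the thread hadn't settled on a concrete syntax:

```python
import re

# Sketch only: the tag name and format below are illustrative
# assumptions, not an agreed convention.
TAG_RE = re.compile(r'^ReleaseNoteImpact:\s*(?P<note>.+)$', re.MULTILINE)

def collect_release_notes(commit_messages):
    """Pull release-note snippets out of a series of commit messages."""
    notes = []
    for message in commit_messages:
        for match in TAG_RE.finditer(message):
            notes.append(match.group('note').strip())
    return notes

def render_notes(notes):
    """Render the collected snippets as the bullet list for a tarball."""
    return '\n'.join('* %s' % note for note in notes)

commits = [
    "Fix live migration race\n\n"
    "ReleaseNoteImpact: Live migration no longer aborts on slow networks.\n",
    "Trivial docs change\n",
]
print(render_notes(collect_release_notes(commits)))
```

A release job could feed this the output of `git log` between two tags; commits without the header simply contribute nothing.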
Re: [openstack-dev] Deprecation of in tree EC2 API in Nova for Kilo release
As I've said a couple of times in the past, I think the architecturally sound approach is to keep this inside Nova. The two main reasons are:

* Having multiple frontend APIs keeps us honest in terms of separation between the different layers in Nova.
* Having the EC2 API inside Nova ensures the internal data model is rich enough to "feed" the EC2 API. If some field's only use is to enable the EC2 API and the EC2 API is a separate component, it's not hard to imagine that field being deprecated as well.

I fear that deprecation is a one-way street and I would like to ask for one more chance to resuscitate it in its current home. I could be open to a discussion about putting it into a separate repository, but having it functionally remain in its current place, if that's somehow easier to swallow. Soren Hansen | http://linux2go.dk/ Ubuntu Developer | http://www.ubuntu.com/ OpenStack Developer | http://www.openstack.org/ 2015-01-28 20:56 GMT+01:00 Sean Dague : > The following review for Kilo deprecates the EC2 API in Nova - > https://review.openstack.org/#/c/150929/ > > There are a number of reasons for this. The EC2 API has been slowly > rotting in the Nova tree, never was highly tested, implements a > substantially older version of what AWS has, and currently can't work > with any recent releases of the boto library (due to implementing an > extremely old version of auth). This has given the misunderstanding that > it's a first-class supported feature in OpenStack, which it hasn't been > in quite some time. Deprecating honestly communicates where we stand. > > There is a new stackforge project which is getting some activity now - > https://github.com/stackforge/ec2-api. The intent and hope is that that is > the path forward for the portion of the community that wants this > feature, and that efforts will be focused there. > > Comments are welcomed, but we've attempted to get more people engaged to > address these issues over the last 18 months, and never really had > anyone step up. 
> Without some real maintainers of this code in Nova (and > tests somewhere in the community) it's really no longer viable. > > -Sean > > -- > Sean Dague > http://dague.net
Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting
2014-10-03 9:00 GMT+02:00 Michael Chapman : > On Fri, Oct 3, 2014 at 4:05 AM, Soren Hansen > wrote: >> That said, there will certainly be situations where there'll be a >> need for some sort of anti-entropy mechanism. It just so happens that >> those situations already exist. We're dealing with a complex >> distributed system. We're kidding ourselves if we think that any >> kind of consistency is guaranteed, just because our data store >> favours consistency over availability. > I apologize if I'm missing something, but doesn't denormalization to > add join support put the same value in many places, such that an > update to that value is no longer a single atomic transaction? Yes. > This would appear to counteract the requirement for strong > consistency. What requirement for strong consistency? > If updating a single value is atomic (as in Riak's consistent mode) Admittedly, I'm not 100% up-to-date on Riak, but last I looked, there wasn't any "consistent mode". However, when writing a value, you can specify that you want all (or a quorum of) replicas to be written to disk before you get a successful response. That said, this does not imply transactional support. In other words, if one of the writes fails, it doesn't get rolled back on the other nodes. You just don't get a successful response. > I also don't really see how a NoSQL system in strong consistency mode > is any different from running MySQL with galera in its failure modes. I agree. I never meant to imply that we should run anything in "strong consistency mode". There might be a few operations that require strong consistency, but they should be exceptional. Quotas sound like a good example. > The requirement for quorum makes the addition of nodes increase the > potential latency of writes (and reads in some cases) so having large > scale doesn't grant much benefit, if any. I agree about the requirement for quorum having those effects (also for e.g. Galera). I think you are missing my point, though. 
My concerns are not whether MySQL can handle the data volume of a large scale OpenStack deployment. I'm sure it can. Without even breaking a sweat. MySQL has been used in countless deployments to handle data sets vastly bigger than what we're dealing with. My concern is reliability. > Quorum will also prevent nodes on the wrong side of a partition from > being able to access system state (or it will give them stale state, > which is probably just as bad in our case). This problem exists today. Suppose you have a 5-node Galera cluster. Would you refuse reads on the wrong side of the partition to avoid providing stale data? With e.g. Riak it's perfectly possible to accept both reads and writes on both sides of the partition. No matter what we do, we need to accept the fact that when we handle the data, it is by definition out of date. It can have changed the millisecond after we read it from there and started using it. > I think your goal of having state management that's able to handle > network partitions is a good one, but I don't think the solution is as > simple as swapping out where the state is stored. It kinda is, and it kinda isn't. I never meant to suggest that just replacing the datastore would solve everything. We need to carefully look at our use of the data from the datastore and consider the impact of eventual consistency on this use. On the other hand, as I just mentioned above, this is a problem that exists right now, today. We're just ignoring it, because we happen to have a consistent datastore. > Maybe in some cases like split-racks the system needs to react to a > network partition by forming its own independent cell with its own > state storage, and when the network heals it then merges back into the > other cluster cleanly? That would be very difficult to implement, but > fun (for some definition of fun). Fun, but possible. Riak was designed for this. With an RDBMS I don't even know how to begin solving something like that. 
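The Riak write behaviour discussed in this thread — a write that fails to reach its required number of replicas is reported as unsuccessful, but is not rolled back on the replicas that did accept it — can be modelled with a toy in-memory sketch. This is purely illustrative and not the Riak client API:

```python
class ReplicatedStore:
    """Toy model of Dynamo-style quorum writes: a write succeeds only
    if at least `w` of `n` replicas acknowledge it, and a failed write
    is NOT rolled back on the replicas that did accept it."""

    def __init__(self, n=3, w=2):
        self.replicas = [dict() for _ in range(n)]
        self.w = w

    def put(self, key, value, fail_replicas=()):
        acks = 0
        for i, replica in enumerate(self.replicas):
            if i in fail_replicas:
                continue  # simulated replica failure: this write is lost
            replica[key] = value
            acks += 1
        return acks >= self.w  # "success" only with a write quorum

    def get(self, key):
        # A read may see different values on different replicas; real
        # stores reconcile these via vector clocks and read repair.
        return [r.get(key) for r in self.replicas]
```

After a non-quorum write, `get()` shows divergent replicas: the caller was told the write failed, yet one replica holds the new value — exactly the "no rollback" property described above.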
> As a thought experiment, a while ago I considered what would happen if > instead of using a central store, I put a sqlite database behind every > daemon and allowed them to query each other for the data they needed, > and cluster if needed (using raft). > Services like nova-scheduler need strong consistency No, it doesn't. :) > and would have to cluster to perform their role, but services like > nova-compute would simply need to store the data concerning the > resources they are responsible for. This follows the 'place state at > the edge' kind of design principles that have been discussed in > various circles. It falls down in a number of pretty obvious ways, > and ultimately it would require more
Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting
I'm sorry about my slow responses. For some reason, gmail didn't think this was an important e-mail :( 2014-09-30 18:41 GMT+02:00 Jay Pipes : > On 09/30/2014 08:03 AM, Soren Hansen wrote: >> 2014-09-12 1:05 GMT+02:00 Jay Pipes : > How would I go about getting the associated fixed IPs for a network? > The query to get associated fixed IPs for a network [1] in Nova looks > like this: > > SELECT > fip.address, > fip.instance_uuid, [...] > AND fip.instance_uuid IS NOT NULL > AND i.host = :host > > would I have a Riak container for virtual_interfaces that would also > have instance information, network information, fixed_ip information? > How would I accomplish the query against a derived table that gets the > minimum virtual interface ID for each instance UUID? What's a minimum virtual interface ID? Anyway, I think Clint answered this quite well. >>> I've said it before, and I'll say it again. In Nova at least, the >>> SQL schema is complex because the problem domain is complex. That >>> means lots of relations, lots of JOINs, and that means the best way >>> to query for that data is via an RDBMS. [...] >> I don't think relying on a central data store is in any conceivable >> way appropriate for a project like OpenStack. Least of all Nova. >> >> I don't see how we can build a highly available, distributed service >> on top of a centralized data store like MySQL. [...] > I don't disagree with anything you say above. At all. Really? How can you agree that we can't "build a highly available, distributed service on top of a centralized data store like MySQL" while also saying that the best way to handle data in Nova is in an RDBMS? >>> For complex control plane software like Nova, though, an RDBMS is >>> the best tool for the job given the current lay of the land in open >>> source data storage solutions matched with Nova's complex query and >>> transactional requirements. >> What transactional requirements? 
> https://github.com/openstack/nova/blob/stable/icehouse/nova/db/sqlalchemy/api.py#L1654 > When you delete an instance, you don't want the delete to just stop > half-way through the transaction and leave around a bunch of orphaned > children. Similarly, when you reserve something, it helps to not have > a half-finished state change that you need to go clean up if something > goes boom. Looking at that particular example, it's about deleting an instance and all its associated metadata. As we established earlier, these are things that would just be in the same key as the instance itself, so it'd just be a single key that would get deleted. Easy. That said, there will certainly be situations where there'll be a need for some sort of anti-entropy mechanism. It just so happens that those situations already exist. We're dealing with a complex distributed system. We're kidding ourselves if we think that any kind of consistency is guaranteed, just because our data store favours consistency over availability. > https://github.com/openstack/nova/blob/stable/icehouse/nova/db/sqlalchemy/api.py#L3054 Sure, quotas will require stronger consistency. Any NoSQL data store worth its salt gives you primitives to implement that. >>> Folks in these other programs have actually, you know, thought about >>> these kinds of things and had serious discussions about >>> alternatives. It would be nice to have someone acknowledge that >>> instead of snarky comments implying everyone else "has it wrong". >> I'm terribly sorry, but repeating over and over that an RDBMS is "the >> best tool" without further qualification than "Nova's data model is >> really complex" reads *exactly* like a snarky comment implying >> everyone else "has it wrong". > Sorry if I sound snarky. I thought your blog post was the definition > of snark. I don't see the relevance of the tone of my blog post? 
You say it would be nice if people did something other than offer snarky comments implying everyone else "has it wrong". I'm just pointing out that such requests ring really hollow when put forth in the very e-mail where you snarkily tell everyone else that they have it wrong. Since you did bring up my blog post, I really am astounded you find it snarky. It was intended to be constructive and forward looking. The first one in the series, perhaps, but certainly not the one linked in this thread. Perhaps I need to take writing classes. -- Soren Hansen | http://linux2go.dk/ Ubuntu Developer | http://www.ubuntu.com/ OpenStack Developer | http://www.openstack.org/
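The strong-consistency primitives mentioned earlier in this message for quotas are typically conditional updates (compare-and-set). A hedged sketch of quota reservation built on such a primitive, using an in-memory stand-in rather than any real datastore's client:

```python
import threading

class CASStore:
    """In-memory stand-in for a datastore offering compare-and-set."""
    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def get(self, key):
        return self._data.get(key)  # None if the key has never been set

    def cas(self, key, expected, new):
        """Atomically set key to `new` only if it still holds `expected`."""
        with self._lock:
            if self._data.get(key) != expected:
                return False  # lost a race: someone changed it first
            self._data[key] = new
            return True

def reserve_quota(store, key, amount, limit, retries=10):
    """Reserve `amount` units against `limit`, retrying lost CAS races."""
    for _ in range(retries):
        current = store.get(key)
        used = current or 0
        if used + amount > limit:
            return False  # reservation would exceed the quota
        if store.cas(key, current, used + amount):
            return True   # reservation committed atomically
    raise RuntimeError('too much contention, giving up')
```

Two racing reservations can both read the same `used` value, but only one CAS succeeds; the loser re-reads and retries, so the quota can never be oversubscribed without any global transaction.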
Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting
2014-09-12 1:05 GMT+02:00 Jay Pipes : > If Nova was to take Soren's advice and implement its data-access layer > on top of Cassandra or Riak, we would just end up re-inventing SQL > Joins in Python-land. I may very well be wrong(!), but this statement makes it sound like you've never used e.g. Riak. Or, if you have, not done so in the way it's supposed to be used. If you embrace an alternative way of storing your data, you wouldn't just blindly create a container for each table in your RDBMS. For example: In Nova's SQL-based datastore we have a table for security groups and another for security group rules. Rows in the security group rules table have a foreign key referencing the security group to which they belong. In a datastore like Riak, you could have a security group container where each value contains not just the security group information, but also all the security group rules. No joins in Python-land necessary. > I've said it before, and I'll say it again. In Nova at least, the SQL > schema is complex because the problem domain is complex. That means > lots of relations, lots of JOINs, and that means the best way to query > for that data is via an RDBMS. I was really hoping you could be more specific than "best"/"most appropriate" so that we could have a focused discussion. I don't think relying on a central data store is in any conceivable way appropriate for a project like OpenStack. Least of all Nova. I don't see how we can build a highly available, distributed service on top of a centralized data store like MySQL. Tens or hundreds of thousands of nodes, spread across many, many racks and datacentre halls are going to experience connectivity problems[1]. 
This means that some percentage of your infrastructure (possibly many thousands of nodes, affecting many, many thousands of customers) will find certain functionality not working on account of your datastore not being reachable from the part of the control plane they're attempting to use (or possibly only being able to read from it). I say over and over again that people should own their own uptime. Expect things to fail all the time. Do whatever you need to do to ensure your service keeps working even when something goes wrong. Of course this applies to our customers too. Even if we take the greatest care to avoid downtime, customers should spread their workloads across multiple availability zones and/or regions and probably even multiple cloud providers. Their service towards their users is their responsibility. However, our service towards our users is our responsibility. We should take the greatest care to avoid having internal problems affect our users. Building a massively distributed system like Nova on top of a centralized data store is practically a guarantee of the opposite. > For complex control plane software like Nova, though, an RDBMS is the > best tool for the job given the current lay of the land in open source > data storage solutions matched with Nova's complex query and > transactional requirements. What transactional requirements? > Folks in these other programs have actually, you know, thought about > these kinds of things and had serious discussions about alternatives. > It would be nice to have someone acknowledge that instead of snarky > comments implying everyone else "has it wrong". I'm terribly sorry, but repeating over and over that an RDBMS is "the best tool" without further qualification than "Nova's data model is really complex" reads *exactly* like a snarky comment implying everyone else "has it wrong". 
[1]: http://aphyr.com/posts/288-the-network-is-reliable -- Soren Hansen | http://linux2go.dk/ Ubuntu Developer | http://www.ubuntu.com/ OpenStack Developer | http://www.openstack.org/
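The security-group example from earlier in this message can be made concrete. With plain dicts standing in for a key/value bucket (this is not the real Riak client API), the group and all of its rules live under a single key:

```python
import json

# Stands in for a single key/value container, e.g. a Riak bucket.
bucket = {}

def put_security_group(name, description, rules):
    """Store a group and its rules together under one key (denormalized)."""
    bucket[name] = json.dumps({
        'name': name,
        'description': description,
        'rules': rules,  # embedded: no separate rules table, no foreign key
    })

def get_security_group(name):
    """One fetch returns the group and every rule -- no join needed."""
    return json.loads(bucket[name])

put_security_group('web', 'front-end servers', [
    {'protocol': 'tcp', 'from_port': 80, 'to_port': 80, 'cidr': '0.0.0.0/0'},
    {'protocol': 'tcp', 'from_port': 443, 'to_port': 443, 'cidr': '0.0.0.0/0'},
])
```

The trade-off is the one raised elsewhere in this thread: the rules are duplicated nowhere here, but any data that *is* denormalized into several keys can no longer be updated in one atomic step.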
Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting
2014-09-26 17:11 GMT+02:00 Jay Pipes : > On 09/26/2014 06:45 AM, Soren Hansen wrote: >> Define "best". > best == most appropriate. #copout -- Soren Hansen | http://linux2go.dk/ Ubuntu Developer | http://www.ubuntu.com/ OpenStack Developer | http://www.openstack.org/
Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting
2014-09-12 1:05 GMT+02:00 Jay Pipes : > If Nova was to take Soren's advice and implement its data-access layer on > top of Cassandra or Riak, we would just end up re-inventing SQL Joins in > Python-land. I've said it before, and I'll say it again. In Nova at least, > the SQL schema is complex because the problem domain is complex. That means > lots of relations, lots of JOINs, and that means the best way to query for > that data is via an RDBMS. Define "best". /Soren
Re: [openstack-dev] [Devstack] add support for ceph
st as I understand it) to exactly be the tool to let us test things in a reproducible fashion, but it seems to me that you're saying that things need to already be covered by other methods of testing before they can get into devstack? Had devstack had support for Ceph, someone could have much more easily been running these tests in an automated, continuous fashion and have raised these issues at a more appropriate time. > If the user is pulling the devstack plugin from a 3rd party location, > then it's clear where the support needs to come from. If it's coming > from devstack, people are going to be private message pinging me on > IRC when it doesn't work (which happens all the time). Is your suggestion for people to maintain forks of devstack for things like this? That's certainly a solution, but not what I'd have expected. -- Soren Hansen | http://linux2go.dk/ Ubuntu Developer | http://www.ubuntu.com/ OpenStack Developer | http://www.openstack.org/
Re: [openstack-dev] [Horizon] User Signup
On 15/02/2014 at 00.19, "Adam Young" wrote: >> Could you please spend 5 minutes on the blueprint https://blueprints.launchpad.net/horizon/+spec/user-registration and add your suggestions in the white board. > Does it make sense for this to be in Keystone first, and then Horizon just consumes it? I would think that "user-registration-request" would be a reasonable Keystone extension. Then, you would add a role "user-approver" for a specific domain to approve a user, which would trigger the create event. This makes perfect sense to me. /Soren
Re: [openstack-dev] [Horizon] User Signup
2014-02-10 17:03 GMT+01:00 Kieran Spear : > On 10 February 2014 08:27, Soren Hansen wrote: >> I agree that putting admin credentials on a public web server is a >> security risk, but I'm not sure why a set of restricted admin >> credentials that only allow you to create users and tenants is a >> bigger problem than the credentials for a separate registration service >> that performs the exact same operations? > The third (and most dangerous) operation here is the role grant. I > don't think any Keystone policy could be specific enough to prevent > arbitrary member role assignment in this case. Fair enough. That seems like something we should fix, though. It really seems to me like adding this intermediate service is an overly complicated (although necessary given the current constraints) approach. User registration seems like something that very much falls under Keystone's domain:

* Keystone should abstract any and all interaction with the user database. Having another service that adds things directly to MySQL or LDAP seems wrong to me.
* Having a component whose only job is to talk to Keystone really screams to me that it ought to be part of Keystone.

Perhaps a user registration API extension that lets you pass just username/password/whatever and then it creates the relevant things on the backend in a way that's configured in Keystone. I.e. it validates the request and then creates the user and tenant and grants the appropriate roles. As I see it, if we don't trust Keystone's security, we're *so* screwed anyway. This needs to work. -- Soren Hansen | http://linux2go.dk/ Ubuntu Developer | http://www.ubuntu.com/ OpenStack Developer | http://www.openstack.org/
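The registration extension suggested above does not exist, so the following only sketches the proposed flow — validate the request, then create the user, tenant, and a non-admin role grant in one place inside the identity service. Plain dicts stand in for Keystone's backends; none of this is real Keystone code or its extension API:

```python
# Hypothetical stand-ins for Keystone's user/tenant/assignment backends.
users, tenants, grants = {}, {}, []

def register(username, password, email):
    """Handle a public signup entirely inside the identity service."""
    if not username or username in users:
        raise ValueError('invalid or duplicate username')
    if len(password) < 8:
        raise ValueError('password too short')
    tenants[username] = {'name': username, 'enabled': True}
    users[username] = {'name': username, 'email': email,
                       'password': password}  # a real backend hashes this
    # Grant only the plain member role -- never admin -- so the public
    # signup path cannot be abused for privilege escalation.
    grants.append((username, username, 'Member'))
    return users[username]
```

The point of the sketch is the trust boundary: the public-facing service submits only a signup request, and the validation plus the role grant happen behind Keystone's own policy enforcement rather than with admin credentials on a web server.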
Re: [openstack-dev] [Horizon] User Signup
I've just taken a look at the feedback in the whiteboard. If it's ok, I'd like to take this discussion back to the mailing list. I find the whiteboards somewhat clumsy for discussions. Akihiro Motoki points out that all services should work without the dashboard. Keystone already exposes an API to create new users, so that requirement is already fulfilled, whether there's an intermediate service or not, so I don't really understand this objection. Kieran Spear argues in favour of a separate registration service that Horizon talks to over some sort of RPC interface. He argues that putting Keystone admin credentials on a public-facing webserver is a security risk. I agree that putting admin credentials on a public web server is a security risk, but I'm not sure why a set of restricted admin credentials that only allow you to create users and tenants is a bigger problem than the credentials for a separate registration service that performs the exact same operations? Soren Hansen | http://linux2go.dk/ Ubuntu Developer | http://www.ubuntu.com/ OpenStack Developer | http://www.openstack.org/ 2014-02-01 18:24 GMT+01:00 Saju M : > Hi folks, > > Could you please spend 5 minutes on the blueprint > https://blueprints.launchpad.net/horizon/+spec/user-registration and add > your suggestions in the white board. > > > Thanks, > > ___ > OpenStack-dev mailing list > OpenStack-dev@lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >
Re: [openstack-dev] Hierarchicical Multitenancy Discussion
2100 UTC is 1 PM Pacific. :-) On 29/01/2014 at 17.01, "Vishvananda Ishaya" wrote: > I apologize for the confusion. The Wiki time of 2100 UTC is the correct > time (Noon Pacific time). We can move the next meeting to a different > day/time that is more convenient for Europe. > > Vish > > > On Jan 29, 2014, at 1:56 AM, Florent Flament < > florent.flament-...@cloudwatt.com> wrote: > > > Hi Vishvananda, > > > > I would be interested in such a working group. > > Can you please confirm the meeting hour for this Friday ? > > I've seen 1600 UTC in your email and 2100 UTC in the wiki ( > https://wiki.openstack.org/wiki/Meetings#Hierarchical_Multitenancy_Meeting). > As I'm in Europe I'd prefer 1600 UTC. > > > > Florent Flament > > > > - Original Message - > > From: "Vishvananda Ishaya" > > To: "OpenStack Development Mailing List (not for usage questions)" < > openstack-dev@lists.openstack.org> > > Sent: Tuesday, January 28, 2014 7:35:15 PM > > Subject: [openstack-dev] Hierarchicical Multitenancy Discussion > > > > Hi Everyone, > > > > I apologize for the obtuse title, but there isn't a better succinct term > to describe what is needed. OpenStack has no support for multiple owners of > objects. This means that a variety of private cloud use cases are simply > not supported. Specifically, objects in the system can only be managed on > the tenant level or globally. > > > > The key use case here is to delegate administration rights for a group > of tenants to a specific user/role. There is something in Keystone called a > “domain” which supports part of this functionality, but without support > from all of the projects, this concept is pretty useless. > > > > In IRC today I had a brief discussion about how we could address this. 
I > have put some details and a straw man up here: > > > > https://wiki.openstack.org/wiki/HierarchicalMultitenancy > > > > I would like to discuss this strawman and organize a group of people to > get actual work done by having an irc meeting this Friday at 1600UTC. I > know this time is probably a bit tough for Europe, so if we decide we need > a regular meeting to discuss progress then we can vote on a better time for > this meeting. > > > > > https://wiki.openstack.org/wiki/Meetings#Hierarchical_Multitenancy_Meeting > > > > Please note that this is going to be an active team that produces code. > We will *NOT* spend a lot of time debating approaches, and instead focus on > making something that works and learning as we go. The output of this team > will be a MultiTenant devstack install that actually works, so that we can > ensure the features we are adding to each project work together. > > > > Vish > > > > ___ > > OpenStack-dev mailing list > > OpenStack-dev@lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > ___ > > OpenStack-dev mailing list > > OpenStack-dev@lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > ___ > OpenStack-dev mailing list > OpenStack-dev@lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Hat Tip to fungi
+1. Thanks for all your hard work. On 16/01/2014 at 19.17, "Anita Kuno" wrote: > Thank you, fungi. > > You have kept openstack-infra running for the last 2 weeks as the sole > plate-spinner whilst the rest of us were conferencing, working on the > gerrit upgrade or getting our laptop stolen. > > You spun up and configured two new Jenkinses (Jenkinsii?) and then dealt > with the consequences of one expansion of our system slowing down > another. With Jim's help from afar, Zuul is now on a faster server. [0] > > All this while dealing with the everyday business of keeping -infra > operating. > > I am so grateful for all you do. > > I tip my hat to you, sir. > > Anita. > > > [0] My sense is once Jim is back from vacation time he will provide a > detailed report on these changes.
Re: [openstack-dev] [Openstack] [OpenStack][Sentry]
I've not read the blueprint yet, but I think we'll need another name for it. I'm sure lots of us are running this Sentry in production: https://github.com/getsentry/sentry Soren Hansen | http://linux2go.dk/ Ubuntu Developer | http://www.ubuntu.com/ OpenStack Developer | http://www.openstack.org/ 2014/1/10 Anastasia Latynskaya : > Hello, OpenStack folks, > > we have one more idea how to improve our wonderful OpenStack=) We've made a > new concept named Sentry for host security attestation. And we need your > review and comments, please. > > There is a link: > https://blueprints.launchpad.net/sentry/+spec/sentry-general-architecture > > > Thanks! > > -- > Anastasia Latynskaya > Junior Software Engineer > Mirantis, Inc. > > ___ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openst...@lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >
Re: [openstack-dev] [nova] future fate of nova-network?
2013/11/22 John Garbutt : > Another approach to help with (1) is in Icehouse we remove the > features from nova-network that neutron does not implement. We have > warned about deprecation for a good few releases, so it's almost OK. You want to motivate Neutron developers by punishing users of nova-network who are probably already nervous that the rug will be pulled out from under them? I'm about a -1,000,000 on that one. -- Soren Hansen | http://linux2go.dk/ Ubuntu Developer | http://www.ubuntu.com/ OpenStack Developer | http://www.openstack.org/
Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime
I'd very much like to take part in the discussions. Depending on the outcome of said discussion, I may or may not want to participate in the implementation :) Soren Hansen | http://linux2go.dk/ Ubuntu Developer | http://www.ubuntu.com/ OpenStack Developer | http://www.openstack.org/ 2013/11/21 Robert Collins : > https://etherpad.openstack.org/p/icehouse-external-scheduler > > I'm looking for 4-5 folk who have: > - modest Nova skills > - time to follow a fairly mechanical (but careful and detailed work > needed) plan to break the status quo around scheduler extraction > > And of course, discussion galore about the idea :) > > Cheers, > Rob > > -- > Robert Collins > Distinguished Technologist > HP Converged Cloud > > ___ > OpenStack-dev mailing list > OpenStack-dev@lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova] Does Nova really need an SQL database?
2013/11/20 Chris Friesen :
> What about a hybrid solution?
> There is data that is only used by the scheduler--for performance reasons
> maybe it would make sense to store that information in RAM as described at
> https://blueprints.launchpad.net/nova/+spec/no-db-scheduler
>
> For the rest of the data, perhaps it could be persisted using some alternate
> backend.

What would that solve?

--
Soren Hansen | http://linux2go.dk/
Ubuntu Developer | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/
Re: [openstack-dev] [Nova] Does Nova really need an SQL database?
2013/11/18 Mike Spreitzer :
> There were some concerns expressed at the summit about scheduler
> scalability in Nova, and a little recollection of Boris' proposal to
> keep the needed state in memory.
> I also heard one guy say that he thinks Nova does not really need a
> general SQL database, that a NOSQL database with a bit of
> denormalization and/or client-maintained secondary indices could
> suffice.

I may have said something along those lines. Just to clarify -- since
you started this post by talking about scheduler scalability -- the main
motivation for using a non-SQL backend isn't scheduler scalability, it's
availability and resilience. I just don't accept the failure modes that
MySQL (and derivatives such as Galera) impose.

> Has that sort of thing been considered before?

It's been talked about on and off since... well, probably since we
started this project.

> What is the community's level of interest in exploring that?

The session on adding a backend using a non-SQL datastore was pretty
well attended.

--
Soren Hansen | http://linux2go.dk/
Ubuntu Developer | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/
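Purely to illustrate the "denormalization and/or client-maintained secondary indices" idea quoted above -- this is a toy sketch using plain dicts as a stand-in for a key-value store, and none of the names reflect an actual Nova backend:

```python
# Toy "NoSQL" datastore: two plain dicts stand in for a key-value backend.
instances = {}  # primary records: instance_id -> attributes
by_host = {}    # client-maintained secondary index: host -> set of instance ids

def save_instance(instance_id, host, state):
    """Write the primary record, then update the denormalized index
    ourselves -- the store gives us no JOINs or secondary indices for free."""
    old = instances.get(instance_id)
    if old and old["host"] != host:
        # Instance moved hosts: the client must fix up the old index entry.
        by_host[old["host"]].discard(instance_id)
    instances[instance_id] = {"host": host, "state": state}
    by_host.setdefault(host, set()).add(instance_id)

def instances_on_host(host):
    """'SELECT * FROM instances WHERE host = ?' becomes an index lookup
    followed by primary-key fetches."""
    return {i: instances[i] for i in by_host.get(host, set())}
```

The price of this approach is exactly what the thread hints at: every writer must keep the index consistent, which is the trade you make for escaping the SQL backend's failure modes.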
Re: [openstack-dev] A simple way to improve nova scheduler
2013/9/26 Joe Gordon :
>> Yes, when moving beyond simple flavours, the idea as initially proposed
>> falls apart. I see two ways to fix that:
>>
>> * Don't move beyond simple flavours. Seriously. Amazon have been pretty
>>   darn successful with just their simple instance types.
>
> Who says we have to support one scheduler model? I can see room for several
> scheduler models that have different tradeoffs, such as performance / scale
> vs features.

Sure. I didn't mean necessarily removing the support for richer instance
configurations, but simply making them easy to disable and thus enable
the O(1) scheduler.

--
Soren Hansen | http://linux2go.dk/
Ubuntu Developer | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/
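Keeping flavours simple is exactly what makes an O(1) scheduler possible. A minimal sketch of what that could look like (the class and method names are invented for illustration, not anything from Nova): hosts advertise free slots per flavour, and placement becomes a constant-time queue pop with no filtering or weighing pass at all.

```python
from collections import defaultdict, deque

class FlavorQueueScheduler:
    """O(1) scheduling when instance types are a small fixed set.

    Compute nodes announce which flavours they can currently accept;
    placing an instance is then a queue pop, not a scan over all hosts.
    """

    def __init__(self):
        self._free = defaultdict(deque)  # flavor -> hosts with a free slot

    def advertise(self, host, flavor):
        """A compute node announces it can take one more of `flavor`."""
        self._free[flavor].append(host)

    def schedule(self, flavor):
        """Pick a host in O(1); None means no capacity for this flavour."""
        queue = self._free[flavor]
        return queue.popleft() if queue else None
```

With rich per-instance constraints this breaks down, which is the point being conceded in the email above: the O(1) model only works if you refuse (or disable) anything beyond simple instance types.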
Re: [openstack-dev] A simple way to improve nova scheduler
Hey, sorry for necroposting. I completely missed this thread when it was
active, but Russell just pointed it out to me on Twitter earlier today
and I couldn't help myself.

2013/7/19 Sandy Walsh :
> On 07/19/2013 05:01 PM, Boris Pavlovic wrote:
> Sorry, I was commenting on Soren's suggestion from way back (essentially
> listening on a separate exchange for each unique flavor ... so no
> scheduler was needed at all). It was a great idea, but fell apart rather
> quickly.

I don't recall we ever really had the discussion, but it's been a while :)

Yes, when moving beyond simple flavours, the idea as initially proposed
falls apart. I see two ways to fix that:

* Don't move beyond simple flavours. Seriously. Amazon have been pretty
  darn successful with just their simple instance types.

* If you must make things complicated, use fanout to send a reservation
  request:
  - Send out reservation requests to everyone listening (*)
  - Compute nodes able to accommodate the request reserve the resources
    in question and respond directly to the requestor. Those unable to
    accommodate the request do nothing.
  - Requestor (scheduler, API server, whatever) picks a winner amongst
    the respondents and broadcasts a message announcing the winner of
    the request.
  - The winning node acknowledges acceptance of the task to the
    requestor and gets to work.
  - Every other node that responded also sees the broadcast and cancels
    the reservation.
  - Reservations time out after 5 seconds, so a lost broadcast doesn't
    result in reserved-but-never-used resources.
  - If no one has volunteered to accept the reservation request within a
    couple of seconds, broadcast wider.

(*) "Everyone listening" isn't necessarily every node. Maybe you have
topics for nodes that are at less than 10% utilisation, one for less
than 25% utilisation, etc. First broadcast to those at 10% or less, move
on to 20%, etc.

This is just off the top of my head. I'm sure it can be improved upon. A
lot.
My point is just that there are plenty of alternatives to the omniscient
schedulers that we've been used to for 3 years now.

--
Soren Hansen | http://linux2go.dk/
Ubuntu Developer | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/
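The reservation flow sketched in the email above could look roughly like this as an in-process toy (all class and function names are invented for illustration; a real version would use the message bus for the fanout and announcements, and "broadcast wider" is stubbed out here):

```python
import time
import uuid

RESERVATION_TTL = 5.0  # seconds; matches the timeout described above

class ComputeNode:
    def __init__(self, name, free_ram):
        self.name = name
        self.free_ram = free_ram
        self.reservations = {}  # request_id -> (ram, expiry)

    def on_reservation_request(self, request_id, ram):
        """Reserve resources and 'respond to the requestor' if we can
        accommodate the request; nodes that can't fit it stay silent."""
        self._expire_reservations()
        if self.free_ram >= ram:
            self.free_ram -= ram
            self.reservations[request_id] = (ram, time.time() + RESERVATION_TTL)
            return True
        return False

    def on_winner_announced(self, request_id, winner):
        """Every node sees the broadcast; losers cancel their reservation."""
        if request_id in self.reservations and winner != self.name:
            ram, _ = self.reservations.pop(request_id)
            self.free_ram += ram

    def _expire_reservations(self):
        """Timed-out reservations are released, so a lost broadcast
        doesn't leave resources reserved-but-never-used."""
        now = time.time()
        for rid, (ram, expiry) in list(self.reservations.items()):
            if expiry < now:
                del self.reservations[rid]
                self.free_ram += ram

def schedule(nodes, ram):
    """Requestor side: fan out, pick a winner, broadcast the result."""
    request_id = str(uuid.uuid4())
    volunteers = [n for n in nodes if n.on_reservation_request(request_id, ram)]
    if not volunteers:
        return None  # in the full design: broadcast wider, then give up
    winner = volunteers[0]  # any pick policy works; first respondent is simplest
    for n in nodes:
        n.on_winner_announced(request_id, winner.name)
    return winner
```

The interesting property is that no component needs a global view of cluster state: capacity knowledge stays on the compute nodes, and the requestor only ever sees the volunteers.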