Re: [Openstack] Future of Launchpad OpenStack mailing list (this list)

2012-08-28 Thread Andrew Clay Shafer
On Tue, Aug 28, 2012 at 3:14 PM, Brian Schott <
brian.sch...@nimbisservices.com> wrote:

> At the risk of getting bad e-karma for cross posting to
> openstack-operators, that might be the place to post that question.  I for
> one disagree that we should merge the openstack general list into
> openstack-operat...@lists.openstack.org and the only other vote I caught
> on this thread also disagreed.
>

Everyone who responded to the question so far opposed merging the general
list with operators.


>
> Several reasons:
>
> 1) new users will look for openst...@openstack.lists.openstack.org because
> openstack-dev@ and openstack-operators@ are both specific things.
> community@ might have been an option, but that is taken already.
> 2) operations guys are just as specialized as devs in terms of what they
> want to talk about; it isn't meant for general "why openstack" questions.
> 3) if/when you migrate email addresses / logs it will be easier to move
> them to a brand new list.  Otherwise you will have to try to merge history
> and not step on existing data.
> 4) reusing an email address for the sake of optimizing away "too many
> lists" at the cost of community confusion is a false optimization; you'll
> just get more non-dev traffic on the dev list if the choice is -dev
> or -operators.
>

yes, yes, yes and yes.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Future of Launchpad OpenStack mailing list (this list)

2012-08-28 Thread Andrew Clay Shafer
On Tue, Aug 28, 2012 at 11:59 AM, Francis J. Lacoste <
francis.laco...@canonical.com> wrote:

> On 12-08-27 08:32 PM, Andrew Clay Shafer wrote:
> > For exporting from Launchpad, surely someone at Canonical would be able
> > and willing to get that list of emails.
> >
>
> We can provide the mailing list pickle (Mailman 2) which contains all
> the email addresses as well as preferences.
>
> > If people think migrating the archive is important, then it shouldn't be
> > that hard to sort that out either, once we decide what is acceptable.
>
>
> Similarly, we can give you the mbox file from which the HTML archive is
> generated.
>


Thanks Francis

See, the system works... :)


Re: [Openstack] Future of Launchpad OpenStack mailing list (this list)

2012-08-27 Thread Andrew Clay Shafer
There are at least end users, operators and developers in the OpenStack
technical ecosystem.

There are also 'OpenStack' related discussions that aren't technical in
nature.

It doesn't seem right to make the operators list a catchall.

For exporting from Launchpad, surely someone at Canonical would be able and
willing to get that list of emails.

If people think migrating the archive is important, then it shouldn't be
that hard to sort that out either, once we decide what is acceptable.

regards,
Andrew


On Mon, Aug 27, 2012 at 7:38 PM, Brian Schott <
brian.sch...@nimbisservices.com> wrote:

> Stef,
>
> It's pretty obvious to me that there should be a general list at
> openst...@lists.openstack.org.  The operators list is intended for
> operations people that host OpenStack deployments, not a general OpenStack
> user audience.  I'd create the general openstack list, and set up a daily
> post to the LP list stating that the LP list will shut down on the last day
> of the summit along with descriptions and links to the foundation mailing
> lists and their purpose. Make the exact same info available on the wiki and
> in the docs.  The sooner we end the "right list" ambiguity the better.
>
> In terms of the old archives, can you export the old LP-hosted mailing
> list archives?  If so, the mailman archive file format is brain dead simple
> and a grad student somewhere could perl script it (or python it or ruby it
> or whatever they use these days) in an hour or so.  If not, it is OK to
> just link the old archives in the description of the new lists.
>
> Brian
>
>
> On Aug 27, 2012, at 6:54 PM, Stefano Maffulli 
> wrote:
>
> > Hello folks
> >
> > picking up this comment on the Development mailing list:
> >
> > On Mon 27 Aug 2012 02:08:48 PM PDT, Jason Kölker wrote:
> >> I've noticed that both this list and the old launchpad lists are being
> >> used. Which is the correct list?
> >
> > I sent the following message, with questions at the end that are better
> > answered on this list.
> >
> > The mailing list situation *at the moment* is summarized on
> > http://wiki.openstack.org/MailingLists
> >
> > To try to answer your question, the mailing list for the developers of
> > OpenStack to discuss development issues and the roadmap is
> > openstack-...@lists.openstack.org. It is focused on the
> > next release of OpenStack: you should post on this list if you are a
> > contributor to OpenStack or are very familiar with OpenStack
> > development and want to discuss very specific topics, contribution ideas
> > and the like. Do not send support requests to this list.
> >
> >
> > The old Launchpad list (this list) should be closed so we don't rely on
> > Launchpad for mailing lists anymore. Last time we talked about this I
> > don't think we reached consensus on how to move things around and where
> > to land this General mailing list. A few people suggested using the
> > existing openstack-operators mailing list as the General list, therefore
> > not creating anything new.
> >
> > Moving a group of over 4000 people from one list on Launchpad to
> > another on our mailman is scary. Unfortunately we can't export the list
> > of email addresses subscribed to Launchpad and invite them to another
> > list (LP doesn't allow that).  The first question is:
> >
> > * where would people go for general openstack usage questions (is
> > 'operators' the best fit?)
> >
> > * Then, what do we do with Launchpad mailing list archives?
> >
> > If we find an agreement we can aim at closing the old LP-hosted mailing
> > list around the summit, when we will be able to announce the new list
> > destination to many people.
> >
> > Thoughts?
> >
> > /stef
> >
>
>
>
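Brian's point above about scripting the old mailman archives is sound: the mbox format is directly readable with Python's stdlib `mailbox` module. A minimal sketch, where the sample message, file path, and subject are fabricated for illustration:

```python
import mailbox
import os
import tempfile

# Fabricated one-message mbox file, standing in for the real
# LP-hosted list archive.
sample = (
    "From alice@example.com Mon Aug 27 18:54:00 2012\n"
    "From: alice@example.com\n"
    "Subject: [Openstack] test message\n"
    "\n"
    "Hello list.\n"
    "\n"
)
path = os.path.join(tempfile.mkdtemp(), "openstack.mbox")
with open(path, "w") as f:
    f.write(sample)

# mailbox.mbox handles the "From " message delimiters for us.
archive = mailbox.mbox(path)
subjects = [msg["Subject"] for msg in archive]
print(subjects)  # → ['[Openstack] test message']
```

From there, re-serializing into whatever the new archive needs is the easy part, which is essentially Brian's "an hour or so" claim.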


Re: [Openstack] Gold Member Election Update

2012-08-16 Thread Andrew Clay Shafer
Cisco - Lew Tucker
> Cloudscaling - Randy Bias
> Dell - John Igoe
> Dreamhost - Simon Anderson
> ITRI/CCAT - Dr. Tzi-cker Chiueh
> Mirantis - Boris Renski
> Piston - Joshua McKenty
> Yahoo! - Sean Roberts
>

Congratulations to the newly elected board members.

Do we have an oath of office?


Re: [Openstack] RedHAt * OPenSTack

2012-08-14 Thread Andrew Clay Shafer
First you fill out that form; then you need a Red Hat login, so either
create one or log in; then you fill out another form; then you land on a
wait list; then you get an email that the subscription is active. I
received that email.

You can't do anything unless you have running Red Hat licenses, and even
then there aren't really any instructions in that path of forms and emails
(either that, or they were not obvious and I totally missed them). You have
to know Red Hat and how the subscriptions work.

The way it works now acts as a filter for existing Red Hat customers more
than as a path to a generic install of OpenStack.


On Tue, Aug 14, 2012 at 3:16 PM, Joshua Harlow wrote:

> Are signups taking a while??
>
> Anyone else got the email yet, I think they lost mine, sad++
>
> On 8/14/12 11:57 AM, "Frans Thamura"  wrote:
>
> >hi all
> >
> >Red Hat just posted on its wall
> >
> >openstack..
> >
> >http://www.redhat.com/openstack/?sc_cid=7016000TmB8AAK
> >
> >
> >--
> >Frans Thamura (曽志胜)
> >Shadow Master and Lead Investor
> >Meruvian.
> >Integrated Hypermedia Java Solution Provider.
> >
> >Mobile: +628557888699
> >Blog: http://blogs.mervpolis.com/roller/flatburger (id)
> >
> >FB: http://www.facebook.com/meruvian
> >TW: http://www.twitter.com/meruvian / @meruvian
> >Website: http://www.meruvian.org
> >
> >"We grow because we share the same belief."
> >
>
>


Re: [Openstack] Setting Expectations

2012-08-14 Thread Andrew Clay Shafer
> These are important and difficult questions. As you say, OpenStack is
> many different things to different people. So far we have survived while
> avoiding a clear answer, mostly because we had no good way of coming
> up with answers. That ultimately creates tension between participants in
> our community when the different models clash.
>

Yes, these are difficult questions.

I don't agree with the assertion that there was no good way of coming up
with answers, but for a variety of reasons, we did not.

You say OpenStack has survived, but I believe we may have compounded and
multiplied the challenges OpenStack faces by collectively neglecting to
resolve this. Without going into all the technical necessity and political
complexity, I would argue we allowed OpenStack to fragment at the project
level. Without a unified conscience of purpose, the fragmentation only gets
magnified at the point where users interact with different deployments.

I want to also respond to the idea that OpenStack can be seen like the
Linux kernel. This is a point I made and articulated early in the OpenStack
discussion.

The artifacts of my using that analogy date back to the Fall of 2010:
http://www.slideshare.net/littleidea/open-stack-sdforum/44
http://www.slideshare.net/littleidea/openstack-summit-a-community-of-service-providers/27

I don't believe that the kernel is a perfect analogy, but even if it were,
the one sentence 'OpenStack is like the Linux kernel' would not make it so.

Linus Torvalds provides both technical oversight and the kind of conscience
I keep referring to.

What is the OpenStack equivalent of this?
https://lkml.org/lkml/2012/3/8/495

I suggest everyone read the whole email from Linus at that link.

On some level, this attitude is what prevents a preponderance of the
tension we have recently seen on the OpenStack mailing lists. Granted, it
implies other, more pointed conflict, but some of that is Linus being
Linus. The very real choice in these types of projects is between resolving
open conflict early and often, or sublimating conflicts that tend to erupt
with a vengeance later.


> My hope is that the formation of the Foundation will help provide a
> forum for this discussion, and a mechanism to come up with clearer
> answers. I actually see that as the main mission of the Foundation for
> the first year.


I share this hope, but I also don't think we should abdicate all
responsibility for this to the Foundation.

We are all ostensibly individual members of the foundation, if not
corporate members.

OpenStack will be what we collectively make it.

Cheers,
Andrew


[Openstack] Setting Expectations

2012-08-10 Thread Andrew Clay Shafer
What is OpenStack?

Clearly, OpenStack is many things to many people and organizations.

What does it mean to contribute to OpenStack? What does it mean to deploy
OpenStack? What does it mean to operate OpenStack?

What do we mean when we say compatible? interoperable? community? branded?

Is OpenStack a framework? a project? a product?

Recent discussions make it clear that we have a lot of different ideas
about all of these things.

Our collective and individual responsibilities to each other are also a
point of tension.

There is a marked difference in the perspective of those developing the
projects, those operating the projects as services and the final
consumers/clients of those services.

If OpenStack is going to live up to its full potential and stated mission,
I believe there needs to be a much stronger collective conscience about how
decisions are made.

I feel we will all benefit by making some things more explicit.

If the expectation is that OpenStack is a framework, which is a word I've
heard people use many times, then does an upgrade path have to exist?

The OpenStack dashboard was essentially rewritten to upgrade to a new
version of Django. Was there any expectation that Django should upgrade
itself for us?

Upgrading an application to use a different version of Rails, another
framework, often borders on impossible, particularly if the application
happens to have some feature with a dependency on a gem that hasn't been
kept in sync with the upstream project.

Is OpenStack more or less complicated than those frameworks? What
responsibility should OpenStack core development have to consider existing
deployments? Frameworks are expected to be a foundation to build on. By
definition, changing foundations is not easy. Clearly, OpenStack can be
deployed and operated, but if OpenStack needs to be easier to deploy,
operate and upgrade, and that is a responsibility of OpenStack itself, that
can't be something that gets tacked on at the end. There is no 'ease of
deployment' powder to sprinkle on at the end.

Distributions and tooling can and do obscure the difficulty for the
downstream user, but that also leads to a lot of potential fragmentation.
And is that the right answer? Who can and should answer that?

That OpenStack should be easy to deploy and upgrade is somewhat at odds
with OpenStack supporting every possible combination of hypervisor, storage
and networking option, let alone what the expectation should be with closed
source customizations/integrations.

Which brings up questions of compatibility. API compatibility is
potentially misleading if the semantics and behaviors vary. I've heard
several service providers discuss ideas about how they can differentiate
themselves in the market, and many of those ideas lead to differences in
the APIs they expose. Is that wrong? Maybe, maybe not, but it certainly
makes for a lot of work if your core business depends on maintaining
integrations with service providers. Taken to an extreme, these decisions
complicate and call into question any future of federated OpenStack
services.

I'm not advocating any choice here.

I just want to point out there are compromises that have to be made. There
are goals and desires for OpenStack that are at odds with each other.

Some of that is a function of each persona's perspective, but a lot also
comes from fundamental differences in understanding about where OpenStack
is, where OpenStack needs to be, and how to get there.

If there isn't a core guiding conscience about what we are trying to
accomplish that gets applied across the board, then I worry that the future
of OpenStack ends up with more fragments optimizing for their own
perspective, and inevitable skirmishes when those perspectives collide.

I see there are many conversations we aren't having, which leads to all the
unaddressed issues surfacing when someone does try to involve the
community in discussions.

OpenStack can't be all things, but we get to decide what it will be.

The question is whether we will do that explicitly and consciously, or
indirectly and passively.

There is no one person who can address this alone.

I'm hoping this can start a conversation.

Best Regards,
Andrew


Re: [Openstack] [swift] Operational knowledge sharing

2012-08-10 Thread Andrew Clay Shafer
Thanks for sharing.



On Fri, Aug 10, 2012 at 12:31 PM, John Dickinson  wrote:

> In a standard swift deployment, the proxy server is running behind a load
> balancer and/or an SSL terminator. At SwiftStack, we discovered an issue
> that may arise from some config parameters in this layer, and we'd like to
> share it with other swift deployers.
>
> Symptom:
>
> Users updating metadata (ie POST) on larger objects get 503 error
> responses. However, there are no error responses logged by swift.
>
> Cause:
>
> Since POSTs are implemented, by default, as a server-side copy in swift
> and there is no traffic between the user and swift during the server-side
> copy, the LB or SSL terminator times out before the operation is done.
>
> Solution:
>
> Two options:
>
> 1) Raise the timeout in the LB/SSL terminator config. For example, with
> pound, change the "TimeOut" for the swift backend; pound defaults to 15
> seconds. The appropriate value is however long it takes to do a server-side
> copy of your largest object. If you have a 1gbps network, it will take
> about 160 seconds to copy a 5GB object ((8*5*2**30)/((2**30)/4) -- the
> divide by 4 is because the 1gbps link is used to read one stream (the
> original) and write the new copy (3 replicas)).
>
> 2) Change the behavior of POSTs to not do a server-side copy. This will
> make POSTs faster, but it will prevent all metadata values from being
> updated (notably, Content-Type will not be able to be modified with a
> POST). Also, this will not make the issue go away with user-initiated
> server-side copies.
>
> I would recommend the first solution, unless your workload makes heavy use
> of POSTs.
>
> Hope this helps.
>
> --John
>
>
>
>
>
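John's 160-second figure can be reproduced with a quick back-of-the-envelope helper. The function name and defaults below are mine, not part of swift; it simply encodes the formula from the email:

```python
# Time to server-side copy an object when one shared link carries the
# read stream plus the replica writes, so the effective rate is
# link_bps / streams (streams=4 matches the 1-read-3-writes case above).
def copy_timeout_seconds(object_bytes, link_bps=2**30, streams=4):
    bits = 8 * object_bytes
    return bits / (link_bps / streams)

# 5 GB object over a 1gbps link: matches the ~160 s in the thread, so a
# pound "TimeOut" well above 160 would be needed for 5 GB objects.
print(copy_timeout_seconds(5 * 2**30))  # → 160.0
```

Plugging in your own largest object size and link speed gives a defensible lower bound for the LB/SSL-terminator timeout.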


Re: [Openstack] Making the RPC backend a required configuration parameter

2012-08-08 Thread Andrew Clay Shafer
Is there a good reason NOT to do this?


On Wed, Aug 8, 2012 at 4:35 PM, Eric Windisch  wrote:

> I believe that the RPC backend should no longer have any default.
>
>
>
> Historically, it seems that the Kombu driver is default only because it
> existed before all others and before there was an abstraction. With
> multiple implementations now available, it may be time for a change.
>
> Why?
> * A default skews the attitudes and subsequent architectures toward a
> specific implementation
>
>
> * A default skews the practical testing scenarios, ensuring maturity of
> one driver over others.
> * The kombu driver does not work "out of the box", so it is no more
> reasonable as a default than impl_fake.
> * The RPC code is now in openstack-common, so addressing this later will
> only create additional technical debt.
>
> My proposal is that for Folsom, we introduce a "future_required" flag on
> the configuration option, "rpc_backend". This will trigger a WARNING
> message if the rpc_backend configuration value is not set.  In Grizzly, we
> would make the rpc_backend variable mandatory in the configuration.
>
> Mark McLoughlin wisely suggested this come before the mailing list, as it
> will affect a great many people. I welcome feedback and discussion.
>
> Regards,
> Eric Windisch
>
>
>
>
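Eric's "warn in Folsom, require in Grizzly" plan is a staged deprecation of a config default. A rough sketch of the mechanics, using a plain dict in place of the real openstack-common configuration machinery (the names and the fallback value here are illustrative assumptions, not actual nova identifiers):

```python
import warnings

KOMBU_DRIVER = "nova.rpc.impl_kombu"  # illustrative default value

def get_rpc_backend(conf):
    """Return the configured RPC backend, warning when the caller
    relies on the soon-to-be-removed implicit default."""
    backend = conf.get("rpc_backend")
    if backend is None:
        warnings.warn(
            "rpc_backend is unset; it currently defaults to the kombu "
            "driver but will become mandatory in a future release.",
            FutureWarning,
        )
        backend = KOMBU_DRIVER
    return backend

# Explicit configuration: no warning, value passed through as-is.
print(get_rpc_backend({"rpc_backend": "nova.rpc.impl_zmq"}))  # → nova.rpc.impl_zmq
```

The "future_required" flag Eric describes would generalize this pattern so any option can be marked as warn-now/require-later.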


Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-14 Thread Andrew Clay Shafer
I disagree with your last point; it is true if we look only at this
> particular problem, but if you look at the whole ecosystem you'll realize
> that the code removal of nova-volumes is not the only change from essex to
> folsom.. if we had deprecated all the other changes, this particular one
> would not be painful at all.
>

I'm not sure what you are disagreeing with or advocating.

We should expect code to change between releases, no?

As this thread demonstrated, we collectively have done a poor job of
communicating and managing those changes.

It is precisely because I expect more changes that I state that option 2 is
postponing risks, not lowering them.

I'm not saying you are wrong, or that my assumptions are correct, but I
don't understand what you disagree with.


>
>>
>> [...]
>> On the question of compatibility, my assumption is this a non-issue for
>> the moment.
>>
>
> I believe it wouldn't be an issue if people were not using OpenStack, but
> we are..
>

I thought it was clear from the context of the thread and my email that
compatibility in this case is in reference to the consumers of the API and
not to the differences for managing deployments.



>
> On the question of an upgrade path and the relative ease of deployment, my
>> assumption is this is no worse than any of the other upgrades.
>>
>
> It doesn't really mean a good thing, since I don't think that the other
> upgrades were good, based on what I heard and experienced with the
> sysadmins on my team...
>

I totally agree that it's not a good thing.

Do we believe that keeping nova-volumes will make it painless?


> [...]
>> In specific, I think getting more information from operators and users is
>> generally good. Ultimately, if OpenStack cannot be operated and utilized,
>> the mission will fail.
>>
>
> I agree! (finally :P)
> I also think that it's our responsibility (as developers) to ask for input
> from operators; just because they are not complaining doesn't mean things
> are going smoothly. We should ask everyone we know who's working with
> OpenStack and do our best to get feedback.
>

Definitely!

There are also different sets of unstated assumptions about what OpenStack
is or should be that have to be resolved, or we are going to keep running
into these types of situations.

Those assumptions definitely create the situation we are facing, but they
aren't unique to Cinder/Volumes, so I will start another thread.


Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-13 Thread Andrew Clay Shafer
day.

As for community, I am personally disappointed by how past decisions were
made about lots of things. We can do better. The issues and emotions raised
in this thread should not be dismissed; doing so puts the potential of
OpenStack in peril.

Regards,
Andrew Clay Shafer


Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-11 Thread Andrew Clay Shafer
One vote for option 1.

Remove Volumes


Re: [Openstack] [metering] Do we need an API and storage?

2012-05-16 Thread Andrew Clay Shafer
+1

On Wed, May 16, 2012 at 2:00 PM, Francis J. Lacoste <
francis.laco...@canonical.com> wrote:

> Hi,
>
> The whole API discussion made me wonder if this part of the
> architecture is worth keeping.
>
> The main use case for the metering API is so that billing systems can be
> integrated in OpenStack. We have the assumption that any billing system
> will need an integration layer. I think this is a fair assumption.
>
> But at the same time, we are forcing the integration to be made around a
> polling model. From time to time, poll the metering API to create
> billing artefacts.
>
> I'm now of the opinion that we should exclude storage and the API from the
> metering project scope. Let's just focus on defining a metering message
> format, a bus, and maybe a client library to make it easy to write
> metering consumers.
>
> That way we avoid building a lot of things that we only _think will be
> useful_ for potential billing integration. Only writing/delivering such
> an integration component would prove that we built at least something
> that is useful.
>
> Cheers
>
> --
> Francis J. Lacoste
> francis.laco...@canonical.com
>
>
>
>


Re: [Openstack] i18n of log message

2012-05-07 Thread Andrew Clay Shafer
I would vote for 0 or 1 on the cost versus benefit. Option 0 is the least
overhead, but option 1 would be nice for a lot of reasons.

The downside to i18n of logs and errors is that the dilution of information
available for finding solutions can outweigh the benefit of providing
messages in a native language.

The level of effort to provide option 3 is certainly much, much higher.

I'd vote for effort to go to improving the OpenStack core technology and
features over something that adds a lot of overhead and also some downside.


On Mon, May 7, 2012 at 3:40 AM, Ying Chun Guo  wrote:

> I will vote for option 3, because I think API-user-facing messages are as
> important as user interface messages. Since the workload of option 3 is
> not much more than that of option 2, option 3 will be a better choice.
>
> btw, I see documentation, e.g. the OpenStack manuals, is excluded from
> these four options. Does that mean there are no objections to the
> globalization of documentation?
>
> Regards
> Daisy
>
>


Re: [Openstack] [Metering] how should it be done ? ( Was: schema and counter definitions)

2012-05-02 Thread Andrew Clay Shafer
>
>
> It would be better if all OpenStack core components agreed on unified
> interfaces / messages for metering that would be easy to harvest without
> installing agents on nodes. This is also true for many services outside of
> the OpenStack eco-system. However, much in the same way munin and nagios
> plugins are developed outside of the project for which they provide
> graphing and monitoring (for instance we recently published swift munin
> plugins in the repository where ops usually look for them :
> https://github.com/munin-monitoring/contrib/tree/master/plugins/swift and
> there is no glance plugin or up-to-date nova plugins yet), metering agents
> will be developed separately, most of the time.
>

This is Conway's Law manifest.

The people developing nagios plugins are often in an operator's role,
running software they didn't necessarily write (neither what they have
deployed as a service nor the monitoring framework).

People make it work, but it's rarely the best solution.

Regardless of the monitoring solution, having application level metrics
exposed by the application and implemented by people who understand the
application has always led to a qualitatively better solution in my
experience.

As I wrote in a previous mail, once we manage to provide an implementation
> that proves useful, we will be in a position to approach the core OpenStack
> components.


I don't follow this statement.


> Integrating the metering agents as part of the core component, much in the
> same way it's currently done in nova.


What specifically is done?

If metering is not integrated in the beginning it will likely never be.


> That will reduce the overall complexity of deploying OpenStack with
> metering (which must not be mandatory).


I'm confused about what you are using 'that' to refer to in this sentence:
the integrated solution or the standalone service?

We have a framework whose operation is entirely dependent on generating
most of the events that need to be metered. What could be less complex than
exposing that in a sensible way?

Sadly, I think too many believe that monitoring must not be mandatory when
deploying OpenStack.

I personally hope we can get to the point where fault tolerance is a first
class concern for OpenStack services, and in my opinion getting there is
somewhat dependent on solving this same problem in a sensible way.


> However, there is very little chance that all components developed around
> OpenStack are convinced and there will always be a need for a metering that
> is external to the component.


If that is true, then it is a sad state of affairs.

I would hope people have a more holistic understanding of what OpenStack
could and should become.


> Therefore, even if metering eventually manages to become a first class
> concern for the core OpenStack components, the proposed architecture of the
> metering project ( per node agents when necessary and a collector
> harvesting them into a storage ) will keep being used for other components.
>

I agree there is potentially interesting engineering work to be done on the
transport and collection of metrics. I have an aversion to thinking the
starting point for that should be defining a schema and deciding on a db.

Do you think I'm wrong? We're at a very early stage and now is the time to
> question everything :-)
>

I don't think your motives are wrong.

I could also be 'wrong'.

I think the answer to that depends on what we think OpenStack should be in
the end and what is good enough to get there.


Re: [Openstack] [Metering] schema and counter definitions

2012-05-01 Thread Andrew Clay Shafer
I'm glad to see people championing the effort to implement metering. Is
there some way to refocus the enthusiasm for solving the metering problem
into engineering a general solution in OpenStack?

I'm just going to apologize in advance, but I don't think this project is
headed in the right direction.

I believe metering should be a first class concern of OpenStack and the way
this project is starting is almost exactly backwards from what I think a
solution to metering should look like.

The last thing I want to see right now is a blessed OpenStack metering
project adding more agents, coupled to a particular db and making policy
decisions about what is quantifiable.

I think there are really three problems that need to be solved to do
metering, what data to get, getting the data and doing things with the data.

From my perspective, a lot if not all of the data events should be coming
out of the services themselves. There is already a service that should know
when an instance gets started and by what tenant. A cross-cutting system for
publishing those events and a service definition for collecting them seems
like a reasonable place to start. To me that should look an awful lot like a
message queue or centralized logging. Once the first two problems are
solved well, if you are so inclined to collect the data into a relational
model, the schema will be obvious.
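The shape described above — services publishing usage events onto something queue-like, with a separate collector harvesting them — can be illustrated in miniature. This is purely a hypothetical sketch (names like `publish` and `collect` are illustrative, and an in-process queue stands in for a real message bus such as AMQP), not any actual OpenStack API:

```python
import json
import queue
import time

bus = queue.Queue()  # stand-in for a real message queue (e.g. AMQP)


def publish(service, event_type, tenant, payload):
    """A service emits a usage event; it doesn't know or care who consumes it."""
    bus.put(json.dumps({
        "service": service,
        "event": event_type,
        "tenant": tenant,
        "payload": payload,
        "timestamp": time.time(),
    }))


def collect():
    """The collector drains whatever events are waiting on the bus."""
    events = []
    while not bus.empty():
        events.append(json.loads(bus.get()))
    return events


# e.g. the compute service announcing instance lifecycle events for a tenant
publish("nova", "instance.start", "tenant-a", {"instance_id": "i-1"})
publish("nova", "instance.stop", "tenant-a", {"instance_id": "i-1"})
print([e["event"] for e in collect()])  # ['instance.start', 'instance.stop']
```

Once events flow through an interface like this, what backs `collect()` — a relational schema, flat logs, or something else — becomes an implementation detail rather than the starting point.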

If the first two problems are solved well, then I could be persuaded that a
service that provides some of the aggregation functionality is a great idea
and a reference implementation on a relational database isn't the worst
thing in the world.

Without a general solution for the first two problems, I believe the
primary focus on a schema and db is premature and sub-optimal. I also
believe the current approach likely results in a project that is generally
unusable.

Does anyone else share my perspective?

Maybe I'm the crazy one...

Andrew


Re: [Openstack] [nova-compute] Startup error

2012-04-27 Thread Andrew Clay Shafer
In nova.conf, what is instances_path being set to?

It's blowing up trying to find the path
'/usr/lib/python2.7/dist-packages/instances', which is getting set as the
value of FLAGS.instances_path.
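The usual fix is to set the flag explicitly. A minimal sketch of the relevant nova.conf line (flag-file style used by Nova at the time; the path shown is illustrative — use whatever directory actually holds your instances, and make sure it exists and is writable by nova):

```
--instances_path=/var/lib/nova/instances
```

If the flag is unset, Nova of this era derived the default from its state path, which itself defaulted to the package install directory — which is how it can end up pointing inside dist-packages.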




On Fri, Apr 27, 2012 at 12:00 PM, Leander Bessa  wrote:

> Hello,
>
> I'm clueless as how to solve this problem, any ideas?
>
> DEBUG nova.utils [req-007e9c3f-2dcb-4b42-8486-800a51e272e1 None None]
>>> backend <module 'nova.db.sqlalchemy.api' from
>>> '/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.pyc'> from
>>> (pid=17035) __get_backend /usr/lib/python2.7/dist-packages/nova/utils.py:658
>>
>> Traceback (most recent call last):
>>
>>   File "/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 336,
>>> in fire_timers
>>
>> timer()
>>
>>   File "/usr/lib/python2.7/dist-packages/eventlet/hubs/timer.py", line
>>> 56, in __call__
>>
>> cb(*args, **kw)
>>
>>   File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line
>>> 192, in main
>>
>> result = function(*args, **kwargs)
>>
>>   File "/usr/lib/python2.7/dist-packages/nova/service.py", line 101, in
>>> run_server
>>
>> server.start()
>>
>>   File "/usr/lib/python2.7/dist-packages/nova/service.py", line 174, in
>>> start
>>
>> self.manager.update_available_resource(ctxt)
>>
>>   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line
>>> 2403, in update_available_resource
>>
>> self.driver.update_available_resource(context, self.host)
>>
>>   File
>>> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line
>>> 1898, in update_available_resource
>>
>> 'local_gb': self.get_local_gb_total(),
>>
>>   File
>>> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line
>>> 1712, in get_local_gb_total
>>
>> stats = libvirt_utils.get_fs_info(FLAGS.instances_path)
>>
>>   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/utils.py",
>>> line 277, in get_fs_info
>>
>> hddinfo = os.statvfs(path)
>>
>> OSError: [Errno 2] No such file or directory:
>>> '/usr/lib/python2.7/dist-packages/instances'
>>
>> 2012-04-27 16:51:48 CRITICAL nova [-] [Errno 2] No such file or
>>> directory: '/usr/lib/python2.7/dist-packages/instances'
>>
>> 2012-04-27 16:51:48 TRACE nova Traceback (most recent call last):
>>
>> 2012-04-27 16:51:48 TRACE nova   File "/usr/bin/nova-compute", line 49,
>>> in <module>
>>
>> 2012-04-27 16:51:48 TRACE nova service.wait()
>>
>> 2012-04-27 16:51:48 TRACE nova   File
>>> "/usr/lib/python2.7/dist-packages/nova/service.py", line 413, in wait
>>
>> 2012-04-27 16:51:48 TRACE nova _launcher.wait()
>>
>> 2012-04-27 16:51:48 TRACE nova   File
>>> "/usr/lib/python2.7/dist-packages/nova/service.py", line 131, in wait
>>
>> 2012-04-27 16:51:48 TRACE nova service.wait()
>>
>> 2012-04-27 16:51:48 TRACE nova   File
>>> "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 166, in
>>> wait
>>
>> 2012-04-27 16:51:48 TRACE nova return self._exit_event.wait()
>>
>> 2012-04-27 16:51:48 TRACE nova   File
>>> "/usr/lib/python2.7/dist-packages/eventlet/event.py", line 116, in wait
>>
>> 2012-04-27 16:51:48 TRACE nova return hubs.get_hub().switch()
>>
>> 2012-04-27 16:51:48 TRACE nova   File
>>> "/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 177, in switch
>>
>> 2012-04-27 16:51:48 TRACE nova return self.greenlet.switch()
>>
>> 2012-04-27 16:51:48 TRACE nova   File
>>> "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 192, in
>>> main
>>
>> 2012-04-27 16:51:48 TRACE nova result = function(*args, **kwargs)
>>
>> 2012-04-27 16:51:48 TRACE nova   File
>>> "/usr/lib/python2.7/dist-packages/nova/service.py", line 101, in run_server
>>
>> 2012-04-27 16:51:48 TRACE nova server.start()
>>
>> 2012-04-27 16:51:48 TRACE nova   File
>>> "/usr/lib/python2.7/dist-packages/nova/service.py", line 174, in start
>>
>> 2012-04-27 16:51:48 TRACE nova
>>> self.manager.update_available_resource(ctxt)
>>
>> 2012-04-27 16:51:48 TRACE nova   File
>>> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2403, in
>>> update_available_resource
>>
>> 2012-04-27 16:51:48 TRACE nova
>>> self.driver.update_available_resource(context, self.host)
>>
>> 2012-04-27 16:51:48 TRACE nova   File
>>> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line
>>> 1898, in update_available_resource
>>
>> 2012-04-27 16:51:48 TRACE nova 'local_gb': self.get_local_gb_total(),
>>
>> 2012-04-27 16:51:48 TRACE nova   File
>>> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line
>>> 1712, in get_local_gb_total
>>
>> 2012-04-27 16:51:48 TRACE nova stats =
>>> libvirt_utils.get_fs_info(FLAGS.instances_path)
>>
>> 2012-04-27 16:51:48 TRACE nova   File
>>> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/utils.py", line 277, in
>>> get_fs_info
>>
>> 2012-04-27 16:51:48 TRACE nova hddinfo = os.statvfs(path)
>>
>> 2012-04-27 16:51:48 TRACE nova OSError: [Errno 2] No such file or
>>> directory: '/usr/lib/python2.7/dist-packages/instances'

Re: [Openstack] How does everyone build OpenStack disk images?

2012-04-25 Thread Andrew Clay Shafer
Justin,

I'm a fan of veewee.
https://github.com/jedi4ever/veewee

Probably some work to support Xen, but should work for building KVM images.

These docs should give a bit better idea.
https://github.com/jedi4ever/veewee/blob/master/doc/definition.md
https://github.com/jedi4ever/veewee/blob/master/doc/template.md

Looks like precise isn't added to the templates yet, but that should be a
solvable problem.

I'm sure Patrick Debois would be willing to answer any questions. I'm not
positive, but I believe he'd probably think a glance or s3
registration/integration is a good idea too.

Let me know if that looks like something in the direction of where you
think you want to go.


On Wed, Apr 25, 2012 at 9:14 PM, Justin Santa Barbara
wrote:

> How does everyone build OpenStack disk images?  The official documentation
> describes a manual process (boot VM with ISO), which is sub-optimal in
> terms of repeatability / automation / etc.  I'm hoping we can do better!
>
> I posted how I do it on my blog, here:
> http://blog.justinsb.com/blog/2012/04/25/creating-an-openstack-image/
>
> Please let me know the many ways in which I'm doing it wrong :-)
>
> I'm thinking we can have a discussion here, and then I can then compile
> the responses into a wiki page and/or a nice script...
>
> Justin
>
>
>
>


Re: [Openstack] [Ops] OpenStack and Operations: Input from the Wild

2012-04-05 Thread Andrew Clay Shafer
Interested in devops.

Off the top of my head.

live upgrades
api queryable indications of cluster health
api queryable cluster version and configuration info
enabling monitoring as a first class concern in OpenStack (either as a
cross cutting concern, or as it's own project)
a framework for gathering and sharing performance benchmarks with
architecture and configuration


On Thu, Apr 5, 2012 at 1:52 PM, Duncan McGreggor wrote:

> For anyone interested in DevOps, Ops, cloud hosting management, etc.,
> there's a proposed session we could use your feedback on for topics of
> discussion:
>  http://summit.openstack.org/sessions/view/57
>
> Respond with your thoughts and ideas, and I'll be sure to add them to the
> list.
>
> Thanks!
>
> d
>
>


Re: [Openstack] Swift nee st code weirdness

2012-04-05 Thread Andrew Clay Shafer
Pete,

There is clearly something interesting going on with scope. 'options',
which really receives the parser, is passed in as a variable, but is then
overwritten before being used, by the call that references the global
'parser':
(options, args) = parse_args(parser, args)

The history of the commands you ran and the stack traces from the errors/logs
might help.

What were you expecting/trying to do with the code change?

Most of that code looks like it was written by Greg Holt and/or Chuck Thier.

Hopefully they can shed more light on this by eyeballing the code than I
can.
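The scope issue can be reproduced in isolation. A minimal stand-alone sketch (function bodies compressed and hypothetical, but modeled on the pattern in bin/swift) of why the code "works" despite the misleading parameter name:

```python
import optparse

# Module-level global, as in bin/swift
parser = optparse.OptionParser()


def parse_args(p, args):
    # Returns (options, remaining_args), like the helper in bin/swift
    return p.parse_args(args)


def st_stat(options, args):
    # 'options' actually receives the parser object from the caller,
    # but the body ignores it: it uses the GLOBAL 'parser' instead,
    # then immediately rebinds the local name 'options'.
    (options, args) = parse_args(parser, args)
    return options, args


# The dispatch passes the global parser as the first argument, so the
# parameter name never matters at runtime -- which is why both the
# original code and the renamed-parameter patch pass the unit tests.
opts, rest = st_stat(parser, ["stat"])
print(rest)  # ['stat']
```

Renaming the parameter to `parser` (as in the patch above) shadows the global with the same object, so behavior is unchanged; the rename just makes the code say what it actually does.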




On Thu, Apr 5, 2012 at 8:43 PM, Pete Zaitcev  wrote:

> Hi, All:
>
> In the process of tinkering for lp:959221, I made a benign modification
> to make it possible to invoke swift as a module, when everything broke
> loose with errors like "NameError: global name 'parser' is not defined".
> Looking at the code it seems like a thinko, with the following fix:
>
> diff --git a/bin/swift b/bin/swift
> index 0cac5d6..a14c646 100755
> --- a/bin/swift
> +++ b/bin/swift
> @@ -1202,7 +1203,7 @@ download --all OR download container [options]
> [object] [object] ...
> stdout.'''.strip('\n')
>
>
> -def st_download(options, args, print_queue, error_queue):
> +def st_download(parser, args, print_queue, error_queue):
> parser.add_option('-a', '--all', action='store_true', dest='yes_all',
> default=False, help='Indicates that you really want to download '
> 'everything in the account')
> @@ -1378,7 +1379,7 @@ list [options] [container]
>  '''.strip('\n')
>
>
> -def st_list(options, args, print_queue, error_queue):
> +def st_list(parser, args, print_queue, error_queue):
> parser.add_option('-p', '--prefix', dest='prefix', help='Will only
> list '
> 'items beginning with the prefix')
> parser.add_option('-d', '--delimiter', dest='delimiter', help='Will
> roll '
> @@ -1423,7 +1424,7 @@ stat [container] [object]
> args given (if any).'''.strip('\n')
>
>
> -def st_stat(options, args, print_queue, error_queue):
> +def st_stat(parser, args, print_queue, error_queue):
> (options, args) = parse_args(parser, args)
> args = args[1:]
> conn = get_conn(options)
> @@ -1548,7 +1549,7 @@ post [options] [container] [object]
> post -m Color:Blue -m Size:Large'''.strip('\n')
>
>
> -def st_post(options, args, print_queue, error_queue):
> +def st_post(parser, args, print_queue, error_queue):
> parser.add_option('-r', '--read-acl', dest='read_acl', help='Sets the '
> 'Read ACL for containers. Quick summary of ACL syntax: .r:*, '
> '.r:-.example.com, .r:www.example.com, account1, account2:user2')
> @@ -1619,7 +1620,7 @@ upload [options] container file_or_directory
> [file_or_directory] [...]
>  '''.strip('\n')
>
>
> -def st_upload(options, args, print_queue, error_queue):
> +def st_upload(parser, args, print_queue, error_queue):
> parser.add_option('-c', '--changed', action='store_true',
> dest='changed',
> default=False, help='Will only upload files that have changed
> since '
> 'the last upload')
>
> Seems obvious if I look at this:
>
>globals()['st_%s' % args[0]](parser, argv[1:], print_queue, error_queue)
>
> The parser is always the first argument, so... Someone please tell me
> if I am missing something here. Or should I just file it in Gerrit?
>
> Weird part is, unit tests pass both on the current code and the one
> with my patch above. The problem seems too gross to let it work, but
> everything appears in order.
>
> -- Pete
>


Re: [Openstack] Swift: NAS or DAS?

2012-03-16 Thread Andrew Clay Shafer
>
>
> *This link shows the integration of NexentaStor (a NAS/SAN integrated
>> storage solution) with Openstack Nova:
>> http://mirantis.blogspot.com/2011/11/converging-openstack-with-nexenta.html
>> *
>
>
>
> That's Nova, not Swift..
> In case of Nova, a NAS or SAN approach makes very much sense.
>

Running swift on a NAS is all downside. You can make it work, but I don't
see any benefit.

As an aside, above a moderate scale, NAS can start to get problematic for
Nova volumes as well, depending on the architecture and usage patterns.

Nexenta is productized ZFS. You could set up something similar providing
iSCSI from other systems that support ZFS.


Re: [Openstack] [CHEF] How to structure upstream OpenStack cookbooks?

2012-03-10 Thread Andrew Clay Shafer
>
>
> >
> > If your SAIO/test diverge from production deployments, what are you
> > really testing?
>
> there's a big difference, imho, between testing code integration and
> production deployments.
> e.g. if you're trying to write a functional test for swift, that
> requires the proxy server to connect to the account, container and
> object servers - it'd be pretty handy to have a SAIO deployment (e.g.
> to test the new brimring).
>
> Are you trying to say that all deployments should ever be the same,
> lest they're meaningless?
>

I'm not saying that all deployments should be the same.

What I am saying is that cookbooks are code and if you are not deploying to
production in the same manner that you deploy to test, eventually there
will be some assumption made in test that doesn't come along for the ride
to production.

In my utopian vision of the future, the process of setting up to deploy to
test also implicitly tests the deployment and management of the
service.


> > You might think I'm being a zealot, and relatively speaking, you might be
> > right, but from my experience the extra work to do this upfront pays itself
> > back many times over. Also, I can show you where the real zealots live. :)
>
> I actually agree with you. My comments were specific to the proposed
> swift cookbook. It does mostly what the online instructions provide (and
> used much of the bash shell code verbatim), with chef/ruby code added
> to attempt to handle idempotency.
>
> My point was that for SAIO, you don't really need config
> management, and an "installer" as you call it is probably
> sufficient.
> Sorry if I wasn't clear.


My problem is that it is kind of a slippery slope and if people aren't
familiar with the tools and configuration management in theory and
practice, they take what they see as an example of chef code and it bleeds
over into other recipes.

Then they have a bit of a mess and everyone questions why they are using
chef instead of bash.

So if you want installers, just use bash, then no one gets confused.

For the children...

Though my preference would be insisting on idempotent recipes, if you write
installers with configuration management tools, put up big red flashing
lights with sirens and a PSA warning not to run with scissors.

>
> I'll get to your example in a bit (next comment below). But the
> example I used was trying to get at a different issue.
> In both cases i described, the information is "discovered" in your
> terminology. Chef reports the IP address (or crowbar - which just
> moves it per interface),
> The issue is - when a given piece of information can be retrieved from
> multiple (apparently equivalent) places - how do you choose which one
> to use? And what does it take to modify the choice?
> The eval approach allows modifying an attribute (rather than a recipe)
> to change the source of the discovered data (the IP address to use).
> This could help reduce the changes a potential user of a cookbook
> might need to make.


It's all ruby code. You can make anything come from anywhere.

I think this is one of the hardest questions to answer about chef code and
it's an art not a science.

Given the choice between options, I try to look for the most semantically
relevant intent revealing option and minimize surprise.

There is obviously subjectivity of what is surprising.


> funny.. there are cookbooks out there for swift that match your
> spectrum to a tee ;)
> The Crowbar cookbooks discover the disks (with a bit of filtering)
> The Voxel ones have a prescribed data bag, which determines the ring's
> content.


yep, thus the motivation for this thread.

we can't really hope to collapse the variation of the cookbooks unless we
as a community adopt guiding principles and conventions that are widely
applicable.

> > Operational scenarios start begging the question of what should be managed
> > with chef at all. (they also beg the question of whether there should be
> > some more automated ring management in Swift itself)
> >
>
> that could be a bit dangerous - a brief failure in e.g. connectivity
> to a node holding 30TB of data should probably not trigger an
> automatic removal of the node...


Is it more dangerous than people manually editing attributes?

This is not a problem unique to Swift. There are more sophisticated and
tunable ways to handle a brief partition than remove the node. I'm not
saying it is the highest priority, but there are some interesting
approaches that could reduce operational overhead.


Re: [Openstack] [CHEF] How to structure upstream OpenStack cookbooks?

2012-03-10 Thread Andrew Clay Shafer
some response inline followed by general comments on the topic

On Sat, Mar 10, 2012 at 9:30 AM, andi abes  wrote:

> I like where this discussion is going. So
> I'd like to throw a couple more sticks into the fire, around test/SAIO
> vs production deployments..
>

I think there are some misconceptions that lead to problems.

If your SAIO/test diverge from production deployments, what are you really
testing?

If you really understand what configuration management tools are for, the
recipes are code that is going through the same cycle.

* Swift cookbooks (and in general) should not assume control of system
> side resources, but rather use the appropriate cookbook  (or better
> yet "definition" if it exists). e.g rsync might be used for a variety
> of other purposes - by other roles deployed to the same node. The
> rsync (not currently, but hopefully soon) cookbook should provide the
> appropriate hooks to add your role's extras. Maybe a better example is
> the soduers cookbook, which allows node attributes to describe users &
> groups.
>

I agree in principle.

This is good practice in general, but in specific can require a lot of
understanding and discipline to separate the recipes.

* SAIO deployments could probably be kept really simple if they don't
> have to deal with repeated application - no need to worry about
> idempotency which tends to make things much harder. A greenfield
> deployment + some scripts to ""operate"" the test install are probably
> just the right thing.
>

If you find yourself writing chef with the idea that you don't need to
worry about idempotency, you aren't doing configuration management, you are
writing an installer. You are almost better off using bash. Part of my
position here also stems from my position that these recipes shouldn't be
different.
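The installer-vs-configuration-management distinction above comes down to idempotency: a recipe should converge to the same state no matter how many times it runs. A language-neutral sketch of the contrast (in Python rather than Chef/Ruby, since the tool details aren't the point; the function names and the rsync-style config line are illustrative):

```python
import os
import tempfile


def ensure_line(path, line):
    """Idempotent 'recipe' resource: only appends the line if it isn't
    already present, so repeated runs leave the file unchanged."""
    existing = []
    if os.path.exists(path):
        with open(path) as f:
            existing = f.read().splitlines()
    if line not in existing:
        with open(path, "a") as f:
            f.write(line + "\n")


def installer_append(path, line):
    """Installer-style step: blindly appends, so every run mutates state."""
    with open(path, "a") as f:
        f.write(line + "\n")


cfg = os.path.join(tempfile.mkdtemp(), "rsyncd.conf")
for _ in range(3):  # simulate chef-client running three times
    ensure_line(cfg, "uid = swift")
    installer_append(cfg, "uid = swift")

with open(cfg) as f:
    print(sum(1 for l in f if l.strip() == "uid = swift"))  # 4 (1 + 3)
```

The idempotent version contributed the line once across three runs; the installer version contributed it every run. On real clusters that drift is what turns "just re-run the cookbook" into a mess.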

You might think I'm being a zealot, and relatively speaking, you might be
right, but from my experience the extra work to do this upfront pays itself
back many times over. Also, I can show you where the real zealots live. :)

* Configurability - in testing, you'd like things pretty consistent.
> One pattern I've been using is having attribute values that are
> 'eval'ed to retrieve the actual data.
> For example - the IP address/interface to use for storage
> communication (i.e. proxy <-> account server)  a node attribute called
> "storage_interface" is evaluated. A user (or higher level system) can
> assign either "node[:ipaddress]" (which is controlled by chef, and
> goes slightly bonkers when multiple interfaces are present) or be more
> opinionated and use e.g
> "node[:crowbar][:interfaces][:storage_network]"
>

This gets to the heart of one of the reasons why there is so much variation
in the wild.

chef allows for almost infinite flexibility for how information gets into
the system and how it gets used.

Further, there is a tension between specificity and discovery.

By that I mean one can either specify a specific value, in cookbooks, roles
or databags, or you can discover values from the running systems.

This is a spectrum and there is not always a 'right' answer. Both have
value and both can become problematic. Context, opinions and philosophy of
the author are typically what determine the details of a given cookbook.

I'll attempt to make this more concrete. How do you want to deploy and
manage the filesystems and devices for rings of Swift?

On one end of the spectrum, parameterize an attribute with all the devices
that will be in the ring, with the obvious alternative being that some
procedure and convention gets the devices from the running systems.

This is also a tension between knowing and doing, and leads to a bunch of
other questions.

In the case where a specified device is missing/failed, what should the
behavior be? If doing discovery, is there a method to sanity check what the
devices should be?

Both cases only get more complicated when considering on going management
of the cluster, adding/removing capacity etc.

Operational scenarios start begging the question of what should be managed
with chef at all. (they also beg the question of whether there should be
some more automated ring management in Swift itself)

This is just one example, but I hope it illustrates the point.

My current personal preferences/bias:

   - Do not make 'installers' with configuration management tools
   - Stand-alone recipes should not be separate. If they do exist, it is as
   part of the initial pass to get a working cookbook, with the intent to
   refactor toward a more generalized cookbook. AIO should then be a role
   - lean towards specificity supported by tooling to manage the metadata
   for node specific configuration
   - utilize discovery for cross node configuration (of the data that was
   specified for the other nodes)

I'm not saying I'm right, just that I've seen and tried things a few
different ways and this seemed to work best in my context.

So to answer Jay's explicit questions:

1) Do resources that set up non-productio

Re: [Openstack] Nova RC-1 Bugs

2012-03-07 Thread Andrew Clay Shafer
+1

On Wed, Mar 7, 2012 at 7:59 PM, Alexey Eromenko  wrote:

> There are several blocker bugs in manuals. (they prevent new users
> from installing or configuring OpenStack)
>
> But I doubt they are marked as such.
>
> What to do ?
> Can I up priority for docs on L-pad, if a broken docs prevent new
> users from configuring OpenStack ?
>
> --
> -Alexey Eromenko "Technologov"
>


Re: [Openstack] how to run selected tests

2012-02-29 Thread Andrew Clay Shafer
The way run_tests.sh works right now, you can run all tests, all tests in a
file, a module or an individual test depending on what args you run with.

The default with no args will run all the tests.

You can run one file by passing in the name of the file minus .py (if it is
in a sub-directory, replace slashes with .)

./run_tests.sh test_compute

or

./run_tests.sh scheduler.test_scheduler

A test class can be run by passing in the file and the class:

./run_tests.sh scheduler.test_scheduler:SchedulerManagerTestCase

A single test can be run by adding the name of the test method:

./run_tests.sh
scheduler.test_scheduler:SchedulerManagerTestCase.test_existing_method

Hope that helps.




On Wed, Feb 29, 2012 at 3:42 PM, Yun Mao  wrote:

> Greetings,
>
> What's the most convenient way to run a subset of the existing tests?
> By default run_tests.sh tests everything. For example, I'd like to run
> everything in test_scheduler plus test_notify.py, what's the best way
> to do that? Thanks,
>
> Yun
>


Re: [Openstack] Keystone: Redux (Dubstep Remix)

2012-02-15 Thread Andrew Clay Shafer
+1

Don't deprecate, until the bass drops... lesson learned.





On Wed, Feb 15, 2012 at 11:22 PM, Soren Hansen  wrote:

> 2012/2/14 Jesse Andrews :
> > The major lessons of keystone:
>
> Now that we're verbalising lessons learnt from Keystone, I'd like to add
> another thing from back in the Diablo days: We should only ever depend
> on code that already exists or is under our own release management. When
> Keystone was very young, we deprecated Nova's built-in auth system, but
> seeing as Keystone wasn't ready, nor was being tracked by our release
> manager, we ended up releasing Nova with a deprecated auth system and a
> preferred auth system that wasn't released yet. I'd like to avoid that
> happening again.
>
> --
> Soren Hansen | http://linux2go.dk/
> Senior Software Engineer | http://www.cisco.com/
> Ubuntu Developer | http://www.ubuntu.com/
> OpenStack Developer  | http://www.openstack.org/
>


Re: [Openstack] nova/puppet blueprint, and some questions

2012-01-27 Thread Andrew Clay Shafer
On Thu, Jan 26, 2012 at 8:56 PM, Andrew Bogott wrote:

>  Andrew --
>
> Thanks for your comments.  I'm going to start with a screenshot for
> context:
>
> http://bogott.net/misc/osmpuppet.png
>
> That's what it looks like when you configure an instance using Open Stack
> Manager, which is WikiMedia's VM management interface.  My main priority
> for adding puppet support to Nova is to facilitate the creation and control
> of a GUI much like that one.
>


Can you explain how your solution works now? You want to inject data into
the VMs in the proposal, but outside of designating the puppet master, all
the data for variables and classes should be changes on the puppet master,
not the instances. That's kind of the whole point of the puppet master.

One thing you really seem to want is RBAC for the nova users.

How are you getting the names for the recipes into your system? Is that
synced with what is on the puppet master somehow, or are you going to do
data entry and it's all string matching?

On 1/26/12 5:03 PM, Andrew Clay Shafer wrote:
>
>
>  I'd also like to see more of a service oriented approach and avoid adding
> tables to nova if possible.
>
>  I'm not sure the best solution is to come up with a generic service for
> $configuration_manager for nova core. I'd rather see these implemented as
> optional first class extensions.
>
> This sounds intriguing, but I'll plead ignorance here; can you tell me
> more about what this would look like, or direct me to an existing analogous
> service?
>

I don't think there is a good existing example, but I know that if the de
facto way to add functionality in nova is to add tables to the db, that's
the path to operational and maintenance hell.

That's not just for this integration, but in general.

For openstack to become what it should be, Nova shouldn't be a monolithic
app on a database.

Even if you wanted to run this on the same node, it probably shouldn't be
tables in the same database. It should be a separate service with its own
db user and schema, and then be integrated by APIs or maybe by adding to
WSGI.

> What are you going to inject into the instances exactly? Where does the
> site.pp live?
>
> This is the question I'm hoping to get feedback on.  Either nova can
> generate a fully-formed site.pp and inject that, or it can pass config
> information as metadata, in which case an agent would need to be running on
> the guest which would do the work of generating the site.pp.  I certainly
> prefer the former but I'm not yet clear on whether or not file injection is
> widely supported.
>

I'm confused how you want to run puppet exactly. The site.pp would
typically live on the puppet master.

Can you explain more about what you are thinking or how your current
solution works?

> I haven't thought about this that much yet, but off the top of my head,
> if the instances already have puppet clients and are configured for the
> puppet master, then the only thing you should need to interact with is the
> puppet master.
>
>
> It's definitely the case that all of this could be done via LDAP or the
> puppet master and involve no Nova action at all; that's how WikiMedia's
> system works now.  My aim is to consolidate the many ways we currently
> interact with instances so that we delegate as much authority to Nova as
> possible.  That strikes me as generally worthwhile, but you're welcome to
> disagree :)
>

I think it would be sweet if nova and the dashboard (and probably keystone
too) had a standardized way to add integrated functionality. I don't
believe nova core should be reimplementing/duplicating functionality and
logic in other systems.

The goal of interacting with the instances through a shared interface is a
good one, I'm not against that, I just want to see less deep coupling to
accomplish it.



>  I'm not a fan of the Available, Unavailable, Default, particularly
> because you are managing state of something that may not be true on the
> puppet master.
>
> I may be misunderstanding you, or my blueprint may be unclear.  Available,
> Unavailable, and Default don't refer to the availability of classes on the
> puppet master; rather, they refer to whether or not a class is made
> available to a nova user for a given instance.  An 'available' class would
> appear in the checklist in my screenshot.  An Unavailable class would not.
> A 'default' class would appear, and be pre-checked.  In all three cases the
> class is presumed to be present on the puppet master.
>

I already asked this, but what keeps that in sync with the puppet master?

Personally, I'd rather see an integration that has a per-user configuration
for a puppet master that stays i

Re: [Openstack] nova/puppet blueprint, and some questions

2012-01-26 Thread Andrew Clay Shafer
I would love to see a first class puppet integration with nova instances.

I'd also like to see more of a service oriented approach and avoid adding
tables to nova if possible.

I'm not sure the best solution is to come up with a generic service for
$configuration_manager for nova core. I'd rather see these implemented as
optional first class extensions.

I'm also not clear on how you think this should work for Puppet.

What are you going to inject into the instances exactly? Where does the
site.pp live?

I haven't thought about this that much yet, but off the top of my head:
if the instances already have puppet clients and are configured for the
puppet master, then the only thing you should need to interact with is the
puppet master.

I'm not a fan of the Available, Unavailable, Default, particularly because
you are managing state of something that may not be true on the puppet
master. Unless you are going to have a deeper integration with a puppet
master, I think it is better to just declare the classes you want applied.
If you want a 'default' class, make that in puppet and add it.

I also think managing a site.pp is going to be inferior to providing an
endpoint that can act as an external node tool for the puppet master:
http://docs.puppetlabs.com/guides/external_nodes.html
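To make the suggestion concrete: an external node classifier is just an executable that the puppet master calls with a node's certname and that prints a YAML document of classes and parameters on stdout. A minimal sketch follows; the nova-backed lookup is stubbed out, and the class names are hypothetical.

```python
#!/usr/bin/env python
# Hedged sketch of an external node classifier (ENC), per the external
# nodes guide linked above. Puppet invokes this script with the node's
# certname as argv[1] and expects YAML on stdout. A real nova-backed ENC
# would query instance metadata here; this stub hard-codes a rule instead.
import sys

def classify(certname):
    # Hypothetical classification rule for illustration only.
    classes = ["base"]
    if certname.startswith("web"):
        classes.append("apache")
    return classes

def main(argv):
    certname = argv[1] if len(argv) > 1 else "default"
    # Emit the YAML by hand to keep the sketch dependency-free.
    print("---")
    print("classes:")
    for cls in classify(certname):
        print("  - %s" % cls)
    print("parameters:")
    print("  puppet_managed_by: nova")

if __name__ == "__main__":
    main(sys.argv)
```

The appeal of this shape is that the state lives in one place (nova, or whatever backs the ENC) and the puppet master simply asks for it, rather than nova trying to mirror the master's state.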

One other point, that you might have thought of, but I don't see anywhere
on the wiki is how to handle the ca/certs for the instances.

I also have a question about how you want to handle multi-tenant
environments? Meta data about the puppet master seems like the thing you
need to configure dynamically on the client, then handle all the class,
variables and CA stuff on the appropriate master.
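As a rough sketch of that dynamic client-side configuration: a boot script (cloud-init or similar) could read the tenant's puppet master address from instance metadata and write puppet.conf from it, leaving classes, variables, and CA handling to the appropriate master. The metadata key names and paths below are assumptions, not part of the blueprint.

```python
# Hedged sketch: per-tenant agent configuration written at boot. The
# metadata dict stands in for the EC2-style metadata service; key names
# ("puppet_master", "hostname") and the config path are hypothetical.

def render_puppet_conf(master, certname):
    """Render a minimal puppet.conf pointing the agent at one master."""
    return (
        "[main]\n"
        "server = %s\n"
        "certname = %s\n"
    ) % (master, certname)

def configure_agent(metadata, path="/etc/puppet/puppet.conf"):
    conf = render_puppet_conf(
        metadata["puppet_master"],  # the tenant's own master
        metadata["hostname"],
    )
    with open(path, "w") as f:
        f.write(conf)
    return conf
```

Everything beyond this bootstrap (class assignment, signing the cert) would then happen between the agent and that tenant's master.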

Just to reiterate, I'd love to see deeper configuration management
integrations (because I think managing instances without them is its own
hell), but I'm not convinced it should be part of core nova per se.

Probably over the $0.02 limit for now, but I'm happy to expand or explore
these ideas to see where they lead.


On Thu, Jan 26, 2012 at 4:29 PM, Andrew Bogott wrote:

> Oh, and regarding this:
>
>
> On 1/26/12 11:30 AM, Tim Bell wrote:
>
>> I would hope for, at minimum, an implementation for Xen and KVM with, if
>> appropriate, something for lxc too.
>>
>
> That relates directly to the second question in my email:  what
> communication method should I use to pass information between nova and the
> guest OS?  Is there not One True Injector that is currently supported (or
> has plans to be supported) across all hypervisors?
>
>
> -Andrew
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack Foundation] OpenStack Mission & Goals

2012-01-05 Thread Andrew Clay Shafer
+1

On Thu, Jan 5, 2012 at 11:57 AM, Lloyd Dewolf  wrote:

> Having played a very minor role in this process for WordPress, and
> been an onlooking numerous times, it is always a long and involved
> process.
>
> Ever try telling the IRS that you don't want to pay taxes?
>
>
> I appreciate the passion of this discussion, but some of it feels ad
> hominem and non-constructive.
>
>
> As the process continues to proceed, who is currently blocked by
> what, so we can rally around the pragmatic causes?
>
>
> I can't grow without light,
> Lloyd
> ___
> Foundation mailing list
> foundat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/foundation
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] swift enforcing ssl?

2011-12-28 Thread Andrew Clay Shafer
Do not use the ssl support in Python for anything beyond noodling on a
proof of concept.

Between Python's ssl and eventlet, SSL is unusably broken.

This should probably be in red in the documentation.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp