Re: [openstack-dev] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-19 Thread Stan Lagun
Kurt Griffiths,

Thanks for the detailed explanation. Is there a comparison between Marconi and
existing message brokers anywhere that you can point me to?
I can see how your examples could be implemented using other brokers like
RabbitMQ. So why is there a need for another broker? And what is wrong with
the currently deployed RabbitMQ that most OpenStack services are using
(typically via oslo.messaging RPC)?



On Wed, Mar 19, 2014 at 4:00 AM, Kurt Griffiths <
kurt.griffi...@rackspace.com> wrote:

> I think we can agree that a data-plane API only makes sense if it is
> useful to a large number of web and mobile developers deploying their apps
> on OpenStack. Also, it only makes sense if it is cost-effective and
> scalable for operators who wish to deploy such a service.
>
> Marconi was born of practical experience and direct interaction with
> prospective users. When Marconi was kicked off a few summits ago, the
> community was looking for a multi-tenant messaging service to round out
> the OpenStack portfolio. Users were asking operators for something easier
> to work with and more web-friendly than established options such as AMQP.
>
> To that end, we started drafting an HTTP-based API specification that
> would afford several different messaging patterns, in order to support the
> use cases that users were bringing to the table. We did this completely in
> the open, and received lots of input from prospective users familiar with
> a variety of message broker solutions, including more "cloudy" ones like
> SQS and Iron.io.
>
> The resulting design was a hybrid that supported what you might call
> "claim-based" semantics a la SQS and feed-based semantics a la RSS.
> Application developers liked the idea of being able to use one or the
> other, or combine them to come up with new patterns according to their
> needs. For example:
>
> 1. A video app can use Marconi to feed a worker pool of transcoders. When
> a video is uploaded, it is stored in Swift and a job message is posted to
> Marconi. Then, a worker claims the job and begins work on it. If the
> worker crashes, the claim expires and the message becomes available to be
> claimed by a different worker. Once the worker is finished with the job,
> it deletes the message so that another worker will not process it, and
> claims another message. Note that workers never "list" messages in this
> use case; those endpoints in the API are simply ignored.
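>
> The claim lifecycle described above (claim, expiry on crash, delete on
> success) can be sketched with a toy in-memory queue. This is a hypothetical
> illustration of the semantics only, not Marconi's actual HTTP API or
> implementation; all names here are made up.

```python
import itertools
import time


class ClaimQueue:
    """Toy in-memory queue with SQS/Marconi-style claim semantics."""

    def __init__(self):
        self._messages = {}          # msg_id -> body
        self._claims = {}            # msg_id -> claim expiry timestamp
        self._ids = itertools.count()

    def post(self, body):
        msg_id = next(self._ids)
        self._messages[msg_id] = body
        return msg_id

    def claim(self, ttl, now=None):
        """Claim the first unclaimed (or claim-expired) message."""
        now = time.time() if now is None else now
        for msg_id, body in self._messages.items():
            if self._claims.get(msg_id, 0) <= now:   # unclaimed or expired
                self._claims[msg_id] = now + ttl
                return msg_id, body
        return None

    def delete(self, msg_id):
        """The worker finished the job: remove the message for good."""
        self._messages.pop(msg_id, None)
        self._claims.pop(msg_id, None)


# A worker claims a job; if it crashes, the claim simply expires and
# another worker can claim the same message again.
q = ClaimQueue()
q.post({"video": "swift://bucket/movie.mp4"})
claimed = q.claim(ttl=60, now=1000)        # worker A claims the job
assert q.claim(ttl=60, now=1010) is None   # worker B sees nothing claimable
# worker A crashed; after the TTL the claim has expired:
msg_id, body = q.claim(ttl=60, now=1070)   # worker B picks the job up
q.delete(msg_id)                           # job done, message gone
```

> Note that deleting the message only after the work completes is what makes
> the pattern at-least-once: a crash anywhere before the delete lets another
> worker retry the same job.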
>
> 2. A backup service can use Marconi to communicate with hundreds of
> thousands of backup agents running on customers' machines. Since Marconi
> queues are extremely light-weight, the service can create a different
> queue for each agent, and additional queues to broadcast messages to all
> the agents associated with a single customer. In this last scenario, the
> service would post a message to a single queue and the agents would simply
> list the messages on that queue, and everyone would get the same message.
> This messaging pattern is emergent, and requires no special routing setup
> in advance from one queue to another.
>
> 3. A metering service for an Internet application can use Marconi to
> aggregate usage data from a number of web heads. Each web head collects
> several minutes of data, then posts it to Marconi. A worker periodically
> claims the messages off the queue, performs the final aggregation and
> processing, and stores the results in a DB. So far, this messaging pattern
> is very much like example #1, above. However, since Marconi's API also
> affords the observer pattern via listing semantics, the metering service
> could run an auditor that logs the messages as they go through the queue
> in order to provide extremely valuable data for diagnosing problems in the
> aggregated data.
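>
> The feed-based listing that enables this observer pattern can likewise be
> sketched as an append-only log with per-consumer markers. Again, this is a
> hypothetical sketch of the semantics, not Marconi's API; the names are
> invented for illustration.

```python
class FeedQueue:
    """Toy queue with RSS-style listing: every observer sees every message."""

    def __init__(self):
        self._messages = []   # append-only log

    def post(self, body):
        self._messages.append(body)

    def list(self, marker=0):
        """Return messages after `marker` plus a new marker to resume from."""
        batch = self._messages[marker:]
        return batch, marker + len(batch)


q = FeedQueue()
q.post({"usage": 42})
q.post({"usage": 7})

# An auditor and a regular consumer each keep their own marker and
# independently observe the same messages; listing never removes anything.
auditor_batch, auditor_marker = q.list()
consumer_batch, _ = q.list()
assert auditor_batch == consumer_batch == [{"usage": 42}, {"usage": 7}]

q.post({"usage": 3})
new_batch, auditor_marker = q.list(auditor_marker)
assert new_batch == [{"usage": 3}]
```

> Because listing is non-destructive, an auditor can run alongside the
> claiming workers of example #1 without disturbing them.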
>
> Users are excited about what Marconi offers today, and we are continuing
> to evolve the API based on their feedback.
>
> Of course, app developers aren't the only audience Marconi needs to serve.
> Operators want something that is cost-effective, scales, and is
> customizable for the unique needs of their target market.
>
> While Marconi has plenty of room to improve (who doesn't?), here is where
> the project currently stands in these areas:
>
> 1. Customizable. Marconi transport and storage drivers can be swapped out,
> and messages can be manipulated in-flight with custom filter drivers.
> Currently we have MongoDB and SQLAlchemy drivers, and are exploring Redis
> and AMQP brokers. Now, the v1.0 API does impose some constraints on the
> backend in order to support the use cases mentioned earlier. For example,
> an AMQP backend would only be able to support a subset of the current API.
> Operators occasionally ask about AMQP broker support, in particular, and
> we are exploring ways to evolve the API in order to support that.
>
> 2. Scalable. Operators can use Marconi's HTTP transport to leverage their
> existing infrastructure and expertise in scaling out web heads. When it
> comes to the backend, for small deplo

Re: [openstack-dev] [Neutron][LBaaS] Requirements Wiki

2014-03-19 Thread Oleg Bondarev
Hi Jorge,

Thanks for taking care of this and bringing it all together! This will be
really useful for LBaaS discussions.
I updated the wiki to include L7 rules support and also marked the already
implemented requirements.

Thanks,
Oleg


On Wed, Mar 19, 2014 at 2:57 AM, Jorge Miramontes <
jorge.miramon...@rackspace.com> wrote:

> Hey Neutron LBaaS folks,
>
> Per last week's IRC meeting I have created a preliminary requirements &
> use case wiki page. I requested adding such a page since there appears to
> be a lot of new interest in load balancing, and I feel that we need a
> structured way to align everyone's interests in the project. Furthermore,
> it appears that understanding everyone's requirements and use cases will
> aid in the current object model discussion we have all been having. That
> being said, this wiki is malleable and open to discussion. I have added
> some preliminary requirements from my team's perspective in order to start
> the discussion. My vision is that people add requirements and use cases to
> the wiki for what they envision Neutron LBaaS becoming. That way, we can
> all discuss as a group, figure out what should and shouldn't be a
> requirement, and prioritize the rest in an effort to focus development
> efforts. Ready... set... go!
>
> Here is the link to the wiki ==>
> https://wiki.openstack.org/wiki/Neutron/LBaaS/requirements
>
> Cheers,
> --Jorge
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] advanced servicevm framework IRC meeting March 18(Tuesday) 23:00 UTC

2014-03-19 Thread balaj...@freescale.com
Hi Isaku Yamahata,

Can you please share the meeting details?

Regards,
Balaji.P

From: Mohammad Banikazemi [mailto:m...@us.ibm.com]
Sent: Wednesday, March 19, 2014 12:31 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: isaku.yamah...@gmail.com
Subject: Re: [openstack-dev] [Neutron] advanced servicevm framework IRC meeting 
March 18(Tuesday) 23:00 UTC


Thanks for setting up the meeting.
I would second the request to change the time slot; I hope to attend this one 
and see if we can come up with a better time slot.
With respect to other suggestions, it would be great if we start with a report 
on the current state of this work: something similar to what you started last 
week (from what I gather from the logs of the meeting) but going a bit more 
into detail. I personally would like to see how this fits into the advanced 
services framework in general and how we can utilize it within the group policy 
framework we are trying to develop.

Best,

Mohammad


From: Isaku Yamahata <isaku.yamah...@gmail.com>
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>,
Cc: isaku.yamah...@gmail.com
Date: 03/18/2014 05:00 AM
Subject: Re: [openstack-dev] [Neutron] advanced servicevm framework IRC meeting 
March 18(Tuesday) 23:00 UTC





Hi Balaji.

Let's discuss and determine the time at the meeting, as it is listed on the agenda.
Sorry for the inconvenience this first time.
Do you have any feedback other than the meeting time?

thanks,

On Tue, Mar 18, 2014 at 06:18:01AM +,
"balaj...@freescale.com" <balaj...@freescale.com> wrote:

> Hi Isaku Yamahata,
>
> Is it possible to have a convenient slot between 4:00 and 6:30 PM UTC,
>
> so that folks from Asia can also join the meetings?
>
> Regards,
> Balaji.P
>
> > -Original Message-
> > From: Isaku Yamahata [mailto:isaku.yamah...@gmail.com]
> > Sent: Tuesday, March 18, 2014 11:35 AM
> > To: OpenStack Development Mailing List
> > Cc: isaku.yamah...@gmail.com
> > Subject: [openstack-dev] [Neutron] advanced servicevm framework IRC
> > meeting March 18(Tuesday) 23:00 UTC
> >
> > Hello. This is a reminder for servicevm framework IRC meeting.
> > date: March 18 (Tuesday) 23:00 UTC
> > channel: #openstack-meeting
> >
> > the followings are proposed as agenda.
> > Meeting wiki: https://wiki.openstack.org/wiki/Meetings/ServiceVM
> >
> > * the current status summary
> > * decide the time/day/frequency
> >
> > Thanks,
> > --
> > Isaku Yamahata <isaku.yamah...@gmail.com>
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Isaku Yamahata <isaku.yamah...@gmail.com>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Actions design BP

2014-03-19 Thread Renat Akhmerov


On 19 Mar 2014, at 02:08, Joshua Harlow  wrote:

> is mistral planning on doing this, throwing away the POC and rewriting it as a 
> non-POC using the ideas learned from the POC?)[http://tinyurl.com/lbz293s]. 

Yes, if needed. I don’t think it’s too important, and I agree that the terminology 
here may be a source of confusion. If we decide most of the parts 
are fine to just evolve (instead of rewriting them), we can use them as 
they are.

> For the 'asynchronous manner' discussion see http://tinyurl.com/n3v9lt8; I'm 
> still not sure why you would want to make is_sync/is_async a primitive concept 
> in a workflow system; shouldn't this be only up to the entity running the 
> workflow to decide?


> Why is a task allowed to be sync/async

I would ask the opposite: “Why should a task always be sync?” Where does this 
limitation come from?

Any long-lasting and/or resource-intensive task should logically be considered 
async. Async here may not even be the best word; I apologize if it’s a problem 
for understanding. The main thing here is that a workflow system should be able 
to avoid holding local state to keep track of these heavy tasks. Otherwise, the 
consequences are well known: hard to scale, hard to make durable, etc. 
Some tasks may last weeks or even months (real-life examples from 
customers). If the engine is used as a hosted service, then resource starvation 
may easily become a real problem. From my experience, assumptions like 
that made at the design phase tend to bring production systems down. In other 
words, the whole flowering variety of side effects of distributed programming 
comes from that.

So we decided to put this conceptual understanding into the core of the system.

Joshua, can we start working on that in TaskFlow? We’re ready to help and 
contribute (myself and other folks from the team).

> , that has major side-effects for state-persistence, resumption (and to me is 
> a incorrect abstraction to provide) and general workflow execution control, 
> I'd be very careful with this (which is why I am hesitant to add it without 
> much much more discussion).

Of course, it has major effects. That’s what we’ve been emphasizing as a 
fundamental difference. In that regard, one of the interesting things is to 
think about the role that TaskFlow plays in the OpenStack community. Should it be 
dealing with, say, persistence while not being a service and not being able to fully 
address HA? I know your point of view on that (that at least it could provide 
facilities to address HA partially, etc.), but that’s a topic to think about. I’d 
suggest we start discussing that too.

>> So we actually talked to people a lot (including Josh) and provided this 
>> reasoning when this question was raised again. The reaction was nearly always 
>> positive and made a lot of sense to customers and developers.
>> 
>> Thought #2: A library shouldn't drive a project where it’s used.
> 
> 
> To me this assumes said library is fixed in stone, can't be changed, and 
> can't be evolved. If a library is 'dead/vaporware' then sure, I would 100% 
> agree with this, but all libraries in openstack do not fit into the latter 
> category; and those libraries can be evolved/developed/improved. As a 
> community I think it is our goal to grow the libraries in the community (not 
> reduce them or avoid them, as this is not beneficial for the community). I 
> think Doug put this very well in his essay @ http://tinyurl.com/lr9wvfl and imho 
> we need to embrace evolving libraries in all projects, not avoiding them due 
> to thoughts like this. 

100% agree that OpenStack libraries are not fixed in stone, no question. I meant a 
totally different thing: the process of evolution. TaskFlow is a beautiful, 
well-written library, at least I think so. But we shouldn’t build a project 
around a library in the first place. We should be building a project and 
generating requirements for a library. If the library is ready to change, that's 
wonderful. If it is not, I’m totally against adjusting the project's feature 
set so that it can be implemented using the library's capabilities. Am I missing 
something?
For example, I once had a project where we used a persistence framework, and it 
was fine for a while, but at some point we realized that we needed the lazy loading 
that was missing in that framework, and since the team behind that framework said 
“it’s against our principles, you'd better change your vision of the project” we 
said OK, threw it away, and started using Hibernate, and eventually everyone became 
happy, including the customer. As simple as that. And it makes a lot of sense to 
me: “Use whatever works well for you.”
At the same time, I clearly understand that in OpenStack we can’t act like that. 
And honestly, we don’t want to, since eventually we’re all working on the same huge 
thing: the cloud. Well, actually we can ignore others, but not for long :) 
A rule is a rule; TC folks honestly say it was done intentionally to make people 
collaborate, otherwise we wo

[openstack-dev] [Openstack-dev] [Nova] use Keystone V3 token to volume attachment

2014-03-19 Thread Shao Kai SK Li

Hello:

 I am working on this patch (https://review.openstack.org/#/c/77524/) to
fix bugs about volume attach failures with a Keystone V3 token.

 Just wondering: are there any blueprints or plans in Juno to address
Keystone V3 support in Nova?

 Thank you in advance.


Best Regards~~~

Li, Shaokai
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] UTF-8 required charset/encoding for openstack database?

2014-03-19 Thread Clint Byrum
Excerpts from Doug Hellmann's message of 2014-03-18 15:08:36 -0700:
> On Mon, Mar 10, 2014 at 4:02 PM, Ben Nemec  wrote:
> 
> > On 2014-03-10 12:24, Chris Friesen wrote:
> >
> >> Hi,
> >>
> >> I'm using Havana, and recently we ran into an issue with Heat related to
> >> character sets.
> >>
> >> In heat/db/sqlalchemy/api.py in user_creds_get() we call
> >> _decrypt() on an encrypted password stored in the database and then
> >> try to convert the result to unicode.  Today we hit a case where this
> >> errored out with the following message:
> >>
> >> UnicodeDecodeError: 'utf8' codec can't decode byte 0xf2 in position 0:
> >> invalid continuation byte
> >>
> >> We're using postgres and currently all the databases are using
> >> SQL_ASCII as the charset.
> >>
> >> I see that in icehouse heat will complain if you're using mysql and
> >> not using UTF-8.  There doesn't seem to be any checks for other
> >> databases though.
> >>
> >> It looks like devstack creates most databases as UTF-8 but uses latin1
> >> for nova/nova_bm/nova_cell.  I assume this is because nova expects to
> >> migrate the db to UTF-8 later.  Given that those migrations specify a
> >> character set only for mysql, when using postgres should we explicitly
> >> default to UTF-8 for everything?
> >>
> >> Thanks,
> >> Chris
> >>
> >
> > We just had a discussion about this in #openstack-oslo too.  See the
> > discussion starting at 2014-03-10T16:32:26 http://eavesdrop.openstack.
> > org/irclogs/%23openstack-oslo/%23openstack-oslo.2014-03-10.log
> >
> > While it seems Heat does require utf8 (or at least matching character
> > sets) across all tables, I'm not sure the current solution is good.  It
> > seems like we may want a migration to help with this for anyone who might
> > already have mismatched tables.  There's a lot of overlap between that
> > discussion and how to handle Postgres with this, I think.
> >
> > I don't have a definite answer for any of this yet but I think it is
> > something we need to figure out, so hopefully we can get some input from
> > people who know more about the encoding requirements of the Heat and other
> > projects' databases.
> >
> > -Ben
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> Based on the discussion from the project meeting today [1], the Glance
> team is going to write a migration to fix the database as the other
> projects have (we have not seen issues with corrupted data, so we believe
> this to be safe). However, there is one snag. In a follow-up conversation
> with Ben in #openstack-oslo, he pointed out that no migrations will run
> until the encoding is correct, so we do need to make some changes to the db
> code in oslo.
> 


Hi! Thanks for considering the plight of the users that have high-byte
characters, but in reading the referenced IRC log, there was a lot of
"hoping for the best" in the outcome.

However, I think Glance in particular is likely to find the bugs in this
approach, as users are more apt to name images with descriptive words
than networks, servers, and volumes.

Anyway, if you do have latin1 tables that have utf-8 encoded data already
in them, you can't just alter table. Let me explain the scenario with
a simple copy/paste:

First, let's assume you've done nothing really and the server is just
set to latin1, but your client is utf-8:

mysql> insert into t1 values (2, '♬ ♭');
Query OK, 1 row affected, 1 warning (0.05 sec)

The warning there is that this is a latin1 table, and those are wide
chars, so they got stripped:

mysql> select * from t1;
+----+---------+
| id | data    |
+----+---------+
|  1 | no utf8 |
|  2 | ? ?     |
+----+---------+

Now you may also have a situation where your client is defaulting to
latin1:

mysql> insert into t1 values (3, '♬ ♭');
Query OK, 1 row affected (0.05 sec)

Note, zero warnings. What happened here?

mysql> select * from t1;
+----+---------+
| id | data    |
+----+---------+
|  1 | no utf8 |
|  2 | ? ?     |
|  3 | ♬ ♭     |
+----+---------+

OH EXCELLENT! My data looks right. Let's ignore that mysql thinks it is
6 chars, not 2, because we probably won't ever notice that.

Now I fix my clients and they start using utf-8:

mysql> set names 'utf8';
Query OK, 0 rows affected (0.00 sec)

mysql> select * from t1;
+----+---------+
| id | data    |
+----+---------+
|  1 | no utf8 |
|  2 | ? ?     |
|  3 | â™¬ â™­ |
+----+---------+
3 rows in set (0.00 sec)

Doh, what are those trademarked a's?

This is often where app writers give up, switch back to latin1, and
think they're fine because at least data is coming out the way it went
in. But now you have utf-8 in a latin1 table. If you alter this:

mysql> alter table t1 convert to character set 'utf8';
Query OK, 3 rows affected (0.28 sec)
Records: 3  Duplicates: 0  Warnings: 0

mysql> select * from t1;
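
The double-encoding failure mode described above can be reproduced in pure
Python (using Python's cp1252 codec, since MySQL's "latin1" is effectively
cp1252). This is a mechanism sketch, not a migration script; the final comment
about the SQL-side repair describes the commonly used BINARY round-trip, not
a command from this thread.

```python
# The client really sends UTF-8 bytes, but the connection is declared
# latin1, so MySQL stores the raw bytes, treating each byte as one
# latin1/cp1252 character.
original = '♬'                                    # U+266C, 3 bytes in UTF-8
stored = original.encode('utf-8').decode('cp1252')
assert stored == 'â™¬'                            # one char became three

# A utf-8 client then re-encodes those three characters on output,
# producing the mojibake ("trademarked a's") seen in the SELECT above.

# Repair: reverse the bogus decode, then decode the bytes as the UTF-8
# they always were. (On the SQL side this corresponds to converting the
# column to BINARY and then to utf8, rather than a direct CONVERT, which
# would re-encode the mojibake permanently.)
repaired = stored.encode('cp1252').decode('utf-8')
assert repaired == original
```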

[openstack-dev] [Neutron] advanced servicevm framework: meeting time slot proposal 5:00UTC (Tue) and minutes (was Re: [Neutron] advanced servicevm framework IRC meeting March 18(Tuesday) 23:00 UTC)

2014-03-19 Thread Isaku Yamahata

* Time slot
Weekly Tuesday 5:00UTC-
Next meeting: March 24, 5:00UTC-

Since there were many requests for new time slots, the time slot proposed
at the meeting is 5:00 UTC.
The related timezones are
JST(UTC+9), IST(UTC+5.30), CET(UTC+1), EST(UTC-5), PDT(UTC-7), PST(UTC-8).
Hope it works for most; sorry if it doesn't.


* meeting minutes
The IRC meeting was held on March 18.
The minutes can be found at [1]. I also added some useful links.
* time zone: see above
* the meeting will be held weekly at first. can be bi-weekly later.
* the current status summary


[1] https://wiki.openstack.org/wiki/Meetings/ServiceVM

Thanks,

On Tue, Mar 18, 2014 at 03:04:53PM +0900,
Isaku Yamahata  wrote:

> Hello. This is a reminder for servicevm framework IRC meeting.
> date: March 18 (Tuesday) 23:00 UTC
> channel: #openstack-meeting
> 
> the followings are proposed as agenda.
> Meeting wiki: https://wiki.openstack.org/wiki/Meetings/ServiceVM
> 
> * the current status summary
> * decide the time/day/frequency
> 
> Thanks,
> -- 
> Isaku Yamahata 

-- 
Isaku Yamahata 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Disaster Recovery for OpenStack - call for stakeholder - discussion reminder

2014-03-19 Thread Ronen Kat
For those who are interested, we will discuss the disaster recovery 
use cases and how to proceed toward the Juno summit on March 19 at 17:00 
UTC (invitation below).



Call-in: 
https://www.teleconference.att.com/servlet/glbAccess?process=1&accessCode=6406941&accessNumber=1809417783#C2
 

Passcode: 6406941

Etherpad: 
https://etherpad.openstack.org/p/juno-disaster-recovery-call-for-stakeholders
Wiki: https://wiki.openstack.org/wiki/DisasterRecovery

Regards,
__
Ronen I. Kat, PhD
Storage Research
IBM Research - Haifa
Phone: +972.3.7689493
Email: ronen...@il.ibm.com




From:   "Luohao (brian)" 
To: "OpenStack Development Mailing List (not for usage questions)" 
, 
Date:   14/03/2014 03:59 AM
Subject:Re: [openstack-dev] Disaster Recovery for OpenStack - call 
for stakeholder



1.  fsfreeze with vss has been added to qemu upstream, see 
http://lists.gnu.org/archive/html/qemu-devel/2013-02/msg01963.html for 
usage.
2.  libvirt allows a client to send any commands to qemu-ga, see 
http://wiki.libvirt.org/page/Qemu_guest_agent
3.  linux fsfreeze is not equivalent to windows fsfreeze+vss. Linux 
fsfreeze offers fs consistency only, while windows vss allows agents like 
sqlserver to register their plugins to flush their cache to disk when a 
snapshot occurs.
4.  my understanding is xenserver does not support fsfreeze+vss now, 
because xenserver normally does not use block backend in qemu.

-Original Message-
From: Bruce Montague [mailto:bruce_monta...@symantec.com] 
Sent: Thursday, March 13, 2014 10:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Disaster Recovery for OpenStack - call for 
stakeholder

Hi, about OpenStack and VSS. Does anyone have experience with the qemu 
project's implementation of VSS support? They appear to have a 
within-guest agent, qemu-ga, that perhaps can work as a VSS requestor. 
Does it also work with KVM? Does qemu-ga work with libvirt (can VSS 
quiesce be triggered via libvirt)? I think there was an effort for qemu-ga 
to use fsfreeze as an equivalent to VSS on Linux systems, was that done? 
If so, could an OpenStack API provide a generic quiesce request that would 
then get passed to libvirt? (Also, the XenServer VSS support seems 
different than qemu/KVM's; is this true? Can it also be accessed through 
libvirt?)

Thanks,

-bruce

-Original Message-
From: Alessandro Pilotti [mailto:apilo...@cloudbasesolutions.com]
Sent: Thursday, March 13, 2014 6:49 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Disaster Recovery for OpenStack - call for 
stakeholder

Those use cases are very important for enterprise requirements, 
but there's an important piece missing in the current OpenStack APIs: 
support for application consistent backups via Volume Shadow Copy (or 
other solutions) at the instance level, including differential / 
incremental backups.

VSS can be seamlessly added to the Nova Hyper-V driver (it's included with 
the free Hyper-V Server), with e.g. vSphere and XenServer supporting it as 
well (quiescing), and with the option for third-party vendors to add drivers 
for their solutions.

A generic Nova backup / restore API supporting those features is quite 
straightforward to design. The main question at this stage is if the 
OpenStack community wants to support those use cases or not. Cinder 
backup/restore support [1] and volume replication [2] are surely a great 
starting point in this direction.

Alessandro

[1] https://review.openstack.org/#/c/69351/
[2] https://review.openstack.org/#/c/64026/


> On 12/mar/2014, at 20:45, "Bruce Montague"  
wrote:
>
>
> Hi, regarding the call to create a list of disaster recovery (DR) use 
cases ( 
http://lists.openstack.org/pipermail/openstack-dev/2014-March/028859.html 
), the following list sketches some speculative OpenStack DR use cases. 
These use cases do not reflect any specific product behavior and span a 
wide spectrum. This list is not a proposal, it is intended primarily to 
solicit additional discussion. The first basic use case, (1), is described 
in a bit more detail than the others; many of the others are elaborations 
on this basic theme.
>
>
>
> * (1) [Single VM]
>
> A single Windows VM with 4 volumes and VSS (Microsoft's Volume 
Shadowcopy Services) installed runs a key application and integral 
database. VSS can quiesce the app, database, filesystem, and I/O on demand 
and can be invoked external to the guest.
>
>   a. The VM's volumes, including the boot volume, are replicated to a 
remote DR site (another OpenStack deployment).
>
>   b. Some form of replicated VM or VM metadata exists at the remote 
site. This VM/description includes the replicated volumes. Some systems 
might use cold migration or some form of wide-area live VM migration to 
establish this remote site VM/description.
>
>   c. When specified by an SLA or policy, VSS is invoked, putting the 
VM's volumes in an

Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer tempest testing in gate

2014-03-19 Thread Nadya Privalova
Ok, so we don't want to switch to UCA; let's consider this variant.
What options do we have to make it possible to run Ceilometer jobs with a Mongo
backend?
I see only https://review.openstack.org/#/c/81001/ or making Ceilometer
able to work with the old Mongo. But the latter variant looks inappropriate, at
least in Icehouse.
What am I missing here? Maybe there is something else we can do?


On Tue, Mar 18, 2014 at 9:28 PM, Tim Bell  wrote:

>
>
> If UCA is required, what would be the upgrade path for a currently running
> OpenStack Havana site to Icehouse with this requirement ?
>
>
>
> Would it be an online upgrade (i.e. what order to upgrade the different
> components in order to keep things running at all times) ?
>
>
>
> Tim
>
>
>
> *From:* Chmouel Boudjnah [mailto:chmo...@enovance.com]
> *Sent:* 18 March 2014 17:58
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra]
> Ceilometer tempest testing in gate
>
>
>
>
>
> On Tue, Mar 18, 2014 at 5:21 PM, Sean Dague  wrote:
>
>  So I'm still -1 at the point in making UCA our default run environment
> until it's provably functional for a period of time. Because working
> around upstream distro breaks is no fun.
>
>
>
I agree; if UCA is not very stable ATM, this is going to cause us more
pain. But what would be the plan of action? A non-voting gate for
Ceilometer as a start? (If that's possible.)
>
> Chmouel
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa][Nova] Update: Nova v2/v3 API List for Missing Tempest Tests

2014-03-19 Thread Masayuki Igawa
Hi,

Thanks many guys for updating these spreadsheets!

I have updated the Nova API List for Missing Tempest Tests.
  
https://docs.google.com/spreadsheet/ccc?key=0AmYuZ6T4IJETdEVNTWlYVUVOWURmOERSZ0VGc1BBQWc

The summary of these spreadsheets:
----------------  Nova V2 APIs  ----------------
Tested or not        # of APIs   ratio   diff from
                                         last time
--------------------------------------------------
Tested                     152   60.1%        +10
Not Tested[1]               42   16.6%         -2
Not Need to Test[2]         59   23.3%         -2
--------------------------------------------------
Total:                     253  100.0%         +6
[1] included 5 Doings
[2] Because they are deprecated APIs such as nova-network and volume.

----------------  Nova V3 APIs  ----------------
Tested or not        # of APIs   ratio   diff from
                                         last time
--------------------------------------------------
Tested                     111   78.2%        +23
Not Tested[1]               28   19.7%        -25
Not Need to Test[2]          3    2.1%         +3
--------------------------------------------------
Total:                     142  100.0%         +1
[1] included 6 Doings
[2] Nova APIs are not implemented yet.


Additional information:
 I made this API list with these Nova patches:
  https://review.openstack.org/#/c/25882/
  https://review.openstack.org/#/c/72277/
  https://review.openstack.org/#/c/65615/
  (Actually, I needed to extract and summarize the data from
   its screen-n-api.txt.gz manually for various reasons.)

This information would be useful for creating Tempest tests.
Any comments/questions/suggestions are welcome.

And if you find any mistakes in this list, please feel free to correct/update 
it.

Best Regards,
-- Masayuki Igawa



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-19 Thread Stan Lagun
Steven,

Agree with your opinion on HOT expansion. I see that the inclusion of
imperative workflows and ALM would require a major Heat redesign and probably
would be impossible without losing compatibility with the previous HOT syntax.
It would blur Heat's mission, confuse current users, and raise a lot of
questions about what should and should not be in Heat. That's why we chose to
build a system on top of Heat rather than extending HOT.

Now I would like to clarify why we have chosen an imperative approach with a DSL.

You see the DSL as an alternative to HOT, but it is not. The DSL is an alternative
to Python-coded resources in Heat (heat/engine/resources/*.py). Imagine how
Heat would look if you let untrusted users upload Python plugins to the
Heat engine and load them on the fly. Heat resources are written in Python,
which is an imperative language. So is MuranoPL, for the same reason.

We want application authors to be able to express application deployment
and maintenance logic of any complexity. This may involve communication
with 3rd-party REST services (APIs of the applications being deployed, external
services like a DNS server API, an application licensing server API, billing
systems, some hardware component APIs, etc.) and internal OpenStack services
like Trove, Sahara, Marconi, and others, including those that are not
incubated yet and those to come in the future. You cannot have such things
in HOT, and when you require them you need to develop a custom resource in
Python. Dependence on custom plugins is not good for Murano because they
cannot be uploaded by end users, so a user cannot write an application
definition that can be imported to/run on any cloud, and would need to convince
the cloud administrator to install his Python plugin (something that is
unimaginable in real life).

Because the DSL is a way to write custom resources (in Heat's terminology),
it has to be Turing-complete and have all the characteristics of a
general-purpose language. It also has to have domain-specific features,
because we cannot expect DSL users to be as skilled as Heat developers,
able to write such resources without knowledge of the hosting engine's
architecture and internals.

HOT is declarative because all the imperative logic is hardcoded into the
Heat engine. Thus all that is left for HOT is to define the "state of the
world" - the desired outcome. That is analogous to the Object Model in
Murano (see [1]). It is the Object Model that should be compared to HOT,
not the DSL. As you can see, it is no more complex than HOT. The Object
Model is what the end user produces in Murano - and they don't even need to
write it, since it can be composed in the UI.

Now, because the DSL provides not only a way to write sandboxed, isolated
code but also a lot of declarations (classes, properties, parameters,
inheritance and contracts) that are mostly not present in Python, we don't
need Parameters or Output sections in the Object Model - all of this can be
inferred from the resource (class) declarations in the DSL. Another
consequence is that most of the things that can be written wrong in HOT can
be verified on the client side by validating the classes' contracts, without
trying to deploy the stack and then debugging the error log. Because all
resource attribute types and their constraints are known in advance (note
that a resource attribute may be a reference to another resource, with
constraints on that reference like "I want any MySQL implementation -
regular, Galera, etc."), the UI knows how to correctly compose the
environment and can point out your mistakes at design time. This is similar
to how statically typed languages like C++/Java can do a lot of validation
at compile time rather than at runtime as in Python.
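To illustrate the design-time validation idea, here is a small sketch in
plain Python. This is not MuranoPL syntax, and all class/property names are
invented; it only shows how declared property contracts let a client catch
mistakes before anything is sent to the engine.

```python
# Illustrative sketch only -- not MuranoPL. Declared contracts (a type plus
# a constraint) are validated against a user-supplied object model on the
# client side, before any deployment is attempted.

class Contract:
    """A property contract: a required type plus an optional constraint."""
    def __init__(self, type_, check=lambda v: True, message="constraint failed"):
        self.type_ = type_
        self.check = check
        self.message = message

    def validate(self, name, value):
        errors = []
        if not isinstance(value, self.type_):
            errors.append("%s: expected %s, got %s"
                          % (name, self.type_.__name__, type(value).__name__))
        elif not self.check(value):
            errors.append("%s: %s" % (name, self.message))
        return errors

# These declarations play the role of the DSL's class/property definitions.
declarations = {
    "instances": Contract(int, lambda v: v >= 1, "must be at least 1"),
    "flavor": Contract(str, lambda v: v.startswith("m1."), "unknown flavor family"),
}

def validate_model(model):
    """Validate a user-supplied object model against the declarations."""
    errors = []
    for name, contract in declarations.items():
        if name not in model:
            errors.append("%s: missing required property" % name)
        else:
            errors.extend(contract.validate(name, model[name]))
    return errors

# A mistake like this is caught at design time, with no deployment attempt:
print(validate_model({"instances": 0, "flavor": "m1.small"}))
```

The same information drives the UI: since the types and constraints are
declared up front, a form can refuse invalid input before the stack ever
reaches the engine.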

Personally, I would love to see many of these features in HOT. What is your
vision on this? Which of the points mentioned above could be contributed to
Heat? We definitely would like to integrate more with HOT and eliminate all
duplication between the projects. I think Murano and Heat are complementary
products that can effectively coexist. Murano provides access to all HOT
features and relies on Heat for most of its activities. I believe we need
to find an optimal way to integrate Heat, Murano, Mistral, Solum, Heater
and TOSCA, do some integration between ex-Thermal and the Murano Dashboard,
be united regarding Glance usage for metadata, and so on. We are okay with
throwing MuranoPL out if the issues it solves are addressed by HOT.

If you have a vision of how HOT can address the same domain MuranoPL does,
or any plans for such features in upcoming Heat releases, I would ask you
to share it.

[1] https://wiki.openstack.org/wiki/Murano/DSL/Blueprint#Object_model



On Wed, Mar 19, 2014 at 8:06 AM, Steven Dake  wrote:

> Ruslan,
>
> Some of my thoughts on the evolution of the HOT DSL to date.
>
>
> On 03/18/2014 05:32 PM, Ruslan Kamaldinov wrote:
>
>> Here is my 2 cents:
>>
>> I personally think that evolving Heat/HOT to what Murano needs for its use
>> cases is the best way to make the PaaS layer of OpenStack look and feel as a
>> compl

Re: [openstack-dev] [Nova][Cinder] Feature about Raw Device Mapping

2014-03-19 Thread Zhangleiqiang (Trump)
> From: Huang Zhiteng [mailto:winsto...@gmail.com]
> Sent: Wednesday, March 19, 2014 12:14 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Nova][Cinder] Feature about Raw Device
> Mapping
> 
> On Tue, Mar 18, 2014 at 5:33 PM, Zhangleiqiang (Trump)
>  wrote:
> >> From: Huang Zhiteng [mailto:winsto...@gmail.com]
> >> Sent: Tuesday, March 18, 2014 4:40 PM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: Re: [openstack-dev] [Nova][Cinder] Feature about Raw Device
> >> Mapping
> >>
> >> On Tue, Mar 18, 2014 at 11:01 AM, Zhangleiqiang (Trump)
> >>  wrote:
> >> >> From: Huang Zhiteng [mailto:winsto...@gmail.com]
> >> >> Sent: Tuesday, March 18, 2014 10:32 AM
> >> >> To: OpenStack Development Mailing List (not for usage questions)
> >> >> Subject: Re: [openstack-dev] [Nova][Cinder] Feature about Raw
> >> >> Device Mapping
> >> >>
> >> >> On Tue, Mar 18, 2014 at 9:40 AM, Zhangleiqiang (Trump)
> >> >>  wrote:
> >> >> > Hi, stackers:
> >> >> >
> >> >> > With RDM, the storage logical unit number (LUN) can be
> >> >> > directly
> >> >> connected to a instance from the storage area network (SAN).
> >> >> >
> >> >> > For most data center applications, including Databases,
> >> >> > CRM and
> >> >> ERP applications, RDM can be used for configurations involving
> >> >> clustering between instances, between physical hosts and instances
> >> >> or where SAN-aware applications are running inside a instance.
> >> >> If 'clustering' here refers to things like cluster file system,
> >> >> which requires LUNs to be connected to multiple instances at the same
> time.
> >> >> And since you mentioned Cinder, I suppose the LUNs (volumes) are
> >> >> managed by Cinder, then you have an extra dependency for
> >> >> multi-attach
> >> >> feature:
> >> https://blueprints.launchpad.net/cinder/+spec/multi-attach-volume.
> >> >
> >> > Yes. "Clustering" includes Oracle RAC, MSCS, etc. If they are to work
> >> > in an instance-based cloud environment, RDM and multi-attached volumes
> >> > are both needed.
> >> >
> >> > But RDM is not only used for clustering, and it has no dependency on
> >> > multi-attach-volume.
> >>
> >> Set clustering use case and performance improvement aside, what other
> >> benefits/use cases can RDM bring/be useful for?
> >
> > Thanks for your reply.
> >
> > The advantages of Raw device mapping are all introduced by its capability of
> "pass" scsi command to the device, and the most common use cases are
> clustering and performance improvement mentioned above.
> >
> As mentioned in earlier email, I doubt the performance improvement comes
> from 'virtio-scsi' interface instead of RDM.  We can actually test them to
> verify.  Here's what I would do: create one LUN(volume) on the SAN, attach
> the volume to instance using current attach code path but change the virtual
> bus to 'virtio-scsi' and then measure the IO performance using standard IO
> benchmark; next, attach the volume to instance using 'lun' device for 'disk' 
> and
> 'virtio-scsi' for bus, and do the measurement again.  We shall be able to see
> the performance difference if there is any.  Since I don't have a SAN to play
> with, could you please do the test and share the results?

The performance improvement does come from the "virtio-scsi" controller and
is not caused by using a "lun" device instead of a "disk" device.
I don't have a usable SAN at present. But according to the libvirt docs
([1]), a "lun" device behaves identically to a "disk" device except that
generic SCSI commands from the instance are accepted and passed through to
the physical device.

Sorry for the confusion: the "RDM" I mentioned in the earlier email includes
both the "lun" device and the "virtio-scsi" controller.

So the performance improvement comes from the "virtio-scsi" controller;
however, booting from a volume using a virtio-scsi interface and attaching
a volume with a new virtio-scsi interface are both currently unsupported.
I think adding these features is worthwhile. And as mentioned in the first
email, setting the "virtio-scsi" controller aside, the "lun" device is
already supported by the block-device-mapping-v2 extension.

[1] http://libvirt.org/formatdomain.html#elementsDisks
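For reference, a guest disk configured as a "lun" device on a virtio-scsi
controller would look roughly like the fragment below, following the libvirt
domain XML format in [1]. The source path, target name and address numbers
are placeholders.

```xml
<!-- Placeholder values throughout: /dev/sdb, target dev, addresses.
     device='lun' enables SCSI command passthrough to the physical device;
     the virtio-scsi controller is what the performance gain is attributed to. -->
<controller type='scsi' index='0' model='virtio-scsi'/>
<disk type='block' device='lun'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/sdb'/>
  <target dev='sda' bus='scsi'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
```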

> > And besides these two scenarios, there is another use case: running
> SAN-aware application inside instances, such as:
> > 1. SAN management app
> Yes, that is possible if RDM is enabled.  But I wonder what is the real use 
> case
> behind this.  Even though SAN mgmt app inside instance is able to manage the
> LUN directly, but it is just a LUN instead of a real SAN, what the instance 
> can do
> is *limited* to the specific LUN, which doesn't seem very useful IMO.  Or are
> you thinking about creating a big enough LUN for user so they can treat it 
> like a
> 'virtual' SAN and do all kinds of management stuff to it and even maybe 
> resell it
> for PaaS use cases?
> 
> > 2. Apps which can offload the device related works, such as s

Re: [openstack-dev] [oslo.messaging] [zeromq] nova-rpc-zmq-receiver bottleneck

2014-03-19 Thread yatin kumbhare
Hi Mike,

Thanks for your feedback.

I'm not aware of the details of ceilometer messaging.

Would you please point out the "messaging behaviors that are desirable for
ceilometer currently and possibly other things in the future"?

This will help me in evaluating my idea further.

Regards,
Yatin


On Sat, Mar 15, 2014 at 12:15 AM, Mike Wilson  wrote:

> Hi Yatin,
>
> I'm glad you are thinking about the drawbacks the zmq-receiver causes, and
> I want to give you a reason to keep the zmq-receiver and get your feedback.
> The way I think about the zmq-receiver is as a tiny little mini-broker that
> exists separate from any other OpenStack service. As such, its
> implementation can be augmented to support store-and-forward and possibly
> other messaging behaviors that are desirable for ceilometer currently and
> possibly other things in the future. Integrating the receiver into each
> service is going to remove its independence and black box nature and give
> it all the bugs and quirks of any project it gets lumped in with. I would
> prefer that we continue to improve zmq-receiver to overcome the tough
> parts. Either that or find a good replacement and use that. An example of a
> possible replacement might be the qpid dispatch router[1], although this
> guy explicitly wants to avoid any store and forward behaviors. Of course,
> dispatch router is going to be tied to qpid, I just wanted to give an
> example of something with similar functionality.
>
> -Mike
>
>
> On Thu, Mar 13, 2014 at 11:36 AM, yatin kumbhare 
> wrote:
>
>> Hello Folks,
>>
>> When zeromq is used as the rpc backend, the "nova-rpc-zmq-receiver"
>> service needs to be run on every node.
>>
>> zmq-receiver receives messages on tcp://*:9501 with socket type PULL and,
>> based on the topic name (extracted from the received data), forwards the
>> data to the respective local services over the IPC protocol.
>>
>> Meanwhile, OpenStack services listen/bind on an "IPC" socket with socket
>> type PULL.
>>
>> I see zmq-receiver as a bottleneck and an overhead in the current design:
>> 1. If this service crashes, communication is lost.
>> 2. There is the overhead of running this extra service on every node,
>> which just forwards messages as-is.
>>
>>
>> I'm looking forward to removing the zmq-receiver service and enabling
>> direct communication (nova-* and cinder-*) across and within nodes.
>>
>> I believe this will make the zmq experience more seamless.
>>
>> The communication will change from IPC to a zmq TCP socket for each
>> service.
>>
>> For example, an rpc.cast from scheduler to compute would be direct rpc
>> message passing, with no routing through zmq-receiver.
>>
>> With TCP, all services will bind to a unique port (the port range could
>> be 9501-9510).
>>
>> From nova.conf: rpc_zmq_matchmaker =
>> nova.openstack.common.rpc.matchmaker_ring.MatchMakerRing
>>
>> I have put arbitrary port numbers after the service names.
>>
>> file:///etc/oslo/matchmaker_ring.json
>>
>> {
>>  "cert:9507": [
>>  "controller"
>>  ],
>>  "cinder-scheduler:9508": [
>>  "controller"
>>  ],
>>  "cinder-volume:9509": [
>>  "controller"
>>  ],
>>  "compute:9501": [
>>  "controller","computenodex"
>>  ],
>>  "conductor:9502": [
>>  "controller"
>>  ],
>>  "consoleauth:9503": [
>>  "controller"
>>  ],
>>  "network:9504": [
>>  "controller","computenodex"
>>  ],
>>  "scheduler:9506": [
>>  "controller"
>>  ],
>>  "zmq_replies:9510": [
>>  "controller","computenodex"
>>  ]
>>  }
>>
>> Here, the JSON file would keep track of the port for each service.
>>
>> Looking forward to community feedback on this idea.
>>
>>
>> Regards,
>> Yatin
>>
>>
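To make the quoted proposal concrete, here is a small illustrative sketch of
how a topic could be resolved to direct TCP endpoints from a ring file in
the format shown above. This is not the actual oslo matchmaker API; the
function name and the minimal ring are invented for illustration.

```python
# Illustrative only (not the real oslo matchmaker API): resolve a topic like
# "compute" to direct tcp:// endpoints using a ring whose keys embed the
# port as "service:port", as in the matchmaker_ring.json proposed above.
import json

def endpoints_for(topic, ring):
    """Return tcp:// endpoints for every host serving the given topic."""
    for key, hosts in ring.items():
        service, _, port = key.partition(":")
        if service == topic:
            return ["tcp://%s:%s" % (host, port) for host in hosts]
    return []

ring = json.loads("""
{
  "compute:9501": ["controller", "computenodex"],
  "conductor:9502": ["controller"]
}
""")

print(endpoints_for("compute", ring))
# A sender would then connect a PUSH socket to each endpoint directly,
# with no zmq-receiver hop in between.
```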


Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-19 Thread Renat Akhmerov

On 19 Mar 2014, at 16:00, Stan Lagun  wrote:

> We want application authors to be able to express application deployment and 
> maintenance logic of any complexity. This may involve communication with 3rd 
> party REST services (APIs of applications being deployed, external services 
> like DNS server API, application licensing server API, billing systems, some 
> hardware component APIs etc) and internal OpenStack services like Trove, 
> Sahara, Marconi and others including those that are not incubated yet and 
> those to come in the future. You cannot have such things in HOT and when you 
> required to you need to develop custom resource in Python. Independence  on 
> custom plugins is not good for Murano because they cannot be uploaded by end 
> users and thus he cannot write application definition that can be imported 
> to/run on any cloud and need to convince cloud administrator to install his 
> Python plugin (something that is unimaginable in real life).

+1. Makes perfect sense to me.

> Because DSL is a way to write custom resources (in Heats terminology) it has 
> to be Turing-complete and have all the characteristics of general-purpose 
> language. It also has to have domain-specific features because we cannot 
> expect that DSL users would be as skilled as Heat developers and could write 
> such resources without knowledge on hosting engine architecture and internals.

+1

Renat Akhmerov
@ Mirantis Inc.





[openstack-dev] [nova] add configuration item to set virtual machine swapfile location

2014-03-19 Thread Yuzhou (C)
Hi everyone,

Currently, disk.swap (the swap file of an instance) is created on the
instances_path (default: /var/lib/nova/instances/). Maybe we should
add a configuration item in nova.conf to set the virtual machine swap file
location. With such a feature enabled, swap files could be placed on
separate, dedicated storage, e.g. an SSD.
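For example, the option might look like the fragment below. The swap_path
name is purely hypothetical - no such nova.conf option exists today.

```ini
# Hypothetical nova.conf fragment (swap_path is an invented option name):
# keep instance disks on the default path, but place swap files on an SSD.
[DEFAULT]
instances_path = /var/lib/nova/instances
swap_path = /mnt/ssd/nova/swap
```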

Thanks,

Zhou Yu



Re: [openstack-dev] [nova] add configuration item to set virtual machine swapfile location

2014-03-19 Thread Chen CH Ji
If it's a disk local to the host, +1, since we might get better performance.
BTW, the base file in /var/lib/nova/instances/_base/swap_xxx also needs
to be considered.



Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC



From:   "Yuzhou (C)" 
To: "openstack-dev@lists.openstack.org"
,
Date:   03/19/2014 05:39 PM
Subject:[openstack-dev] [nova] add configuration item to set virtual
machine swapfile location



Hi everyone,

 Currently, disk.swap (the swap file of an instance) is created on
the instances_path (default: /var/lib/nova/instances/). Maybe we
should add a configuration item in nova.conf to set the virtual machine swap
file location. With such a feature enabled, swap files could be placed on
separate, dedicated storage, e.g. an SSD.

Thanks,

Zhou Yu



Re: [openstack-dev] Icehouse dependency freeze

2014-03-19 Thread Thierry Carrez
Thomas Goirand wrote:
> We're now 1 month away from the scheduled release date. It is my strong
> opinion (as the main Debian OpenStack package maintainer) that for the
> last Havana release, the dependency freeze happened far too late,
> creating issues that were hard to deal with on the packaging side. I
> believe it would also be hard to deal with for Ubuntu (with the next LTS
> releasing soon).
> 
> I'd be in favor of freezing the dependencies for Icehouse *right now*
> (including version updates which aren't packaged yet in Debian).
> Otherwise, it may be very hard for me to get things past the FTP masters'
> NEW queue in time for new packages.

This was discussed at the Release meeting yesterday and we decided to
have a Dependency freeze next week (EOD Tuesday, March 25).

In the meantime we'll work out the details of leveraging specific branches
of openstack/requirements to ensure nothing inadvertently slips in.

That said, given that feature freeze is in progress I don't expect new
dependencies to be added at this point anyway.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-19 Thread Flavio Percoco

Kurt already gave a quite detailed explanation of why Marconi exists, what
you can do with it and where it stands. I'll reply in-line:

On 19/03/14 10:17 +1300, Robert Collins wrote:

So this came up briefly at the tripleo sprint, and since I can't seem
to find a /why/ document
(https://wiki.openstack.org/wiki/Marconi/Incubation#Raised_Questions_.2B_Answers
and https://wiki.openstack.org/wiki/Marconi#Design don't supply this)
we decided at the TC meeting that I should raise it here.

Firstly, let me check my facts :) - Marconi is backed by a modular
'storage' layer which places some conceptual design constraints on the
storage backends that are possible (e.g. I rather expect a 0mq
implementation to be very tricky, at best (vs the RPC style front end
https://wiki.openstack.org/wiki/Marconi/specs/zmq/api/v1 )), and has a
hybrid control/data plane API implementation where one can call into
it to make queues etc, and to consume them.


Those docs refer to a transport driver, not a storage driver. In
Marconi, it's possible to have different protocols on top of the API.
The current one is based on HTTP, but there will likely be others in the
future.

We've changed some things in the API to support amqp-based storage drivers.
We had a session about this at the HKG summit, and since then we've always
kept amqp drivers in mind when making changes to the API. I'm not saying
it's perfect, though.



The API for the queues is very odd from a queueing perspective -
https://wiki.openstack.org/wiki/Marconi/specs/api/v1#Get_a_Specific_Message
- you don't subscribe to the queue, you enumerate and ask for a single
message.


The current way to subscribe to queues is by polling. Subscribing is not
just tied to the API but also to the transport itself. As mentioned above,
we currently only support HTTP.

Also, enumerating is not necessary. For instance, claiming with limit=1
will consume one message.
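As a rough sketch of that claim pattern over the v1 HTTP API (per the wiki
spec): a worker claims up to `limit` messages with a POST to the claims
resource, carrying a TTL and grace period. No request is actually sent
below, and the Client-ID value is a placeholder; the snippet only builds
the pieces of the request.

```python
# Sketch of claim-based consumption against Marconi's v1 HTTP API (no
# request is sent; Client-ID is a placeholder). Claiming with limit=1
# consumes a single message without enumerating the queue.
import json

def build_claim_request(queue, limit=1, ttl=300, grace=60):
    """Build the pieces of a "claim messages" HTTP request."""
    return {
        "method": "POST",
        "path": "/v1/queues/%s/claims?limit=%d" % (queue, limit),
        "headers": {"Content-Type": "application/json",
                    "Client-ID": "example-client"},
        "body": json.dumps({"ttl": ttl, "grace": grace}),
    }

req = build_claim_request("transcode-jobs")
print(req["path"])  # /v1/queues/transcode-jobs/claims?limit=1
```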

(Side note: at the incubation meeting, it was recommended not to put effort
into writing new transports but to stabilize the API and work on a storage
backend with a license != AGPL)


And the implementations in tree are mongodb (which is at best
contentious, due to the AGPL and many folks' reasonable concerns about
it), and MySQL.


Just to avoid misleading folks who are not familiar with Marconi, I
just want to point out that the driver is based on sqlalchemy.


My desires around Marconi are:
- to make sure the queue we have is suitable for use by OpenStack
itself: we have a very strong culture around consolidating technology
choices, and it would be extremely odd to have Marconi be something
that isn't suitable to replace rabbitmq etc as the queue abstraction
in the fullness of time.


Although this could be done in the future, I've heard from many folks
in the community that replacing OpenStack's rabbitmq/qpid/etc. layer
with Marconi is a no-go. I don't recall the exact reasons now, but I
think I can grab them from the logs (unless those folks are reading this
email and want to chime in). FWIW, I'd be more than happy to *experiment*
with this in the future. Marconi is definitely not ready as-is.


- to make sure that deployers with scale / performance needs can have
that met by Marconi
- to make my life easy as a deployer ;)


This has been part of our daily reviews, work and designs. I'm sure
there's room for improvement, though.


So my questions are:
- why isn't the API a queue friendly API (e.g. like


Define *queue friendly*


https://github.com/twitter/kestrel - kestrel which uses the memcache
API, puts put into the queue, gets get from the queue). The current


I don't know Kestrel, but how is this different from what Marconi does?


API looks like pretty much the worst case scenario there - CRUD rather
than submit/retrieve with blocking requests (e.g. longpoll vs poll).


I agree there are some limitations from using HTTP for this job, hence
the support for different transports. Just saying *the API is CRUD* is
misleading, and it doesn't highlight the value of having an HTTP-based
transport. It's wrong to think about Marconi as *just another queuing
system* instead of considering the use cases it's trying to solve.

There's rough support for websockets in an external project, but:

1. It's not official... yet.
2. It was written as a proof of concept for the transport layer.
3. It likely needs to be updated.

https://github.com/FlaPer87/marconi-websocket


- wouldn't it be better to expose other existing implementations of
HTTP message queues like nova does with hypervisors, rather than
creating our own one? E.g. HTTPSQS, RestMQ, Kestrel, queues.io.


We've discussed having support for API extensions in order to allow some
deployments to expose features of a queuing technology that we don't
necessarily consider part of the core API.


  - or even do what Trove does and expose the actual implementation directly?
- whats the plan to fix the API?


Fix the API?

For starters, moving away fr

Re: [openstack-dev] Updating libvirt in gate jobs

2014-03-19 Thread Sean Dague
On 03/18/2014 08:15 PM, Joe Gordon wrote:
> 
> 
> 
> On Tue, Mar 18, 2014 at 8:12 AM, Sean Dague  > wrote:
> 
> On 03/18/2014 10:11 AM, Daniel P. Berrange wrote:
> > On Tue, Mar 18, 2014 at 07:50:15AM -0400, Davanum Srinivas wrote:
> >> Hi Team,
> >>
> >> We have 2 choices
> >>
> >> 1) Upgrade to libvirt 0.9.8+ (See [1] for details)
> >> 2) Enable UCA and upgrade to libvirt 1.2.2+ (see [2] for details)
> >>
> >> For #1, we received a patched deb from @SergeHallyn/@JamesPage
> and ran
> >> tests on it in review https://review.openstack.org/#/c/79816/
> >> For #2, @SergeHallyn/@JamesPage have updated UCA
> >> ("precise-proposed/icehouse") repo and we ran tests on it in review
> >> https://review.openstack.org/#/c/74889/
> >>
> >> For IceHouse, my recommendation is to request Ubuntu folks to
> push the
> >> patched 0.9.8+ version we validated to public repos, then we can can
> >> install/run gate jobs with that version. This is probably the
> smallest
> >> risk of the 2 choices.
> >
> > If we've re-run the tests in that review enough times to be confident
> > we've had a chance of exercising the race conditions, then using the
> > patched 0.9.8 seems like a no-brainer. We know the current version in
> > ubuntu repos is broken for us, so the sooner we address that the
> better.
> 
> 
> 
> ++
>  
> 
> >
> >> As soon as Juno begins, we can switch 1.2.2+ on UCA and request
> Ubuntu
> >> folks to push the verified version where we can use it.
> 
>  
> ++
>  
> 
> >
> > This basically re-raises the question of /what/ we should be
> testing in
> > the gate, which was discussed on this list a few weeks ago, and
> I'm not
> > clear that there was a definite decision in that thread
> >
> >  
> 
> http://lists.openstack.org/pipermail/openstack-dev/2014-February/027734.html
> >
> > Testing the lowest vs highest is targetting two different scenarios
> >
> >   - Testing the lowest version demonstrates that OpenStack has not
> > broken its own code by introducing use of a new feature.
> >
> >   - Testing the highest version demonstrates that OpenStack has not
> > been broken by 3rd party code introducing a regression.
> >
> > I think it is in scope for openstack to be targetting both of these
> > scenarios. For anything in-between though, it is upto the downstream
> > vendors to test their precise combination of versions. Currently
> though
> > our testing policy for non-python bits is "whatever version ubuntu
> ships",
> > which may be neither the lowest or highest versions, just some
> arbitrary
> > version they wish to support. So this discussion is currently more
> of a
> > 'what ubuntu version should we test on' kind of decision
> 
> I think testing 2 versions of libvirt in the gate is adding a matrix
> dimension that we currently can't really support. We're just going to
> have to pick one per release and be fine with it (at least for
> icehouse).
> 
> If people want other versions tested, please come in with 3rd party ci
> on it.
> 
> We can revisit the big test matrix at summit about the combinations
> we're going to actually validate, because with the various limitations
> we've got (concurrency limits, quota limits, upstream package limits,
> kinds of tests we want to run) we're going to have to make a bunch of
> compromises. Testing something new is going to require throwing existing
> stuff out of the test path.
> 
> 
> I think this is definitely worth revisiting at the summit, but I think
> we should move Juno to Libvirt 1.2.2+ as soon as possible instead of
> gating on a 2 year old release, and at the summit we can sort out what
> the full test matrix can be.
> 
> As a side note tripleo uses libvirt from Saucy (1.1.1) so moving to
> latest libvirt would help support them.

Honestly, given that we've been trying to get a working UCA for 6
months, I'm really not thrilled by the idea of making UCA part of our
gate. Because it's clearly not at the same level of testing as the base
distro. I think this will be even more so with UCA post 14.04 release,
as that's designed as a transitional stage to get you to 14.04.

As has been demonstrated, Canonical's testing systems are clearly not
finding the same bugs we are finding in their underlying packages.

I think the libvirt 1.2+ plan should be moving Juno to 14.04 as soon as
we can get that stable. That will bring in a whole fresh OS, kernel,
etc. And we recenter our testing on that LTS going forward.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net




Re: [openstack-dev] Pecan Evaluation for Marconi

2014-03-19 Thread Thierry Carrez
Kurt Griffiths wrote:
> Kudos to Balaji for working so hard on this. I really appreciate his candid 
> feedback on both frameworks.

Indeed, that analysis is very much appreciated.

From the Technical Committee perspective, we put a high weight on a
factor that was not included in the report results: consistency and
convergence between projects we commonly release in an integrated manner
every 6 months. There was historically a lot of deviation, but as we add
more projects that deviation is becoming more costly. We want developers
to be able to jump from one project to another easily, and we want
convergence from an operators perspective.

Individual projects are obviously allowed to pick the best tool in their
toolbox. But the TC may also decide to leave projects out of the
"integrated release" if we feel they would add too much divergence.

> After reviewing the report below, I would recommend that Marconi
> continue using Falcon for the v1.1 API and then re-evaluate Pecan for
> v2.0 or possibly look at using swob.

The report (and your email below) makes a compelling argument that
Falcon is a better match for Marconi's needs (or for a data-plane API)
than Pecan currently is. My question would be, can Pecan be improved to
also cover Marconi's use case ? Could we have the best of both worlds
(an appropriate tool *and* convergence) ?

If the answer is "yes, probably", then it might be an option to delay
inclusion in the integrated release so that we don't add (even
temporary) divergence. If the answer is "definitely no", then we'll have
to choose between convergence and functionality.

-- 
Thierry Carrez (ttx)



[openstack-dev] [openstack][Mistral] Adding new core reviewers

2014-03-19 Thread Renat Akhmerov
Team,

So far I've been the only core member of the team, and I've started feeling
lonely :) Since the project team and the project itself have now grown
(thanks to StackStorm and Intel), I think it's time to think about extending
the core team.

I would propose:

Nikolay Makhotkin (nmakhotkin at launchpad). He's been working on the
project almost since the very beginning and has made significant
contributions (design, reviews, code).

Dmitri Zimine (i-dz at launchpad). Dmitri joined the project about 2 months
ago. Since then he's made a series of important high-quality commits and a
lot of valuable reviews, and, IMO most importantly, he has a solid vision
of the project in general (requirements, use cases, comparison to other
technologies) and takes a pro-active viewpoint in all our discussions.

Thoughts?

Renat Akhmerov
@ Mirantis Inc.





[openstack-dev] [Nova][SDK] What Nova APIs are used by SDKs

2014-03-19 Thread Kenichi Oomichi


Hi,

We need to know which Nova APIs are used by each SDK.
Could SDK developers/users write them into the spreadsheet below?

  https://docs.google.com/spreadsheet/ccc?key=0AvimqlvxcSGGdGxYSVJQb2tic25wUmFkcDJFV25OSUE#gid=2

I have written the SDK names on the spreadsheet, taken from
https://wiki.openstack.org/wiki/SDKs
Please mark the API items that are used with a "1".


In Nova v3 API development, there is one big problem: backward
incompatibility. The design of the v3 API has been changed from the v2 API
to make it consistent, but the v2 API is already used by many SDKs, so we
cannot remove the v2 API implementation soon.

To avoid this problem, we are working on the v2.1 API [1]. When receiving a
v2-format request, it translates the request into v3 format and passes it
to the v3 API implementation. After the v3 method runs, it translates the
response back into v2 format, and Nova returns a v2-format response to the
client. The v2 API implementation itself is then no longer used, and we
will be able to remove it without backward-incompatibility issues.
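As a toy illustration of that translation idea - the attribute renames below
are invented for the example and the real mapping is far larger - a v2.1
shim would look roughly like this:

```python
# Toy sketch of the v2.1 approach: accept a v2-format request, translate it
# into the v3 shape, call the v3 implementation, and translate the response
# back into v2 format. The renames here are illustrative only.
V2_TO_V3 = {"imageRef": "image_ref", "flavorRef": "flavor_ref"}
V3_TO_V2 = {v: k for k, v in V2_TO_V3.items()}

def translate(body, mapping):
    """Rename keys according to the mapping, leaving the rest untouched."""
    return {mapping.get(k, k): v for k, v in body.items()}

def v3_create_server(body):
    # Stand-in for the real v3 implementation method.
    return {"id": "42", "image_ref": body["image_ref"]}

def v2_compat_create_server(v2_body):
    """v2.1 shim: v2 request in, v2-format response out, v3 code in between."""
    v3_resp = v3_create_server(translate(v2_body, V2_TO_V3))
    return translate(v3_resp, V3_TO_V2)

print(v2_compat_create_server({"imageRef": "cirros", "flavorRef": "m1.tiny"}))
```

Clients keep speaking v2 while only the v3 implementation has to be
maintained, which is the point of the blueprint.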

In addition, we are working on improving the v2 API tests[2].
Currently, Tempest does not check Nova API responses in many cases.
For example, Tempest does not check which API attributes ("flavor", "image",
etc.) should be included in the response body of the "create a server" API.
So we need to improve Tempest coverage from this viewpoint to block
backward-incompatible changes.

To implement these developments efficiently, I'd like to know which Nova
APIs are used by each SDK. We will implement/test the APIs that are used
by many SDKs with high priority.


Thanks
Ken'ichi Ohmichi

---
[1]: https://blueprints.launchpad.net/nova/+spec/v2-on-v3-api
[2]: https://blueprints.launchpad.net/tempest/+spec/nova-api-attribute-test




Re: [openstack-dev] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-19 Thread Thierry Carrez
Flavio Percoco wrote:
> On 19/03/14 10:17 +1300, Robert Collins wrote:
>> My desires around Marconi are:
>> - to make sure the queue we have is suitable for use by OpenStack
>> itself: we have a very strong culture around consolidating technology
>> choices, and it would be extremely odd to have Marconi be something
>> that isn't suitable to replace rabbitmq etc as the queue abstraction
>> in the fullness of time.
> 
> Although this could be done in the future, I've heard from many folks
> in the community that replacing OpenStack's rabbitmq / qpid / etc layer
> with Marconi is a no-go. I don't recall the exact reasons now but I
> think I can grab them from logs or something (Unless those folks are
> reading this email and want to chime in). FWIW, I'd be more than happy
> to *experiment* with this in the future. Marconi is definitely not ready
> as-is.

That's the root of this thread. Marconi is not really designed to cover
Robert's use case, which would be to be consumed internally by OpenStack
as a message queue.

I classify Marconi as an "application building block" (IaaS+), a
convenient, SQS-like way for cloud application builders to pass data
around without having to spin up their own message queue in a VM. I
think that's a relevant use case, as long as performance is not an order
of magnitude worse than the "spin up your own in a VM" alternative.
Personally I don't consider "serving the internal needs of OpenStack" as
a feature blocker. It would be nice if it could, but the IaaS+ use case
is IMHO compelling enough.

-- 
Thierry Carrez (ttx)





Re: [openstack-dev] [Neutron] Docs for new plugins

2014-03-19 Thread Anne Gentle
On Wed, Mar 19, 2014 at 12:45 AM, Edgar Magana  wrote:

> Including Anne in this thread.
>
> Anne,
>
> Can you provide your input here?
>
> Thanks,
>
> Edgar
>
> From: Mohammad Banikazemi 
> Reply-To: OpenStack List 
> Date: Monday, March 17, 2014 8:02 AM
> To: OpenStack List 
>
> Subject: Re: [openstack-dev] [Neutron] Docs for new plugins
>
> I think the docs get updated for each release, so probably the newly added
> stuff (after I3) will be picked up by the RC1 release date. (cc'ing Tom
> Fifield for a definitive answer.)
>
> By the way I do see the odl config table in the openstack-manuals source
> tree:
> https://github.com/openstack/openstack-manuals/blob/master/doc/common/tables/neutron-ml2_odl.xml
> and that is being referenced here:
>
> https://github.com/openstack/openstack-manuals/blob/master/doc/config-reference/networking/section_networking-plugins-ml2.xml
>
> Best,
>
> Mohammad
>
>
> Kyle Mestery ---03/17/2014 09:40:51 AM---Edgar: I don't see the configuration
> options for the OpenDaylight ML2
>

Hi Kyle, there is still a manual step by a docs person to run the scripts
and create a patch.

Does Gauvain's patch contain the OpenDaylight config options?
 https://review.openstack.org/#/c/81013/

I also want to note that just because there is reference information
doesn't mean the docs are complete -- are there concepts and tasks still to
be written so that users know how to use this ML2 plug-in and that it's
available? Add those to the Cloud Administration Guide please.

Thanks,
Anne

>
>
> From: Kyle Mestery 
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>,
> Date: 03/17/2014 09:40 AM
> Subject: Re: [openstack-dev] [Neutron] Docs for new plugins
> --
>
>
>
> Edgar:
>
> I don't see the configuration options for the OpenDaylight ML2
> MechanismDriver
> added here yet, even though the code was checked in well over a week ago.
> How long does it take to autogenerate this page from the code?
>
> Thanks!
> Kyle
>
>
>
> On Wed, Mar 12, 2014 at 5:10 PM, Edgar Magana 
> <*emag...@plumgrid.com*>
> wrote:
>
>You should be able to add your plugin here:
>
>
> *http://docs.openstack.org/havana/config-reference/content/networking-options-plugins.html*
>
>Thanks,
>
>Edgar
>
>*From: *Mohammad Banikazemi <*m...@us.ibm.com* >
> * Date: *Monday, March 10, 2014 2:40 PM
> * To: *OpenStack List 
> <*openstack-dev@lists.openstack.org*
>>
> * Cc: *Edgar Magana <*emag...@plumgrid.com* >
> * Subject: *Re: [openstack-dev] [Neutron] Docs for new plugins
>
>Would like to know what to do for adding documentation for a new
>plugin. Can someone point me to the right place/process please.
>
>Thanks,
>
>Mohammad
>


[openstack-dev] [Nova] First time contributing to some project

2014-03-19 Thread Dharmit Shah
Hello Stackers,

This is the first time I am mailing a dev mailing list, and the first
time I am trying to contribute to an open source project.

I am working on a low-hanging-fruit bug
(https://bugs.launchpad.net/nova/+bug/1261909). It's about removing
HBA-specific code from nova, as cinder manages the volume part.

The function in question is get_fc_hbas() in the
nova/virt/libvirt/utils.py file. This function is referred to by the
get_fc_hbas_info() function in the same file and by the connect_volume()
function in nova/virt/libvirt/volume.py. The issue is that the latter
is used in a lot of other files across the nova code.

I think the same connect_volume() function is defined in the
cinder/brick/initiator/connector.py file and has code similar to that
in nova/virt/libvirt/volume.py. Should I just remove all the
get_fc_hbas(), get_fc_hbas_info() and connect_volume() definitions and
references from the nova code?

I am really confused about how I should proceed further on this. Can
someone please help me get started with this?

Thanks!
-- 
Dharmit Shah



Re: [openstack-dev] [Nova][Cinder] Feature about Raw Device Mapping

2014-03-19 Thread zhangleiqiang

After second thought, it would be more meaningful to just add virtio-SCSI
bus type support to block-device-mapping.

RDM can then be used or not, depending on the bus type and device type of the
bdm specified by the user. And the user can also use the virtio-SCSI bus just
for performance, rather than for pass-through.

Any suggestions? 
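For context, the libvirt guest XML that this combination corresponds to looks roughly like the following (a hand-written sketch based on the libvirt disk format docs, not output generated by Nova; the source path is a placeholder):

```xml
<!-- Sketch: a SCSI LUN passed through to the guest on a virtio-scsi
     controller. device='lun' allows generic SCSI commands through to
     the physical device, while bus='scsi' plus the virtio-scsi
     controller provides the faster paravirtual interface. -->
<controller type='scsi' model='virtio-scsi' index='0'/>
<disk type='block' device='lun'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/mapper/example-lun'/>  <!-- placeholder path -->
  <target dev='sda' bus='scsi'/>
</disk>
```

Using device='disk' with the same bus='scsi' target would keep the virtio-scsi performance benefit while dropping the SCSI command pass-through.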


"Zhangleiqiang (Trump)" wrote:

>> From: Huang Zhiteng [mailto:winsto...@gmail.com]
>> Sent: Wednesday, March 19, 2014 12:14 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Nova][Cinder] Feature about Raw Device
>> Mapping
>> 
>> On Tue, Mar 18, 2014 at 5:33 PM, Zhangleiqiang (Trump)
>>  wrote:
 From: Huang Zhiteng [mailto:winsto...@gmail.com]
 Sent: Tuesday, March 18, 2014 4:40 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Nova][Cinder] Feature about Raw Device
 Mapping
 
 On Tue, Mar 18, 2014 at 11:01 AM, Zhangleiqiang (Trump)
  wrote:
>> From: Huang Zhiteng [mailto:winsto...@gmail.com]
>> Sent: Tuesday, March 18, 2014 10:32 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Nova][Cinder] Feature about Raw
>> Device Mapping
>> 
>> On Tue, Mar 18, 2014 at 9:40 AM, Zhangleiqiang (Trump)
>>  wrote:
>>> Hi, stackers:
>>> 
>>>With RDM, the storage logical unit number (LUN) can be
>>> directly
>> connected to a instance from the storage area network (SAN).
>>> 
>>>For most data center applications, including Databases,
>>> CRM and
>> ERP applications, RDM can be used for configurations involving
>> clustering between instances, between physical hosts and instances
>> or where SAN-aware applications are running inside a instance.
>> If 'clustering' here refers to things like cluster file system,
>> which requires LUNs to be connected to multiple instances at the same
>> time.
>> And since you mentioned Cinder, I suppose the LUNs (volumes) are
>> managed by Cinder, then you have an extra dependency for
>> multi-attach
>> feature:
 https://blueprints.launchpad.net/cinder/+spec/multi-attach-volume.
> 
> Yes.  "Clustering" include Oracle RAC, MSCS, etc. If they want to
> work in
 instance-based cloud environment, RDM and multi-attached-volumes are
 both needed.
> 
> But RDM is not only used for clustering, and haven't dependency for
 multi-attach-volume.
 
 Set clustering use case and performance improvement aside, what other
 benefits/use cases can RDM bring/be useful for?
>>> 
>>> Thanks for your reply.
>>> 
>>> The advantages of Raw device mapping are all introduced by its capability of
>> "pass" scsi command to the device, and the most common use cases are
>> clustering and performance improvement mentioned above.
>> As mentioned in earlier email, I doubt the performance improvement comes
>> from 'virtio-scsi' interface instead of RDM.  We can actually test them to
>> verify.  Here's what I would do: create one LUN(volume) on the SAN, attach
>> the volume to instance using current attach code path but change the virtual
>> bus to 'virtio-scsi' and then measure the IO performance using standard IO
>> benchmark; next, attach the volume to instance using 'lun' device for 'disk' 
>> and
>> 'virtio-scsi' for bus, and do the measurement again.  We shall be able to see
>> the performance difference if there is any.  Since I don't have a SAN to play
>> with, could you please do the test and share the results?
> 
> The performance improvement does come from the "virtio-scsi" controller, and is 
> not caused by using the "lun" device instead of the "disk" device.
> I don't have a usable SAN at present. But from the libvirt's doc ([1]), the 
> "lun" device behaves identically to "disk" device except that generic SCSI 
> commands from the instance are accepted and passed through to the physical 
> device. 
> 
> Sorry for misleading. The "RDM" I mentioned in earlier email includes the 
> "lun" device and the "virtio-scsi" controller.
> 
> Now, the performance improvement comes from the "virtio-scsi" controller; 
> however, booting from a volume using a virtio-scsi interface and attaching a 
> volume with a new virtio-scsi interface are both unsupported currently. I think 
> adding these features is meaningful. And as mentioned in the first email, 
> setting the "virtio-scsi" controller aside, the "lun" device is already 
> supported by the block-device-mapping-v2 extension.
> 
> [1] http://libvirt.org/formatdomain.html#elementsDisks
> 
>>> And besides these two scenarios, there is another use case: running
>> SAN-aware application inside instances, such as:
>>> 1. SAN management app
>> Yes, that is possible if RDM is enable.  But I wonder what is the real use 
>> case
>> behind this.  Even though SAN mgmt app inside instance is able to manage the
>> LUN directly, but it is just a LUN instead of a r

Re: [openstack-dev] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-19 Thread Mark McLoughlin
On Wed, 2014-03-19 at 10:17 +1300, Robert Collins wrote:
> So this came up briefly at the tripleo sprint, and since I can't seem
> to find a /why/ document
> (https://wiki.openstack.org/wiki/Marconi/Incubation#Raised_Questions_.2B_Answers
> and https://wiki.openstack.org/wiki/Marconi#Design don't supply this)

I think we need a slight reset on this discussion. The way this email
was phrased gives a strong sense of "Marconi is a dumb idea, it's going
to take a lot to persuade me otherwise".

That's not a great way to start a conversation, but it's easy to
understand - a TC member sees a project on the cusp of graduating and,
when they finally get a chance to look closely at it, a number of things
don't make much sense. "Wait! Stop! WTF!" is a natural reaction if you
think a bad decision is about to be made.

We've all got to understand how pressurized a situation these graduation
and incubation discussions are. Projects put an immense amount of work
into proving themselves worthy of being an integrated project, they get
fairly short bursts of interaction with the TC, TC members aren't
necessarily able to do a huge amount of due diligence in advance and yet
TC members are really, really keen to avoid either undermining a healthy
project around some cool new technology or undermining OpenStack by
including an unhealthy project or sub-par technology.

And then there's the time pressure where a decision has to be made by a
certain date and if that decision is "not this time", the six months
delay until the next chance for a positive decision can be really
draining on motivation and momentum when everybody had been so focused
on getting a positive decision this time around.

We really need cool heads here and, above all, to try our best to assume
good faith, intentions and ability on both sides.


Some of the questions Robert asked are common questions and I know they
were discussed during the incubation review. However, the questions
persist and it's really important that TC members (and the community at
large) feel they can stand behind the answers to those questions. If I'm
chatting to someone and they ask me "why does OpenStack need to
implement its own messaging broker?", I need to have a good answer.

How about we do our best to put the implications for the graduation
decision aside for a bit and focus on collaboratively pulling together a
FAQ that everyone can buy into? The "raised questions and answers"
section of the incubation review linked above is a good start, but I
think we can take this email as feedback that those questions and
answers need much improvement.

This could be a good pattern for all new projects - if the TC and the
new project can't work together to draft a solid FAQ like this, then
it's not a good sign for the project.

See below for my attempt to summarize the questions and how we might go
about answering them. Is this a reasonable start?

Mark.


Why isn't Marconi simply an API for provisioning and managing AMQP, Kestrel,
ZeroMQ, etc. brokers and queues? Why is a new broker implementation needed?

 => I'm not sure I can summarize the answer here - the need for a HTTP data
plane API, the need for multi-tenancy, etc.? Maybe a table listing the
required features and whether they're provided by these existing solutions.

Maybe there's also an element of "we think we can do a better job". If so,
the point probably worth addressing is "OpenStack shouldn't attempt to write
a new database, or a new hypervisor, or a new SDN controller, or a new block
storage implementation ... so why should we implement a new message
broker?" If this is just a bad analogy, explain why.

Implementing a message queue using an SQL DB seems like a bad idea, why is
Marconi doing that?

 => Perhaps explain why MongoDB is a good storage technology for this use case
and the SQLalchemy driver is just a toy.

Marconi's default driver depends on MongoDB which is licensed under the AGPL.
This license is currently a no-go for some organizations, so what plans does
Marconi have to implement another production-ready storage driver that supports
all API features?

 => Discuss the Redis driver plans?

Is Marconi designed to be suitable for use by OpenStack itself?

 => Discuss that it's not currently in scope and why not. In what way does the
OpenStack use case differ from the applications Marconi's current API
focused on?

How should a client subscribe to a queue?

 => Discuss that it's not by GET /messages but instead POST /claims?limit=N
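To make the distinction concrete, here is a toy in-memory model of the two consumption styles — feed-based listing versus claim-based consumption. This is a sketch of the semantics only, not Marconi's actual API or code:

```python
import time
import uuid

class ToyQueue:
    """Toy model of Marconi's two consumption styles: feed-based
    listing (observe messages) vs. claim-based consumption (reserve
    messages for a worker). Not the real API."""

    def __init__(self):
        self.messages = []  # each: {"id", "body", "claim_expires"}

    def post(self, body):
        self.messages.append({"id": uuid.uuid4().hex, "body": body,
                              "claim_expires": 0})

    def list_feed(self):
        # Feed semantics: read messages without removing or reserving them.
        return [m["body"] for m in self.messages]

    def claim(self, limit, ttl=60):
        # Claim semantics: reserve up to `limit` unclaimed messages so no
        # other worker processes them until the claim's TTL lapses.
        now = time.time()
        claimed = []
        for m in self.messages:
            if len(claimed) >= limit:
                break
            if m["claim_expires"] < now:
                m["claim_expires"] = now + ttl
                claimed.append(m)
        return claimed

q = ToyQueue()
for i in range(3):
    q.post("job-%d" % i)
first = q.claim(limit=2)
second = q.claim(limit=2)  # only one unclaimed message remains
print([m["body"] for m in first], [m["body"] for m in second])
# ['job-0', 'job-1'] ['job-2']
```

In this toy, feed-based readers can still observe messages regardless of claims, which illustrates how the two patterns can coexist for different consumers of the same queue.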






[openstack-dev] [Neutron][LBaaS] Subteam meeting Thursday, 14-00 UTC

2014-03-19 Thread Eugene Nikanorov
Hi neutron and lbaas folks,

Let's keep our regular meeting on Thursday, at 14-00 UTC at
#openstack-meeting

Jorge Miramontes has made a nice wiki page capturing service requirements:
https://wiki.openstack.org/wiki/Neutron/LBaaS/requirements
Please check it out, as this will help us get on the same page regarding
the API and object model discussion.


Thanks,
Eugene.


Re: [openstack-dev] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-19 Thread Russell Bryant
On 03/19/2014 07:49 AM, Thierry Carrez wrote:
> Flavio Percoco wrote:
>> On 19/03/14 10:17 +1300, Robert Collins wrote:
>>> My desires around Marconi are: - to make sure the queue we have
>>> is suitable for use by OpenStack itself: we have a very strong
>>> culture around consolidating technology choices, and it would
>>> be extremely odd to have Marconi be something that isn't
>>> suitable to replace rabbitmq etc as the queue abstraction in
>>> the fullness of time.
>> 
>> Although this could be done in the future, I've heard from many
>> folks in the community that replacing OpenStack's rabbitmq / qpid
>> / etc layer with Marconi is a no-go. I don't recall the exact
>> reasons now but I think I can grab them from logs or something
>> (Unless those folks are reading this email and want to chime in).
>> FWIW, I'd be more than happy to *experiment* with this in the
>> future. Marconi is definitely not ready as-is.
> 
> That's the root of this thread. Marconi is not really designed to
> cover Robert's use case, which would be to be consumed internally
> by OpenStack as a message queue.
> 
> I classify Marconi as an "application building block" (IaaS+), a 
> convenient, SQS-like way for cloud application builders to pass
> data around without having to spin up their own message queue in a
> VM. I think that's a relevant use case, as long as performance is
> not an order of magnitude worse than the "spin up your own in a VM"
> alternative. Personally I don't consider "serving the internal
> needs of OpenStack" as a feature blocker. It would be nice if it
> could, but the IaaS+ use case is IMHO compelling enough.

This is my view, as well.  I never considered replacing OpenStack's
current use of messaging within the scope of Marconi.

It's possible we could have yet another project that is a queue
provisioning project in the style of Trove.  I'm not sure that
actually makes sense (an application template you can deploy may
suffice here).  In any case, I view OpenStack's use case and anyone
wanting to use qpid/rabbit/whatever directly separate and out of scope
of Marconi.

-- 
Russell Bryant



[openstack-dev] [QA] Bug Day kickoff

2014-03-19 Thread Mauro S M Rodrigues

Morning QA Team,

I hope everybody can help with this effort so we can release Icehouse with 
as few bugs as possible.


As of this morning, March 19th, 12:00 UTC, the current Tempest bug picture is:

Current picture of Tempest's bugs:
 * 166 Open Bugs https://bugs.launchpad.net/tempest/+bugs
   - 17 classified as wishlist
   - 8 marked as fix committed, *should we wait for Icehouse to mark them 
as Released?* (or, if we are doing it right now, can we adopt this policy 
forever?)


*Now our focus is the 142 left (http://bit.ly/1iAMbBA)*
   - 62 to triage and prioritize; some of them already have assignees, 
http://bit.ly/1cXhPcF (this includes incomplete ones that were answered).

   - 29 to prioritize  http://bit.ly/1dcL9MO
   - 51 of them In Progress http://bit.ly/1hzsjfH, ordered by the ones 
with the least activity; we need to reach the current assignees and see 
the current status (and maybe assign someone else to take care of them).
   - 4 are incomplete without an answer, and we may try to reach the 
reporters to get an update.



Some actions, from the first email:

On 03/12/2014 09:31 PM, Mauro S M Rodrigues wrote:

== Actions ==
Basically I'm proposing the following actions for the QA Bug Day, nothing 
much new here:


1st - Triage those 48 bugs in [1], this includes:
* Prioritize it;
* Mark any duplications;
* Add tags and any other project that can be related to the bug so 
we can have the right eyes on it;
    * Some cool extra stuff: comments with any suggestions, links to 
logstash queries so we can have a real sense of how critical the 
bug in question is;


2nd - Assign yourself to some of the unassigned bugs if possible so we 
can squash them eventually.


3rd - Dedicate some time to review the 51 In Progress bugs AND/OR get 
in touch with the current assignee in case the bug hasn't had recent 
activity, so we can put it back into the triage steps.



Thanks,

mauro(sr)




Re: [openstack-dev] [openstack][Mistral] Adding new core reviewers

2014-03-19 Thread Stan Lagun
+1 for both


On Wed, Mar 19, 2014 at 3:35 PM, Renat Akhmerov wrote:

> Team,
>
> So far I've been just the only one core member of the team. I started
> feeling lonely :) Since the project team and the project itself has now
> grown (thanks to StackStorm and Intel) I think it's time to think about
> extending the core team.
>
> I would propose:
>
>- Nikolay Makhotkin (nmakhotkin at launchpad). He's been working on
>the project since almost the very beginning and made significant
>contribution (design, reviews, code).
>- Dmitri Zimine (i-dz at launchpad). Dmitri joined the project about 2
>months ago. Since then he's made a series of important high-quality
>commits, a lot of valuable reviews and, IMO most importantly, he has a
>solid vision of the project in general (requirements, use cases, comparison
>to other technologies) and has a pro-active viewpoint in all our
>discussions.
>
>
> Thoughts?
>
> Renat Akhmerov
> @ Mirantis Inc.
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Sincerely yours
Stanislav (Stan) Lagun
Senior Developer
Mirantis
35b/3, Vorontsovskaya St.
Moscow, Russia
Skype: stanlagun
www.mirantis.com
sla...@mirantis.com


Re: [openstack-dev] [Neutron][LBaaS] Requirements Wiki

2014-03-19 Thread Ryan O'Hara
On Tue, Mar 18, 2014 at 10:57:15PM +, Jorge Miramontes wrote:
> Hey Neutron LBaaS folks,
> 
> Per last week's IRC meeting I have created a preliminary requirements &
> use case wiki page. I requested adding such a page since there appears to
> be a lot of new interest in load balancing and feel that we need a
> structured way to align everyone's interest in the project. Furthermore,
> it appears that understanding everyone's requirements and use cases will
> aid in the current object model discussion we all have been having. That
> being said, this wiki is malleable and open to discussion. I have added
> some preliminary requirements from my team's perspective in order to start
> the discussion. My vision is that people add requirements and use cases to
> the wiki for what they envision Neutron LBaaS becoming. That way, we can
> all discuss as a group, figure out what should and shouldn't be a
> requirement and prioritize the rest in an effort to focus development
> efforts. Ready... set... go!
> 
> Here is the link to the wiki ==>
> https://wiki.openstack.org/wiki/Neutron/LBaaS/requirements
> 
> Cheers,
> --Jorge

Thank you for creating this page. I suggest that links be added to
existing blueprints when applicable. Thoughts?

Ryan




Re: [openstack-dev] [Neutron][LBaaS] Requirements Wiki

2014-03-19 Thread Eugene Nikanorov
Hi Jorge,

Thanks for taking care of the page. I've added priorities, although I'm not
sure we need precise priority weights.
Those features that still have '?' need further clarification.

Thanks,
Eugene.



On Wed, Mar 19, 2014 at 11:18 AM, Oleg Bondarev wrote:

> Hi Jorge,
>
> Thanks for taking care of this and bringing it all together! This will be
> really useful for LBaaS discussions.
> I updated the wiki to include L7 rules support and also marking already
> implemented requirements.
>
> Thanks,
> Oleg
>
>
> On Wed, Mar 19, 2014 at 2:57 AM, Jorge Miramontes <
> jorge.miramon...@rackspace.com> wrote:
>
>> Hey Neutron LBaaS folks,
>>
>> Per last week's IRC meeting I have created a preliminary requirements &
>> use case wiki page. I requested adding such a page since there appears to
>> be a lot of new interest in load balancing and feel that we need a
>> structured way to align everyone's interest in the project. Furthermore,
>> it appears that understanding everyone's requirements and use cases will
>> aid in the current object model discussion we all have been having. That
>> being said, this wiki is malleable and open to discussion. I have added
>> some preliminary requirements from my team's perspective in order to start
>> the discussion. My vision is that people add requirements and use cases to
>> the wiki for what they envision Neutron LBaaS becoming. That way, we can
>> all discuss as a group, figure out what should and shouldn't be a
>> requirement and prioritize the rest in an effort to focus development
>> efforts. Ready... set... go!
>>
>> Here is the link to the wiki ==>
>> https://wiki.openstack.org/wiki/Neutron/LBaaS/requirements
>>
>> Cheers,
>> --Jorge
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


[openstack-dev] [Nova][Heat] How to reliably detect VM failures? (Zane Bitter)

2014-03-19 Thread WICKES, ROGER
> On 03/18/2014 07:54 AM, Qiming Teng wrote:
>> Hi, Folks,
>>
>>I have been trying to implement a HACluster resource type in Heat. I
>> haven't created a BluePrint for this because I am not sure everything
>> will work as expected.
...
>>The most difficult issue here is to come up with a reliable VM failure
>> detection mechanism.  The service_group feature in Nova only concerns
>> about the OpenStack services themselves, not the VMs.  Considering that
>> in our customer's cloud environment, user provided images can be used,
>> we cannot assume some agents in the VMs to send heartbeat signals.

[Roger] My response is more user-oriented than developer-oriented, but the 
question was asked on dev, so... here goes:

When enabled, the hypervisor is always collecting (and sending to 
Ceilometer) basic cpu, memory stats that you can alarm on. 
http://docs.openstack.org/trunk/openstack-ops/content/logging_monitoring.html

For external monitoring, consider setting up a Nagios or Selenium server 
for agent-less monitoring. You can have it do the most basic heartbeat 
(ping) test; if the ping is slow for a period of, say, five minutes, or fails, 
alarm that you have a network problem. You can use Selenium to execute 
synthetic transactions against whatever the server is supposed to provide; if 
it does this for you, you can assume it is doing it for everyone else. If it 
fails, you can take action.
http://www.seleniumhq.org
You can also use Selenium to re-run selected OpenStack test cases to ensure 
your infrastructure is working properly.

>>I have checked the 'instance' table in Nova database, it seemed that
>> the 'update_at' column is only updated when VM state changed and
>> reported.  If the 'heartbeat' messages are coming in from many VMs very
>> frequently, there could be a DB query performance/scalability issue,
>> right?

[Roger] For time-series, high-volume collection, consider going to a 
non-relational system like RRDTool, PyRRD, Graphite, etc. if you want to 
store the history and look for trends.
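The round-robin idea behind tools like RRDTool can be shown in a few lines: a fixed window means constant storage and write cost no matter how long metrics have been flowing. A minimal sketch (not an RRDTool client):

```python
from collections import deque

class RoundRobinSeries:
    """Fixed-capacity time series in the spirit of RRDTool: old samples
    fall off the end, so storage never grows with history length."""

    def __init__(self, capacity):
        self.samples = deque(maxlen=capacity)

    def record(self, timestamp, value):
        # Appending past capacity silently evicts the oldest sample.
        self.samples.append((timestamp, value))

    def average(self):
        # A simple trend query over whatever window is retained.
        if not self.samples:
            return None
        return sum(v for _, v in self.samples) / len(self.samples)

series = RoundRobinSeries(capacity=3)
for t, v in enumerate([10.0, 20.0, 30.0, 40.0]):
    series.record(t, v)
# Capacity 3: the first sample (10.0) has already been evicted.
print(len(series.samples), series.average())  # 3 30.0
```

Real RRD-style stores add consolidation (downsampling older data into coarser archives), but the constant-size window is the core property that makes high-frequency heartbeat collection cheap.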

>>So, how can I detect VM failures reliably, so that I can notify Heat
>> to take the appropriate recovery action?

[Roger] When Nagios detects a problem, have it kick off the appropriate script
(shell script) that invokes the Heat API or other to fix the issue with the 
cluster. 
I think you were hoping that Heat could be coded to automagically fix any 
issue, 
but I think you may need to be more specific; develop specific use cases for 
what 
you mean by "VM failure", as the desired action may be different depending on 
the type of failure. 

> Qiming,
>
> Check out
>
> https://github.com/openstack/heat-templates/blob/master/cfn/F17/WordPress_Single_Instance_With_HA.template



[openstack-dev] WSGI and Python paste

2014-03-19 Thread victor stinner
Hi,

According to the following table, paste is blocking many OpenStack servers 
from being ported to Python 3:

   https://wiki.openstack.org/wiki/Python3#Core_OpenStack_projects

The author of paste, Ian Bicking, gave me commit permission to paste. I 
integrated patches from Debian and from my colleague Cyril Roelandt, and I 
added even more patches. All these changes are just for Python 3 syntax 
(import, except as, print, etc.). It looks like paste doesn't know anything 
about Python 3 and WSGI 1.0.1 (PEP ):

http://legacy.python.org/dev/peps/pep-/

A function handling a web page must return bytes (b'data' in Python 3), 
whereas a native string can be used in Python 2. Paste is also old (the last 
release, version 1.7.5.1, was 4 years ago in 2010). Even the author of paste 
suggests using something else, like WebOb:

   "Paste has been under development for a while, and has lots of code in it. 
Too much code! The code is largely decoupled except for some core functions 
shared by many parts of the code. Those core functions are largely replaced in 
WebOb, and replaced with better implementations."

   http://pythonpaste.org/future.html#introduction
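The bytes requirement Victor describes is easy to see with a minimal WSGI app driven directly, no server needed (a sketch; on Python 3, returning a str body here would make a compliant server fail):

```python
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    # Under Python 3 / PEP 3333, the response body must be an iterable
    # of bytes; a native str is only acceptable on Python 2.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

# Drive the app with a synthetic environ instead of a real server.
environ = {}
setup_testing_defaults(environ)
captured = {}

def start_response(status, headers):
    captured["status"] = status

body = b"".join(app(environ, start_response))
print(captured["status"], body)  # 200 OK b'hello'
```

Libraries that join or write str bodies without encoding them are exactly where un-ported code like paste breaks on Python 3.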

What is the plan for OpenStack? Should we use something else?

Victor



Re: [openstack-dev] [magnetodb] Using gevent in MagnetoDB. OpenStack standards and approaches

2014-03-19 Thread Ryan Petrello
Dmitriy,

Gunicorn + gevent + pecan play nicely together, and they’re a combination I’ve 
used to good success in the past.  Pecan even comes with some helpers for 
integrating with gunicorn:

$ gunicorn_pecan pecan_config.py -k gevent -w4
http://pecan.readthedocs.org/en/latest/deployment.html?highlight=gunicorn#gunicorn
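For readers who have not used this combination: the pecan_config.py handed to gunicorn_pecan is a plain Python module of configuration dicts, roughly like the sketch below. The "myproject" package and its RootController path are hypothetical names, not part of any real project:

```python
# Sketch of a Pecan configuration module, e.g. saved as pecan_config.py and
# launched with: gunicorn_pecan pecan_config.py -k gevent -w 4
# No Pecan import is needed here; Pecan reads these plain dicts.

server = {
    'host': '0.0.0.0',  # address the WSGI server binds to
    'port': '8080',     # Pecan convention keeps the port as a string
}

app = {
    # Dotted path to the root controller class that handles '/'
    'root': 'myproject.controllers.root.RootController',
    # Packages Pecan scans for application-level hooks and configuration
    'modules': ['myproject'],
    # Keep the interactive debugger off outside development
    'debug': False,
}
```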

---
Ryan Petrello
Senior Developer, DreamHost
ryan.petre...@dreamhost.com

On Mar 18, 2014, at 2:51 PM, Dmitriy Ukhlov  wrote:

> Hello openstackers,
> 
> We are working on the MagnetoDB project and trying our best to follow OpenStack 
> standards.
> 
> So, MagnetoDB is aimed to be a high-performance, scalable, OpenStack-based WSGI 
> application which provides an interface to highly available, distributed, 
> reliable key-value storage. We investigated best practices and identified the 
> following points:
>   • to avoid problems with the GIL, our application should be executed in 
> single-thread mode with non-blocking IO (using greenlets or other 
> Python-specific approaches to reach this)
>   • to make MagnetoDB scalable, it is necessary to make MagnetoDB 
> stateless. This allows us to run a lot of independent MagnetoDB processes and 
> switch all request flow between them:
>   • at a single node, to load all the CPU's cores
>   • at different nodes, for horizontal scalability
>   • use Cassandra as the most reliable and mature distributed key-value 
> storage
>   • use the DataStax python-driver as the most modern Cassandra Python 
> client, which supports the newest CQL3 and Cassandra native binary protocol 
> feature set
> 
> So, considering these points, the following technologies were chosen:
>   • gevent as one of the fastest non-blocking single-thread WSGI servers. 
> It is based on the greenlet library and supports monkey patching of the 
> standard threading library. This is necessary because the DataStax 
> python-driver uses the threading library, and its backlog has a task to add 
> gevent support. (We patched python-driver ourselves to enable this feature as 
> a temporary solution and are waiting for new python-driver releases.) This 
> makes gevent more interesting to use than other analogs (like eventlet, for 
> example)
>   • gunicorn as a WSGI server which is able to run a few worker processes 
> and a master process for managing the workers and routing requests between 
> them. It also has integration with gevent and can run gevent-based workers. We 
> also analyzed analogues, such as uWSGI. uWSGI looks faster, but unfortunately 
> we didn't manage to get uWSGI working in multi-process mode with the MagnetoDB 
> application.
> 
> Also, I want to add that currently the oslo wsgi framework is used for 
> organizing request routing. I know that the current OpenStack trend is to 
> migrate WSGI services to the Pecan WSGI framework. Maybe it is reasonable for 
> MagnetoDB too.
> 
> We would like to hear your opinions about the libraries and approaches we 
> have chosen, and would appreciate your help and support in finding the best 
> balance between performance, developer friendliness, and OpenStack standards.
> -- 
> Best regards,
> Dmitriy Ukhlov
> Mirantis Inc.
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnetodb] Using gevent in MagnetoDB. OpenStack standards and approaches

2014-03-19 Thread Doug Hellmann
On Tue, Mar 18, 2014 at 2:51 PM, Dmitriy Ukhlov wrote:

>  Hello openstackers,
>
>  We are working on the MagnetoDB project and trying our best to follow
> OpenStack standards.
>
>  So, MagnetoDB is aimed to be a high-performance, scalable, OpenStack-based
> WSGI application which provides an interface to highly available,
> distributed, reliable key-value storage. We investigated best practices and
> identified the following points:
>
>    1. to avoid problems with the GIL, our application should be executed in
>       single-thread mode with non-blocking IO (using greenlets or other
>       Python-specific approaches to reach this)
>    2. to make MagnetoDB scalable, it is necessary to make MagnetoDB
>       stateless. This allows us to run a lot of independent MagnetoDB
>       processes and switch all request flow between them:
>       1. at a single node, to load all the CPU's cores
>       2. at different nodes, for horizontal scalability
>    3. use Cassandra as the most reliable and mature distributed key-value
>       storage
>    4. use the DataStax python-driver as the most modern Cassandra Python
>       client, which supports the newest CQL3 and Cassandra native binary
>       protocol feature set
>
>  So, considering these points, the following technologies were chosen:
>
>    1. gevent as one of the fastest non-blocking single-thread WSGI servers.
>       It is based on the greenlet library and supports monkey patching of
>       the standard threading library. This is necessary because the DataStax
>       python-driver uses the threading library, and its backlog has a task
>       to add gevent support. (We patched python-driver ourselves to enable
>       this feature as a temporary solution and are waiting for new
>       python-driver releases.) This makes gevent more interesting to use
>       than other analogs (like eventlet, for example)
>    2. gunicorn as a WSGI server which is able to run a few worker processes
>       and a master process for managing the workers and routing requests
>       between them. It also has integration with gevent and can run
>       gevent-based workers. We also analyzed analogues, such as uWSGI. uWSGI
>       looks faster, but unfortunately we didn't manage to get uWSGI working
>       in multi-process mode with the MagnetoDB application.
>
> Also, I want to add that currently the oslo wsgi framework is used for
> organizing request routing. I know that the current OpenStack trend is to
> migrate WSGI services to the Pecan WSGI framework. Maybe it is reasonable
> for MagnetoDB too.
>

The WSGI framework in Oslo is officially deprecated, and should no longer
be used for new projects. Its use in existing projects is being phased out
as projects create backwards-incompatible API changes that make moving to
new tools safe (there's no point in creating issues by rewriting the old
services). You should not use the old WSGI framework if you plan to propose
MagnetoDB for incubation.

Several integrated and incubated projects have already adopted Pecan, so
there are quite a few examples available to get you started. Drop by
#pecanpy on freenode if you have questions.

Doug



>
> We would like to hear your opinions about the libraries and approaches we
> have chosen, and would appreciate your help and support in finding the
> best balance between performance, developer friendliness, and OpenStack
> standards.
>
> --
> Best regards,
> Dmitriy Ukhlov
> Mirantis Inc.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Automatic version creation in PBR

2014-03-19 Thread Doug Hellmann
On Tue, Mar 18, 2014 at 5:30 PM, Monty Taylor  wrote:

> On 03/18/2014 04:25 AM, Thierry Carrez wrote:
>
>> Robert Collins wrote:
>>
>>> If you set 'version' in setup.cfg, pbr's behaviour will not change at
>>> all.
>>>
>>> If you do not set 'version' in setup.cfg then:
>>>   - for tagged commits, pbr's behaviour will not change at all.
>>>   - for untagged commits, pbr will change from
>>> '$last_tag_version.$commit_count.g$sha' to
>>> '$next_highest_pre_release.dev$commit_count.g$sha'
>>>
>>
>> That sounds sane to me. IIUC it shouldn't impact the release team. The
>> version number ends up being quite ugly, but in that precise case
>> prettiness is not a primary goal.
>>
>> It may impact packagers in some corner cases so I'd engage with them to
>> check (#openstack-packaging ?) *and*, like Doug recommends, wait for
>> after icehouse release to make the change.
>>
>>
> We've also discussed adding a utility to transform the newer, uglier, but
> more PEP 440-correct generated version string into Debian format for folks,
> since it is possible to do purely mechanically, but grokking it fully might
> not be something everyone wants to do.


I've created a blueprint for this, but since I don't know the details of the
version-number differences myself, it doesn't have much detail yet. I would
appreciate it if anyone who does understand those differences would add some
notes to
https://blueprints.launchpad.net/oslo/+spec/pbr-packager-version-utility
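For illustration only, a mechanical transformation along those lines might look like the sketch below. The exact mapping rules (how PEP 440 pre-release and dev segments map onto Debian's '~' and '+' conventions) are precisely the details the blueprint still needs, so treat every rule here as an assumption rather than pbr's actual behavior:

```python
import re

def pep440_to_deb(version):
    """Sketch: map a pbr-style PEP 440 version such as
    '1.3.0.0b2.dev7.gabc1234' to a Debian-sortable string such as
    '1.3.0~b2.dev7+gabc1234'. The mapping here is assumed, not pbr's.
    """
    # In dpkg ordering, '~' sorts before the empty string, so
    # 1.3.0~b2... < 1.3.0, mirroring PEP 440's rule that pre-releases
    # precede the final release.
    m = re.match(r'^(?P<base>\d+(?:\.\d+)*)'
                 r'\.0(?P<pre>a|b|rc)(?P<pren>\d+)'
                 r'\.(?P<dev>dev\d+)'
                 r'\.(?P<sha>g[0-9a-f]+)$', version)
    if not m:
        raise ValueError('unrecognized version string: %r' % version)
    return '{base}~{pre}{pren}.{dev}+{sha}'.format(**m.groupdict())
```

For example, pep440_to_deb('1.3.0.0b2.dev7.gabc1234') gives '1.3.0~b2.dev7+gabc1234'.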

Doug


>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] First time contributing to some project

2014-03-19 Thread yang, xing
Hi Dharmit,

You can't remove those from Nova.  If you remove them, attaching a volume won't 
work anymore, because attaching a volume still goes through Nova.

Thanks,
Xing


-Original Message-
From: Dharmit Shah [mailto:dharmit@gmail.com] 
Sent: Wednesday, March 19, 2014 8:03 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Nova] First time contributing to some project

Hello Stackers,

This is the first time I am mailing a dev mailing list, and the first time I am 
trying to contribute to an open source project.

I am working on a low-hanging-fruit bug
(https://bugs.launchpad.net/nova/+bug/1261909). It's about removing HBA-specific 
code from Nova, as Cinder manages the volume part.

The function in question is get_fc_hbas() in the nova/virt/libvirt/utils.py 
file. This function is referenced by the get_fc_hbas_info() function in the same 
file and by the connect_volume() function in nova/virt/libvirt/volume.py. The 
issue is that the latter is used in a lot of other files across the Nova code.

I think the same connect_volume() function is defined in the 
cinder/brick/initiator/connector.py file and has code similar to that in 
nova/virt/libvirt/volume.py. Should I just remove all the get_fc_hbas(), 
get_fc_hbas_info() and connect_volume() definitions and their references from 
the Nova code?

I am really confused about how I should proceed further on this. Can someone 
please help me get started with this?

Thanks!
--
Dharmit Shah

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] WSGI and Python paste

2014-03-19 Thread Jay Pipes
On Wed, 2014-03-19 at 14:25 +0100, victor stinner wrote:
> Hi,
> 
> According to the following table, paste is blocking many OpenStack servers 
> from being ported to Python 3: 
> 
>https://wiki.openstack.org/wiki/Python3#Core_OpenStack_projects
> 
> The author of paste, Ian Bicking, gave me the commit permission to paste. I 
> integrated patches from Debian and my colleague Cyril Roelandt, and I added 
> even more patches. All these changes are just for the Python 3 syntax 
> (import, except as, print, etc.). It looks like paste doesn't know anything 
> about Python 3 and WSGI 1.0.1 (PEP ):
> 
> http://legacy.python.org/dev/peps/pep-/
> 
> A function handling a web page must return bytes (b'data' in Python 3), 
> whereas native strings can be used in Python 2. It looks like paste is old 
> (the last release was 4 years ago, version 1.7.5.1 in 2010). Even the author 
> of paste suggests using something else, like WebOb: 
> 
>"Paste has been under development for a while, and has lots of code in it. 
> Too much code! The code is largely decoupled except for some core functions 
> shared by many parts of the code. Those core functions are largely replaced 
> in WebOb, and replaced with better implementations."
> 
>http://pythonpaste.org/future.html#introduction
> 
> What is the plan for OpenStack? Should we use something else?

AFAIK, we only use paste.deploy, and we use WebOb already (even though
it has a host of issues itself...)
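For context, "paste.deploy" here refers to loading WSGI pipelines from INI files like the hypothetical one below. In a real deployment, paste.deploy's loadapp() interprets the file; this standard-library sketch only parses the shape, and all the example.* dotted paths are made-up names:

```python
import configparser

# Hypothetical paste.deploy-style config: a pipeline of middleware filters
# ending in the actual application.
PASTE_INI = """
[pipeline:main]
pipeline = request_id auth apiv1

[filter:request_id]
paste.filter_factory = example.middleware:RequestId.factory

[filter:auth]
paste.filter_factory = example.middleware:Auth.factory

[app:apiv1]
paste.app_factory = example.api:make_app
"""

parser = configparser.ConfigParser()
parser.read_string(PASTE_INI)
# Middleware runs left to right; the last name in the pipeline is the app.
pipeline = parser['pipeline:main']['pipeline'].split()
```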

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Pecan Evaluation for Marconi

2014-03-19 Thread Flavio Percoco

On 19/03/14 12:31 +0100, Thierry Carrez wrote:

Kurt Griffiths wrote:

Kudos to Balaji for working so hard on this. I really appreciate his candid 
feedback on both frameworks.


Indeed, that analysis is very much appreciated.

From the Technical Committee perspective, we put a high weight on a
factor that was not included in the report results: consistency and
convergence between projects we commonly release in an integrated manner
every 6 months. There was historically a lot of deviation, but as we add
more projects that deviation is becoming more costly. We want developers
to be able to jump from one project to another easily, and we want
convergence from an operators perspective.

Individual projects are obviously allowed to pick the best tool in their
toolbox. But the TC may also decide to let projects live out of the
"integrated release" if we feel they would add too much divergence in.



My only concern in this case - I'm not sure if this has been discussed
or written down somewhere - is defining what the boundaries of that
divergence are. For instance, and I know this will sound quite biased,
I don't think there's anything wrong with supporting a *set* of WSGI
frameworks. To be fair, there's already a set, since the currently
integrated projects use webob, swob and Pecan.

The point I'd like to get at is that, as a general rule, we probably
shouldn't limit the set of supported libraries to just one.

Cheers,
Flavio

--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Pecan Evaluation for Marconi

2014-03-19 Thread Kurt Griffiths
Thierry Carrez wrote:

> There was historically a lot of deviation, but as we add more projects
>that deviation is becoming more costly.

I totally understand the benefits of reducing the variance between
projects, and to be sure, I am not suggesting we have 10 different
libraries to do X.  However, as more projects are added, the variety of
requirements also increases, and it becomes very difficult for a single
library to meet all the projects' needs without some projects having to
make non-trivial compromises.

One approach to this that I’ve seen work well in other communities is to
define a small set of options that cover the major use cases.

> My question would be, can Pecan be improved to also cover Marconi's use
>case ? Could we have the best of both worlds (an appropriate tool *and*
>convergence) ?

That would certainly be ideal, but as always, the devil is in the details.

Pecan performance has been improving, so on that front there may be an
opportunity for convergence (assuming webob also improves in performance).
However, with respect to code paths and dependencies, I am not clear on
the path forward. Some dependencies could be removed by creating some kind
of “pecan-light” library, but that would need to be done in a way that
does not break projects that rely on those extra features. That would
still leave webob, which is an often-used selling point for Pecan. I am
not confident that webob can be modified to address Marconi and Swift's
needs without making backwards-incompatible changes to the library which
would obviously not be acceptable to the broader Python community.


Kurt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Pecan Evaluation for Marconi

2014-03-19 Thread Donald Stufft

On Mar 19, 2014, at 10:18 AM, Kurt Griffiths  
wrote:

> Thierry Carrez wrote:
> 
>> There was historically a lot of deviation, but as we add more projects
>> that deviation is becoming more costly.
> 
> I totally understand the benefits of reducing the variance between
> projects, and to be sure, I am not suggesting we have 10 different
> libraries to do X.  However, as more projects are added, the variety of
> requirements also increases, and it becomes very difficult for a single
> library to meet all the projects' needs without some projects having to
> make non-trivial compromises.
> 
> One approach to this that I’ve seen work well in other communities is to
> define a small set of options that cover the major use cases.
> 
>> My question would be, can Pecan be improved to also cover Marconi's use
>> case ? Could we have the best of both worlds (an appropriate tool *and*
>> convergence) ?
> 
> That would certainly be ideal, but as always, the devil is in the details.
> 
> Pecan performance has been improving, so on that front there may be an
> opportunity for convergence (assuming webob also improves in performance).
> However, with respect to code paths and dependencies, I am not clear on
> the path forward. Some dependencies could be removed by creating some kind
> of “pecan-light” library, but that would need to be done in a way that
> does not break projects that rely on those extra features. That would
> still leave webob, which is an often-used selling point for Pecan. I am
> not confident that webob can be modified to address Marconi and Swift's
> needs without making backwards-incompatible changes to the library which
> would obviously not be acceptable to the broader Python community.

I’m not sure that “number of dependencies” is a useful metric at all, tbh. At
the very least, it’s not a very telling metric in the way it was presented in
the review. An example: for a tool that has to safely render untrusted HTML,
you could do it with nothing but the stdlib using, say, regex-based parsers
(and get it wrong), or you could depend on bleach, which depends on html5lib.
By the “number of dependencies” metric, the first would be considered the
superior method; however, that conclusion is deeply flawed.
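To make the point concrete, here is a hedged illustration of how a zero-dependency, regex-based sanitizer "gets it wrong": a classic nested-tag input walks straight through naive stripping.

```python
import re

def naive_strip_tags(html):
    # The "no dependencies" approach: delete anything that looks like a tag.
    return re.sub(r'<[^<>]*>', '', html)

# Nested-tag bypass: only the well-formed inner tags match the regex, and
# removing them stitches the outer fragments into a working <script> tag.
malicious = '<scr<script>ipt>alert(1)</scr</script>ipt>'
# naive_strip_tags(malicious) == '<script>alert(1)</script>'
```

A real tokenizing parser such as html5lib (which bleach builds on) does not fall for this, which is exactly the battle-testing a dependency can buy.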

The reason given in the report is that more dependencies mean a larger attack
surface, but that’s not really accurate either. Often you’ll find that if two
libraries solve the same problem, one with dependencies and one without, the
one without dependencies has a less battle-tested reimplementation of whatever
functionality the other library gets from a dependency.

In order to accurately assess the impact of dependencies, you have to
understand what the library is using those dependencies for, how well tested
those dependencies are, what the release cycle and backwards-compatibility
policy of those dependencies are, and what the other project is doing in place
of a dependency for the features that depend on them. The answer may be that
it doesn’t have that feature, and then you have to decide whether that feature
is useful to you, and whether you’ll need to add a dependency or write less
battle-tested code in order to get it.

-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev] [Nova] use Keystone V3 token to volume attachment

2014-03-19 Thread Matt Riedemann



On 3/19/2014 2:48 AM, Shao Kai SK Li wrote:

Hello:

  I am working on this patch (https://review.openstack.org/#/c/77524/) to
fix bugs about volume-attach failures with a Keystone V3 token.

  Just wondering, are there any blueprints or plans in Juno to address
Keystone V3 support in Nova?

  Thank you in advance.


Best Regards~~~

Li, Shaokai


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I have this on the nova meeting agenda for tomorrow [1].  I would think 
at a minimum this means running compute tests in Tempest against a 
keystone v3 backend.  I'm not sure what the current state of Tempest is 
regarding keystone v3.  Note that this isn't the only keystone v3-related 
change that made it into nova in Icehouse [2].


[1] https://wiki.openstack.org/wiki/Meetings/Nova#Agenda_for_next_meeting
[2] https://review.openstack.org/69972

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Timeline for upcoming PTL and TC elections

2014-03-19 Thread Anita Kuno
Election season is coming up.

Over the next few weeks we'll renew our PTLs (one for each OpenStack program)
and elect 7 Technical Committee members.

The timeline for those elections is as follows:
* March 28 - April 4, 05:59 UTC: Open candidacy to PTL positions
* April 4 - April 11: PTL elections
* April 11 - April 18, 05:59 UTC: Open candidacy to TC positions
* April 18 - April 24: TC elections

Election officials for both elections will be myself and Tristan
Cacqueray (tristanC).

See more details at:
https://wiki.openstack.org/wiki/PTL_Elections_March/April_2014
https://wiki.openstack.org/wiki/TC_Elections_April_2014


Anita Kuno (anteaya)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Pecan Evaluation for Marconi

2014-03-19 Thread Doug Hellmann
On Wed, Mar 19, 2014 at 7:31 AM, Thierry Carrez wrote:

> Kurt Griffiths wrote:
> > Kudos to Balaji for working so hard on this. I really appreciate his
> candid feedback on both frameworks.
>
> Indeed, that analysis is very much appreciated.
>
> From the Technical Committee perspective, we put a high weight on a
> factor that was not included in the report results: consistency and
> convergence between projects we commonly release in an integrated manner
> every 6 months. There was historically a lot of deviation, but as we add
> more projects that deviation is becoming more costly. We want developers
> to be able to jump from one project to another easily, and we want
> convergence from an operators perspective.


> Individual projects are obviously allowed to pick the best tool in their
> toolbox. But the TC may also decide to let projects live out of the
> "integrated release" if we feel they would add too much divergence in.
>

As Thierry points out, an important aspect of being in the integrated
release is being aligned with the rest of the community. The evaluation
gives "community" considerations the lowest weight among the criteria
considered. Does that ranking reflect the opinion of the entire Marconi
team? If so, what benefits do you see to being integrated?

The evaluation does not discuss any of the infrastructure tooling being
built up around OpenStack's use of Pecan. For example, what will Marconi do
for API documentation generation?

Pecan is currently gating changes against projects that use it, so we can
be sure that changes to the framework do not break our applications. This
does not appear to have been factored into the evaluation.


>
> > After reviewing the report below, I would recommend that Marconi
> > continue using Falcon for the v1.1 API and then re-evaluate Pecan for
> > v2.0 or possibly look at using swob.
>
> The report (and your email below) makes a compelling argument that
> Falcon is a better match for Marconi's needs (or for a data-plane API)
> than Pecan currently is. My question would be, can Pecan be improved to
> also cover Marconi's use case ? Could we have the best of both worlds
> (an appropriate tool *and* convergence) ?
>

We had several conversations with Kurt and Flavio in Hong Kong about
adding features to Pecan to support the Marconi team, and Ryan prototyped
some of those changes shortly after we returned home. Was any of that work
considered in the evaluation?

Doug


>
> If the answer is "yes, probably", then it might be an option to delay
> inclusion in the integrated release so that we don't add (even
> temporary) divergence. If the answer is "definitely no", then we'll have
> to choose between convergence and functionality.
>
> --
> Thierry Carrez (ttx)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] First time contributing to some project

2014-03-19 Thread Jay Pipes
On Wed, 2014-03-19 at 17:33 +0530, Dharmit Shah wrote:
> Hello Stackers,
> 
> This is the first time I am mailing some dev mailing list and first
> time I am trying to contribute on some open source project.

Welcome, Dharmit, nice to have you in our community! :)

> I am working on a low-hanging-fruit bug
> (https://bugs.launchpad.net/nova/+bug/1261909). It's about removing
> hba specific code from nova as cinder manages the volume part.
> 
> The function in question is get_fc_hbas() in the
> nova/virt/libvirt/utils.py file. This function is being referred by
> get_fc_hbas_info() function in the same file and connect_volume()
> function in nova/virt/libvirt/volume.py file. Issue is that the latter
> is being used in a lot of other files across nova code.
> 
> I think the same connect_volume() function is defined in
> cinder/brick/initiator/connector.py file and has similar code as that
> in nova/virt/libvirt/volume.py. Should I just nuke all the
> get_fc_hbas(), get_fc_hbas_info() and connect_volume() definitions and
> reference from the nova code?
> 
> I am really confused about how I should proceed further on this. Can
> someone please help me get started with this?

Hop on Freenode IRC, #openstack-nova channel, and have a chat with the
original bug reporter Padraig Brady (pixelb on IRC) or John Garbutt
(johnthetubaguy on IRC), who commented on the bug as well. I'm sure
either would be happy to provide more information to you.

If they aren't around, a number of other folks, including myself
(jaypipes on IRC), are available to help as well.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] Canceling team meeting Mar 20 [savanna]

2014-03-19 Thread Sergey Lukjanov
Hi team,

I'm canceling the team meeting tomorrow, Mar 20, because there will be no
quorum: the Mirantis team will be unavailable due to an internal company
event, and Matt F. said that he'll be unavailable too.

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Docs for new plugins

2014-03-19 Thread Edgar Magana
Thanks for the input Anne!

Kyle,

Please file the proper bugs in the manual projects to be able to track the
progress of this topic.

Thanks,

Edgar

From:  Anne Gentle 
Date:  Wednesday, March 19, 2014 4:54 AM
To:  Edgar Magana 
Cc:  OpenStack List 
Subject:  Re: [openstack-dev] [Neutron] Docs for new plugins




On Wed, Mar 19, 2014 at 12:45 AM, Edgar Magana  wrote:
> Including Anne in this thread.
> 
> Anne,
> 
> Can your provide your input here?
> 
> Thanks,
> 
> Edgar
> 
> From:  Mohammad Banikazemi 
> Reply-To:  OpenStack List 
> Date:  Monday, March 17, 2014 8:02 AM
> To:  OpenStack List 
> 
> Subject:  Re: [openstack-dev] [Neutron] Docs for new plugins
> 
> I think the docs get updated for each release, so probably the newly added
> stuff (after I3) will be picked up by the RC1 release date. (cc'ing Tom
> Fifield for a definitive answer.)
> 
> By the way, I do see the odl config table in the openstack-manuals source tree:
> https://github.com/openstack/openstack-manuals/blob/master/doc/common/tables/neutron-ml2_odl.xml
> and that is being referenced here:
> https://github.com/openstack/openstack-manuals/blob/master/doc/config-reference/networking/section_networking-plugins-ml2.xml
> 
> Best,
> 
> Mohammad
> 
> 


Hi Kyle, there is still a manual step by a docs person to run the scripts
and create a patch.

Does Gauvain's patch contain the OpenDaylight config options?
 https://review.openstack.org/#/c/81013/


I also want to note that just because there is reference information doesn't
mean the docs are complete -- are there concepts and tasks still to be
written so that users know how to use this ML2 plug-in and that it's
available? Add those to the Cloud Administration Guide please.

Thanks,
Anne
> 
> 
> From: Kyle Mestery 
> To: "OpenStack Development Mailing List (not for usage questions)"
> ,
> Date: 03/17/2014 09:40 AM
> Subject: Re: [openstack-dev] [Neutron] Docs for new plugins
> 
> 
> 
> 
> Edgar:
> 
> I don't see the configuration options for the OpenDaylight ML2 MechanismDriver
> added here yet, even though the code was checked in well over a week ago.
> How long does it take to autogenerate this page from the code?
> 
> Thanks!
> Kyle
> 
> 
> 
> On Wed, Mar 12, 2014 at 5:10 PM, Edgar Magana wrote:
>> You should be able to add your plugin here:
>> http://docs.openstack.org/havana/config-reference/content/networking-options-plugins.html
>> 
>> Thanks,
>> 
>> Edgar
>> 
>> From: Mohammad Banikazemi
>> Date: Monday, March 10, 2014 2:40 PM
>> To: OpenStack List
>> Cc: Edgar Magana
>> Subject: Re: [openstack-dev] [Neutron] Docs for new plugins
>> 
>> Would like to know what to do for adding documentation for a new plugin. Can
>> someone point me to the right place/process please.
>> 
>> Thanks,
>> 
>> Mohammad
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Pecan Evaluation for Marconi

2014-03-19 Thread Doug Hellmann
On Wed, Mar 19, 2014 at 10:11 AM, Flavio Percoco  wrote:

> On 19/03/14 12:31 +0100, Thierry Carrez wrote:
>
>> Kurt Griffiths wrote:
>>
>>> Kudos to Balaji for working so hard on this. I really appreciate his
>>> candid feedback on both frameworks.
>>>
>>
>> Indeed, that analysis is very much appreciated.
>>
>> From the Technical Committee perspective, we put a high weight on a
>> factor that was not included in the report results: consistency and
>> convergence between projects we commonly release in an integrated manner
>> every 6 months. There was historically a lot of deviation, but as we add
>> more projects that deviation is becoming more costly. We want developers
>> to be able to jump from one project to another easily, and we want
>> convergence from an operators perspective.
>>
>> Individual projects are obviously allowed to pick the best tool in their
>> toolbox. But the TC may also decide to let projects live out of the
>> "integrated release" if we feel they would add too much divergence in.
>>
>
>
> My only concern in this case - I'm not sure if this has been discussed
> or written down somewhere - is defining what the boundaries of that
> divergence are. For instance, and I know this will sound quite biased,
> I don't think there's anything wrong with supporting a *set* of WSGI
> frameworks. To be fair, there's already a set, since the currently
> integrated projects use webob, swob and Pecan.
>

Only one project is using swob, and it is unlikely that will change. The
other projects are mostly using the legacy oslo framework or Pecan,
although a few are using Flask (perhaps based on ceilometer's initial
experimentation with it?).

As I understand it, all of the integrated projects have looked at Pecan,
and are anticipating the transition. Most have no reason to create a new
API version, and therefore no reason to build a new API service; rebuilding
the existing API with a new tool would risk introducing incompatibilities.
This aligns with the plan when Pecan was proposed as a standard.

Doug


>
> The point I'd like to get at is that as a general rule we probably
> shouldn't limit the set of supported libraries to just 1.


> Cheers,
> Flavio
>
> --
> @flaper87
> Flavio Percoco
>


Re: [openstack-dev] [Openstack-dev] [Nova] use Keystone V3 token to volume attachment

2014-03-19 Thread Matthew Treinish
On Wed, Mar 19, 2014 at 09:35:34AM -0500, Matt Riedemann wrote:
> 
> 
> On 3/19/2014 2:48 AM, Shao Kai SK Li wrote:
> >Hello:
> >
> >  I am working on this
> >patch(https://review.openstack.org/#/c/77524/) to fix bugs about volume
> >attach failure with keystone V3 token.
> >
> >  Just wonder, is there some blue prints or plans in Juno to address
> >keystone V3 support in nova ?
> >
> >  Thank you in advance.
> >
> >
> >Best Regards~~~
> >
> >Li, Shaokai
> >
> >
> 
> I have this on the nova meeting agenda for tomorrow [1].  I would
> think at a minimum this means running compute tests in Tempest
> against a keystone v3 backend.  I'm not sure what the current state
> of Tempest is regarding keystone v3.  Note that this isn't the only
> thing that made it into nova in Icehouse related to keystone v3 [2].

On the Tempest side there are some dedicated Keystone v3 API tests, though I'm
not sure how well things are covered there. As for using Keystone v3 auth for
the other tests, Tempest doesn't quite support that yet. Andrea Frittoli is
working on a bp to get this working:

https://blueprints.launchpad.net/tempest/+spec/multi-keystone-api-version-tests

But at this point it will probably end up being an early-Juno thing before this
can be enabled everywhere in Tempest.

-Matt Treinish



Re: [openstack-dev] [nova] Getting close to RC1....

2014-03-19 Thread Ben Nemec
 

Please don't send review requests to the list:
http://lists.openstack.org/pipermail/openstack-dev/2013-September/015264.html


Thanks. 

-Ben 

On 2014-03-18 22:46, wu jiang wrote: 

> Hi Tracy, 
> 
> I've already updated the patch for bug/1195947 on 
> https://review.openstack.org/#/c/38073/ [18]. 
> Please review it. 
> 
> Thanks~ 
> 
> On Wed, Mar 19, 2014 at 5:31 AM, Tracy Jones  wrote:
> 
>> Folks - we are getting close to RC next week and therefore will start 
>> closing down the churn. Bugs that are not merged by Monday @8am EDT (12pm 
>> UTC) will be moved out of RC1 and pushed to icehouse-rc-potential. Only 
>> those bugs which are 
>> 
>> a. regression 
>> b. highest priority issues (as decided by russellb) 
>> 
>> will be reviewed after that time. 
>> 
>> The current list for RC1 is 
>> 
>> BUG REPORT | IMPORTANCE | ASSIGNEE | STATUS
>> #1195947 VM re-scheduler mechanism will cause BDM-volumes conflict [1] | High | wingwj [2] | In Progress
>> #1246201 Live migration fails when the instance has a config-drive [3] | High | Michael Still [4] | In Progress
>> #1269418 nova rescue doesn't put VM into RESCUE status on vmware [5] | High | Gary Kotton [6] | In Progress
>> #1274129 host-update --maintenance enable regression for vmware VCDriver [7] | High | Gary Kotton [6] | In Progress
>> #1290403 Hyper-V agent does not enable disk metrics for individual disks [8] | High | Claudiu Belu [9] | In Progress
>> #1290540 neutron_admin_tenant_name deprecation warning is wrong [10] | High | Robert Collins [11] | In Progress
>> #1290807 Resize on vCenter failed because of _VM_REFS_CACHE [12] | High | Feng Xi Yan [13] | In Progress
>> #1294102 VMware: get_object_properties may not return any objects [14] | High | Gary Kotton [6] | In Progress
>> #1180040 Race condition in attaching/detaching volumes when compute manager is unreachable [15] | Medium | Nikola Đipanov [16] | In Progress
>> 
>> The rest will be deferred to Juno. 
>> 
>> Please let me know if you have questions or comments 
>> 
>> Tracy 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [17]
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [17]

 

Links:
--
[1] https://bugs.launchpad.net/bugs/1195947
[2] https://launchpad.net/~wingwj
[3] https://bugs.launchpad.net/bugs/1246201
[4] https://launchpad.net/~mikalstill
[5] https://bugs.launchpad.net/bugs/1269418
[6] https://launchpad.net/~garyk
[7] https://bugs.launchpad.net/bugs/1274129
[8] https://bugs.launchpad.net/bugs/1290403
[9] https://launchpad.net/~cbelu
[10] https://bugs.launchpad.net/bugs/1290540
[11] https://launchpad.net/~lifeless
[12] https://bugs.launchpad.net/bugs/1290807
[13] https://launchpad.net/~yanfengxi
[14] https://bugs.launchpad.net/bugs/1294102
[15] https://bugs.launchpad.net/bugs/1180040
[16] https://launchpad.net/~ndipanov
[17] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[18] https://review.openstack.org/#/c/38073/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][OVS Agent]

2014-03-19 Thread Nader Lahouti
Thanks Mathieu for your reply and info.


On Tue, Mar 18, 2014 at 3:35 AM, Mathieu Rohon wrote:

> Hi nader,
>
> The easiest way would be to register a new RPC callback in the current
> ovs agent. This is what we have done for the l2-pop MD, with fdb_add
> and fdb_remove callbacks.
> However, it could become a mess if every MD adds its own callback
> directly into the code of the agent. L2 agent should be able to load
> drivers, which might register new callbacks.
> This could potentially be something to do while refactoring the agent
> : https://review.openstack.org/#/c/57627/
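To make the callback-registration idea above concrete, here is a minimal, self-contained Python sketch; the class and method names (CallbackRegistry, my_new_method) are purely illustrative assumptions, not Neutron's actual agent API:

```python
# Illustrative sketch only: a driver registers one new RPC-callable
# method with the agent, in the spirit of the fdb_add/fdb_remove
# callbacks mentioned above. Names are hypothetical, not Neutron's.

class MyAgentDriver(object):
    """Agent-side driver exposing one new RPC-callable method."""

    def my_new_method(self, context, **kwargs):
        # This is where the agent would talk to the external process.
        return {'status': 'ok', 'details': kwargs}


class CallbackRegistry(object):
    """Stand-in for a dispatch table an L2 agent could expose to drivers."""

    def __init__(self):
        self._callbacks = {}

    def register(self, name, func):
        self._callbacks[name] = func

    def dispatch(self, name, context, **kwargs):
        return self._callbacks[name](context, **kwargs)


registry = CallbackRegistry()
registry.register('my_new_method', MyAgentDriver().my_new_method)
result = registry.dispatch('my_new_method', context=None, task='sync')
print(result['status'])  # ok
```

With a registry like this, option 3 below (adding the method directly to OVSNeutronAgent) becomes unnecessary: the driver owns the callback, the agent only dispatches.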
>
> On Tue, Mar 18, 2014 at 7:42 AM, Nader Lahouti 
> wrote:
> > Hi All,
> >
> > In a multi-node setup, I'm using Ml2Plugin (as core plugin) and OVS
> > (OVSNeutronAgent) as an agent on compute nodes. From controller I need to
> > call a *new method* on agent ( on all compute nodes - using  RPC), to
> > perform a task (i.e. to communicate with an external process). As I need
> to
> > use OVSNeutronAgent, I am thinking the following as potential solution
> for
> > adding the new method to the agent:
> > 1. Create new plugin based on existing OVS agent - That means cloning
> > OVSNeutronAgent and add the new method to that.
> > 2. Create new plugin, which inherits OVSNeutronPlugin - the new plugin
> > defines the new method, setup_rpc,...
> > 3. Add the new method to the existing OVSNeutronAgent
> >
> > Please let me know your thoughts and comments.
> >
> > Regards,
> > Nader.
> >
> >
> >


Re: [openstack-dev] Pecan Evaluation for Marconi

2014-03-19 Thread Flavio Percoco

On 19/03/14 11:20 -0400, Doug Hellmann wrote:




On Wed, Mar 19, 2014 at 10:11 AM, Flavio Percoco  wrote:
   My only concern in this case - I'm not sure if this has been discussed
   or written somewhere - is to define what the boundaries of that
   divergence are. For instance, and I know this will sound quite biased,
   I don't think there's anything wrong with supporting a *set* of wsgi
   frameworks. To be fair, there's already a set since currently
   integrated projects use webob, swob and Pecan.


Only one project is using swob, and it is unlikely that will change. The other
projects are mostly using the legacy oslo framework or Pecan, although a few
are using Flask (perhaps based on ceilometer's initial experimentation with
it?).

As I understand it, all of the integrated projects have looked at Pecan, and
are anticipating the transition. Most have no reason to create a new API
version, and therefore no reason to build a new API service; rebuilding the
existing API with a new tool would risk introducing incompatibilities. This
aligns with the plan when Pecan was proposed as a standard.



Yeah, what I wanted to say is that it's arguable whether we should add
another framework (Falcon) to this already existing set of frameworks.
Although that set is being reduced to just two, which still raises my
previous question:




   The point I'd like to get at is that as a general rule we probably
   shouldn't limit the set of supported libraries to just 1. 


... but perhaps decide in a per-case basis.


Cheers,
Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [Neutron] Docs for new plugins

2014-03-19 Thread Kyle Mestery
I opened this bug Edgar:

https://bugs.launchpad.net/openstack-manuals/+bug/1294726


On Wed, Mar 19, 2014 at 10:17 AM, Edgar Magana  wrote:

> Thanks for the input Anne!
>
> Kyle,
>
> Please file the proper bugs in the manual projects to be able to track the
> progress of this topic.
>
> Thanks,
>
> Edgar
>
> From: Anne Gentle 
> Date: Wednesday, March 19, 2014 4:54 AM
> To: Edgar Magana 
> Cc: OpenStack List 
>
> Subject: Re: [openstack-dev] [Neutron] Docs for new plugins
>
>
>
>
> On Wed, Mar 19, 2014 at 12:45 AM, Edgar Magana wrote:
>
>> Including Anne in this thread.
>>
>> Anne,
>>
>> Can you provide your input here?
>>
>> Thanks,
>>
>> Edgar
>>
>> From: Mohammad Banikazemi 
>> Reply-To: OpenStack List 
>> Date: Monday, March 17, 2014 8:02 AM
>> To: OpenStack List 
>>
>> Subject: Re: [openstack-dev] [Neutron] Docs for new plugins
>>
>> I think the docs get updated for each release, so probably the newly
>> added stuff (after I3) will be picked up by the RC1 release date. (cc'ing
>> Tom Fifield for a definitive answer.)
>>
>> By the way I do see the odl config table in the openstack-manuals source
>> tree:
>> https://github.com/openstack/openstack-manuals/blob/master/doc/common/tables/neutron-ml2_odl.xml
>> and that is being referenced here:
>>
>> https://github.com/openstack/openstack-manuals/blob/master/doc/config-reference/networking/section_networking-plugins-ml2.xml
>>
>> Best,
>>
>> Mohammad
>>
>>
>
> Hi Kyle, there is still a manual step by a docs person to run the scripts
> and create a patch.
>
> Does Gauvain's patch contain the OpenDaylight config options?
>  https://review.openstack.org/#/c/81013/
>
> I also want to note that just because there is reference information
> doesn't mean the docs are complete -- are there concepts and tasks still to
> be written so that users know how to use this ML2 plug-in and that it's
> available? Add those to the Cloud Administration Guide please.
>
> Thanks,
> Anne
>
>>
>>
>> From: Kyle Mestery 
>> To: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev@lists.openstack.org>,
>> Date: 03/17/2014 09:40 AM
>> Subject: Re: [openstack-dev] [Neutron] Docs for new plugins
>> --
>>
>>
>>
>> Edgar:
>>
>> I don't see the configuration options for the OpenDaylight ML2
>> MechanismDriver
>> added here yet, even though the code was checked in well over a week ago.
>> How long does it take to autogenerate this page from the code?
>>
>> Thanks!
>> Kyle
>>
>>
>>
>> On Wed, Mar 12, 2014 at 5:10 PM, Edgar Magana 
>> <*emag...@plumgrid.com*>
>> wrote:
>>
>>You should be able to add your plugin here:
>>
>>
>> http://docs.openstack.org/havana/config-reference/content/networking-options-plugins.html
>>
>>Thanks,
>>
>>Edgar
>>
>> From: Mohammad Banikazemi 
>> Date: Monday, March 10, 2014 2:40 PM
>> To: OpenStack List 
>> Cc: Edgar Magana 
>> Subject: Re: [openstack-dev] [Neutron] Docs for new plugins
>>
>>Would like to know what to do for adding documentation for a new
>>plugin. Can someone point me to the right place/process please.
>>
>>Thanks,
>>
>>Mohammad
>>


Re: [openstack-dev] [Congress] Policy types

2014-03-19 Thread Tim Hinrichs
Inline.

Tim

- Original Message -
| From: "prabhakar Kudva" 
| To: "OpenStack Development Mailing List (not for usage questions)" 

| Sent: Wednesday, March 19, 2014 7:09:24 AM
| Subject: Re: [openstack-dev] [Congress] Policy types
| 
| Hi Tim,
|  
| I am good with starting or owning the Python __congress__builtin__ proposal, but
| will need help from the whole team to work through the details.
| Do you all envision, the data integration API (Pete's calls) to be part of
| this builtin?
| What are the exact aspects each member is interested in having in this
| builtin? It
| would be good to start a list and discuss.
| 1. Inherit all python builtins
| 2. Any builtins specific to datalog etc
| 3. builtins to test and manage primitives
|  

I'd say we should start simple.  The data types we support right now are 
integers, floats, and strings.  So let's add some basic arithmetic and string 
manipulation functions.  If we design this right, it will be easy to add more 
later.

Arithmetic: +, *, -, /, <, >, =, others?
Strings: concatenation, substring, indexof, others?

I'd say we definitely don't want class (3)--we don't want the policy to change 
the data upon which policy decisions are made, at least not directly.  
Conceptually, the tables derived from nova/Neutron/etc. represent *their* 
states.  We can change their states but only by executing *their* API calls, 
which if successful, will change their state, which will eventually be 
reflected in Congress tables.  The Condition-Action policy that I described 
below dictates which of their API calls Congress should execute and when; this 
policy *indirectly* will change the tables of Nova/Neutron/etc.  But I wouldn't 
want to change tables via builtins.



| Regarding data integration:
| My concern with periodically pushing the contents is consistency. Having two
| stores (the nova/OS database and a Congress-specific one) increases the
| chances of the data being out of sync. For example
| nova:owner(vm1), might be changed in the nova database if the period is too
| large.  What would be
| the benefits of doing this?  The first one: 'ask for contents of the tables'
| reduces the consistency
| issue.  Do you see those API mapping betweeen table entries and OS calls as
| part of the builtin?

Conceptually, data integration handles the problem of grabbing information from 
other components.  Builtins handle the problem that some tables are  
hard/impossible to define within Datalog itself.  

More practically, I would NOT want a builtin that queries another cloud service 
because the policy writer does not (and is not supposed to) know the particular 
algorithms Congress will use to process those rules.  Congress could end up 
calling a builtin 100 times for a *single* computation of policy violations.
We wouldn't want to download all the data about all VM instances 100 times 
whenever someone asks for all violations.  So builtins are things that compute 
quickly.

I don't see a way around caching the contents of other OS components.  A single 
policy might utilize the same data but in different ways in multiple places in 
the policy.  The intent is that we'll always be in sync to the extent that is 
practically feasible.  And even if we were to ask for all the data that we need 
before each and every query someone asks, that data could still change before 
we answer the query, before the person asking the query looked at the answer, 
before the person asking the query made a decision based on the answer, etc.  
So staleness is a fact of life, and we're setting this up to minimize those 
problems by enabling cloud services to send us *updates* to their data whenever 
they get them.  If a cloud service does not send us updates, the best we can do 
is poll them periodically, which will make staleness a bigger problem.  But I 
don't see what else we can do.
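The update-versus-poll trade-off described above can be sketched as a small per-service cache that applies deltas when the cloud service pushes them, and falls back to a full snapshot when Congress must poll (a hypothetical sketch, not Congress's actual data-integration code):

```python
# Hypothetical table cache: incremental updates are the cheap, preferred
# path; a full refresh is the fallback for poll-only services. Staleness
# between refreshes is, as noted above, a fact of life.

class TableCache(object):
    def __init__(self):
        self.rows = set()

    def apply_update(self, inserts=(), deletes=()):
        # Cheap path: the cloud service pushed us a delta.
        self.rows.difference_update(deletes)
        self.rows.update(inserts)

    def full_refresh(self, snapshot):
        # Fallback path: a periodic poll returned the whole table.
        self.rows = set(snapshot)


cache = TableCache()
cache.full_refresh([('vm1', 'owner_a'), ('vm2', 'owner_b')])
# Later, the service pushes an update: vm1 deleted, vm3 created.
cache.apply_update(inserts=[('vm3', 'owner_a')],
                   deletes=[('vm1', 'owner_a')])
print(sorted(cache.rows))
```

Every policy query then reads from the cache, so asking for all violations never re-downloads all VM instances.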


|  
| >>2. The policy engine will need to ask for the contents of tables (or
| >>updates to tables).  Or have the >>contents/updates pushed to it
| >>periodically.  The infrastructure for doing this is what Peter (pballand)
| volunteered to do in IRC last week.  Let's wait on this to see what he
| >>has to say.
|  
| > Date: Tue, 18 Mar 2014 12:56:10 -0700
| > From: thinri...@vmware.com
| > To: openstack-dev@lists.openstack.org
| > CC: rajde...@vmware.com
| > Subject: Re: [openstack-dev] [Congress] Policy types
| > 
| > Hi Prabhakar,
| > 
| > Found time for a more detailed response.  Comments are inline.
| > 
| > Tim
| > 
| > - Original Message -
| > | From: "Tim Hinrichs" 
| > | To: "OpenStack Development Mailing List (not for usage questions)"
| > | 
| > | Sent: Tuesday, March 18, 2014 9:31:34 AM
| > | Subject: Re: [openstack-dev] [Congress] Policy types
| > | 
| > | Hi Prabhakar,
| > | 
| > | No IRC meeting this week.  Our IRC is every *other* week, and we had it
| > | last
| > | week.
| > | 
| > | Though there's been enough activity of late that maybe we should consider
| > | maki

Re: [openstack-dev] [Neutron] advanced servicevm framework: meeting time slot proposal 5:00UTC (Tue) and minutes (was Re: [Neutron] advanced servicevm framework IRC meeting March 18(Tuesday) 23:00 UTC

2014-03-19 Thread Mohammad Banikazemi



Isaku Yamahata  wrote on 03/19/2014 04:38:34 AM:

> From: Isaku Yamahata 
> To: OpenStack Development Mailing List
,
> Cc: isaku.yamah...@gmail.com
> Date: 03/19/2014 04:48 AM
> Subject: [openstack-dev] [Neutron] advanced servicevm framework:
> meeting time slot proposal 5:00UTC (Tue) and minutes (was Re:
> [Neutron] advanced servicevm framework IRC meeting March 18(Tuesday)23:00
UTC)
>
>
> * Time slot
> Weekly Tuesday 5:00UTC-
> Next meeting: March 24, 5:00UTC-
>
> Since there were many requests for new time slots, the proposed time slot
> at the meeting is 5:00UTC.
> The related timezones are
> JST(UTC+9), IST(UTC+5.30), CED(UTC+1), EST(UTC-5), PDT(UTC-7), PST(UTC-8)

Just wanted to note that if the EST refers to Eastern Standard Time, the
conversion for that time zone (and some of the others) is not correct: the US
East Coast is currently on daylight time (EDT, UTC-4), which makes the meeting
1:00am local time. I realize it is difficult to have a time slot that works for
everybody. Will be following this activity through the IRC logs and the ML.
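For reference, the conversion can be sanity-checked with Python's zoneinfo (available since Python 3.9); March 24, 2014 falls within US daylight saving time, so 5:00 UTC is indeed 1:00am Eastern:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Next meeting: March 24, 2014 at 05:00 UTC.
mtg = datetime(2014, 3, 24, 5, 0, tzinfo=timezone.utc)
eastern = mtg.astimezone(ZoneInfo("America/New_York"))
print(eastern.strftime("%Y-%m-%d %H:%M %Z"))  # 2014-03-24 01:00 EDT
```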

Best,

Mohammad


Re: [openstack-dev] [TripleO] Alternating meeting time for more TZ friendliness

2014-03-19 Thread Sullivan, Jon Paul
> From: James Slagle [mailto:james.sla...@gmail.com]
> Sent: 18 March 2014 19:58
> Subject: [openstack-dev] [TripleO] Alternating meeting time for more TZ
> friendliness
> 
> Our current meeting time is Tuesdays at 19:00 UTC.  I think this works
> ok for most folks in and around North America.
> 
> It was proposed during today's meeting to see if there is interest is an
> alternating meeting time every other week so that we can be a bit more
> friendly to those folks that currently can't attend.
> If that interests you, speak up :).

Speaking up! :D

> 
> For reference, the current meeting schedules are at:
> https://wiki.openstack.org/wiki/Meetings

Tuesdays at 14:00 UTC on #openstack-meeting-alt is available.

> 
> --
> -- James Slagle
> --
> 

Thanks, 
Jon-Paul Sullivan ☺ Cloud Services - @hpcloud



[openstack-dev] [php-sdk] Action items from today's meeting

2014-03-19 Thread Shaunak Kashyap
Thanks for the great meeting today. Here are the action items that came out of 
it:
- [Shaunak] Look into mailing list for user support. How do other OpenStack 
projects do this?
- [Shaunak] Make trivial change to repo to learn OpenStack contribution process
- [Jamie] Look into PHPSpec vs. PHPUnit 4. Specific: how does PHPSpec do code 
coverage?
- [Matt] Look into sponsorship from PHP FIG (email Larry Garfield @crell)
- [Matt] Consider syncing up with Alex Gaynor about removing dependency on HTTP 
transport layer
- [Matt] Write code to use PSR-4 with intermediate directory ("OpenStack") 
- [Matt] Get with Sam re: regular meeting on Wednesdays at 11EST, then schedule 
recurring meeting.




Re: [openstack-dev] [Neutron] advanced servicevm framework: meeting time slot proposal 5:00UTC (Tue) and minutes (was Re: [Neutron] advanced servicevm framework IRC meeting March 18(Tuesday) 23:00 UTC

2014-03-19 Thread Stephen Wong
Hi Mohammad,

I am sorry to say that the new schedule is indeed 1am EST...

- Stephen


On Wed, Mar 19, 2014 at 9:04 AM, Mohammad Banikazemi  wrote:

> Isaku Yamahata  wrote on 03/19/2014 04:38:34 AM:
>
> > From: Isaku Yamahata 
> > To: OpenStack Development Mailing List <
> openstack-dev@lists.openstack.org>,
> > Cc: isaku.yamah...@gmail.com
> > Date: 03/19/2014 04:48 AM
> > Subject: [openstack-dev] [Neutron] advanced servicevm framework:
> > meeting time slot proposal 5:00UTC (Tue) and minutes (was Re:
> > [Neutron] advanced servicevm framework IRC meeting March
> 18(Tuesday)23:00 UTC)
>
> >
> >
> > * Time slot
> > Weekly Tuesday 5:00UTC-
> > Next meeting: March 24, 5:00UTC-
> >
> > Since there were many requests for new time slots, the proposed time slot
> > at the meeting is 5:00UTC.
> > The related timezones are
> > JST(UTC+9), IST(UTC+5.30), CED(UTC+1), EST(UTC-5), PDT(UTC-7), PST(UTC-8)
>
> Just wanted to note that if the EST refers to Eastern Standard Time, the
> conversion for that time zone (and some of the others) is not correct: the US
> East Coast is currently on daylight time (EDT, UTC-4), which makes the meeting
> 1:00am local time. I realize it is difficult to have a time slot that works for
> everybody. Will be following this activity through the IRC logs and the ML.
>
> Best,
>
> Mohammad
>
>


Re: [openstack-dev] [neutron][policy] Integrating network policies and network services

2014-03-19 Thread Louis.Fourie
Mohammad,
  Agree, the information models for these two proposals are very similar.

It appears that the ODL model offers some additional flexibility, in that
direction attributes are attached to Classifiers and Directives rather than to
Policies as in the OpenStack model.

Maybe the authors can comment on other differences and their motivation?

Is there any plan to merge these two models and create a common model?  Is 
there a group
within OpenStack actively working on this?  Will this work be done within 
OpenStack or OpenDaylight?


-  Louis

From: Mohammad Banikazemi [mailto:m...@us.ibm.com]
Sent: Tuesday, March 18, 2014 9:20 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][policy] Integrating network policies and 
network services


Louis, We are still working on the details of the new contract based model. To 
get an idea please refer to the original project google document [1] and look 
under the section titled
Use Cases: 3-tier Application with Security Policies where policies are 
described through a provider/consumer relationship. The contract model is 
similar to the model being worked out by a similarly named project in 
OpenDaylight. You can find more information on the contract model there [2].

Best,

Mohammad


[1] 
https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit#heading=h.gebyoou6khks
[2] 
https://wiki.opendaylight.org/view/Project_Proposals:Application_Policy_Plugin


From: "Louis.Fourie" 
To: "OpenStack Development Mailing List (not for usage questions)" 
,
Date: 03/18/2014 03:23 PM
Subject: Re: [openstack-dev] [neutron][policy] Integrating network policies and 
network services





Mohammad,
  Can you share details on the contract-based policy model?
-  Louis

From: Mohammad Banikazemi [mailto:m...@us.ibm.com]
Sent: Friday, March 14, 2014 3:18 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [neutron][policy] Integrating network policies and 
network services


We have started looking at how the Neutron advanced services being defined and 
developed right now can be used within the Neutron policy framework we are 
building. Furthermore, we have been looking at a new model for the policy 
framework as of the past couple of weeks. So, I have been trying to see how the 
services will fit in (or can be utilized by) the policy work in general and 
with the new contract-based model we are considering in particular. Some of the
issues I would like to discuss here are specific to the use of service chains
with the group policy work, but some are generic and related to service
chaining itself.

If I understand it correctly, the proposed service chaining model requires the 
creation of the services in the chain without specifying their insertion 
contexts. Then, the service chain is created with specifying the services in 
the chain, a particular provider (which is specific to the chain being built) 
and possibly source and destination insertion contexts.

1- This fits ok with the policy model we had developed earlier where the policy 
would get defined between a source and a destination policy endpoint group. The 
chain could be instantiated at the time the policy gets defined. (More 
questions on the instantiation below marked as 1.a and 1.b.) How would that 
work in a contract-based model for policy? At the time a contract is defined, 
its producers and consumers are not defined yet. Would we postpone the 
instantiation of the service chain to the time a contract gets a producer and 
at least a consumer?

1.a- It seems to me, it would be helpful if not necessary to be able to define 
a chain without instantiating the chain. If I understand it correctly, in the 
current service chaining model, when the chain is created, the 
source/destination contexts are used (whether they are specified explicitly or 
implicitly) and the chain of services become operational. We may want to be 
able to define the chain and postpone its creation to a later point in time.
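The define-versus-instantiate split asked for in 1.a might look roughly like this (hypothetical classes, not the actual Neutron service-chain API): defining a chain only records intent; instantiation happens later, once the insertion contexts, such as a contract's producer and consumer, are known.

```python
# Illustrative sketch of separating chain definition from chain creation.
# Names (ServiceChain, instantiate, the context strings) are assumptions.

class ServiceChain(object):
    def __init__(self, services):
        self.services = list(services)   # e.g. ['firewall', 'loadbalancer']
        self.operational = False
        self.src_ctx = None
        self.dst_ctx = None

    def instantiate(self, src_ctx, dst_ctx):
        # Only now do the services come up between the two insertion
        # contexts; before this call the chain exists as a definition only.
        self.src_ctx, self.dst_ctx = src_ctx, dst_ctx
        self.operational = True


chain = ServiceChain(['firewall', 'loadbalancer'])
assert not chain.operational              # defined, but not yet created
chain.instantiate('producer-epg', 'consumer-epg')
print(chain.operational)                  # True
```

In a contract-based model, instantiate() would be invoked when the contract gains a producer and at least one consumer, rather than at definition time.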

1.b-Is it really possible to stand up a service without knowing its insertion 
context (explicitly defined or implicitly defined) in all cases? For certain 
cases this will be ok but for others, depending on the insertion context or 
other factors such as the requirements of other services in the chain we may 
need to for example instantiate the service (e.g. create a VM) at a specific 
location that is not known when the service is created. If that may be the 
case, would it make sense to not instantiate the services of a chain at any 
level (rather than instantiating 

Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-19 Thread Zane Bitter

On 19/03/14 05:00, Stan Lagun wrote:

Steven,

Agree with your opinion on HOT expansion. I see that inclusion of
imperative workflows and ALM would require a major Heat redesign and
probably would be impossible without losing compatibility with the previous
HOT syntax. It would blur Heat's mission, confuse current users and raise a
lot of questions about what should and what should not be in Heat. That's why
we chose to build a system on top of Heat rather than extending HOT.


+1, I agree (as we have discussed before) that it would be a mistake to 
shoehorn workflow stuff into Heat. I do think we should implement the 
hooks I mentioned at the start of this thread to allow tighter 
integration between Heat and a workflow engine (i.e. Mistral).


So building a system on top of Heat is good. Building it on top of 
Mistral as well would also be good, and that was part of the feedback 
from the TC.


To me, building on top means building on top of the languages (which 
users will have to invest a lot of work in learning) as well, rather 
than having a completely different language and only using the 
underlying implementation(s).



Now I would like to clarify why have we chosen imperative approach with DSL.

You see the DSL as an alternative to HOT, but it is not. The DSL is an
alternative to Python-encoded resources in Heat (heat/engine/resources/*.py).
Imagine how Heat would look if you let untrusted users upload Python plugins
to the Heat engine and load them on the fly. Heat resources are written in
Python, which is an imperative language. So is MuranoPL, for the same reason.


We had this exact problem in Heat, and we decided to solve it with... 
HOT: 
http://lists.openstack.org/pipermail/openstack-dev/2013-April/007989.html


If I may be so bold as to quote myself:

"...no cloud operator in the world - not even your friendly local IT 
department - is going to let users upload Python code to run in-memory 
in their orchestration engine along with all of the other users' code.


"If only there were some sort of language for defining OpenStack 
services that could be safely executed by users...


"Of course that's exactly what we're all about on this project :). So my 
proposal is to allow users to define their own resource types using a 
Heat template."


Which we did:
https://blueprints.launchpad.net/heat/+spec/provider-resource
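For readers who haven't seen the provider-resource mechanism: a Heat environment file maps a custom resource type name to an ordinary HOT template that implements it, so users can define new types without uploading any Python. A minimal sketch (the type name, file names and properties below are illustrative, not taken from the thread):

```yaml
# environment.yaml -- register the custom type
resource_registry:
  My::Custom::Server: my_server.yaml
---
# my_server.yaml -- implement My::Custom::Server in plain HOT
heat_template_version: 2013-05-23
parameters:
  flavor:
    type: string
resources:
  server:
    type: OS::Nova::Server
    properties:
      flavor: { get_param: flavor }
      image: cirros
```

A parent template can then declare resources of `type: My::Custom::Server` like any built-in type, with the environment passed at stack-creation time (e.g. `heat stack-create -e environment.yaml -f parent.yaml mystack`).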


We want application authors to be able to express application deployment
and maintenance logic of any complexity. This may involve communication
with 3rd-party REST services (APIs of applications being deployed,
external services like a DNS server API, an application licensing server API,
billing systems, some hardware component APIs, etc.) and internal
OpenStack services like Trove, Sahara, Marconi and others, including
those that are not incubated yet and those to come in the future. You
cannot have such things in HOT, and when you need them you have to
develop a custom resource in Python. Dependence on custom plugins is
not good for Murano because they cannot be uploaded by end users, so a
user cannot write an application definition that can be imported to/run
on any cloud, and needs to convince the cloud administrator to install his
Python plugin (something that is unimaginable in real life).


Shouldn't Mistral be able to do all of those same things too?

Talking to existing OpenStack services doesn't seem hard - you could 
write plugins for those, or save time (and save the user learning 
another language) by using python-openstackclient and its syntax.


For everything else you have a small number of generic operations - e.g. 
post to a Marconi queue (ReST API calls to untrusted services are 
problematic from a security perspective) - and allow the user to handle 
them in their own code in a language of their choice running either on 
their own machine or on a sandboxed, metered Compute server, rather than 
in a custom Turing-complete DSL running unmetered on the Murano server.
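For concreteness, posting a message to a Marconi queue over its HTTP API looks roughly like the sketch below. The endpoint shape follows the v1 API as I understand it; the host, queue name and message body are made up, and the request is only constructed here, not sent:

```python
import json
import uuid
from urllib import request

def build_post(base_url, queue, body, ttl=300, client_id=None):
    """Build (but don't send) a POST that enqueues one message."""
    payload = json.dumps([{"ttl": ttl, "body": body}]).encode("utf-8")
    return request.Request(
        url="%s/v1/queues/%s/messages" % (base_url, queue),
        data=payload,
        headers={"Content-Type": "application/json",
                 # Marconi requires a per-client UUID header
                 "Client-ID": client_id or str(uuid.uuid4())},
        method="POST",
    )

req = build_post("http://marconi.example.com:8888", "transcode",
                 {"video_id": 42, "format": "webm"})
print(req.get_method(), req.full_url)
# POST http://marconi.example.com:8888/v1/queues/transcode/messages
```

This is the kind of "small number of generic operations" a workflow engine could expose, with everything else delegated to user code.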



Because the DSL is a way to write custom resources (in Heat's terminology) it
has to be Turing-complete and have all the characteristics of a
general-purpose language. It also has to have domain-specific features,
because we cannot expect that DSL users would be as skilled as Heat
developers and could write such resources without knowledge of the hosting
engine's architecture and internals.

The HOT DSL is declarative because all the imperative stuff is hardcoded
into the Heat engine. Thus all that is left for HOT is to define the "state
of the world" - the desired outcome. That is analogous to the Object Model in
Murano (see [1]). It is the Object Model that can be compared to HOT, not the
DSL. As you can see, it is no more complex than HOT. The Object Model is what
the end user produces in Murano. And he doesn't even need to write it because
it can be composed in the UI.

Now, because the DSL provides not only a way to write sandboxed, isolated code
but also a lot of declarations (classes, properties, parameters,
inheritance and contracts) that are mostly not present in Python, we
don't need Paramet

Re: [openstack-dev] [neutron][policy] Integrating network policies and network services

2014-03-19 Thread Kyle Mestery
On Wed, Mar 19, 2014 at 12:23 PM, Louis.Fourie wrote:

>  Mohammad,
>
>   Agree, the information models for these two proposals are very similar.
>
>
>
> It appears that the ODL model offers some additional flexibility in that
> direction attributes are
>
> attached to Classifiers and Directives rather than Policies in the
> OpenStack model.
>
>
>
> Maybe the authors can comment on other differences and their motivation?
>
>
>
> Is there any plan to merge these two models and create a common model?  Is
> there a group
>
> within OpenStack actively working on this?  Will this work be done within
> OpenStack or OpenDaylight?
>
>
>
Yes, we've been having weekly meetings since the Hong Kong Summit on this
[1]. Please join us
on IRC if you want. The two models are meant to be pretty much the same,
though the ODL models
are expanded a bit compared to the Neutron models at the moment.

Thanks,
Kyle

[1] https://wiki.openstack.org/wiki/Meetings/Neutron_Group_Policy

>  -  Louis
>
>
>
> *From:* Mohammad Banikazemi [mailto:m...@us.ibm.com]
> *Sent:* Tuesday, March 18, 2014 9:20 PM
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Cc:* OpenStack Development Mailing List (not for usage questions)
>
> *Subject:* Re: [openstack-dev] [neutron][policy] Integrating network
> policies and network services
>
>
>
> Louis, We are still working on the details of the new contract based
> model. To get an idea please refer to the original project google document
> [1] and look under the section titled
> *Use Cases: **3-tier Application with Security Policies *where policies
> are described through a provider/consumer relationship. The contract model
> is similar to the model being worked out by a similarly named project in
> OpenDaylight. You can find more information on the contract model there [2].
>
> Best,
>
> Mohammad
>
>
> [1]
> https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit#heading=h.gebyoou6khks
> [2]
> https://wiki.opendaylight.org/view/Project_Proposals:Application_Policy_Plugin
>
>
> From: "Louis.Fourie" 
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>,
> Date: 03/18/2014 03:23 PM
> Subject: Re: [openstack-dev] [neutron][policy] Integrating network
> policies and network services
>  --
>
>
>
>
> Mohammad,
>   Can you share details on the contract-based policy model?
>
> -  Louis
>
>
> *From:* Mohammad Banikazemi [mailto:m...@us.ibm.com ]
> * Sent:* Friday, March 14, 2014 3:18 PM
> * To:* OpenStack Development Mailing List (not for usage questions)
> * Subject:* [openstack-dev] [neutron][policy] Integrating network
> policies and network services
>
>
> We have started looking at how the Neutron advanced services being defined
> and developed right now can be used within the Neutron policy framework we
> are building. Furthermore, we have been looking at a new model for the
> policy framework as of the past couple of weeks. So, I have been trying to
> see how the services will fit in (or can be utilized by) the policy work in
> general and with the new contract-based model we are considering in
> particular. Some of the questions I would like to discuss here are specific
> to the use of service chains with the group policy work, but some are
> generic and related to service chaining itself.
>
> If I understand it correctly, the proposed service chaining model requires
> the creation of the services in the chain without specifying their
> insertion contexts. Then, the service chain is created with specifying the
> services in the chain, a particular provider (which is specific to the
> chain being built) and possibly source and destination insertion contexts.
>
> 1- This fits ok with the policy model we had developed earlier where the
> policy would get defined between a source and a destination policy endpoint
> group. The chain could be instantiated at the time the policy gets defined.
> (More questions on the instantiation below marked as 1.a and 1.b.) How
> would that work in a contract based model for policy? At the time a
> contract is defined, its producers and consumers are not defined yet.
> Would we postpone the instantiation of the service chain to the time a
> contract gets a producer and at least a consumer?
>
> 1.a- It seems to me, it would be helpful if not necessary to be able to
> define a chain without instantiating the chain. If I understand it
> correctly, in the current service chaining model, when the chain is
> created, the source/destination contexts are used (whether they are
> specified explicitly or implicitly) and the chain of services become
> operational. We may want to be able to define the cha

Re: [openstack-dev] Pecan Evaluation for Marconi

2014-03-19 Thread Mike Perez

On 03/19/2014 08:20 AM, Doug Hellmann wrote:

As I understand it, all of the integrated projects have looked at Pecan
and are anticipating the transition. Most have no reason to create a new
API version, and therefore no reason to build a new API service; rebuilding
the existing API with a new tool would risk introducing incompatibilities.
This aligns with the plan when Pecan was proposed as a standard.

Doug


I have evaluated it for Cinder and have spoken to numerous interested 
folks in Cinder about using Pecan. It's currently what we're planning to 
move to, after I did a rough prototype for some of our core API 
controllers. As you mentioned, we have no reason to do a version bump 
yet. We'll likely do a bump to be py3-compatible rather than for a 
significant change.


--
Mike Perez

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] WSGI and Python paste

2014-03-19 Thread Thomas Goirand
On 03/19/2014 09:25 PM, victor stinner wrote:
> Hi,
> 
> According to the following table, paste is blocking many OpenStack servers to 
> be ported to Python 3:
> 
>https://wiki.openstack.org/wiki/Python3

I had a look at this, and found it weird that ironicclient is marked as
supporting Python 3 when it depends on keystoneclient, which doesn't
have Python 3 support.

Thomas


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][Mistral] Adding new core reviewers

2014-03-19 Thread Manas Kelshikar
+1


On Wed, Mar 19, 2014 at 5:33 AM, Stan Lagun  wrote:

> +1 for both
>
>
> On Wed, Mar 19, 2014 at 3:35 PM, Renat Akhmerov wrote:
>
>> Team,
>>
>> So far I've been the only core member of the team. I started
>> feeling lonely :) Since the project team and the project itself have now
>> grown (thanks to StackStorm and Intel), I think it's time to think about
>> extending the core team.
>>
>> I would propose:
>>
>>- Nikolay Makhotkin (nmakhotkin at launchpad). He's been working on
>>the project since almost the very beginning and made significant
>>contribution (design, reviews, code).
>>- Dmitri Zimine (i-dz at launchpad). Dmitri joined the project about
>>2 months ago. Since then he's made a series of important high-quality
>>commits, a lot of valuable reviews and, IMO most importantly, he has a
>>solid vision of the project in general (requirements, use cases, 
>> comparison
>>to other technologies) and has a pro-active viewpoint in all our
>>discussions.
>>
>>
>> Thoughts?
>>
>> Renat Akhmerov
>> @ Mirantis Inc.
>>
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Sincerely yours
> Stanislav (Stan) Lagun
> Senior Developer
> Mirantis
> 35b/3, Vorontsovskaya St.
> Moscow, Russia
> Skype: stanlagun
> www.mirantis.com
> sla...@mirantis.com
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-19 Thread Miguel Angel Ajo



An update on the changes required to have a
py->c++ compiled rootwrap as a mitigation POC for Havana/Icehouse.

https://github.com/mangelajo/shedskin.rootwrap/commit/e4167a6491dfbc71e2d0f6e28ba93bc8a1dd66c0

The current translation output is included.

It looks doable (I've almost killed 80% of the translation problems),
but there are two big stones:

1) As Joe said, no support for subprocess (we're interested in popen);
   I'm using a dummy os.system() for the test.

2) No logging support.

   I'm not sure how complicated it would be to get those modules 
implemented for shedskin.


On 03/18/2014 09:14 AM, Miguel Angel Ajo wrote:

Hi Joe, thank you very much for the positive feedback,

I plan to spend a day during this week on the shedskin compatibility
for rootwrap (I'll branch it, and tune/cut down as necessary) to make
it compile under shedskin [1]: nothing done yet.

It's a short-term alternative until we can have a rootwrap agent,
together with its integration in neutron (for Juno). As for the
compiled rootwrap, if it works, and if it looks good (security-wise),
then we'd have a solution for Icehouse/Havana.

Help in [1] is really welcome ;-) I'm available in #openstack-neutron
as 'ajo'.

Best regards,
Miguel Ángel.

[1] https://github.com/mangelajo/shedskin.rootwrap

On 03/18/2014 12:48 AM, Joe Gordon wrote:




On Tue, Mar 11, 2014 at 1:46 AM, Miguel Angel Ajo Pelayo
mailto:mangel...@redhat.com>> wrote:


I have included on the etherpad, the option to write a sudo
plugin (or several), specific for openstack.


And this is a test with shedskin; I suppose that in more complicated
dependency scenarios it should perform better.

[majopela@redcylon tmp]$ cat > test.py << EOF
 > import sys
 > print "hello world"
 > sys.exit(0)
 > EOF

[majopela@redcylon tmp]$ time python test.py
hello world

real0m0.016s
user0m0.015s
sys 0m0.001s



This looks very promising!

A few gotchas:

* Very limited library support
https://code.google.com/p/shedskin/wiki/docs#Library_Limitations
   * no logging
   * no six
   * no subprocess

* no *args support
   *
https://code.google.com/p/shedskin/wiki/docs#Python_Subset_Restrictions

that being said I did a quick comparison with great results:

$ cat tmp.sh
#!/usr/bin/env bash
echo $0 $@
ip a

$ time ./tmp.sh  foo bar> /dev/null

real0m0.009s
user0m0.003s
sys 0m0.006s



$ cat tmp.py
#!/usr/bin/env python
import os
import sys

print sys.argv
print os.system("ip a")

$ time ./tmp.py  foo bar > /dev/null

min:
real0m0.016s
user0m0.004s
sys 0m0.012s

max:
real0m0.038s
user0m0.016s
sys 0m0.020s



shedskin  tmp.py && make


$ time ./tmp  foo bar > /dev/null

real0m0.010s
user0m0.007s
sys 0m0.002s



Based on these results I think a deeper dive into making rootwrap
support shedskin is worthwhile.





[majopela@redcylon tmp]$ shedskin test.py
*** SHED SKIN Python-to-C++ Compiler 0.9.4 ***
Copyright 2005-2011 Mark Dufour; License GNU GPL version 3 (See
LICENSE)

[analyzing types..]
100%
[generating c++ code..]
[elapsed time: 1.59 seconds]
[majopela@redcylon tmp]$ make
g++  -O2 -march=native -Wno-deprecated  -I.
-I/usr/lib/python2.7/site-packages/shedskin/lib /tmp/test.cpp
/usr/lib/python2.7/site-packages/shedskin/lib/sys.cpp
/usr/lib/python2.7/site-packages/shedskin/lib/re.cpp
/usr/lib/python2.7/site-packages/shedskin/lib/builtin.cpp -lgc
-lpcre  -o test
[majopela@redcylon tmp]$ time ./test
hello world

real0m0.003s
user0m0.000s
sys 0m0.002s


- Original Message -
 > We had this same issue with the dhcp-agent. Code was added that
paralleled
 > the initial sync here: https://review.openstack.org/#/c/28914/
that made
 > things a good bit faster if I remember correctly. Might be worth
doing
 > something similar for the l3-agent.
 >
 > Best,
 >
 > Aaron
 >
 >
 > On Mon, Mar 10, 2014 at 5:07 PM, Joe Gordon <
joe.gord...@gmail.com  > wrote:
 >
 >
 >
 >
 >
 >
 > On Mon, Mar 10, 2014 at 3:57 PM, Joe Gordon <
joe.gord...@gmail.com  > wrote:
 >
 >
 >
 > I looked into the python to C options and haven't found anything
promising
 > yet.
 >
 >
 >  I tried Cython, and RPython, on a trivial hello world app, but got
 > similar startup times to standard python.
 >
 > The one thing that did work was adding a '-S' when starting
python.
 >
 > -S Disable the import of the module site and the site-dependent
manipulations
 > of sys.path that it entails.
 >
 > Using 'python -S' didn't appear to help in devstack
 >
 > #!/usr/bin/python -S
 > # PBR Generated from u'console_scripts'
 >
 > import sys
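The effect of -S on interpreter startup can be measured directly; a rough sketch (wall-clock timings are machine-dependent, so no particular numbers should be expected):

```python
import subprocess
import sys
import time

def startup(args):
    """Wall-clock time to launch an interpreter that does nothing."""
    start = time.perf_counter()
    subprocess.run(args, check=True)
    return time.perf_counter() - start

# Compare a normal launch against one that skips the site module (-S).
with_site = startup([sys.executable, "-c", "pass"])
no_site = startup([sys.executable, "-S", "-c", "pass"])
print("with site: %.4fs  without site: %.4fs" % (with_site, no_site))
```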

Re: [openstack-dev] [Nova][Heat] How to reliably detect VM failures?

2014-03-19 Thread Zane Bitter

On 19/03/14 02:07, Chris Friesen wrote:

On 03/18/2014 11:18 AM, Zane Bitter wrote:

On 18/03/14 12:42, Steven Dake wrote:



You should be able to use the HARestarter resource and functionality to
do healthchecking of a vm.


HARestarter is actually pretty problematic, both in a "causes major
architectural headaches for Heat and will probably be deprecated very
soon" sense and a "may do very unexpected things to your resources"
sense. I wouldn't recommend it.


Could you elaborate?  What unexpected things might it do?  And what are
the alternatives?


First of all, despite the name, it doesn't just restart but actually 
deletes the server that it's monitoring and recreates an entirely new 
one. It also deletes any resources which directly or indirectly depend 
on the server being monitored and recreates them too.


The alternative is to use Ceilometer alarms and/or some external 
monitoring system and implement recovery yourself, since the strategy 
you want depends on both your application and the type of failure.


Another avenue being explored in Heat is to have a general way of 
bringing a stack back into line with its template:

https://blueprints.launchpad.net/heat/+spec/stack-convergence

cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Storing license information in openstack/requirements

2014-03-19 Thread Joshua Harlow
I started the following.

https://review.openstack.org/#/c/81589

Please feel free to check it out, most of this information we can get from pypi 
itself via its apis.

-Josh

From: David Koo mailto:kpublicm...@gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Monday, February 17, 2014 at 5:21 PM
To: 
"openstack-dev@lists.openstack.org" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] Storing license information in 
openstack/requirements


Should we store licensing information as a comment in the
*-requirements files? Can it be stored on the same line? Something
like:
oslo.messaging>=1.3.0a4  # Apache-2.0

Since it's licenses we're tracking, shouldn't we be tracking indirect
dependencies too (i.e. packages pulled in by required packages)? And if
we want to do that then the method above won't be sufficient.

And, of course, we want an automated way of generating this info -
dependencies (can) change from version to version. Do we have such a
tool?
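A tool like that would start by pulling the license annotation out of each requirements line. A minimal sketch of that parsing step (the `# License` comment convention is the one proposed in this thread, not an established standard; indirect dependencies would still need e.g. a pip-based walk on top of this):

```python
import re

# pkg name, optional version spec, optional "# License" comment
LINE_RE = re.compile(r"^\s*([A-Za-z0-9._-]+)\s*([<>=!~][^#]*?)?\s*(?:#\s*(.+))?$")

def parse_requirement(line):
    """Return (name, version_spec, license) for one requirements line, or None."""
    m = LINE_RE.match(line.strip())
    if not m:
        return None
    name, spec, lic = m.groups()
    return name, (spec or "").strip(), (lic or "").strip() or None

print(parse_requirement("oslo.messaging>=1.3.0a4  # Apache-2.0"))
# ('oslo.messaging', '>=1.3.0a4', 'Apache-2.0')
```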

--
Koo

On Mon, 17 Feb 2014 17:01:24 +0100
Thierry Carrez mailto:thie...@openstack.org>> wrote:

Hi everyone,
A year ago there was a discussion about doing a license inventory on
OpenStack dependencies, to check that they are compatible with our own
license and make sure any addition gets a proper license check.
Back then I proposed to leverage the openstack/requirements repository
to store that information, but that repository was still
science-fiction at that time. Now that it's complete and functional,
I guess it's time to revisit the idea.
Should we store licensing information as a comment in the
*-requirements files? Can it be stored on the same line? Something
like:
oslo.messaging>=1.3.0a4  # Apache-2.0


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Requirements Wiki

2014-03-19 Thread Samuel Bercovici
+1

-Original Message-
From: Ryan O'Hara [mailto:roh...@redhat.com] 
Sent: Wednesday, March 19, 2014 2:37 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Requirements Wiki

On Tue, Mar 18, 2014 at 10:57:15PM +, Jorge Miramontes wrote:
> Hey Neutron LBaaS folks,
> 
> Per last week's IRC meeting I have created a preliminary requirements 
> & use case wiki page. I requested adding such a page since there 
> appears to be a lot of new interest in load balancing, and I feel that we 
> need a structured way to align everyone's interest in the project. 
> Furthermore, it appears that understanding everyone's requirements and 
> use cases will aid in the current object model discussion we all have 
> been having. That being said, this wiki is malleable and open to 
> discussion. I have added some preliminary requirements from my team's 
> perspective in order to start the discussion. My vision is that people 
> add requirements and use cases to the wiki for what they envision 
> Neutron LBaaS becoming. That way, we can all discuss as a group, 
> figure out what should and shouldn't be a requirement and prioritize 
> the rest in an effort to focus development efforts. Ready... set... go!
> 
> Here is the link to the wiki ==>
> https://wiki.openstack.org/wiki/Neutron/LBaaS/requirements
> 
> Cheers,
> --Jorge

Thank you for creating this page. I suggest that links be added to existing 
blueprints when applicable. Thoughts?

Ryan


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Pecan Evaluation for Marconi

2014-03-19 Thread Kurt Griffiths
> Only one project is using swob, and it is unlikely that will change.

That begs the question, *why* is that unlikely to change? Is it because
there are fundamental needs that are not met by Pecan? If I understand the
original charter for Oslo, it was to consolidate code already in use by
projects to facilitate sharing. It would seem to me that if swob has a
compelling reason to exist, and other data plane projects see value in it
(and I’m starting to think Marconi would be on that list), it would be a
good candidate for extraction to a standalone library. I personally see a
lot of alignment between swob and Falcon, and convergence between the two
libraries could be a productive path to explore.

Kurt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Requirements Wiki

2014-03-19 Thread Jorge Miramontes
Oleg, thanks for the updates.

Eugene, High/Medium/Low is fine with me. I really just wanted a way to rank 
items even amongst those sharing the same priority. As people start adding more items we 
may need more columns to add things such as this, links to blueprints (per 
Ryan's idea), etc. In terms of the requirements marked with a '?', I can try to 
clarify here:


  *   Static IP Addresses
 *   Our current Cloud Load Balancing (CLB) offering utilizes static IP 
addresses, which is something our customers really like, especially when setting 
up DNS. AWS, for example, gives you an A record which you CNAME to.
  *   Active/Passive Failover
 *   I think this is solved with multiple pools.
  *   IP Access Control
 *   Our current CLB offering allows the user to restrict access through 
their load balancer by blacklisting/whitelisting CIDR blocks and even 
individual IP addresses. This is just a basic security feature.
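The blacklist/whitelist behaviour described above is easy to sketch with Python's ipaddress module. The deny-overrides-allow ordering here is an assumption for illustration, not a description of CLB's actual semantics:

```python
import ipaddress

def is_allowed(client_ip, allow_cidrs, deny_cidrs):
    """Deny rules win; an empty allow list means everyone else is allowed."""
    ip = ipaddress.ip_address(client_ip)
    if any(ip in ipaddress.ip_network(c) for c in deny_cidrs):
        return False
    if not allow_cidrs:
        return True
    return any(ip in ipaddress.ip_network(c) for c in allow_cidrs)

print(is_allowed("10.0.0.5", ["10.0.0.0/24"], ["10.0.0.128/25"]))    # True
print(is_allowed("10.0.0.200", ["10.0.0.0/24"], ["10.0.0.128/25"]))  # False
```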

Cheers,
--Jorge

From: Eugene Nikanorov mailto:enikano...@mirantis.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, March 19, 2014 7:32 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Requirements Wiki

Hi Jorge,

Thanks for taking care of the page. I've added priorities, although I'm not 
sure we need precise priority weights.
Those features that still have '?' need further clarification.

Thanks,
Eugene.



On Wed, Mar 19, 2014 at 11:18 AM, Oleg Bondarev 
mailto:obonda...@mirantis.com>> wrote:
Hi Jorge,

Thanks for taking care of this and bringing it all together! This will be 
really useful for LBaaS discussions.
I updated the wiki to include L7 rules support and also marking already 
implemented requirements.

Thanks,
Oleg


On Wed, Mar 19, 2014 at 2:57 AM, Jorge Miramontes 
mailto:jorge.miramon...@rackspace.com>> wrote:
Hey Neutron LBaaS folks,

Per last week's IRC meeting I have created a preliminary requirements &
use case wiki page. I requested adding such a page since there appears to
be a lot of new interest in load balancing, and I feel that we need a
structured way to align everyone's interest in the project. Furthermore,
it appears that understanding everyone's requirements and use cases will
aid in the current object model discussion we all have been having. That
being said, this wiki is malleable and open to discussion. I have added
some preliminary requirements from my team's perspective in order to start
the discussion. My vision is that people add requirements and use cases to
the wiki for what they envision Neutron LBaaS becoming. That way, we can
all discuss as a group, figure out what should and shouldn't be a
requirement and prioritize the rest in an effort to focus development
efforts. Ready... set... go!

Here is the link to the wiki ==>
https://wiki.openstack.org/wiki/Neutron/LBaaS/requirements

Cheers,
--Jorge


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Alternating meeting time for more TZ friendliness

2014-03-19 Thread Clint Byrum
Excerpts from Sullivan, Jon Paul's message of 2014-03-19 09:26:44 -0700:
> > From: James Slagle [mailto:james.sla...@gmail.com]
> > Sent: 18 March 2014 19:58
> > Subject: [openstack-dev] [TripleO] Alternating meeting time for more TZ
> > friendliness
> > 
> > Our current meeting time is Tuesdays at 19:00 UTC.  I think this works
> > ok for most folks in and around North America.
> > 
> > It was proposed during today's meeting to see if there is interest in an
> > alternating meeting time every other week so that we can be a bit more
> > friendly to those folks that currently can't attend.
> > If that interests you, speak up :).
> 
> Speaking up! :D
> 
> > 
> > For reference, the current meeting schedules are at:
> > https://wiki.openstack.org/wiki/Meetings
> 
> Tuesdays at 14:00 UTC on #openstack-meeting-alt is available.
> 

If we were to have one at that time, we'd need to move the other time as
well. One driver for moving it is that our participants on the Eastern
side of Australia are already joining at 0600 their time, and will be
joining at 0500 soon.

If I've done my TZ math right, Tuesdays at 1400 UTC would be Wednesdays
at 0100 for Sydney.
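The timezone arithmetic above can be checked mechanically. A sketch using Python's zoneinfo (which needs the system tz database; late March 2014 falls in Sydney's AEDT, UTC+11):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Tuesday 2014-03-25 14:00 UTC, the proposed alternate meeting slot.
meeting_utc = datetime(2014, 3, 25, 14, 0, tzinfo=timezone.utc)
sydney = meeting_utc.astimezone(ZoneInfo("Australia/Sydney"))
print(sydney.strftime("%A %H:%M"))  # Wednesday 01:00
```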

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Pecan Evaluation for Marconi

2014-03-19 Thread Julien Danjou
On Wed, Mar 19 2014, Donald Stufft wrote:

> I’m not sure that “number of dependencies” is a useful metric at all tbh. At 
> the
> very least it’s not a very telling metric in the way it was presented in the 
> review.

[…]

+1000

Seriously, this in itself just discredits any value in this analysis to
me.

-- 
Julien Danjou
# Free Software hacker
# http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-19 Thread Robert Collins
On 20 March 2014 01:06, Mark McLoughlin  wrote:

> I think we need a slight reset on this discussion. The way this email
> was phrased gives a strong sense of "Marconi is a dumb idea, it's going
> to take a lot to persuade me otherwise".

Thanks Mark, that's a great point to make. I don't think Marconi is
dumb, but I sure don't understand why. Thank you!

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Proposed Core Reviewer Changes

2014-03-19 Thread Adrian Otto
Solum Cores,

Thanks for your input. The proposed changes have been applied.

Thanks,

Adrian

On Mar 17, 2014, at 10:13 PM, Adrian Otto  wrote:

> Solum Cores,
> 
> I propose the following changes to the Solum core reviewer team:
> 
> +gokrokve
> +julienvey
> +devdatta-kulkarni
> -kgriffs (inactivity)
> -russelb (inactivity)
> 
> Please reply with your +1 votes to proceed with this change, or any remarks 
> to the contrary.
> 
> Thanks,
> 
> Adrian
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][Mistral] Adding new core reviewers

2014-03-19 Thread Timur Nurlygayanov
+1 from me.
Also, in the future, we can add Kirill Izotov to the core team too.


On Wed, Mar 19, 2014 at 9:46 PM, Manas Kelshikar wrote:

> +1
>
>
> On Wed, Mar 19, 2014 at 5:33 AM, Stan Lagun  wrote:
>
>> +1 for both
>>
>>
>> On Wed, Mar 19, 2014 at 3:35 PM, Renat Akhmerov 
>> wrote:
>>
>>> Team,
>>>
>>> So far I've been the only core member of the team. I started
>>> feeling lonely :) Since the project team and the project itself have now
>>> grown (thanks to StackStorm and Intel), I think it's time to think about
>>> extending the core team.
>>>
>>> I would propose:
>>>
>>>- Nikolay Makhotkin (nmakhotkin at launchpad). He's been working on
>>>the project since almost the very beginning and made significant
>>>contribution (design, reviews, code).
>>>- Dmitri Zimine (i-dz at launchpad). Dmitri joined the project about
>>>2 months ago. Since then he's made a series of important high-quality
>>>commits, a lot of valuable reviews and, IMO most importantly, he has a
>>>solid vision of the project in general (requirements, use cases, 
>>> comparison
>>>to other technologies) and has a pro-active viewpoint in all our
>>>discussions.
>>>
>>>
>>> Thoughts?
>>>
>>> Renat Akhmerov
>>> @ Mirantis Inc.
>>>
>>>
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Sincerely yours
>> Stanislav (Stan) Lagun
>> Senior Developer
>> Mirantis
>> 35b/3, Vorontsovskaya St.
>> Moscow, Russia
>> Skype: stanlagun
>> www.mirantis.com
>> sla...@mirantis.com
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

Timur,
QA Engineer
OpenStack Projects
Mirantis Inc


Re: [openstack-dev] Pecan Evaluation for Marconi

2014-03-19 Thread Julien Danjou
On Wed, Mar 19 2014, Kurt Griffiths wrote:

> That begs the question, *why* is that unlikely to change?

Because that project is Swift.

-- 
Julien Danjou
// Free Software hacker
// http://julien.danjou.info




Re: [openstack-dev] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-19 Thread Devananda van der Veen
Let me start by saying that I want there to be a constructive discussion
around all this. I've done my best to keep my tone as non-snarky as I could
while still clearly stating my concerns. I've also spent a few hours
reviewing the current code and docs. Hopefully this contribution will be
beneficial in helping the discussion along.

For what it's worth, I don't have a clear understanding of why the Marconi
developer community chose to create a new queue rather than an abstraction
layer on top of existing queues. While my lack of understanding there isn't
a technical objection to the project, I hope they can address this in the
aforementioned FAQ.

The reference storage implementation is MongoDB. AFAIK, no integrated
projects require an AGPL package to be installed, and from the discussions
I've been part of, that would be a show-stopper if Marconi required
MongoDB. As I understand it, this is why sqlalchemy support was required
when Marconi was incubated. Saying "Marconi also supports SQLA" is
disingenuous because it is a second-class citizen with incomplete API
support, is clearly not the recommended storage driver, and is going to be
unusable at scale (I'll come back to this point in a bit).

Let me ask this. Which back-end is tested in Marconi's CI? That is the
back-end that matters right now. If that's Mongo, I think there's a
problem. If it's SQLA, then I think Marconi should declare any features
which SQLA doesn't support to be optional extensions, make SQLA the
default, and clearly document how to deploy Marconi at scale with a SQLA
back-end.


Then there's the db-as-a-queue antipattern, and the problems that I have
seen result from this in the past... I'm not the only one in the OpenStack
community with some experience scaling MySQL databases. Surely others have
their own experiences and opinions on whether a database (whether MySQL or
Mongo or Postgres or ...) can be used in such a way _at_scale_ and not fall
over from resource contention. I would hope that those members of the
community would chime into this discussion at some point. Perhaps they'll
even disagree with me!

A quick look at the code around claim (which, it seems, will be the most
commonly requested action) shows why this is an antipattern.

The MongoDB storage driver for claims requires _four_ queries just to get a
message, with a serious race condition (but at least it's documented in the
code) if multiple clients are claiming messages in the same queue at the
same time. For reference:

https://github.com/openstack/marconi/blob/master/marconi/queues/storage/mongodb/claims.py#L119

The SQLAlchemy storage driver is no better. It's issuing _five_ queries
just to claim a message (including a query to purge all expired claims
every time a new claim is created). The performance of this transaction
under high load is probably going to be bad...

https://github.com/openstack/marconi/blob/master/marconi/queues/storage/sqlalchemy/claims.py#L83
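
To make the antipattern concrete, here is a hypothetical, heavily simplified sketch of a db-backed claim using SQLite (standard library only) -- the schema, function names, and both strategies are invented for illustration and are not Marconi's actual driver code:

```python
import sqlite3
import uuid

# Toy model of a db-backed message queue -- hypothetical schema, not Marconi's.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE messages (
    id       INTEGER PRIMARY KEY,
    queue    TEXT NOT NULL,
    body     TEXT NOT NULL,
    claim_id TEXT)""")
conn.executemany("INSERT INTO messages (queue, body) VALUES (?, ?)",
                 [("video", "job-%d" % i) for i in range(3)])

def claim_naive(conn, queue):
    """Read-then-write claim: between the SELECT and the UPDATE another
    client can pick the same row -- the race window described above."""
    row = conn.execute(
        "SELECT id FROM messages WHERE queue = ? AND claim_id IS NULL LIMIT 1",
        (queue,)).fetchone()
    if row is None:
        return None  # queue empty
    claim_id = uuid.uuid4().hex
    conn.execute("UPDATE messages SET claim_id = ? WHERE id = ?",
                 (claim_id, row[0]))
    return row[0], claim_id

def claim_atomic(conn, queue):
    """Single-statement claim: the UPDATE itself finds an unclaimed row,
    so the database serializes concurrent claimers."""
    claim_id = uuid.uuid4().hex
    cur = conn.execute(
        "UPDATE messages SET claim_id = ? WHERE id = "
        "(SELECT id FROM messages WHERE queue = ? AND claim_id IS NULL LIMIT 1)",
        (claim_id, queue))
    if cur.rowcount == 0:
        return None  # queue empty
    row = conn.execute("SELECT id FROM messages WHERE claim_id = ?",
                       (claim_id,)).fetchone()
    return row[0], claim_id

first = claim_naive(conn, "video")
second = claim_atomic(conn, "video")
print(first, second)  # two different messages claimed
```

The naive version mirrors the read-then-write window described above; the atomic version folds the selection into the UPDATE so the database serializes concurrent claimers. Real drivers also have to handle TTLs, batch limits, and expired-claim cleanup, which is where the extra queries come from.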

Lastly, it looks like the Marconi storage drivers assume the storage
back-end to be infinitely scalable. AFAICT, the mongo storage driver
supports mongo's native sharding -- which I'm happy to see -- but the SQLA
driver does not appear to support anything equivalent for other back-ends,
e.g. MySQL. This relegates any deployment using the SQLA backend to the
scale of "only what one database instance can handle". It's unsuitable for
any large-scale deployment. Folks who don't want to use Mongo are likely to
use MySQL and will be promptly bitten by Marconi's lack of scalability with
this back end.

While there is a lot of room to improve the messaging around what/how/why,
and I think a FAQ will be very helpful, I don't think that Marconi should
graduate this cycle because:
(1) support for a non-AGPL-backend is a legal requirement [*] for Marconi's
graduation;
(2) deploying Marconi with sqla+mysql will result in an incomplete and
unscalable service.

It's possible that I'm wrong about the scalability of Marconi with sqla +
mysql. If anyone feels that this is going to perform blazingly fast on a
single mysql db backend, please publish a benchmark and I'll be very happy
to be proved wrong. To be meaningful, it must have a high concurrency of
clients creating and claiming messages with (num queues) << (num clients)
<< (num messages), and all clients polling on a reasonably short interval,
based on whatever the recommended client-rate-limit is. I'd like the test
to be repeated with both Mongo and SQLA back-ends on the same hardware for
comparison.


Regards,
Devananda

[*]
https://wiki.openstack.org/wiki/Marconi/Incubation/Graduation#Legal_requirements


Re: [openstack-dev] Pecan Evaluation for Marconi

2014-03-19 Thread Russell Bryant
On 03/19/2014 01:47 PM, Mike Perez wrote:
> On 03/19/2014 08:20 AM, Doug Hellmann wrote:
>> As I understand it, all of the integrated projects have looked at Pecan,
>> and are anticipating the transition. Most have no reason to create a new
>> API version, and therefore build a new API service to avoid introducing
>> incompatibilities by rebuilding the existing API with a new tool. This
>> aligns with the plan when Pecan was proposed as a standard.
>>
>> Doug
> 
> I have evaluated it for Cinder and have spoke to numerous interested
> folks in Cinder about using Pecan. It's currently what we're planning to
> move to after I did a rough prototype for some of our core API
> controllers. As you mentioned, we have no reason to do a version bump
> yet. We'll likely do a bump to be py3 compatible rather than for a
> significant change.
> 

FWIW, we've also discussed it for Nova.  We approved converting to use
it for the v3 API that is still being worked on.  I hope to see that get
movement again during Juno.

-- 
Russell Bryant



Re: [openstack-dev] [TripleO] Alternating meeting time for more TZ friendliness

2014-03-19 Thread James Polley


> On 20 Mar 2014, at 5:44 am, Clint Byrum  wrote:
> 
> Excerpts from Sullivan, Jon Paul's message of 2014-03-19 09:26:44 -0700:
>>> From: James Slagle [mailto:james.sla...@gmail.com]
>>> Sent: 18 March 2014 19:58
>>> Subject: [openstack-dev] [TripleO] Alternating meeting time for more TZ
>>> friendliness
>>> 
>>> Our current meeting time is Tuesdays at 19:00 UTC.  I think this works
>>> ok for most folks in and around North America.
>>> 
>>> It was proposed during today's meeting to see if there is interest is an
>>> alternating meeting time every other week so that we can be a bit more
>>> friendly to those folks that currently can't attend.
>>> If that interests you, speak up :).
>> 
>> Speaking up! :D
>> 
>>> 
>>> For reference, the current meeting schedules are at:
>>> https://wiki.openstack.org/wiki/Meetings
>> 
>> Tuesdays at 14:00 UTC on #openstack-meeting-alt is available.
> 
> If we were to have one at that time, we'd need to move the other time as
> well. One driver for moving it is that our participants on the Eastern
> side of Australia are already joining at 0600 their time, and will be
> joining at 0500 soon.
> 
> If I've done my TZ math right, Tuesdays at 1400 UTC would be Wednesdays
> at 0100 for Sydney.

0100 now,  in a few weeks when DST flips here. I'd prefer 0500 to 

My first priority is making sure that the maximum 
Speaking for myself, I find the current time slightly painful, but doable. 
0500 will be more painful.

If I'm reading the iCal feed linked from 
https://wiki.openstack.org/wiki/Meetings correctly, it looks like there's a PCI 
passthrough meeting scheduled at that time in -alt. Is that correct? Is the 
iCal feed canonical? I don't see the meeting listed on the page, but maybe 
that's also because of the tiny little screen I'm using right now.

If I'm doing the timezone math right, it looks like the iCal feed says that 
2100UTC Wednesday is free every second week. Once all the DST flips happen, 
that'll be 2100 London, 1500 SF, 0700 Sydney. 

If we ran at that time on alternate weeks I'd have one I could make easily and 
one I could probably make at a stretch.


> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-19 Thread Steven Dake

On 03/19/2014 02:00 AM, Stan Lagun wrote:

Steven,

Agree with your opinion on HOT expansion. I see that inclusion of 
imperative workflows and ALM would require a major Heat redesign and 
probably would be impossible without losing compatibility with 
previous HOT syntax. It would blur Heat's mission, confuse current users 
and raise a lot of questions about what should and what should not be in 
Heat. That's why we chose to build a system on top of Heat rather than 
extending HOT.




+1

Now I would like to clarify why we have chosen an imperative approach 
with a DSL.


You see the DSL as an alternative to HOT, but it is not. The DSL is an 
alternative to Python-encoded


Not accurate.  I see HOT as one type of DSL.  I see MuranoPL as an 
imperative language, which is not a DSL.  It may have domain-specific 
aspects, but it is more general-purpose in nature.


resources in Heat (heat/engine/resources/*.py). Imagine how Heat would 
look if you let untrusted users upload Python plugins to the Heat 
engine and load them on the fly. Heat resources are written in Python, 
which is an imperative language. So is MuranoPL, for the same reason.



Agree, this is a bad idea.

We want application authors to be able to express application 
deployment and maintenance logic of any complexity. This may involve 
communication with 3rd-party REST services (APIs of applications being 
deployed, external services like a DNS server API, an application 
licensing server API, billing systems, some hardware component APIs, 
etc.) and internal OpenStack services like Trove, Sahara, Marconi and 
others, including those that are not incubated yet and those to come in 
the future. You cannot have such things in HOT; when you need them, you 
have to develop a custom resource in Python. Dependence on custom 
plugins is not good for Murano because they cannot be uploaded by end 
users, and thus a user cannot write an application definition that can 
be imported to/run on any cloud without convincing the cloud 
administrator to install his Python plugin (something that is 
unimaginable in real life).




I understand.  I am not critical of imperative approach for implementing 
workflow/ALM.  I believe it is mandatory.  I hold no opinion on MuranoPL 
specifically.


Because the DSL is a way to write custom resources (in Heat's 
terminology), it has to be Turing-complete and have all the 
characteristics of a general-purpose language. It also has to have 
domain-specific features, because we cannot expect that DSL users will 
be as skilled as Heat developers and able to write such resources 
without knowledge of the hosting engine's architecture and internals.


I understand your point of view, but DSLs are declarative in nature.  I 
think the problem is that the terminology being used is incorrect as 
computer science has defined it :)  Just because MuranoPL has custom 
stuff for interacting with custom resources doesn't mean it's a DSL.  I 
can draw a parallel between implementing arbitrary-length integers in a 
general-purpose imperative language. Said feature is not a DSL; it is a 
language feature.


What is being discussed in the context of Murano is a language feature, 
rather than a DSL.


The HOT DSL is declarative because all the imperative stuff is 
hardcoded into the Heat engine. Thus all that is left for HOT is to 
define the "state of the world" - the desired outcome. That is 
analogous to the Object Model in Murano (see [1]). It is the Object 
Model that can be compared to HOT, not the DSL. As you can see, it is 
no more complex than HOT. The Object Model is what the end user 
produces in Murano. And he doesn't even need to write it, since it can 
be composed in the UI.



cool
Now, because the DSL provides not only a way to write sandboxed, 
isolated code but also a lot of declarations (classes, properties, 
parameters, inheritance and contracts) that are mostly not present in 
Python, we don't need Parameters or Output sections in the Object 
Model, because all of this can be inferred from the resource (class) 
DSL declarations. Another consequence is that most of the things that 
can be written wrong in HOT can be verified on the client side by 
validating classes' contracts, without trying to deploy the stack and 
then going through error-log debugging. Because all resource 
attributes' types and their constraints are known in advance (note that 
a resource attribute may be a reference to another resource, with 
constraints on that reference like "I want any (regular, Galera, etc.) 
MySQL implementation"), the UI knows how to correctly compose the 
environment and can point out your mistakes at design time. This is 
similar to how statically typed languages like C++/Java can do a lot of 
validation at compile time rather than at runtime as in Python.
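
As an illustration of that design-time checking, here is a hedged sketch -- the class table, property syntax, reference contracts, and error format below are invented for this example and are not MuranoPL's actual contract syntax:

```python
# Hypothetical sketch of design-time contract checking; the declarations
# carry property types and reference constraints, so an object model can
# be validated before anything is deployed.
CLASSES = {
    "Instance": {"properties": {"flavor": str, "image": str}},
    # A string value means "reference to an object of this class".
    "MySql":    {"properties": {"instance": "Instance", "dbName": str}},
}

def validate(obj, model):
    """Check one object's properties against its class declaration,
    without deploying anything."""
    declared = CLASSES[obj["type"]]["properties"]
    errors = []
    for name, expected in declared.items():
        value = obj["properties"].get(name)
        if value is None:
            errors.append("%s.%s: missing property" % (obj["type"], name))
        elif isinstance(expected, str):  # reference contract
            ref = model.get(value)
            if ref is None or ref["type"] != expected:
                errors.append("%s.%s: expected a %s reference"
                              % (obj["type"], name, expected))
        elif not isinstance(value, expected):
            errors.append("%s.%s: expected %s"
                          % (obj["type"], name, expected.__name__))
    return errors

model = {
    "vm1": {"type": "Instance",
            "properties": {"flavor": "m1.small", "image": "ubuntu"}},
    "db1": {"type": "MySql",
            "properties": {"instance": "vm1", "dbName": 42}},  # wrong type
}
for oid, obj in model.items():
    for err in validate(obj, model):
        print(oid, err)  # caught at design time, before any deployment
```

Because the declarations carry the types and reference constraints, a UI or client can report the bad dbName before any stack is deployed -- the compile-time vs. runtime distinction drawn above.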


Personally I would love to see many of these features in HOT. What is 
your vision on this? Which of the things mentioned above could be 
contributed to Heat? We definitely would like to integrate more with 
HOT and eliminate all duplication between the projects. I think that 
Murano and Heat are complementary produc

Re: [openstack-dev] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-19 Thread Fox, Kevin M
Can someone please give more detail into why MongoDB being AGPL is a problem? 
The drivers that Marconi uses are Apache2 licensed, MongoDB is separated by the 
network stack and MongoDB is not exposed to the Marconi users so I don't think 
the 'A' part of the GPL really kicks in at all since the MongoDB "user" is the 
cloud provider, not the cloud end user?

Thanks,
Kevin


From: Devananda van der Veen [devananda@gmail.com]
Sent: Wednesday, March 19, 2014 12:37 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Marconi] Why is marconi a queue implementation vs 
a provisioning API?

Let me start by saying that I want there to be a constructive discussion around 
all this. I've done my best to keep my tone as non-snarky as I could while 
still clearly stating my concerns. I've also spent a few hours reviewing the 
current code and docs. Hopefully this contribution will be beneficial in 
helping the discussion along.

For what it's worth, I don't have a clear understanding of why the Marconi 
developer community chose to create a new queue rather than an abstraction 
layer on top of existing queues. While my lack of understanding there isn't a 
technical objection to the project, I hope they can address this in the 
aforementioned FAQ.

The reference storage implementation is MongoDB. AFAIK, no integrated projects 
require an AGPL package to be installed, and from the discussions I've been 
part of, that would be a show-stopper if Marconi required MongoDB. As I 
understand it, this is why sqlalchemy support was required when Marconi was 
incubated. Saying "Marconi also supports SQLA" is disingenuous because it is a 
second-class citizen with incomplete API support, is clearly not the 
recommended storage driver, and is going to be unusable at scale (I'll come 
back to this point in a bit).

Let me ask this. Which back-end is tested in Marconi's CI? That is the back-end 
that matters right now. If that's Mongo, I think there's a problem. If it's 
SQLA, then I think Marconi should declare any features which SQLA doesn't 
support to be optional extensions, make SQLA the default, and clearly document 
how to deploy Marconi at scale with a SQLA back-end.


Then there's the db-as-a-queue antipattern, and the problems that I have seen 
result from this in the past... I'm not the only one in the OpenStack community 
with some experience scaling MySQL databases. Surely others have their own 
experiences and opinions on whether a database (whether MySQL or Mongo or 
Postgres or ...) can be used in such a way _at_scale_ and not fall over from 
resource contention. I would hope that those members of the community would 
chime into this discussion at some point. Perhaps they'll even disagree with me!

A quick look at the code around claim (which, it seems, will be the most 
commonly requested action) shows why this is an antipattern.

The MongoDB storage driver for claims requires _four_ queries just to get a 
message, with a serious race condition (but at least it's documented in the 
code) if multiple clients are claiming messages in the same queue at the same 
time. For reference:
  
https://github.com/openstack/marconi/blob/master/marconi/queues/storage/mongodb/claims.py#L119

The SQLAlchemy storage driver is no better. It's issuing _five_ queries just to 
claim a message (including a query to purge all expired claims every time a new 
claim is created). The performance of this transaction under high load is 
probably going to be bad...
  
https://github.com/openstack/marconi/blob/master/marconi/queues/storage/sqlalchemy/claims.py#L83

Lastly, it looks like the Marconi storage drivers assume the storage back-end 
to be infinitely scalable. AFAICT, the mongo storage driver supports mongo's 
native sharding -- which I'm happy to see -- but the SQLA driver does not 
appear to support anything equivalent for other back-ends, eg. MySQL. This 
relegates any deployment using the SQLA backend to the scale of "only what one 
database instance can handle". It's unsuitable for any large-scale deployment. 
Folks who don't want to use Mongo are likely to use MySQL and will be promptly 
bitten by Marconi's lack of scalability with this back end.

While there is a lot of room to improve the messaging around what/how/why, and 
I think a FAQ will be very helpful, I don't think that Marconi should graduate 
this cycle because:
(1) support for a non-AGPL-backend is a legal requirement [*] for Marconi's 
graduation;
(2) deploying Marconi with sqla+mysql will result in an incomplete and 
unscalable service.

It's possible that I'm wrong about the scalability of Marconi with sqla + 
mysql. If anyone feels that this is going to perform blazingly fast on a single 
mysql db backend, please publish a benchmark and I'll be very happy to be 
proved wrong. To be meaningful, it must have a high concurrency of clients 
creating and claiming messages with (num queue

Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-19 Thread Thomas Spatzier
Excerpts from Zane Bitter's message on 19/03/2014 18:18:34:

> From: Zane Bitter 
> To: openstack-dev@lists.openstack.org
> Date: 19/03/2014 18:21
> Subject: Re: [openstack-dev] [Murano][Heat] MuranoPL questions?
>
> On 19/03/14 05:00, Stan Lagun wrote:
> > Steven,
> >
> > Agree with your opinion on HOT expansion. I see that inclusion of
> > imperative workflows and ALM would require major Heat redesign and
> > probably would be impossible without loosing compatibility with
previous
> > HOT syntax. It would blur Heat mission, confuse current users and rise
a
> > lot of questions what should and what should not be in Heat. Thats why
> > we chose to built a system on top of Heat rather then expending HOT.
>
> +1, I agree (as we have discussed before) that it would be a mistake to
> shoehorn workflow stuff into Heat. I do think we should implement the
> hooks I mentioned at the start of this thread to allow tighter
> integration between Heat and a workflow engine (i.e. Mistral).

+1 on not putting workflow stuff into Heat. Rather let's come up with a
nice way of Heat and a workflow service to work together.
That could be done in two ways: (1) let Heat hand off to a workflow service
for certain tasks, or (2) let people define workflow tasks that can easily
work on Heat-deployed resources. Maybe both make sense, but right now I am
more leaning towards (2).

>
> So building a system on top of Heat is good. Building it on top of
> Mistral as well would also be good, and that was part of the feedback
> from the TC.
>
> To me, building on top means building on top of the languages (which
> users will have to invest a lot of work in learning) as well, rather
> than having a completely different language and only using the
> underlying implementation(s).

That all sounds logical to me and would keep things clean, i.e. keep the
HOT language clean by not mixing it with imperative expression, and keep
the Heat engine clean by not blowing it up to act as a workflow engine.

When I think about the two aspects that are being brought up in this thread
(declarative description of a desired state and workflows) my thinking is
that much (and actually as much as possible) can be done declaratively the
way Heat does it with HOT. Then for bigger lifecycle management there will
be a need for additional workflows on top, because at some point it will be
hard to express management logic declaratively in a topology model.
Those additional flows on-top will have to be aware of the instance created
from a declarative template (i.e. a Heat stack) because it needs to act on
the respective resources to do something in addition.

So when thinking about a domain specific workflow language, it should be
possible to define tasks (in a template aware manner) like "on resource XYZ
of the template, do something", or "update resource XYZ of the template
with this state", then do this etc. At runtime this would resolve to the
actual resource instances created from the resource templates. Making such
constructs available to the workflow authors would make sense. Having a
workflow service able to execute this via the right underlying APIs would
be the execution part. I think from an instance API perspective, Heat
already brings a lot for this with the stack model, so workflow tasks could
be written to use the stack API to access instance information. Things like
update of resources is also something that is already there.
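
The idea above can be sketched as follows; the Stack class and task format are invented stand-ins for a real stack API, purely to illustrate resolving template-level resource names to concrete instances at execution time:

```python
# Hypothetical "template-aware" workflow tasks: each task names a resource
# from the template, and the reference resolves to the concrete physical
# instance only when the task runs.
class Stack:
    """Minimal stand-in for a deployed stack's resource listing."""
    def __init__(self, resources):
        self._resources = resources  # template name -> physical id

    def resource(self, name):
        return self._resources[name]

def run_task(stack, task):
    # "on resource XYZ of the template, do something": the template-level
    # name is bound to the physical instance at execution time.
    physical_id = stack.resource(task["resource"])
    return "%s(%s)" % (task["action"], physical_id)

stack = Stack({"web_server": "instance-1234", "db_volume": "vol-5678"})
flow = [
    {"resource": "web_server", "action": "quiesce"},
    {"resource": "db_volume",  "action": "snapshot"},
    {"resource": "web_server", "action": "resume"},
]
results = [run_task(stack, t) for t in flow]
print(results)
# -> ['quiesce(instance-1234)', 'snapshot(vol-5678)', 'resume(instance-1234)']
```

In a real deployment the lookup would go through the stack API's resource listing rather than a dict, but the late binding of template names to instances is the point being illustrated.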

BTW, we have a similar concept (or are working on a refinement of it based
on latest discussions) in TOSCA and call it the "plan portability API",
i.e. an API that a declarative engine would expose so that fit-for-purpose
workflow tasks can be defined on-top.

Regards,
Thomas





Re: [openstack-dev] Pecan Evaluation for Marconi

2014-03-19 Thread Anne Gentle
On Wed, Mar 19, 2014 at 10:00 AM, Doug Hellmann  wrote:

>
>
>
> On Wed, Mar 19, 2014 at 7:31 AM, Thierry Carrez wrote:
>
>> Kurt Griffiths wrote:
>> > Kudos to Balaji for working so hard on this. I really appreciate his
>> candid feedback on both frameworks.
>>
>> Indeed, that analysis is very much appreciated.
>>
>> From the Technical Committee perspective, we put a high weight on a
>> factor that was not included in the report results: consistency and
>> convergence between projects we commonly release in an integrated manner
>> every 6 months. There was historically a lot of deviation, but as we add
>> more projects that deviation is becoming more costly. We want developers
>> to be able to jump from one project to another easily, and we want
>> convergence from an operators perspective.
>
>
>> Individual projects are obviously allowed to pick the best tool in their
>> toolbox. But the TC may also decide to let projects live out of the
>> "integrated release" if we feel they would add too much divergence in.
>>
>
> As Thierry points out, an important aspect of being in the integrated
> release is being aligned with the rest of the community. The evaluation
> gives "community" considerations the lowest weight among the criteria
> considered. Does that ranking reflect the opinion of the entire Marconi
> team? If so, what benefits do you see to being integrated?
>
> The evaluation does not discuss any of the infrastructure tooling being
> built up around OpenStack's use of Pecan. For example, what will Marconi do
> for API documentation generation?
>
>
Doug and I talked about this in #openstack-dev today, and I just wanted to
point out that only one of nine integrated projects uses a Pecan-based
solution for API documentation generation, using a tool called
sphinxcontrib-docbookrestapi. [1]

I consider this question a bit of a false representation of the direction
we're going with API docs. There's no standard yet established other than
"somehow create WADL so we can accurately represent a reference listing to
users." Also with extensible APIs it might be easier to just maintain WADL,
we just don't know until we get more data from more teams using the Sphinx
extension. That said, we do use a common toolset to generate configuration
reference information and I'd expect all integrated projects to save time
and effort by standardizing as much as possible.

The Marconi team has had a tech writer assigned and the team is working
within the guidelines we've given them.

Thanks,
Anne

1. https://github.com/enovance/sphinxcontrib-docbookrestapi I'd like to
rename it to sphinxcontrib-restapi or some such, since it doesn't generate
docbook.


> Pecan is currently gating changes against projects that use it, so we can
> be sure that changes to the framework do not break our applications. This
> does not appear to have been factored into the evaluation.
>
>
>>
>> > After reviewing the report below, I would recommend that Marconi
>> > continue using Falcon for the v1.1 API and then re-evaluate Pecan for
>> > v2.0 or possibly look at using swob.
>>
>> The report (and your email below) makes a compelling argument that
>> Falcon is a better match for Marconi's needs (or for a data-plane API)
>> than Pecan currently is. My question would be, can Pecan be improved to
>> also cover Marconi's use case ? Could we have the best of both worlds
>> (an appropriate tool *and* convergence) ?
>>
>
> We had several conversations with Kurt and Flavio in Hong Kong about
> adding features to Pecan to support the Marconi team, and Ryan prototyped
> some of those changes shortly after we returned home. Was any of that work
> considered in the evaluation?
>
> Doug
>
>
>>
>> If the answer is "yes, probably", then it might be an option to delay
>> inclusion in the integrated release so that we don't add (even
>> temporary) divergence. If the answer is "definitely no", then we'll have
>> to choose between convergence and functionality.
>>
>> --
>> Thierry Carrez (ttx)
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [openstack][Mistral] Adding new core reviewers

2014-03-19 Thread Dmitri Zimine
+1 for Nikolay - he's indeed the most involved in all aspects.
And yes, I am up for doing it. 

DZ.

On Mar 19, 2014, at 4:35 AM, Renat Akhmerov  wrote:

> Team,
> 
> So far I’ve been just the only one core member of the team. I started feeling 
> lonely :) Since the project team and the project itself has now grown (thanks 
> to StackStorm and Intel) I think it’s time to think about extending the core 
> team.
> 
> I would propose:
> Nikolay Makhotkin (nmakhotkin at launchpad). He's been working on the project 
> since almost the very beginning and made significant contribution (design, 
> reviews, code).
> Dmitri Zimine (i-dz at launchpad). Dmitri joined the project about 2 months 
> ago. Since then he’s made a series of important high-quality commits, a lot 
> of valuable reviews and, IMO most importantly, he has a solid vision of the 
> project in general (requirements, use cases, comparison to other 
> technologies) and has a pro-active viewpoint in all our discussions.
> 
> Thoughts?
> 
> Renat Akhmerov
> @ Mirantis Inc.
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-19 Thread Georgy Okrokvertskhov
Hi,

I think notification mechanism proposed in Heat will work fine for
integration with external workflows. The approach which uses workflows
outside of Heat engine sounds consistent with our current approach in
Murano.

I am looking into the new TOSCA YAML format, and I have also asked Mirantis
management to consider joining OASIS. The decision is not made yet, but
hopefully it will be made next week. We are eager to jump into the TOSCA
standard work and contribute the plan-related parts.

Thanks
Georgy




On Wed, Mar 19, 2014 at 1:38 PM, Thomas Spatzier  wrote:

> Excerpts from Zane Bitter's message on 19/03/2014 18:18:34:
>
> > From: Zane Bitter 
> > To: openstack-dev@lists.openstack.org
> > Date: 19/03/2014 18:21
> > Subject: Re: [openstack-dev] [Murano][Heat] MuranoPL questions?
> >
> > On 19/03/14 05:00, Stan Lagun wrote:
> > > Steven,
> > >
> > > Agree with your opinion on HOT expansion. I see that inclusion of
> > > imperative workflows and ALM would require major Heat redesign and
> > > probably would be impossible without loosing compatibility with
> previous
> > > HOT syntax. It would blur Heat mission, confuse current users and rise
> a
> > > lot of questions what should and what should not be in Heat. Thats why
> > > we chose to built a system on top of Heat rather then expending HOT.
> >
> > +1, I agree (as we have discussed before) that it would be a mistake to
> > shoehorn workflow stuff into Heat. I do think we should implement the
> > hooks I mentioned at the start of this thread to allow tighter
> > integration between Heat and a workflow engine (i.e. Mistral).
>
> +1 on not putting workflow stuff into Heat. Rather let's come up with a
> nice way of Heat and a workflow service to work together.
> That could be done in two ways: (1) let Heat hand off to a workflow service
> for certains tasks or (2) let people define workflow tasks that can easily
> work on Heat deployed resources. Maybe both make sense, but right now I am
> more leaning towards (2).
>
> >
> > So building a system on top of Heat is good. Building it on top of
> > Mistral as well would also be good, and that was part of the feedback
> > from the TC.
> >
> > To me, building on top means building on top of the languages (which
> > users will have to invest a lot of work in learning) as well, rather
> > than having a completely different language and only using the
> > underlying implementation(s).
>
> That all sounds logical to me and would keep things clean, i.e. keep the
> HOT language clean by not mixing it with imperative expression, and keep
> the Heat engine clean by not blowing it up to act as a workflow engine.
>
> When I think about the two aspects that are being brought up in this thread
> (declarative description of a desired state and workflows) my thinking is
> that much (and actually as much as possible) can be done declaratively the
> way Heat does it with HOT. Then for bigger lifecycle management there will
> be a need for additional workflows on top, because at some point it will be
> hard to express management logic declaratively in a topology model.
> Those additional flows on-top will have to be aware of the instance created
> from a declarative template (i.e. a Heat stack) because it needs to act on
> the respective resources to do something in addition.
>
> So when thinking about a domain specific workflow language, it should be
> possible to define tasks (in a template aware manner) like "on resource XYZ
> of the template, do something", or "update resource XYZ of the template
> with this state", then do this etc. At runtime this would resolve to the
> actual resource instances created from the resource templates. Making such
> constructs available to the workflow authors would make sense. Having a
> workflow service able to execute this via the right underlying APIs would
> be the execution part. I think from an instance API perspective, Heat
> already brings a lot for this with the stack model, so workflow tasks could
> be written to use the stack API to access instance information. Things like
> update of resources is also something that is already there.
>
> BTW, we have a similar concept (or are working on a refinement of it based
> on latest discussions) in TOSCA and call it the "plan portability API",
> i.e. an API that a declarative engine would expose so that fit-for-purpose
> workflow tasks can be defined on-top.
>
> Regards,
> Thomas
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284


Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-19 Thread Stan Lagun
On Wed, Mar 19, 2014 at 11:57 PM, Steven Dake  wrote:

>
>
>  Now, because the DSL provides not only a way to write sandboxed, isolated
> code but also a lot of declarations (classes, properties, parameters,
> inheritance and contracts) that are mostly not present in Python, we don't
> need Parameters or Output sections in the Object Model, because all of this
> can be inferred from the resource (class) DSL declarations. Another
> consequence is that most of the things that can be written wrong in HOT can
> be verified on the client side by validating classes' contracts, without
> trying to deploy the stack and then going through error-log debugging.
> Because all resource attribute types and their constraints are known in
> advance (note that a resource
> attribute may be a reference to another resource with constraints on that
> reference like "I want any (regular, Galera etc) MySQL implementation") UI
> knows how to correctly compose the environment and can point out your
> mistakes at design time. This is similar to how statically typed languages
> like C++/Java can do a lot of validation at compile time rather than at
> runtime, as in Python.
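The kind of client-side contract validation described above can be sketched generically. This is plain Python, not actual MuranoPL syntax, and the contract and property names are invented for illustration:

```python
# A minimal sketch of declarative property contracts checked on the
# client side, before anything is deployed. Plain Python, not MuranoPL;
# the contract and property names here are invented for illustration.
CONTRACTS = {
    "name": str,       # application name must be a string
    "instances": int,  # instance count must be an integer
}

def validate(params):
    """Return a list of contract violations (empty list means valid)."""
    errors = []
    for key, expected in CONTRACTS.items():
        if key not in params:
            errors.append("missing property: %s" % key)
        elif not isinstance(params[key], expected):
            errors.append("property %s must be %s"
                          % (key, expected.__name__))
    return errors

print(validate({"name": "db", "instances": "two"}))
# -> ['property instances must be int']
```

A UI driven by such declarations can reject a bad environment at design time, which is the "compile time vs runtime" distinction being drawn above.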
>
>  Personally, I would love to see many of these features in HOT. What is
> your vision on this? Which of the things mentioned above can be contributed to
> Heat? We definitely would like to integrate more with HOT and eliminate all
> duplications between projects. I think that Murano and Heat are
> complementary products that can effectively coexist. Murano provides access
> to all HOT features and relies on Heat for most of its activities. I
> believe that we need to find an optimal way to integrate Heat, Murano,
> Mistral, Solum, Heater, TOSCA, do some integration between ex-Thermal and
> Murano Dashboard, be united regarding Glance usage for metadata and so on.
> We are okay with throwing MuranoPL out if the issues it solves are
> addressed by HOT.
>
>   I am not a fan of language features such as inheritance, classes,
> properties, etc.  I get that for a really general purpose language like
> python they are useful.  Python has a multi-year learning curve before
> writing really "pythonic" code.  HOT has a few-day learning curve before
> writing really "hotonic" code :)
>
>
I understand your concerns. But this is not a case where you need to write
programs using classes and all that stuff. We are just using an OOP approach
to describe composition, because OOP is known to be the most efficient way of
component composition. Environments consist of applications. Applications
may consist of application-specific services (for multi-node applications),
VMs, network resources etc. All of these are classes in Murano terminology,
and they can be mapped to resources/resource properties/templates in HOT.
Also don't forget that TOSCA has all of these OOP things as well.

90% of classes may consist of 0-2 lines of code. Also note that in Murano,
end users do not write code at all. It is not a DevOps tool; it is software
engineers who write MuranoPL code. End users don't do anything outside the UI
dashboard.



>
>
> On Wed, Mar 19, 2014 at 8:06 AM, Steven Dake  wrote:
>
>> Ruslan,
>>
>> Some of my thoughts on the evolution of the HOT DSL to date.
>>
>>
>> On 03/18/2014 05:32 PM, Ruslan Kamaldinov wrote:
>>
>>> Here is my 2 cents:
>>>
>>> I personally think that evolving Heat/HOT to what Murano needs for its
>>> use
>>> cases is the best way to make PaaS layer of OpenStack to look and feel
>>> as a
>>> complete and fully integrated solution.
>>>
>>> Standardising these things in a project like TOSCA is another direction
>>> we all
>>> should follow. I think that TOSCA is the place where developers (like
>>> us),
>>> application developers and enterprises can collaborate to produce a
>>> common
>>> standard for application lifecycle management in the clouds.
>>>
>>>
>>> But before Murano contributors jump into direction of extending HOT to
>>> the goal
>>> of application (or system) lifecycle management, we need an agreement
>>> that this
>>> is the right direction for Heat/HOT/DSL and the Orchestration program.
>>> There are
>>> a lot of use cases that current HOT doesn't seem to be the right tool to
>>> solve.
>>> As it was said before, it's not a problem to collaborate on extending it to
>>> those
>>> use cases. I'm just unsure if Heat team would like these use cases to be
>>> solved
>>> with Heat/HOT/DSL. For instance:
>>> - definition of an application which is already exposed via REST API.
>>> Think of
>>>something like Sahara (ex. Savanna) or Trove developed in-house for
>>> internal
>>>company needs. App publishers wouldn't be happy if they're forced
>>> to
>>>develop a new resource for Heat
>>> - definition of billing rules for an application
>>>
>>>
>>> If everyone agrees that this is the direction we all should follow, that
>>> we
>>> should expand HOT/DSL to that scope, that HOT should be the answer on
>>> "can you
>>> express it?", then awesome - we can start speaking about implementation
>>> detai

Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-19 Thread Stan Lagun
Ability to hook up to application deployment is indeed a good thing. And it
can be done both in HOT and MuranoPL. And Mistral is a good candidate to
handle those hooks. But this is not a replacement for MuranoPL but an
addition to it.

The problem with hooks is that you cannot hook into just any arbitrary place
in the deployment workflow. And the author of the Python code may not expose
the exact hook that you need. Hooks can work for logging purposes or for
triggering some additional workflows, but they are not good for customizing
your workflow from outside. Hooked code may not have access to all of the
engine's internal state and workflow context, and has even less chance of
modifying it in a safe manner.
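This limitation can be illustrated with a generic sketch (the names here are invented and this is not Heat's or Murano's actual code): hooks only fire at the points the engine author chose to expose, and everything between those points is opaque to the hooked code.

```python
class Engine:
    """Toy engine exposing two fixed hook points; illustrative only."""

    HOOK_POINTS = ("pre_deploy", "post_deploy")

    def __init__(self):
        self._hooks = {point: [] for point in self.HOOK_POINTS}

    def register(self, point, fn):
        # You can only hook the points the engine author exposed.
        if point not in self._hooks:
            raise ValueError("no such hook point: %s" % point)
        self._hooks[point].append(fn)

    def deploy(self):
        for fn in self._hooks["pre_deploy"]:
            fn()
        # ... the actual deployment steps run here and are not
        # customizable from the outside ...
        for fn in self._hooks["post_deploy"]:
            fn()

engine = Engine()
engine.register("pre_deploy", lambda: print("logging: deploy starting"))
engine.deploy()
```

Logging-style hooks like the one registered above work fine; anything that needs to reach into the unexposed middle of deploy() does not, which is the point being made here.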


On Thu, Mar 20, 2014 at 1:21 AM, Georgy Okrokvertskhov <
gokrokvertsk...@mirantis.com> wrote:

> Hi,
>
> I think notification mechanism proposed in Heat will work fine for
> integration with external workflows. The approach which uses workflows
> outside of Heat engine sounds consistent with our current approach in
> Murano.
>
> I am looking into the new TOSCA YAML format and I have also asked Mirantis
> management to consider joining OASIS. The decision is not made yet, but
> hopefully will be made next week. We are eager to jump onto the TOSCA
> standard work and contribute the plan-related parts.
>
> Thanks
> Georgy
>
>
>
>
> On Wed, Mar 19, 2014 at 1:38 PM, Thomas Spatzier <
> thomas.spatz...@de.ibm.com> wrote:
>
>> Excerpts from Zane Bitter's message on 19/03/2014 18:18:34:
>>
>> > From: Zane Bitter 
>> > To: openstack-dev@lists.openstack.org
>> > Date: 19/03/2014 18:21
>> > Subject: Re: [openstack-dev] [Murano][Heat] MuranoPL questions?
>> >
>> > On 19/03/14 05:00, Stan Lagun wrote:
>> > > Steven,
>> > >
>> > > Agree with your opinion on HOT expansion. I see that inclusion of
>> > > imperative workflows and ALM would require major Heat redesign and
>> > > probably would be impossible without losing compatibility with
>> previous
>> > > HOT syntax. It would blur Heat's mission, confuse current users and raise
>> a
>> > > lot of questions what should and what should not be in Heat. That's why
>> > > we chose to build a system on top of Heat rather than expanding HOT.
>> >
>> > +1, I agree (as we have discussed before) that it would be a mistake to
>> > shoehorn workflow stuff into Heat. I do think we should implement the
>> > hooks I mentioned at the start of this thread to allow tighter
>> > integration between Heat and a workflow engine (i.e. Mistral).
>>
>> +1 on not putting workflow stuff into Heat. Rather let's come up with a
>> nice way for Heat and a workflow service to work together.
>> That could be done in two ways: (1) let Heat hand off to a workflow
>> service
>> for certain tasks or (2) let people define workflow tasks that can easily
>> work on Heat-deployed resources. Maybe both make sense, but right now I am
>> leaning more towards (2).
>>
>> >
>> > So building a system on top of Heat is good. Building it on top of
>> > Mistral as well would also be good, and that was part of the feedback
>> > from the TC.
>> >
>> > To me, building on top means building on top of the languages (which
>> > users will have to invest a lot of work in learning) as well, rather
>> > than having a completely different language and only using the
>> > underlying implementation(s).
>>
>> That all sounds logical to me and would keep things clean, i.e. keep the
>> HOT language clean by not mixing it with imperative expression, and keep
>> the Heat engine clean by not blowing it up to act as a workflow engine.
>>
>> When I think about the two aspects that are being brought up in this
>> thread
>> (declarative description of a desired state and workflows) my thinking is
>> that much (and actually as much as possible) can be done declaratively the
>> way Heat does it with HOT. Then for bigger lifecycle management there will
>> be a need for additional workflows on top, because at some point it will
>> be
>> hard to express management logic declaratively in a topology model.
>> Those additional flows on-top will have to be aware of the instance
>> created
>> from a declarative template (i.e. a Heat stack) because it needs to act on
>> the respective resources to do something in addition.
>>
>> So when thinking about a domain specific workflow language, it should be
>> possible to define tasks (in a template aware manner) like "on resource
>> XYZ
>> of the template, do something", or "update resource XYZ of the template
>> with this state", then do this etc. At runtime this would resolve to the
>> actual resource instances created from the resource templates. Making such
>> constructs available to the workflow authors would make sense. Having a
>> workflow service able to execute this via the right underlying APIs would
>> be the execution part. I think from an instance API perspective, Heat
>> already brings a lot for this with the stack model, so workflow tasks
>> could
>> be written to use the stack API to access instance information. Things
>> like
>> updat

Re: [openstack-dev] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-19 Thread Fox, Kevin M
It's my understanding that the only case the A in the AGPL would kick in is if
the cloud provider made a change to MongoDB and exposed the MongoDB instance to 
users. Then the users would have to be able to download the changed code. Since 
Marconi's in front, the user is Marconi, and wouldn't ever want to download the 
source. As far as I can tell, in this use case, the AGPL'ed MongoDB is not 
really any different than the GPL'ed MySQL in footprint here. MySQL is
acceptable, so why isn't MongoDB?

It would be good to get legal's official take on this. It would be a shame to 
make major architectural decisions based on license assumptions that turn out 
not to be true. I'm cc-ing them.

Thanks,
Kevin

From: Chris Friesen [chris.frie...@windriver.com]
Sent: Wednesday, March 19, 2014 2:24 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Marconi] Why is marconi a queue implementation vs 
a provisioning API?

On 03/19/2014 02:24 PM, Fox, Kevin M wrote:
> Can someone please give more detail into why MongoDB being AGPL is a
> problem? The drivers that Marconi uses are Apache2 licensed, MongoDB is
> separated by the network stack and MongoDB is not exposed to the Marconi
> users so I don't think the 'A' part of the GPL really kicks in at all
> since the MongoDB "user" is the cloud provider, not the cloud end user?

Even if MongoDB was exposed to end-users, would that be a problem?

Obviously the source to MongoDB would need to be made available
(presumably it already is) but does the AGPL licence "contaminate" the
Marconi stuff?  I would have thought that would fall under "mere
aggregation".

Chris




Re: [openstack-dev] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-19 Thread Chris Friesen

On 03/19/2014 02:24 PM, Fox, Kevin M wrote:

Can someone please give more detail into why MongoDB being AGPL is a
problem? The drivers that Marconi uses are Apache2 licensed, MongoDB is
separated by the network stack and MongoDB is not exposed to the Marconi
users so I don't think the 'A' part of the GPL really kicks in at all
since the MongoDB "user" is the cloud provider, not the cloud end user?


Even if MongoDB was exposed to end-users, would that be a problem?

Obviously the source to MongoDB would need to be made available 
(presumably it already is) but does the AGPL licence "contaminate" the 
Marconi stuff?  I would have thought that would fall under "mere 
aggregation".


Chris



Re: [openstack-dev] [TripleO] Alternating meeting time for more TZ friendliness

2014-03-19 Thread James Polley
On Thu, Mar 20, 2014 at 7:00 AM, James Polley  wrote:

>
>
> On 20 Mar 2014, at 5:44 am, Clint Byrum  wrote:
>
> Excerpts from Sullivan, Jon Paul's message of 2014-03-19 09:26:44 -0700:
>
> From: James Slagle [mailto:james.sla...@gmail.com 
> ]
>
> Sent: 18 March 2014 19:58
>
> Subject: [openstack-dev] [TripleO] Alternating meeting time for more TZ
>
> friendliness
>
>
> Our current meeting time is Tuesdays at 19:00 UTC.  I think this works
>
> ok for most folks in and around North America.
>
>
> It was proposed during today's meeting to see if there is interest is an
>
> alternating meeting time every other week so that we can be a bit more
>
> friendly to those folks that currently can't attend.
>
> If that interests you, speak up :).
>
>
> Speaking up! :D
>
>
>
> For reference, the current meeting schedules are at:
>
> https://wiki.openstack.org/wiki/Meetings
>
>
> Tuesdays at 14:00 UTC on #openstack-meeting-alt is available.
>
>
>
> If we were to have one at that time, we'd need to move the other time as
> well. One driver for moving it is that our participants on the Eastern
> side of Australia are already joining at 0600 their time, and will be
> joining at 0500 soon.
>
> If I've done my TZ math right, Tuesdays at 1400 UTC would be Wednesdays
> at 0100 for Sydney.
>
>
> 0100 now,  in a few weeks when DST flips here. I'd prefer 0500 to 
>
> My first priority is making sure that theacimum
>

Ah. So that's what happened to the email I was composing on my iPhone - it
got sent! Next time I'll ask my pocket to delete it rather than send it.

As I was trying to say, I think our priority should be maximizing the
number of people who can attend, not maximizing for me attending. I worry
that having alternating meetings will lead to having non-overlapping sets
of people in the meetings, but even if that happens I'm not sure if it's
worse than only having a subset of people in the meeting.

I'd be happy to stick with 1900UTC if it means more people can regularly
make the meeting, even if it means I can't make it all the time. Switching
to alternating 1900 Tuesday and 2100 Wednesday would mean I could make more
meetings, but I don't know how it would affect other people.

> Speaking for myself, I find the current time is slightly painful, but
> doable. 0500 will be more painful.
>
> If I'm reading the iCal feed linked from
> https://wiki.openstack.org/wiki/Meetings correctly, it looks like there's
> a PCI passthrough meeting scheduled at that time in -alt. Is that correct?
> Is the iCal feed canonical? I don't see the meeting listed on the page, but
> maybe that's also because of the tiny little screen I'm using right now.
>
> If I'm doing the timezone math right, it looks like the iCal feed says
> that 2100UTC Wednesday is free every second week. Once all the DST flips
> happen, that'll be 2100 London, 1500 SF, 0700 Sydney.
>

Bad time math - I was in the middle of updating this when my phone decided
to send the mail. 2100UTC will be 2200 London, 1400 SF, 0700 Sydney.


>
> If we ran at that time on alternate weeks I'd have one I could make easily
> and one I could probably make at a stretch.
>
>
>
>
>


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-19 Thread Joe Gordon
On Wed, Mar 19, 2014 at 9:25 AM, Miguel Angel Ajo wrote:

>
>
> An update on the changes required to have a
> py->c++ compiled rootwrap as a mitigation POC for havana/icehouse.
>
> https://github.com/mangelajo/shedskin.rootwrap/commit/
> e4167a6491dfbc71e2d0f6e28ba93bc8a1dd66c0
>
> The current translation output is included.
>
> It looks doable (almost killed 80% of the translation problems),
> but there are two big obstacles:
>
> 1) As Joe said, no support for Subprocess (we're interested in popen),
>I'm using a dummy os.system() for the test.
>
> 2) No logging support.
>
>    I'm not sure how complicated it would be to get those modules
> implemented for shedskin.
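For reference, the shape of the os.system() workaround mentioned above is roughly this (a sketch, not the actual branch code; run_cmd is our name): with no subprocess module under shedskin, only the command's exit status is available.

```python
import os

def run_cmd(cmd_line):
    # shedskin-compilable stand-in for subprocess.call(): os.system()
    # returns the command's exit status, but stdout/stderr cannot be
    # captured this way -- they go straight to the terminal.
    return os.system(cmd_line)

if __name__ == "__main__":
    status = run_cmd("echo hello from the dummy runner")
```

That is enough for fire-and-forget command execution, but anything in rootwrap that needs to inspect command output would still be blocked on real subprocess support.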


Before sorting out whether we can get those working under shedskin, are there
any preliminary performance numbers from neutron when using this?


>
>
> On 03/18/2014 09:14 AM, Miguel Angel Ajo wrote:
>
>> Hi Joe, thank you very much for the positive feedback,
>>
>> I plan to spend a day during this week on the shedskin-compatibility
>> for rootwrap (I'll branch it, and tune/cut down as necessary) to make
>> it compile under shedskin [1] : nothing done yet.
>>
>> It's a short-term alternative until we can have a rootwrap agent,
>> together with its integration in neutron (for Juno). As for the
>> compiled rootwrap, if it works and if it does look good (security-wise)
>> then we'd have a solution for Icehouse/Havana.
>>
>> help in [1] is really  welcome ;-) I'm available in #openstack-neutron
>> as 'ajo'.
>>
>> Best regards,
>> Miguel Ángel.
>>
>> [1] https://github.com/mangelajo/shedskin.rootwrap
>>
>> On 03/18/2014 12:48 AM, Joe Gordon wrote:
>>
>>>
>>>
>>>
>>> On Tue, Mar 11, 2014 at 1:46 AM, Miguel Angel Ajo Pelayo
>>> mailto:mangel...@redhat.com>> wrote:
>>>
>>>
>>> I have included on the etherpad, the option to write a sudo
>>> plugin (or several), specific for openstack.
>>>
>>>
>>> And this is a test with shedskin, I suppose that in more complicated
>>> dependency scenarios it should perform better.
>>>
>>> [majopela@redcylon tmp]$ cat > test.py <<EOF
>>>  > import sys
>>>  > print "hello world"
>>>  > sys.exit(0)
>>>  > EOF
>>>
>>> [majopela@redcylon tmp]$ time python test.py
>>> hello world
>>>
>>> real0m0.016s
>>> user0m0.015s
>>> sys 0m0.001s
>>>
>>>
>>>
>>> This looks very promising!
>>>
>>> A few gotchas:
>>>
>>> * Very limited library support
>>> https://code.google.com/p/shedskin/wiki/docs#Library_Limitations
>>>* no logging
>>>* no six
>>>* no subprocess
>>>
>>> * no *args support
>>>*
>>> https://code.google.com/p/shedskin/wiki/docs#Python_Subset_Restrictions
>>>
>>> that being said I did a quick comparison with great results:
>>>
>>> $ cat tmp.sh
>>> #!/usr/bin/env bash
>>> echo $0 $@
>>> ip a
>>>
>>> $ time ./tmp.sh  foo bar > /dev/null
>>>
>>> real0m0.009s
>>> user0m0.003s
>>> sys 0m0.006s
>>>
>>>
>>>
>>> $ cat tmp.py
>>> #!/usr/bin/env python
>>> import os
>>> import sys
>>>
>>> print sys.argv
>>> print os.system("ip a")
>>>
>>> $ time ./tmp.py  foo bar > /dev/null
>>>
>>> min:
>>> real0m0.016s
>>> user0m0.004s
>>> sys 0m0.012s
>>>
>>> max:
>>> real0m0.038s
>>> user0m0.016s
>>> sys 0m0.020s
>>>
>>>
>>>
>>> shedskin  tmp.py && make
>>>
>>>
>>> $ time ./tmp  foo bar > /dev/null
>>>
>>> real0m0.010s
>>> user0m0.007s
>>> sys 0m0.002s
>>>
>>>
>>>
>>> Based on these results I think a deeper dive into making rootwrap
>>> support shedskin is worthwhile.
>>>
>>>
>>>
>>>
>>>
>>> [majopela@redcylon tmp]$ shedskin test.py
>>> *** SHED SKIN Python-to-C++ Compiler 0.9.4 ***
>>> Copyright 2005-2011 Mark Dufour; License GNU GPL version 3 (See
>>> LICENSE)
>>>
>>> [analyzing types..]
>>> 100%
>>> [generating c++ code..]
>>> [elapsed time: 1.59 seconds]
>>> [majopela@redcylon tmp]$ make
>>> g++  -O2 -march=native -Wno-deprecated  -I.
>>> -I/usr/lib/python2.7/site-packages/shedskin/lib /tmp/test.cpp
>>> /usr/lib/python2.7/site-packages/shedskin/lib/sys.cpp
>>> /usr/lib/python2.7/site-packages/shedskin/lib/re.cpp
>>> /usr/lib/python2.7/site-packages/shedskin/lib/builtin.cpp -lgc
>>> -lpcre  -o test
>>> [majopela@redcylon tmp]$ time ./test
>>> hello world
>>>
>>> real0m0.003s
>>> user0m0.000s
>>> sys 0m0.002s
>>>
>>>
>>> - Original Message -
>>>  > We had this same issue with the dhcp-agent. Code was added that
>>> paralleled
>>>  > the initial sync here: https://review.openstack.org/#/c/28914/
>>> that made
>>>  > things a good bit faster if I remember correctly. Might be worth
>>> doing
>>>  > something similar for the l3-agent.
>>>  >
>>>  > Best,
>>>  >
>>>  > Aaron
>>>  >
>>>  >
>>>  > On Mon, Mar 10, 2014 at 5:07 PM, Joe Gordon <
>>> joe.gord...@gmail.com  > wrote:
>>>  >
>>>   

Re: [openstack-dev] Updating libvirt in gate jobs

2014-03-19 Thread Joe Gordon
On Wed, Mar 19, 2014 at 3:43 AM, Sean Dague  wrote:

> On 03/18/2014 08:15 PM, Joe Gordon wrote:
> >
> >
> >
> > On Tue, Mar 18, 2014 at 8:12 AM, Sean Dague  > > wrote:
> >
> > On 03/18/2014 10:11 AM, Daniel P. Berrange wrote:
> > > On Tue, Mar 18, 2014 at 07:50:15AM -0400, Davanum Srinivas wrote:
> > >> Hi Team,
> > >>
> > >> We have 2 choices
> > >>
> > >> 1) Upgrade to libvirt 0.9.8+ (See [1] for details)
> > >> 2) Enable UCA and upgrade to libvirt 1.2.2+ (see [2] for details)
> > >>
> > >> For #1, we received a patched deb from @SergeHallyn/@JamesPage
> > and ran
> > >> tests on it in review https://review.openstack.org/#/c/79816/
> > >> For #2, @SergeHallyn/@JamesPage have updated UCA
> > >> ("precise-proposed/icehouse") repo and we ran tests on it in
> review
> > >> https://review.openstack.org/#/c/74889/
> > >>
> > >> For IceHouse, my recommendation is to request Ubuntu folks to
> > push the
> > >> patched 0.9.8+ version we validated to public repos, then we can
> > >> install/run gate jobs with that version. This is probably the
> > smallest
> > >> risk of the 2 choices.
> > >
> > > If we've re-run the tests in that review enough times to be
> confident
> > > we've had a chance of exercising the race conditions, then using
> the
> > > patched 0.9.8 seems like a no-brainer. We know the current version
> in
> > > ubuntu repos is broken for us, so the sooner we address that the
> > better.
> >
> >
> >
> > ++
> >
> >
> > >
> > >> As soon as Juno begins, we can switch 1.2.2+ on UCA and request
> > Ubuntu
> > >> folks to push the verified version where we can use it.
> >
> >
> > ++
> >
> >
> > >
> > > This basically re-raises the question of /what/ we should be
> > testing in
> > > the gate, which was discussed on this list a few weeks ago, and
> > I'm not
> > > clear that there was a definite decision in that thread
> > >
> > >
> >
> http://lists.openstack.org/pipermail/openstack-dev/2014-February/027734.html
> > >
> > > Testing the lowest vs highest is targetting two different scenarios
> > >
> > >   - Testing the lowest version demonstrates that OpenStack has not
> > > broken its own code by introducing use of a new feature.
> > >
> > >   - Testing the highest version demonstrates that OpenStack has not
> > > been broken by 3rd party code introducing a regression.
> > >
> > > I think it is in scope for openstack to be targetting both of these
> > > scenarios. For anything in-between though, it is upto the
> downstream
> > > vendors to test their precise combination of versions. Currently
> > though
> > > our testing policy for non-python bits is "whatever version ubuntu
> > ships",
> > > which may be neither the lowest or highest versions, just some
> > arbitrary
> > > version they wish to support. So this discussion is currently more
> > of a
> > > 'what ubuntu version should we test on' kind of decision
> >
> > I think testing 2 versions of libvirt in the gate is adding a matrix
> > dimension that we currently can't really support. We're just going to
> > have to pick one per release and be fine with it (at least for
> > icehouse).
> >
> > If people want other versions tested, please come in with 3rd party
> ci
> > on it.
> >
> > We can revisit the big test matrix at summit about the combinations
> > we're going to actually validate, because with the various
> limitations
> > we've got (concurrency limits, quota limits, upstream package limits,
> > kinds of tests we want to run) we're going to have to make a bunch of
> > compromises. Testing something new is going to require throwing
> existing
> > stuff out of the test path.
> >
> >
> > I think this is definitely worth revisiting at the summit, but I think
> > we should move Juno to Libvirt 1.2.2+ as soon as possible instead of
> > gating on a 2 year old release, and at the summit we can sort out what
> > the full test matrix can be.
> >
> > As a side note tripleo uses libvirt from Saucy (1.1.1) so moving to
> > latest libvirt would help support them.
>
> Honestly, given that we've been trying to get a working UCA for 6
> months, I'm really not thrilled by the idea of making UCA part of our
> gate. Because it's clearly not at the same level of testing as the base
> distro. I think this will be even more so with UCA post 14.04 release,
> as that's designed as a transitional stage to get you to 14.04.
>
> As has been demonstrated, Canonical's testing systems are clearly not
> finding the same bugs we are finding in their underlying packages.
>
> I think the libvirt 1.2+ plan should be moving Juno to 14.04 as soon as
> we can get that stable. That will bring in a whole fresh OS, kernel,
> etc. And we recenter our testing on that LTS

Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer tempest testing in gate

2014-03-19 Thread Joe Gordon
On Wed, Mar 19, 2014 at 1:52 AM, Nadya Privalova wrote:

> Ok, so we don't want to switch to UCA, let's consider this variant.
> What options do we have to make it possible to run Ceilometer jobs with Mongo
> backend?
> I see only  https://review.openstack.org/#/c/81001/ or making Ceilometer
> able to work with old Mongo. But the latter looks inappropriate at
> least in Icehouse.
> What am I missing here? Maybe there is smth else we can do?
>
>
If ceilometer says it supports MySQL then it should work, we shouldn't be
forced to switch to an alternate backend.



>
> On Tue, Mar 18, 2014 at 9:28 PM, Tim Bell  wrote:
>
>>
>>
>> If UCA is required, what would be the upgrade path for a currently
>> running OpenStack Havana site to Icehouse with this requirement ?
>>
>>
>>
>> Would it be an online upgrade (i.e. what order to upgrade the different
>> components in order to keep things running at all times) ?
>>
>>
>>
>> Tim
>>
>>
>>
>> *From:* Chmouel Boudjnah [mailto:chmo...@enovance.com]
>> *Sent:* 18 March 2014 17:58
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra]
>> Ceilometer tempest testing in gate
>>
>>
>>
>>
>>
>> On Tue, Mar 18, 2014 at 5:21 PM, Sean Dague  wrote:
>>
>>  So I'm still -1 at the point in making UCA our default run environment
>> until it's provably functional for a period of time. Because working
>> around upstream distro breaks is no fun.
>>
>>
>>
>> I agree, if UCA is not very stable ATM, this is going to cause us more
>> pain, but what would be the plan of action? A non-voting gate for
>> ceilometer as a start? (If that's possible.)
>>
>> Chmouel
>>
>>
>>
>
>
>


Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer tempest testing in gate

2014-03-19 Thread Doug Hellmann
The ceilometer collector is meant to scale horizontally. Have you tried
configuring the test environment to run more than one copy, to process the
notifications more quickly?

Doug
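Generically, the pattern Doug is describing is competing consumers on a shared queue; here is a toy sketch of it (nothing below is ceilometer's actual API), where several workers drain one notification queue in parallel:

```python
import queue
import threading

# Competing-consumers sketch: N workers drain one shared queue, which is
# the same pattern that running multiple collector copies relies on.
def worker(inbox, processed):
    while True:
        item = inbox.get()
        if item is None:          # sentinel: no more notifications
            break
        processed.append(item)    # stand-in for "write sample to the DB"
        inbox.task_done()

inbox = queue.Queue()
processed = []
workers = [threading.Thread(target=worker, args=(inbox, processed))
           for _ in range(3)]
for t in workers:
    t.start()
for n in range(100):              # 100 fake notifications
    inbox.put(n)
for _ in workers:
    inbox.put(None)               # one sentinel per worker
for t in workers:
    t.join()
print(len(processed))             # -> 100
```

Whether this actually speeds up the gate run depends on the backend: with the MySQL bug discussed above, more consumers would just contend on the same slow writes.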


On Tue, Mar 18, 2014 at 8:09 AM, Nadya Privalova wrote:

> Hi folks,
>
> I'd like to discuss Ceilometer's tempest situation with you.
> Now we have several patch sets on review that test core functionality of
> Ceilometer: notificaton and pollstering (topic
> https://review.openstack.org/#/q/status:open+project:openstack/tempest+branch:master+topic:bp/add-basic-ceilometer-tests,n,z).
> But there is a problem: Ceilometer performance is very poor on mysql and
> postgresql because of the bug
> https://bugs.launchpad.net/ceilometer/+bug/1291054. Mongo behaves much
> better even in a single thread, and I hope that its performance will be
> enough to successfully run the Ceilometer tempest tests.
> Let me explain in a few words why tempest tests are mostly performance
> tests for Ceilometer. The thing is that the Ceilometer service is running
> while all the other nova, cinder and so on tests run. All the tests create
> instances, volumes, etc., and each creation produces a lot of notifications.
> Each notification is an entry in the database, so Ceilometer cannot process
> such a large number of notifications quickly. Ceilometer tests have the
> 'telemetry' prefix, which means that they are started last. And that makes
> the situation even worse.
> So my proposal:
> 1. create a non-voting job with Mongo-backend
> 2. make sure that tests pass on Mongo
> 3. merge the tests into tempest, but skip them on PostgreSQL and MySQL until
> bug/1291054 is resolved
> 4. make the new job 'voting'
>
> The only remaining problem is the Mongo installation. I have a change request,
> https://review.openstack.org/#/c/81001/, that will allow us to install
> Mongo from a deb package. On the other hand, there is
> https://review.openstack.org/#/c/74889/, which enables UCA. I'm
> collaborating with the infra team to reach a decision ASAP because AFAIU we
> need the tempest tests in Icehouse (for more discussion you are welcome to
> join the thread [openstack-dev] Updating libvirt in gate jobs).
>
> If you have any thoughts on this please share.
>
> Thanks for your attention,
> Nadya
>
>


[openstack-dev] [QA] Meeting Thursday March 20th at 22:00UTC

2014-03-19 Thread Matthew Treinish
Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
tomorrow Thursday, March 20th at 22:00 UTC in the #openstack-meeting channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

To help people figure out what time 22:00 UTC is in other timezones, tomorrow's
meeting will be at:

18:00 EDT
07:00 JST
08:30 ACDT
23:00 CET
17:00 CDT
15:00 PDT

-Matt Treinish



Re: [openstack-dev] Pecan Evaluation for Marconi

2014-03-19 Thread John Dickinson
On Mar 19, 2014, at 12:27 PM, Julien Danjou  wrote:

> On Wed, Mar 19 2014, Kurt Griffiths wrote:
> 
>> That begs the question, *why* is that unlikely to change?
> 
> Because that project is Swift.

If you look at the Swift code, you'll see that swob is not a replacement for 
either Pecan or Falcon. swob was written to replace WebOb, and we documented 
why we did this. 
https://github.com/openstack/swift/blob/master/swift/common/swob.py#L23 It's an 
in-tree module written to remove a recurring pain point. swob has allowed the 
Swift team to focus their time on adding features and fixing bugs in other 
parts of the code.

Why don't we use Pecan or Falcon in Swift? Mostly because we don't need the 
functionality that they provide, and so there is no reason to go add a 
dependency (and thus increase packaging and install requirements on deployers). 
Now if there are other uses for swob outside of Swift, let's have a 
conversation about including it in an external library so we can all benefit.

---

The comparison that Balaji did between Falcon and Pecan looks like a very good 
overview. It gives the information necessary to make an informed choice based on 
real data instead of "it's what everybody is doing". If you don't like some of 
the criteria reported on, I'm sure Balaji would be happy to see your own 
comparison and 

We all want to make informed decisions based on data, not claims. Balaji's 
analysis is a great start on figuring out what the Marconi project should 
choose. As such, it seems that the Marconi team is the responsible party to 
make the right choice for their use case, after weighing all the factors.


--John







Re: [openstack-dev] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-19 Thread Sylvain Bauza
2014-03-19 22:38 GMT+01:00 Fox, Kevin M :

> Its my understanding that the only case the A in the AGPL would kick in is
> if the cloud provider made a change to MongoDB and exposed the MongoDB
> instance to users. Then the users would have to be able to download the
> changed code. Since Marconi's in front, the user is Marconi, and wouldn't
> ever want to download the source. As far as I can tell, in this use case,
> the AGPL'ed MongoDB is not really any different than the GPL'ed MySQL in
> footprint here. MySQL is acceptable, so why isn't MongoDB?
>
>

MongoDB itself is AGPL, but the MongoDB drivers are Apache licensed [1].
AGPL contamination should not happen if we integrate only the drivers into
our code.

[1] http://www.mongodb.org/about/licensing/


Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer tempest testing in gate

2014-03-19 Thread Joe Gordon
On Wed, Mar 19, 2014 at 3:09 PM, Doug Hellmann
wrote:

> The ceilometer collector is meant to scale horizontally. Have you tried
> configuring the test environment to run more than one copy, to process the
> notifications more quickly?
>

FYI:
http://logs.openstack.org/82/79182/1/check/check-tempest-dsvm-neutron/156f1d4/logs/screen-dstat.txt.gz



>
> Doug
>