Re: [openstack-dev] [Neutron] New Plug-in development for Neutron

2013-09-11 Thread P Balaji-B37839

Thanks for Suggestions and Support on this.

Regards,
Balaji.P
-Original Message-
From: Alan Kavanagh [mailto:alan.kavan...@ericsson.com] 
Sent: Wednesday, September 11, 2013 1:48 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Neutron] New Plug-in development for Neutron

+1

This is the best approach and one we discussed in Portland and a track most 
folks are developing plugins under ML2 

/Alan

-Original Message-
From: Robert Kukura [mailto:rkuk...@redhat.com] 
Sent: September-10-13 12:27 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] New Plug-in development for Neutron

On 09/10/2013 10:45 AM, Mark McClain wrote:
 I think Gary and Kyle have answered this very well; however I do have a few 
 things to add.  It is definitely too late for Havana, so Icehouse is next 
 available release for new plugins.  I can work with you offline to find you a 
 core sponsor.

One more thing: Rather than implementing and maintaining an entire new plugin, 
consider whether it might make sense to integrate as an ml2 MechanismDriver 
instead. The ml2 plugin's MechanismDrivers can interface with network devices 
or controllers, and can work in conjunction with the existing L2 agents if 
needed. See
https://wiki.openstack.org/wiki/Neutron/ML2 for more info.

-Bob

 
 mark
  
 On Sep 10, 2013, at 9:37 AM, Kyle Mestery (kmestery) kmest...@cisco.com 
 wrote:
 
 On Sep 10, 2013, at 4:05 AM, P Balaji-B37839 b37...@freescale.com wrote:

 Hi,

 We have gone through the below link for new plug-in development.

 https://wiki.openstack.org/wiki/NeutronDevelopment#Developing_a_Neut
 ron_Plugin

 Just want to confirm that is it mandatory to be Core Neutron Developer for 
 submitting a new plug-in.?

 It's not necessary for you to have a core developer from your company, but 
 you will need an existing core developer to support your plugin upstream. 
 When you file the blueprint, let us know and we'll work with you on this one 
 Balaji.

 Thanks,
 Kyle

 How do we get a reviewer for this like Can we approach any Core Neutron 
 developer for review our plugin?

 We are developing new plug-in for our product and want to make it upstream 
 to Neutron Core.

 Any information on this will be helpful!.

 Regards,
 Balaji.P


 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tuskar] Meeting agenda for Tue 10th September at 19:00 UTC

2013-09-11 Thread Tomas Sedovic

The meeting happened.

You can read the notes:

http://eavesdrop.openstack.org/meetings/tuskar/2013/tuskar.2013-09-10-19.00.html

or the full IRC log if you're so inclined:

http://eavesdrop.openstack.org/meetings/tuskar/2013/tuskar.2013-09-10-19.00.log.html

On 09/09/2013 05:34 PM, Tomas Sedovic wrote:

The Tuskar team holds a meeting in #openstack-meeting-alt, see

https://wiki.openstack.org/wiki/Meetings/Tuskar

The next meeting is on Tuesday 10th September at 19:00 UTC.

Current topics for discussion:

* Documentation
* Simplify development setup
* Tests
* Releases  Milestones
* Open discussion

If you have any other topics to discuss, please add them to the wiki.

Thanks,
shadower

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Issue of cache eating up most of memory

2013-09-11 Thread shalz
Robert,

Regarding your note here:
 
http://www.gossamer-threads.com/lists/openstack/dev/30591#30591

You rightly said If it is not in memory you will hit your disk with a lot of 
extra reads.

If all data can't reside in memory - one option is to increase the memory on 
nodes (expensive). Another is to use SSD instead of traditional spinning disks 
(Also expensive if data volume is high).  What are your thoughts on use of SSD 
as a HDD caching device in the context of both Swift and Cinder in OpenStack?  
It is cost effective and can be seamlessly deployed.

Look forward to hearing from you,

best,
S___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Program name and Mission statement

2013-09-11 Thread Dmitry Mescheryakov
Mike, and if you looked up 'compute' in dictionary, you would never guess
what OpenStack Compute does :-).

I think that 'Data Processing' is a good name which in short describes what
Savanna is going to be. The name 'MapReduce' for the program does not cover
whole functionality provided by Savanna. Todays Hadoop distributions
include not only MapReduce frameworks, but also a bunch of other products,
not all of which are based on MapReduce. In fact the core of Hadoop 2.0,
YARN, was built with idea of supporting other, non-MapReduce frameworks.
For instance Twitter Storm was ported on YARN recently.

I am also +1 on Matthew's mission proposal:
 Mission: To provide the OpenStack community with an open, cutting edge,
performant and scalable data processing stack and associated management
interfaces.

Dmitry

2013/9/10 Mike Spreitzer mspre...@us.ibm.com

 A quick dictionary lookup of data processing yields the following.  I
 wonder if you mean something more specific.

 data processing |ˈˌdædə ˈprɑsɛsɪŋ|
 noun
 a series of operations on data, esp. by a computer, to retrieve,
 transform, or classify information.



 From:Matthew Farrellee m...@redhat.com
 To:OpenStack Development Mailing List 
 openstack-dev@lists.openstack.org,
 Date:09/10/2013 09:53 AM
 Subject:Re: [openstack-dev] [savanna] Program name and Mission
 statement
 --



 Rough cut -

 Program: OpenStack Data Processing
 Mission: To provide the OpenStack community with an open, cutting edge,
 performant and scalable data processing stack and associated management
 interfaces.

 On 09/10/2013 09:26 AM, Sergey Lukjanov wrote:
  It sounds too broad IMO. Looks like we need to define Mission Statement
  first.
 
  Sincerely yours,
  Sergey Lukjanov
  Savanna Technical Lead
  Mirantis Inc.
 
  On Sep 10, 2013, at 17:09, Alexander Kuznetsov akuznet...@mirantis.com
  mailto:akuznet...@mirantis.com akuznet...@mirantis.com wrote:
 
  My suggestion OpenStack Data Processing.
 
 
  On Tue, Sep 10, 2013 at 4:15 PM, Sergey Lukjanov
  slukja...@mirantis.com 
  mailto:slukja...@mirantis.comslukja...@mirantis.com
 wrote:
 
  Hi folks,
 
  due to the Incubator Application we should prepare Program name
  and Mission statement for Savanna, so, I want to start mailing
  thread about it.
 
  Please, provide any ideas here.
 
  P.S. List of existing programs:
  https://wiki.openstack.org/wiki/Programs
  P.P.S. https://wiki.openstack.org/wiki/Governance/NewPrograms
 
  Sincerely yours,
  Sergey Lukjanov
  Savanna Technical Lead
  Mirantis Inc.
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  
  mailto:OpenStack-dev@lists.openstack.orgOpenStack-dev@lists.openstack.org
 
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  mailto:OpenStack-dev@lists.openstack.orgOpenStack-dev@lists.openstack.org
 
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Comments/questions on the instance-group-api-extension blueprint

2013-09-11 Thread Gary Kotton


From: Mike Spreitzer mspre...@us.ibm.commailto:mspre...@us.ibm.com
Reply-To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Tuesday, September 10, 2013 11:58 PM
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: [openstack-dev] [heat] Comments/questions on the 
instance-group-api-extension blueprint

First, I'm a newbie here, wondering: is this the right place for 
comments/questions on blueprints?  Supposing it is...

[Gary Kotton] Yeah, as Russel said this is the correct place

I am referring to 
https://blueprints.launchpad.net/nova/+spec/instance-group-api-extension

In my own research group we have experience with a few systems that do 
something like that, and more (as, indeed, that blueprint explicitly states 
that it is only the start of a longer roadmap).  I would like to highlight a 
couple of differences that alarm me.  One is the general overlap between 
groups.  I am not saying this is wrong, but as a matter of natural conservatism 
we have shied away from unnecessary complexities.  The only overlap we have 
done so far is hierarchical nesting.  As the instance-group-api-extension 
explicitly contemplates groups of groups as a later development, this would 
cover the overlap that we have needed.  On the other hand, we already have 
multiple policies attached to a single group.  We have policies for a variety 
of concerns, so some can combine completely or somewhat independently.  We also 
have relationships (of various sorts) between groups (as well as between 
individuals, and between individuals and groups).  The policies and 
relationships, in general, are not simply names but also have parameters.

[Gary Kotton] The instance groups was meant to be the first step towards what 
we had presented in Portland. Please look at the presentation that we gave an 
this may highlight what the aims were: 
https://docs.google.com/presentation/d/1oDXEab2mjxtY-cvufQ8f4cOHM0vIp4iMyfvZPqg8Ivc/edit?usp=sharing.
 Sadly for this release we did not manage to get the instance groups through 
(it was an issue of timing and bad luck). We will hopefully get this though in 
the first stages of the I cycle and then carry on building on it as it has a 
huge amount of value for OpenStack. It will be great if you can also 
participate in the discussions.

Thanks,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Improving Neutron L3 Agent with High Availability

2013-09-11 Thread Emilien Macchi
Hi,

The current implementation of Neutron L3 agent allows us to scale
virtual routers on multiple agents but does not provide High
Availability on :
- namespaces, virtual interfaces (both in north and south)
- established connections between external  internal network.

The idea here is to start a discussion about a new design that we could
implement in the next release.
Since there exists some conversations on this topic, I want to share my
ideas with a public document we wrote [1] with my team.

Table of contents:
- Abstract about current implementation
- Current Architecture
- Proposal #1: Health-check (which is not my final solution, but just an
existing way).
- Proposal #2: VRRP + conntrackd (new backends for improving L3 agent)
- Design session proposal for next Summit


Feel free to bring your thoughts.
After the discussion, maybe could we write new blueprints.

Note: the document is public and you are allowed to comment. If you need
more access, I can of course grant you write rights.

[1]
https://docs.google.com/document/d/1DNAqRSOIZPqUxPVicbUMWWuRBJ90qJjVYe7Ox8rVtKE/edit?usp=sharing


Regards,

-- 
Emilien Macchi

# OpenStack Engineer
// eNovance Inc.  http://enovance.com
// ✉ emil...@enovance.com ☎ +33 (0)1 49 70 99 80
// 10 rue de la Victoire 75009 Paris




signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat][oslo] mysql, sqlalchemy and sql_mode

2013-09-11 Thread Steven Hardy
Hi all,

I'm investigating some issues, where data stored to a text column in mysql
is silently truncated if it's too big.

It appears that the default configuration of mysql, and the sessions
established via sqlalchemy is to simply warn on truncation rather than
raise an error.

This seems to me to be almost never what you want, since on retrieval the
data is corrupt and bad/unexpected stuff is likely.

This AFAICT is a mysql specific issue[1], which can be resolved by setting
sql_mode to traditional[2,3], after which an error is raised on truncation,
allowing us to catch the error before the data is stored.

My question is, how do other projects, or oslo.db, handle this atm?

It seems we either have to make sure the DB enforces the schema/model, or
validate every single value before attempting to store, which seems like an
unreasonable burden given that the schema changes pretty regularly.

Can any mysql, sqlalchemy and oslo.db experts pitch in with opinions on
this?

Thanks!

Steve

[1] http://www.enricozini.org/2012/tips/sa-sqlmode-traditional/
[2]
http://rpbouman.blogspot.co.uk/2009/01/mysqls-sqlmode-my-suggestions.html
[3] http://dev.mysql.com/doc/refman/5.5/en/server-sql-mode.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [pci device passthrough] fails with NameError: global name '_' is not defined

2013-09-11 Thread Henry Gessau


-- Henry

On Tue, Sep 10, at 5:38 pm, David Kang dk...@isi.edu wrote:

 
 
 - Original Message -
 From: Russell Bryant rbry...@redhat.com
 To: David Kang dk...@isi.edu
 Cc: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Sent: Tuesday, September 10, 2013 5:17:15 PM
 Subject: Re: [openstack-dev] [nova] [pci device passthrough] fails with 
 NameError: global name '_' is not defined
 On 09/10/2013 05:03 PM, David Kang wrote:
 
  - Original Message -
  From: Russell Bryant rbry...@redhat.com
  To: OpenStack Development Mailing List
  openstack-dev@lists.openstack.org
  Cc: David Kang dk...@isi.edu
  Sent: Tuesday, September 10, 2013 4:42:41 PM
  Subject: Re: [openstack-dev] [nova] [pci device passthrough] fails
  with NameError: global name '_' is not defined
  On 09/10/2013 03:56 PM, David Kang wrote:
 
   Hi,
 
I'm trying to test pci device passthrough feature.
  Havana3 is installed using Packstack on CentOS 6.4.
  Nova-compute dies right after start with error NameError: global
  name '_' is not defined.
  I'm not sure if it is due to misconfiguration of nova.conf or bug.
  Any help will be appreciated.
 
  Here is the info:
 
  /etc/nova/nova.conf:
  pci_alias={name:test, product_id:7190, vendor_id:8086,
  device_type:ACCEL}
 
  pci_passthrough_whitelist=[{vendor_id:8086,product_id:7190}]
 
   With that configuration, nova-compute fails with the following
   log:
 
File
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py,
line 461, in _process_data
  **args)
 
File

  /usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py,
line 172, in dispatch
  result = getattr(proxyobj, method)(ctxt, **kwargs)
 
File
/usr/lib/python2.6/site-packages/nova/conductor/manager.py,
line 567, in object_action
  result = getattr(objinst, objmethod)(context, *args, **kwargs)
 
File /usr/lib/python2.6/site-packages/nova/objects/base.py,
line
141, in wrapper
  return fn(self, ctxt, *args, **kwargs)
 
File
/usr/lib/python2.6/site-packages/nova/objects/pci_device.py,
line 242, in save
  self._from_db_object(context, self, db_pci)
 
  NameError: global name '_' is not defined
  2013-09-10 12:52:23.774 14749 TRACE
  nova.openstack.common.threadgroup Traceback (most recent call
  last):
  2013-09-10 12:52:23.774 14749 TRACE
  nova.openstack.common.threadgroup File
  /usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py,
  line 117, in wait
  2013-09-10 12:52:23.774 14749 TRACE
  nova.openstack.common.threadgroup x.wait()
  2013-09-10 12:52:23.774 14749 TRACE
  nova.openstack.common.threadgroup File
  /usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py,
  line 49, in wait
  2013-09-10 12:52:23.774 14749 TRACE
  nova.openstack.common.threadgroup return self.thread.wait()
  2013-09-10 12:52:23.774 14749 TRACE
  nova.openstack.common.threadgroup File
  /usr/lib/python2.6/site-packages/eventlet/greenthread.py, line
  166, in wait
  2013-09-10 12:52:23.774 14749 TRACE
  nova.openstack.common.threadgroup return self._exit_event.wait()
  2013-09-10 12:52:23.774 14749 TRACE
  nova.openstack.common.threadgroup File
  /usr/lib/python2.6/site-packages/eventlet/event.py, line 116, in
  wait
  2013-09-10 12:52:23.774 14749 TRACE
  nova.openstack.common.threadgroup return hubs.get_hub().switch()
  2013-09-10 12:52:23.774 14749 TRACE
  nova.openstack.common.threadgroup File
  /usr/lib/python2.6/site-packages/eventlet/hubs/hub.py, line 177,
  in switch
  2013-09-10 12:52:23.774 14749 TRACE
  nova.openstack.common.threadgroup return self.greenlet.switch()
  2013-09-10 12:52:23.774 14749 TRACE
  nova.openstack.common.threadgroup File
  /usr/lib/python2.6/site-packages/eventlet/greenthread.py, line
  192, in main
  2013-09-10 12:52:23.774 14749 TRACE
  nova.openstack.common.threadgroup result = function(*args,
  **kwargs)
  2013-09-10 12:52:23.774 14749 TRACE
  nova.openstack.common.threadgroup File
  /usr/lib/python2.6/site-packages/nova/openstack/common/service.py,
  line 65, in run_service
  2013-09-10 12:52:23.774 14749 TRACE
  nova.openstack.common.threadgroup service.start()
  2013-09-10 12:52:23.774 14749 TRACE
  nova.openstack.common.threadgroup File
  /usr/lib/python2.6/site-packages/nova/service.py, line 164, in
  start
  2013-09-10 12:52:23.774 14749 TRACE
  nova.openstack.common.threadgroup self.manager.pre_start_hook()
  2013-09-10 12:52:23.774 14749 TRACE
  nova.openstack.common.threadgroup File
  /usr/lib/python2.6/site-packages/nova/compute/manager.py, line
  805, in pre_start_hook
  2013-09-10 12:52:23.774 14749 TRACE
  nova.openstack.common.threadgroup
  self.update_available_resource(nova.context.get_admin_context())
  2013-09-10 12:52:23.774 14749 TRACE
  nova.openstack.common.threadgroup File
  /usr/lib/python2.6/site-packages/nova/compute/manager.py, line
  4773, in update_available_resource
  2013-09-10 12:52:23.774 14749 TRACE
  

[openstack-dev] Doubt regarding Resource Tracker

2013-09-11 Thread Peeyush Gupta
Hi,

I have been trying to understand the working of resource tracker.
I understand that it is responsible to retrieve the data from the host and
save it to the database. What I am not able to figure out is how exactly
the compute node table is populated for the first time? Because after that
resource tracker compare the data and update the changes.
Again, does resource tracker work periodically? or is it triggered when
an instance is spawned or deleted?

Thanks.
 
~Peeyush Gupta___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [pci device passthrough] fails with NameError: global name '_' is not defined

2013-09-11 Thread yongli he

于 2013年09月11日 05:17, Russell Bryant 写道:

On 09/10/2013 05:03 PM, David Kang wrote:

- Original Message -

From: Russell Bryant rbry...@redhat.com
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Cc: David Kang dk...@isi.edu
Sent: Tuesday, September 10, 2013 4:42:41 PM
Subject: Re: [openstack-dev] [nova] [pci device passthrough] fails with NameError: 
global name '_' is not defined
On 09/10/2013 03:56 PM, David Kang wrote:

  Hi,

   I'm trying to test pci device passthrough feature.
Havana3 is installed using Packstack on CentOS 6.4.
Nova-compute dies right after start with error NameError: global
name '_' is not defined.
I'm not sure if it is due to misconfiguration of nova.conf or bug.
Any help will be appreciated.

Here is the info:

/etc/nova/nova.conf:
pci_alias={name:test, product_id:7190, vendor_id:8086,
device_type:ACCEL}

pci_passthrough_whitelist=[{vendor_id:8086,product_id:7190}]

  With that configuration, nova-compute fails with the following log:

   File
   /usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py,
   line 461, in _process_data
 **args)

   File
   /usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py,
   line 172, in dispatch
 result = getattr(proxyobj, method)(ctxt, **kwargs)

   File /usr/lib/python2.6/site-packages/nova/conductor/manager.py,
   line 567, in object_action
 result = getattr(objinst, objmethod)(context, *args, **kwargs)

   File /usr/lib/python2.6/site-packages/nova/objects/base.py, line
   141, in wrapper
 return fn(self, ctxt, *args, **kwargs)

   File
   /usr/lib/python2.6/site-packages/nova/objects/pci_device.py,
   line 242, in save
 self._from_db_object(context, self, db_pci)

NameError: global name '_' is not defined
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup Traceback (most recent call last):
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
/usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py,
line 117, in wait
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup x.wait()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
/usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py,
line 49, in wait
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup return self.thread.wait()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
/usr/lib/python2.6/site-packages/eventlet/greenthread.py, line
166, in wait
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup return self._exit_event.wait()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
/usr/lib/python2.6/site-packages/eventlet/event.py, line 116, in
wait
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup return hubs.get_hub().switch()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py, line 177,
in switch
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup return self.greenlet.switch()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
/usr/lib/python2.6/site-packages/eventlet/greenthread.py, line
192, in main
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup result = function(*args, **kwargs)
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
/usr/lib/python2.6/site-packages/nova/openstack/common/service.py,
line 65, in run_service
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup service.start()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
/usr/lib/python2.6/site-packages/nova/service.py, line 164, in
start
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup self.manager.pre_start_hook()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line
805, in pre_start_hook
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup
self.update_available_resource(nova.context.get_admin_context())
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line
4773, in update_available_resource
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup
rt.update_available_resource(context)
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py,
line 246, in inner
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup return f(*args, **kwargs)
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
/usr/lib/python2.6/site-packages/nova/compute/resource_tracker.py,
line 318, in update_available_resource
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup 

Re: [openstack-dev] [heat][oslo] mysql, sqlalchemy and sql_mode

2013-09-11 Thread Roman Podolyaka
Hi Steven,

Nice catch! This is not the first time MySQL has played a joke on us...

I think, we can fix this easily by adding a callback function, which will
set the proper sql_mode value, when a DB connection is retrieved from a
connection pool.

We'll provide a fix to oslo-incubator soon.

Thanks,
Roman

[1] http://www.enricozini.org/2012/tips/sa-sqlmode-traditional/


On Wed, Sep 11, 2013 at 1:37 PM, Steven Hardy sha...@redhat.com wrote:

 Hi all,

 I'm investigating some issues, where data stored to a text column in mysql
 is silently truncated if it's too big.

 It appears that the default configuration of mysql, and the sessions
 established via sqlalchemy is to simply warn on truncation rather than
 raise an error.

 This seems to me to be almost never what you want, since on retrieval the
 data is corrupt and bad/unexpected stuff is likely.

 This AFAICT is a mysql specific issue[1], which can be resolved by setting
 sql_mode to traditional[2,3], after which an error is raised on
 truncation,
 allowing us to catch the error before the data is stored.

 My question is, how do other projects, or oslo.db, handle this atm?

 It seems we either have to make sure the DB enforces the schema/model, or
 validate every single value before attempting to store, which seems like an
 unreasonable burden given that the schema changes pretty regularly.

 Can any mysql, sqlalchemy and oslo.db experts pitch in with opinions on
 this?

 Thanks!

 Steve

 [1] http://www.enricozini.org/2012/tips/sa-sqlmode-traditional/
 [2]
 http://rpbouman.blogspot.co.uk/2009/01/mysqls-sqlmode-my-suggestions.html
 [3] http://dev.mysql.com/doc/refman/5.5/en/server-sql-mode.html

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] run_tests in debug mode fails

2013-09-11 Thread Davanum Srinivas
Clark,

This is good. every file that uses a CONF.xyz needs to have an import for
xyz. This is often overlooked

-- dims


On Tue, Sep 10, 2013 at 11:43 PM, Clark Boylan clark.boy...@gmail.comwrote:

 On Mon, Sep 9, 2013 at 4:20 AM, Rosa, Andrea (HP Cloud Services)
 andrea.r...@hp.com wrote:
  Hi all
 
  I need to debug a specific test but when I try to run it in debug mode
 using the run_tests -d (I need to attach pdb) that command fails but if I
 run the script without the -d option that works.
  I created a brand-new env so I don't think it's related to my local env.
  Anyone is experiencing the same issue?
  Should I file a nova bug for that?
 
  Error details:
  ./run_tests.sh -d
 nova.tests.integrated.test_servers.ServersTestV3.test_create_and_rebuild_server
  Traceback (most recent call last):
File nova/tests/integrated/test_servers.py, line 43, in setUp
  super(ServersTest, self).setUp()
File nova/tests/integrated/integrated_helpers.py, line 87, in setUp
  self.consoleauth = self.start_service('consoleauth')
File nova/test.py, line 279, in start_service
  svc = self.useFixture(ServiceFixture(name, host, **kwargs))
File
 /home/ubuntu/nova/.venv/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 591, in useFixture
  fixture.setUp()
File nova/test.py, line 174, in setUp
  self.service = service.Service.create(**self.kwargs)
File nova/service.py, line 245, in create
  manager = CONF.get(manager_cls, None)
File /home/ubuntu/nova/.venv/lib/python2.7/_abcoll.py, line 342, in
 get
  return self[key]
File
 /home/ubuntu/nova/.venv/local/lib/python2.7/site-packages/oslo/config/cfg.py,
 line 1610, in __getitem__
  return self.__getattr__(key)
File
 /home/ubuntu/nova/.venv/local/lib/python2.7/site-packages/oslo/config/cfg.py,
 line 1606, in __getattr__
  return self._get(name)
File
 /home/ubuntu/nova/.venv/local/lib/python2.7/site-packages/oslo/config/cfg.py,
 line 1930, in _get
  value = self._substitute(self._do_get(name, group, namespace))
File
 /home/ubuntu/nova/.venv/local/lib/python2.7/site-packages/oslo/config/cfg.py,
 line 1948, in _do_get
  info = self._get_opt_info(name, group)
File
 /home/ubuntu/nova/.venv/local/lib/python2.7/site-packages/oslo/config/cfg.py,
 line 2029, in _get_opt_info
  raise NoSuchOptError(opt_name, group)
  NoSuchOptError: no such option: consoleauth_manager
 
  Ran 1 test in 11.296s
  FAILED (failures=1)
 
 There are a couple interesting things going on here, and I haven't
 quite untangled all of it. Basically the consoleauth_manager option
 comes from nova.consoleauth.manager and when we don't import that
 module the option isn't available to us. For some reason when running
 `python -m subunit.run

 nova.tests.integrated.test_servers.ServersTestV3.test_create_and_rebuild_server`
 or `python -m testtools.run

 nova.tests.integrated.test_servers.ServersTestV3.test_create_and_rebuild_server`
 (this is what run_tests.sh -d does) nova.consoleauth.manager isn't
 being imported, but when running `testr run

 nova.tests.integrated.test_servers.ServersTestV3.test_create_and_rebuild_server`
 it is. Not sure why there is a difference (possibly related to
 discover?).

 I did manage to confirm that the attached patch mostly fixes the
 problem. It allows me to run the above commands out of the a tox built
 virtualenv, but not a run_tests.sh built virtualenv. This is the other
 piece of the puzzle that I haven't sorted yet. I do have a hunch it
 has to do with how oslo.config is installed. As a work around you can
 source the tox virtualenv then run run_tests.sh -N -d and that should
 work given the attached patch. I would submit a change to Gerrit but
 would like to understand more of what is going on first. If someone
 else groks this more please feel free to submit the fix instead.

 Clark

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: http://davanum.wordpress.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Program name and Mission statement

2013-09-11 Thread Jon Maron

On Sep 10, 2013, at 9:42 PM, Mike Spreitzer mspre...@us.ibm.com wrote:

 Jon Maron jma...@hortonworks.com wrote on 09/10/2013 08:50:23 PM:
 
  From: Jon Maron jma...@hortonworks.com 
  To: OpenStack Development Mailing List openstack-dev@lists.openstack.org, 
  Cc: OpenStack Development Mailing List openstack-dev@lists.openstack.org 
  Date: 09/10/2013 08:55 PM 
  Subject: Re: [openstack-dev] [savanna] Program name and Mission statement 
  
  Openstack Big Data Platform 
 
 Let's see if you mean that.  Does this project aim to cover big data things 
 besides MapReduce?  Can you give examples of other things that are in scope? 

Hive, Pig, data storage, oozie etc

 
 Thanks, 
 Mike___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [pci device passthrough] fails with NameError: global name '_' is not defined

2013-09-11 Thread Henry Gessau

For the TypeError: expected string or buffer I have filed Bug #1223874.


On Wed, Sep 11, at 7:41 am, yongli he yongli...@intel.com wrote:

 于 2013年09月11日 05:38, David Kang 写道:

 - Original Message -
 From: Russell Bryant rbry...@redhat.com
 To: David Kang dk...@isi.edu
 Cc: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Sent: Tuesday, September 10, 2013 5:17:15 PM
 Subject: Re: [openstack-dev] [nova] [pci device passthrough] fails with 
 NameError: global name '_' is not defined
 On 09/10/2013 05:03 PM, David Kang wrote:
 - Original Message -
 From: Russell Bryant rbry...@redhat.com
 To: OpenStack Development Mailing List
 openstack-dev@lists.openstack.org
 Cc: David Kang dk...@isi.edu
 Sent: Tuesday, September 10, 2013 4:42:41 PM
 Subject: Re: [openstack-dev] [nova] [pci device passthrough] fails
 with NameError: global name '_' is not defined
 On 09/10/2013 03:56 PM, David Kang wrote:
   Hi,

I'm trying to test pci device passthrough feature.
 Havana3 is installed using Packstack on CentOS 6.4.
 Nova-compute dies right after start with error NameError: global
 name '_' is not defined.
 I'm not sure if it is due to misconfiguration of nova.conf or bug.
 Any help will be appreciated.

 Here is the info:

 /etc/nova/nova.conf:
 pci_alias={name:test, product_id:7190, vendor_id:8086,
 device_type:ACCEL}

 pci_passthrough_whitelist=[{vendor_id:8086,product_id:7190}]

   With that configuration, nova-compute fails with the following
   log:

File
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py,
line 461, in _process_data
  **args)

File

 /usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py,
line 172, in dispatch
  result = getattr(proxyobj, method)(ctxt, **kwargs)

File
/usr/lib/python2.6/site-packages/nova/conductor/manager.py,
line 567, in object_action
  result = getattr(objinst, objmethod)(context, *args, **kwargs)

File /usr/lib/python2.6/site-packages/nova/objects/base.py,
line
141, in wrapper
  return fn(self, ctxt, *args, **kwargs)

File
/usr/lib/python2.6/site-packages/nova/objects/pci_device.py,
line 242, in save
  self._from_db_object(context, self, db_pci)

 NameError: global name '_' is not defined
 2013-09-10 12:52:23.774 14749 TRACE
 nova.openstack.common.threadgroup Traceback (most recent call
 last):
 2013-09-10 12:52:23.774 14749 TRACE
 nova.openstack.common.threadgroup File
 /usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py,
 line 117, in wait
 2013-09-10 12:52:23.774 14749 TRACE
 nova.openstack.common.threadgroup x.wait()
 2013-09-10 12:52:23.774 14749 TRACE
 nova.openstack.common.threadgroup File
 /usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py,
 line 49, in wait
 2013-09-10 12:52:23.774 14749 TRACE
 nova.openstack.common.threadgroup return self.thread.wait()
 2013-09-10 12:52:23.774 14749 TRACE
 nova.openstack.common.threadgroup File
 /usr/lib/python2.6/site-packages/eventlet/greenthread.py, line
 166, in wait
 2013-09-10 12:52:23.774 14749 TRACE
 nova.openstack.common.threadgroup return self._exit_event.wait()
 2013-09-10 12:52:23.774 14749 TRACE
 nova.openstack.common.threadgroup File
 /usr/lib/python2.6/site-packages/eventlet/event.py, line 116, in
 wait
 2013-09-10 12:52:23.774 14749 TRACE
 nova.openstack.common.threadgroup return hubs.get_hub().switch()
 2013-09-10 12:52:23.774 14749 TRACE
 nova.openstack.common.threadgroup File
 /usr/lib/python2.6/site-packages/eventlet/hubs/hub.py, line 177,
 in switch
 2013-09-10 12:52:23.774 14749 TRACE
 nova.openstack.common.threadgroup return self.greenlet.switch()
 2013-09-10 12:52:23.774 14749 TRACE
 nova.openstack.common.threadgroup File
 /usr/lib/python2.6/site-packages/eventlet/greenthread.py, line
 192, in main
 2013-09-10 12:52:23.774 14749 TRACE
 nova.openstack.common.threadgroup result = function(*args,
 **kwargs)
 2013-09-10 12:52:23.774 14749 TRACE
 nova.openstack.common.threadgroup File
 /usr/lib/python2.6/site-packages/nova/openstack/common/service.py,
 line 65, in run_service
 2013-09-10 12:52:23.774 14749 TRACE
 nova.openstack.common.threadgroup service.start()
 2013-09-10 12:52:23.774 14749 TRACE
 nova.openstack.common.threadgroup File
 /usr/lib/python2.6/site-packages/nova/service.py, line 164, in
 start
 2013-09-10 12:52:23.774 14749 TRACE
 nova.openstack.common.threadgroup self.manager.pre_start_hook()
 2013-09-10 12:52:23.774 14749 TRACE
 nova.openstack.common.threadgroup File
 /usr/lib/python2.6/site-packages/nova/compute/manager.py, line
 805, in pre_start_hook
 2013-09-10 12:52:23.774 14749 TRACE
 nova.openstack.common.threadgroup
 self.update_available_resource(nova.context.get_admin_context())
 2013-09-10 12:52:23.774 14749 TRACE
 nova.openstack.common.threadgroup File
 /usr/lib/python2.6/site-packages/nova/compute/manager.py, line
 4773, in update_available_resource
 2013-09-10 12:52:23.774 14749 TRACE
 

Re: [openstack-dev] [savanna] Program name and Mission statement

2013-09-11 Thread Erik Bergenholtz

On Sep 11, 2013, at 9:19 AM, Jon Maron jma...@hortonworks.com wrote:

 
 On Sep 10, 2013, at 9:42 PM, Mike Spreitzer mspre...@us.ibm.com wrote:
 
 Jon Maron jma...@hortonworks.com wrote on 09/10/2013 08:50:23 PM:
 
  From: Jon Maron jma...@hortonworks.com 
  To: OpenStack Development Mailing List 
  openstack-dev@lists.openstack.org, 
  Cc: OpenStack Development Mailing List openstack-dev@lists.openstack.org 
  Date: 09/10/2013 08:55 PM 
  Subject: Re: [openstack-dev] [savanna] Program name and Mission statement 
  
  Openstack Big Data Platform 
 
 Let's see if you mean that.  Does this project aim to cover big data things 
 besides MapReduce?  Can you give examples of other things that are in scope? 
 
 Hive, Pig, data storage, oozie etc

Adding a few items that are on the list, including YARN, Sqoop/2, HBase, and 
Hue. Other vendors will likely want to add additional services that pertain to 
their hadoop distro. i.e. SOLR, Impala etc.



 
 
 Thanks, 
 Mike___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 CONFIDENTIALITY NOTICE
 NOTICE: This message is intended for the use of the individual or entity to 
 which it is addressed and may contain information that is confidential, 
 privileged and exempt from disclosure under applicable law. If the reader of 
 this message is not the intended recipient, you are hereby notified that any 
 printing, copying, dissemination, distribution, disclosure or forwarding of 
 this communication is strictly prohibited. If you have received this 
 communication in error, please contact the sender immediately and delete it 
 from your system. Thank You.___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Program name and Mission statement

2013-09-11 Thread Erik Bergenholtz

On Sep 10, 2013, at 8:50 PM, Jon Maron jma...@hortonworks.com wrote:

 Openstack Big Data Platform
 
 
 On Sep 10, 2013, at 8:39 PM, David Scott david.sc...@cloudscaling.com wrote:
 
 I vote for 'Open Stack Data'
 
 
 On Tue, Sep 10, 2013 at 5:30 PM, Zhongyue Luo zhongyue@intel.com wrote:
 Why not OpenStack MapReduce? I think that pretty much says it all?
 
 
 On Wed, Sep 11, 2013 at 3:54 AM, Glen Campbell g...@glenc.io wrote:
 performant isn't a word. Or, if it is, it means having performance. I 
 think you mean high-performance.
 
 
 On Tue, Sep 10, 2013 at 8:47 AM, Matthew Farrellee m...@redhat.com wrote:
 Rough cut -
 
 Program: OpenStack Data Processing
 Mission: To provide the OpenStack community with an open, cutting edge, 
 performant and scalable data processing stack and associated management 
 interfaces.

Proposing a slightly different mission:

To provide a simple, reliable and repeatable mechanism by which to deploy 
Hadoop and related Big Data projects, including management, monitoring and 
processing mechanisms driving further adoption of OpenStack.


 
 
 On 09/10/2013 09:26 AM, Sergey Lukjanov wrote:
 It sounds too broad IMO. Looks like we need to define Mission Statement
 first.
 
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.
 
 On Sep 10, 2013, at 17:09, Alexander Kuznetsov akuznet...@mirantis.com
 mailto:akuznet...@mirantis.com wrote:
 
 My suggestion OpenStack Data Processing.
 
 
 On Tue, Sep 10, 2013 at 4:15 PM, Sergey Lukjanov
 slukja...@mirantis.com mailto:slukja...@mirantis.com wrote:
 
 Hi folks,
 
 due to the Incubator Application we should prepare Program name
 and Mission statement for Savanna, so, I want to start mailing
 thread about it.
 
 Please, provide any ideas here.
 
 P.S. List of existing programs:
 https://wiki.openstack.org/wiki/Programs
 P.P.S. https://wiki.openstack.org/wiki/Governance/NewPrograms
 
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 -- 
 Glen Campbell
 http://glenc.io • @glenc
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 -- 
 Intel SSG/STOD/DCST/CIT
 880 Zixing Road, Zizhu Science Park, Minhang District, 200241, Shanghai, 
 China
 +862161166500
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 CONFIDENTIALITY NOTICE
 NOTICE: This message is intended for the use of the individual or entity to 
 which it is addressed and may contain information that is confidential, 
 privileged and exempt from disclosure under applicable law. If the reader of 
 this message is not the intended recipient, you are hereby notified that any 
 printing, copying, dissemination, distribution, disclosure or forwarding of 
 this communication is strictly prohibited. If you have received this 
 communication in error, please contact the sender immediately and delete it 
 from your system. Thank You.___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.

[openstack-dev] [Nova] Bug Triage Day Proposal - September 17

2013-09-11 Thread Russell Bryant
Greetings,

Now that we're in a feature freeze for the Havana release, our focus
should be on bugs.  Our immediate goal is to come up with a list of bugs
that we want to ensure get fixed before havana is released.  Bugs that
should be on that list should be targeted to havana-rc1.

https://launchpad.net/nova/+milestone/havana-rc1

The current list needs some cleanup.  We're also way behind on bug
triage.  We need to catch up on that to make sure we've caught anything
major that really needs to be fixed.  There are currently 106 New bugs.
 We need to get that down much closer to 0.

I propose that we have a bug triage day on Tuesday, September 17.  I
believe some other projects are doing the same thing that day.  The goal
will be to get through as many of the New bugs as possible and complete
a first cut on the RC1 bug list.  If time permits, we can work on some
of the other triage steps documented here:

https://wiki.openstack.org/wiki/BugTriage

For Nova bug triage, we use a set of official tags to help categorize
bugs and split up the triage workload.  For more information, see the
Nova bug triage page:

https://wiki.openstack.org/wiki/Nova/BugTriage

Comments welcome!

Thanks,
-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Need some clarity on security group protocol numbers vs names

2013-09-11 Thread Arvind Somya (asomya)
Hello all

I have a patch in review where  Akihiro made some comments about only 
restricting protocols by names and allowing all protocol numbers when creating 
security group rules. I personally disagree with this approach as names and 
numbers are just a textual/integer representation of a common protocol. The end 
result is going to be the same in both cases.

https://review.openstack.org/#/c/43725/

Akihiro suggested a community discussion around this issue before the patch is 
accepted upstream. I hope this e-mail gets the ball rolling on that. I would 
like to hear the community's opinion on this issue and any pros/cons/pitfalls 
of either approach.

Thanks
Arvind
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Program name and Mission statement

2013-09-11 Thread Mike Spreitzer
 To provide a simple, reliable and repeatable mechanism by which to 
 deploy Hadoop and related Big Data projects, including management, 
 monitoring and processing mechanisms driving further adoption of 
OpenStack.

That sounds like it is at about the right level of specificity.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] swift integration - optional?

2013-09-11 Thread Jon Maron
Hi,

  I noticed that the swift integration is optionally enabled via a 
configuration property?  Is there a reason for not making it available as a 
base, feature of the cluster (i.e. simply allowing access to swift should it be 
required)?  What would be a scenario in which it would be beneficial to 
explicitly disable it?

-- Jon


-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron]All unittest passed but Jenkins failed

2013-09-11 Thread James E. Blair
ZhiQiang Fan aji.zq...@gmail.com writes:

 currently, i don't know if it is coverage problem or something else.

 the direct cause is:

 sudo /usr/local/jenkins/slave_scripts/jenkins-sudo-grep.sh post

 Sep  9 06:57:23 precise1 sudo:  jenkins : 3 incorrect password
 attempts ; TTY=unknown ;
 PWD=/home/jenkins/workspace/gate-neutron-python27 ; USER=root ;
 COMMAND=ovs-vsctl --timeout=2 -- --columns=external_ids,name,ofport
 find Interface external_ids:iface-id=71d9fa4c-f074-46bd-96af-8c592d37c160

This is because the unit test tried to run 'sudo'.  That's not allowed
in unit tests, so it needs to be mocked out.

 meanwhile, i found that:

 2013-09-09 06:42:25.922 | + git fetch
 http://zuul.openstack.org/p/openstack/neutron
 refs/zuul/master/Z62b50e610b554304bde4aa2a9ea80193
 2013-09-09 06:42:26.747 | From http://zuul.openstack.org/p/openstack/neutron
 2013-09-09 06:42:26.747 |  * branch
 refs/zuul/master/Z62b50e610b554304bde4aa2a9ea80193 - FETCH_HEAD
 2013-09-09 06:42:26.751 | + git checkout FETCH_HEAD
 2013-09-09 06:42:26.954 | Warning: you are leaving 17 commits behind,
 not connected to
 2013-09-09 06:42:26.954 | any of your branches:
 2013-09-09 06:42:26.954 |
 2013-09-09 06:42:26.954 |   62d0927 Avoid shadowing NeutronException
 'message' attribute
 2013-09-09 06:42:26.954 |   e26639b Merge Replace assertEquals with
 assertEqual
 2013-09-09 06:42:26.954 |   5bae582 Load ML2 mech drivers as listed in
 ml2_conf.ini
 2013-09-09 06:42:26.955 |   902dc88 Replace assertEquals with assertEqual
 2013-09-09 06:42:26.955 |  ... and 13 more.
 2013-09-09 06:42:26.955 |
 2013-09-09 06:42:26.955 | If you want to keep them by creating a new
 branch, this may be a good time
 2013-09-09 06:42:26.956 | to do so with:
 2013-09-09 06:42:26.956 |
 2013-09-09 06:42:26.956 |  git branch new_branch_name
 62d09275d899237dc34cf50c81e99d50489212ff
 2013-09-09 06:42:26.956 |
 2013-09-09 06:42:26.962 | HEAD is now at 7cbd0f5... Improve 
 dhcp_agent_scheduler

You can ignore most of that; that's just Jenkins resetting its git repo
from an earlier test run.  By the time it's finished, it says:

 2013-09-09 06:42:26.962 | HEAD is now at 7cbd0f5... Improve 
 dhcp_agent_scheduler

Which is the important part.  You'll see that matches your local HEAD as well:

 but in my local env:

 $ git log --pretty=format:%h %s

 7cbd0f5 Improve dhcp_agent_scheduler

-Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Keystone and Multiple Identity Sources

2013-09-11 Thread Adam Young
David Chadwick wrote up an in depth API extension for Federation: 
https://review.openstack.org/#/c/39499
There is an abfab API proposal as well: 
https://review.openstack.org/#/c/42221/


After discussing this for a while, it dawned on me that Federation 
should not be something bolted on to Keystone, but rather that it was 
already central to the design.


The SQL Identity backend is a simple password store that collects users 
into groups.  This makes it an identity provider (IdP).

Now Keystone can register multiple LDAP servers as Identity backends.

There are requests for SAML and ABFAB integration into Keystone as well.

Instead of a Federation API  Keystone should take the key concepts 
from the API and make them core concepts.  What would this mean:


1.  Instead of method: federation protocol: abfab  it would be 
method: abfab,
2.  The rules about multiple round trips (phase)  would go under the 
abfab section.
3.  There would not be a protocol_data section but rather that would 
be the abfab section as well.

4.  Provider ID would be standard in the method specific section.

One question that has come up has been about Providers, and whether they 
should be considered endpoints in the Catalog.  THere is a couple issues 
wiuth this:  one is that they are not something managed by OpenStack, 
and two is that they are not necessarily Web Protocols.  As such, 
Provider should probably be First class citizen.  We already have LDAP  
handled this way, although not as an enumerated entity.  For the first 
iteration, I would like to see ABFAB, SAML, and any other protocols we 
support done the same way as LDAP:  a deliberate configuration option 
for Keystone that will require a config file change.


David and I have discussed this in a side conversation, and agree that 
it requires wider input.





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] How the autoscale API should control scaling in Heat

2013-09-11 Thread Clint Byrum
Excerpts from Steven Hardy's message of 2013-09-11 05:59:02 -0700:
 On Wed, Sep 11, 2013 at 03:51:02AM +, Adrian Otto wrote:
  It would be better if we could explain Autoscale like this:
  
  Heat - Autoscale - Nova, etc.
  -or-
  User - Autoscale - Nova, etc.
  
  This approach allows use cases where (for whatever reason) the end user 
  does not want to use Heat at all, but still wants something simple to be 
  auto-scaled for them. Nobody would be scratching their heads wondering why 
  things are going in circles.
  
  From an implementation perspective, that means the auto-scale service needs 
  at least a simple linear workflow capability in it that may trigger a Heat 
  orchestration if there is a good reason for it. This way, the typical use 
  cases don't have anything resembling circular dependencies. The source of 
  truth for how many members are currently in an Autoscaling group should be 
  the Autoscale service, not in the Heat database. If you want to expose that 
  in list-stack-resources output, then cause Heat to call out to the 
  Autoscale service to fetch that figure as needed. It is irrelevant to 
  orchestration. Code does not need to be duplicated. Both Autoscale and Heat 
  can use the same exact source code files for the code that 
  launches/terminates instances of resources.
 
 So I take issue with the circular dependencies statement, nothing
 proposed so far has anything resembling a circular dependency.
 
 I think it's better to consider traditional encapsulation, where two
 projects may very well make use of the same class from a library.  Why is
 it less valid to consider code reuse via another interface (ReST service)?
 
 The point of the arguments to date, AIUI is to ensure orchestration actions
 and management of dependencies don't get duplicated in any AS service which
 is created.
 

This is the crux of the reason that Heat should be involved. In the
driving analogy, Heat is not some boot perched on a ladder waiting for
autoscaling to drop a bowling ball on it to turn the car. It is more
like power steering. The driver puts an input into the system, and the
power steering does the hard work. If you ever need the full power of
power steering, then designing the system to be able to bypass power
steering sometimes will make it _more_ complex, not less.

Meanwhile, as the problems that need to be solved become more complex,
Heat will be there to simplify the solutions. If it is ever making system
control more complex, that is Heat's failure and we need to make Heat
simpler. What we should get out of the habit of is bypassing Heat and
building new control systems because Heat doesn't yet do what we want
it to do.

To any who would roll their own orchestration rather than let Heat
do it:

If Heat adds an unacceptable amount of latency, please file a bug.

If Heat adds complexity, please file a bug.

If you've already done that.. I owe you a gold star. :)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Meeting agenda for Wed Sep 11th at 2100 UTC

2013-09-11 Thread Julien Danjou
The Ceilometer project team holds a meeting in #openstack-meeting, see
https://wiki.openstack.org/wiki/Meetings/MeteringAgenda for more details.

Next meeting is on Wed Sep 11th at 2100 UTC 

Please add your name with the agenda item, so we know who to call on during
the meeting.
* Review Havana RC1 milestone
  * https://launchpad.net/ceilometer/+milestone/havana-rc1
* Release python-ceilometerclient? 
* Open discussion

If you are not able to attend or have additional topic(s) you would like
to add, please update the agenda on the wiki.

Cheers,
-- 
Julien Danjou
// Free Software hacker / independent consultant
// http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Correct way to disable specific event collection by the collector

2013-09-11 Thread Neal, Phil
Greetings team,
I'm working on getting a very streamlined set of collections running and I'd 
like to disable all notifications except Glance. It's clear that the desired 
event types are defined in the plugins, but I can't seem to work out how to 
force the collector service to load only specific handlers in the 
ceilometer.collector namespace. I *thought* it could be accomplished by 
editing /ceilometer/setup.cfg, but removing the entry points there didn't seem 
to work (the extensions manager still picks them up).

Can someone give me a rough idea of how to do this?

- Phil 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Program name and Mission statement

2013-09-11 Thread Andrei Savu
+1

I guess this will also clarify how Savanna relates to other projects like
OpenStack Trove.

-- Andrei Savu

On Wed, Sep 11, 2013 at 5:16 PM, Mike Spreitzer mspre...@us.ibm.com wrote:

  To provide a simple, reliable and repeatable mechanism by which to
  deploy Hadoop and related Big Data projects, including management,
  monitoring and processing mechanisms driving further adoption of
 OpenStack.

 That sounds like it is at about the right level of specificity.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][oslo] mysql, sqlalchemy and sql_mode

2013-09-11 Thread Monty Taylor


On 09/11/2013 11:09 AM, David Ripton wrote:
 On 09/11/2013 06:37 AM, Steven Hardy wrote:
 
 I'm investigating some issues, where data stored to a text column in
 mysql
 is silently truncated if it's too big.

 It appears that the default configuration of mysql, and the sessions
 established via sqlalchemy is to simply warn on truncation rather than
 raise an error.

 This seems to me to be almost never what you want, since on retrieval the
 data is corrupt and bad/unexpected stuff is likely.

 This AFAICT is a mysql specific issue[1], which can be resolved by
 setting
 sql_mode to traditional[2,3], after which an error is raised on
 truncation,
 allowing us to catch the error before the data is stored.

 My question is, how do other projects, or oslo.db, handle this atm?

 It seems we either have to make sure the DB enforces the schema/model, or
 validate every single value before attempting to store, which seems
 like an
 unreasonable burden given that the schema changes pretty regularly.

 Can any mysql, sqlalchemy and oslo.db experts pitch in with opinions on
 this?
 
 Nova has a PostgreSQL devstack gate, which occasionally catches errors
 that MySQL lets through.  For example,
 https://bugs.launchpad.net/nova/+bug/1217167
 
 Unfortunately we have some MySQL-only code, and PostgreSQL obviously
 can't catch such errors there.
 
 I think we should consider turning off auto-truncation for MySQL on our
 CI boxes.

Should turn it off everywhere - same as how we auto-configure to use
InnoDB and not MyISAM, we should definitely set strict sql_modes
strings. There is not an operational concern - sql_modes affect app
developers, of which we are they. :)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for Raksha, a Data Protection As a Service project

2013-09-11 Thread lokesh balcha
+1

-Lokesh


On Mon, Sep 2, 2013 at 11:33 AM, Sylvain Bauza sylvain.ba...@bull.netwrote:

  Hi Murali,

 Le 02/09/2013 15:19, Murali Balcha a écrit :

 I am not an expert in Heat but the way I understood the Heat project is that 
 it is an orchestration layer that instantiate a composite application based 
 on a template definition. The template may identify the vms, networks and 
 storage resources needed to instantiate the composite application. Using the 
 template, a tenant may instantiate one of more instances of composite 
 application.


 As per Heat mission statement [1], Heat role is to manage lifecycle of
 applications. I'm not an Heat expert at all, but at least regarding Nova,
 doing a snapshot is conceptually on the same page as boot or destroy. Re:
 Cinder, I would assume snapshotting a volume is also part of its
 management, like attaching it.

 From my pov, it would then make sense to describe the *backup operation  *as
 a extended capability of the Heat API which could then calls its
 dependencies.

 -Sylvain



 Backup service such as Raksha has to backup composite application instances 
 in their entirety. It can look into Heat meta data to understand the 
 application definition and correctly backup the application.

  I am not sure Heat can manage individual application instances once they are 
 created. I need to do more research on Heat, but I also defer comments to 
 Heat experts.

 Other way to integrate Heat with Raskha is to make a backup policy part of 
 Heat template and let Heat setup correct backup policy by calling Rakha Apis 
 during composite application instantiation.

 Thanks,
 Murali Balcha

 On Sep 2, 2013, at 7:13 AM, Zane Bitter zbit...@redhat.com 
 zbit...@redhat.com wrote:


  On 01/09/13 23:11, Alex Rudenko wrote:

  Hello everyone,

 I would like to ask a question. But, first of all, I would like to say
 that I'm new to OpenStack so the question might be irrelevant. From what
 I've understood, the idea is to back up an entire stack including VMs,
 volumes, networks etc. Let's call the information about how these pieces
 are interconnected - a topology. This topology also has to be backed up
 along with VMs, volumes, networks, right? And then this topology can be
 used to restore the entire stack. As for me, it looks very similar to
 what the Heat project does. Am I right? So maybe it's possible to use
 the Heat project for this kind of backup/restore functionality?

 Best regards,
 Alex

  That's actually an excellent question.

 One of the things that's new in Heat for the Havana release is Suspend/Resume 
 operations on stacks. Basically this involves going through the stack in 
 (reverse) dependency order and calling suspend/resume APIs for each resource 
 where that makes sense. Steve Hardy has written the code for this in such a 
 way as to be pretty generic and allow us to add more operations quite easily 
 in the future.

 So to the extent that you just require something to go through every resource 
 in a stack in dependency order and call an *existing* backup API, then Heat 
 could fit the bill. If you require co-ordination between e.g. Nova and Cinder 
 then Heat is probably not a good vehicle for implementing that.

 cheers,
 Zane.



  On Sun, Sep 1, 2013 at 10:23 PM, Giri Basava 
 giri.bas...@triliodata.commailto:giri.bas...@triliodata.com 
 giri.bas...@triliodata.com wrote:

Dear All,

This is a great discussion. If I understand this correctly, this is
a proposal for data protection as a whole for the OpenStack cloud,
however this is not yet an official incubation request. We are
having a good discussion on how we can better serve the adoption of
OpenStack.

Having said that, the proposal will reuse the existing API and
contributions by the community that are already in place. For
example, Catlin's point is very valid... the Cinder storage vendor
knows the best way to implement snapshots for their storage
platforms. No doubt, Raksha should be leveraging that IP. Similarly
Raksha will be leveraging Nova, Swift as well as Glance. Don't
forget Neutron networking is very critical part of data
protection for any VM or set of VMs.

No one project has one single answer for a comprehensive data
protection. The capabilities for backup and recovery exist in silos
in various projects...

1. Images are backed-up by Nova
2. Volumes are backed-up by Cinder
3. I am not aware of a solution that can backup network configuration.
4. Not sure if we have something that can backup the resources of a
VM ( vCPUs, Memory Configuration etc.)
5. One can't schedule and automate the above very easily.

Ronen's point about consistency groups is right on the mark. We need
to treat an application as an unit that may span multiple VMs, one
or more images and one or more volumes.

Just to reiterate, some form of these capabilities exist in the
current projects, 

Re: [openstack-dev] [Neutron] Need some clarity on security group protocol numbers vs names

2013-09-11 Thread Akihiro Motoki
Let me raise another aspect of my potential concern about Arvind's
patch https://review.openstack.org/#/c/43725/ .

What I concern about this patch is that this patch changes the
existing behavior which allows unknown protocols (known protocols in
this case is members of sg_suppprted_protocols). This is the behavior
of Grizzly release. The change proposed is not backward compatible.
Assume that a user who already allows unknown protocols using a number
representation. What should he/she do? Should he/she give up the
service? The solution for this needs to be clarified. This is the main
reason.

Many folks seems to agree with the behavior change that unknown
protocols should be now disallowed. If there is a consensus about
this, I will follow the community decision.


On Thu, Sep 12, 2013 at 12:46 AM, Justin Hammond
justin.hamm...@rackspace.com wrote:
 As it seems the review is no longer the place for this discussion, I will
 copy/paste my inline comments here:

 I dislike the idea of passing magical numbers around to define protocols
 (defined or otherwise). I believe there should be a common set of
 protocols with their numbers mapped (such as this constants business) and
 a well defined way to validate/list said common constants. If a plugin
 wishes to add support for a protocol outside of the common case, it should
 be added to the list in a pluggable manner.
 Ex: common defines the constants 1, 6, 17 to be valid but my_cool_plugin
 wants to support 42. It should be my plugin's responsibility to add 42 to
 the list of valid protocols by appending to the list given a pluggable
 interface to do so. I do not believe plugins should continue to update the
 common.constants file with new protocols, but I do believe explicitly
 stating which protocols are valid is better than allowing users to
 possibly submit protocols erroneously.
 If the plugins use a system such as this, it is possible that new, common,
 protocols can be found to be core. See NETWORK_TYPE constants.

 tl;dr: magic constants are no good, but values should be validated in a
 pluggable and explicit manner.



 On 9/11/13 10:40 AM, Akihiro Motoki amot...@gmail.com wrote:

Hi all,

Arvind, thank you for initiate the discussion about the ip protocol in
security group rules.
I think the discussion point can be broken down into:

(a) how to specify ip protocol : by name, number, or both
(b) what ip protocols can be specified: known protocols only, all
protocols (or some subset of protocols including unknown protocols)
 where known protocols is defined as a list in Neutron (a list
of constants or a configurable list)

--
(b) is the main topic Arvind and I discussed in the review.
If only known protocols are allowed, we cannot allow protocols which
are not listed in the known protocol list.
For instance, if tcp, udp and icmp are registered as known
protocols (this is the current neutron implementation),
a tenant cannot allow stcp or gre.

Pros of known protocols only is the infrastructure provider can
control which protocols are allowed.
Cons is that users cannot use ip protocols not listed in a known list
and a provider needs to maintain a known protocol list.
Pros and cons of all protocols allowed is vice versa.

If a list of known protocols is configurable, we can cover both cases,
e.g., an empty list or a list [ANY] means all protocols are allowed.
The question in this case is what is the best default value.

My preference is to allow all protocols. At least a list of known
protocols needs to be configurable.
In my principle, a virtual network should be able to convery any type
of IP protocols in a virtual network. This is the reason of my
preference.

-
Regarding (a), if a name and a number refer to a same protocol, it
should be considered as identical.
For example, ip protocol number 6 is tcp, so ip protocol number 6
and protocol name tcp should be regarded as same.
My preference is to allow both name and number of IP protocol. This
will be achieved by Arvind's patch under the review.
name representation is easy to understand in general, but
maintaining all protocol names is a tough work.
This is the reason of my preference.


I understand there is a topic whether a list of known protocols should
contain name only or accepts both name and number.
I don't discuss it here because it is a simple question once we have a
consensus on the above two topic.

Thanks,
Akihiro

On Wed, Sep 11, 2013 at 11:15 PM, Arvind Somya (asomya)
aso...@cisco.com wrote:
 Hello all

 I have a patch in review where  Akihiro made some comments about only
 restricting protocols by names and allowing all protocol numbers when
 creating security group rules. I personally disagree with this approach
as
 names and numbers are just a textual/integer representation of a common
 protocol. The end result is going to be the same in both cases.

 https://review.openstack.org/#/c/43725/

 Akihiro suggested a community discussion around this issue before the
patch
 is 

Re: [openstack-dev] [savanna] Program name and Mission statement

2013-09-11 Thread Michael Basnight
On Sep 11, 2013, at 8:42 AM, Andrei Savu wrote:

 +1 
 
 I guess this will also clarify how Savanna relates to other projects like 
 OpenStack Trove. 

Yes the conversations around Trove+Savanna will be fun at the summit! I see 
overlap between our missions ;)

 
 -- Andrei Savu
 
 On Wed, Sep 11, 2013 at 5:16 PM, Mike Spreitzer mspre...@us.ibm.com wrote:
  To provide a simple, reliable and repeatable mechanism by which to 
  deploy Hadoop and related Big Data projects, including management, 
  monitoring and processing mechanisms driving further adoption of OpenStack. 
 
 That sounds like it is at about the right level of specificity. 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][oslo] mysql, sqlalchemy and sql_mode

2013-09-11 Thread Clint Byrum
Excerpts from Steven Hardy's message of 2013-09-11 03:37:40 -0700:
 Hi all,
 
 I'm investigating some issues, where data stored to a text column in mysql
 is silently truncated if it's too big.
 
 It appears that the default configuration of mysql, and the sessions
 established via sqlalchemy is to simply warn on truncation rather than
 raise an error.
 
 This seems to me to be almost never what you want, since on retrieval the
 data is corrupt and bad/unexpected stuff is likely.
 
 This AFAICT is a mysql specific issue[1], which can be resolved by setting
 sql_mode to traditional[2,3], after which an error is raised on truncation,
 allowing us to catch the error before the data is stored.
 
 My question is, how do other projects, or oslo.db, handle this atm?
 
 It seems we either have to make sure the DB enforces the schema/model, or
 validate every single value before attempting to store, which seems like an
 unreasonable burden given that the schema changes pretty regularly.
 
 Can any mysql, sqlalchemy and oslo.db experts pitch in with opinions on
 this?

I do think that setting stricter sql modes is the right way to go.

Note that I worked around this within Heat for JSON fields thusly:

https://git.openstack.org/cgit/openstack/heat/commit/?id=1e16ed2d

However, I do think we should make it a priority to protect the database
and the entire service from abnormally large values. The moment at which
we are serializing a data structure to the database is a bit late to
mitigate the cost of handling it. Here is an example of the kind of
border protection we need:

https://review.openstack.org/#/c/44585/

I want to detect that we overflowed a big column, and I think that if
it ever actually happens, it is a critical bug.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][oslo] mysql, sqlalchemy and sql_mode

2013-09-11 Thread David Ripton

On 09/11/2013 06:37 AM, Steven Hardy wrote:


I'm investigating some issues, where data stored to a text column in mysql
is silently truncated if it's too big.

It appears that the default configuration of mysql, and the sessions
established via sqlalchemy is to simply warn on truncation rather than
raise an error.

This seems to me to be almost never what you want, since on retrieval the
data is corrupt and bad/unexpected stuff is likely.

This AFAICT is a mysql specific issue[1], which can be resolved by setting
sql_mode to traditional[2,3], after which an error is raised on truncation,
allowing us to catch the error before the data is stored.

My question is, how do other projects, or oslo.db, handle this atm?

It seems we either have to make sure the DB enforces the schema/model, or
validate every single value before attempting to store, which seems like an
unreasonable burden given that the schema changes pretty regularly.

Can any mysql, sqlalchemy and oslo.db experts pitch in with opinions on
this?


Nova has a PostgreSQL devstack gate, which occasionally catches errors 
that MySQL lets through.  For example, 
https://bugs.launchpad.net/nova/+bug/1217167


Unfortunately we have some MySQL-only code, and PostgreSQL obviously 
can't catch such errors there.


I think we should consider turning off auto-truncation for MySQL on our 
CI boxes.


--
David Ripton   Red Hat   drip...@redhat.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Program name and Mission statement

2013-09-11 Thread Matthew Farrellee

You caught me trying to be fancy!

On 09/10/2013 03:54 PM, Glen Campbell wrote:

performant isn't a word. Or, if it is, it means having performance.
I think you mean high-performance.


On Tue, Sep 10, 2013 at 8:47 AM, Matthew Farrellee m...@redhat.com
mailto:m...@redhat.com wrote:

Rough cut -

Program: OpenStack Data Processing
Mission: To provide the OpenStack community with an open, cutting
edge, performant and scalable data processing stack and associated
management interfaces.


On 09/10/2013 09:26 AM, Sergey Lukjanov wrote:

It sounds too broad IMO. Looks like we need to define Mission
Statement
first.

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

On Sep 10, 2013, at 17:09, Alexander Kuznetsov
akuznet...@mirantis.com mailto:akuznet...@mirantis.com
mailto:akuznetsov@mirantis.__com
mailto:akuznet...@mirantis.com wrote:

My suggestion OpenStack Data Processing.


On Tue, Sep 10, 2013 at 4:15 PM, Sergey Lukjanov
slukja...@mirantis.com mailto:slukja...@mirantis.com
mailto:slukja...@mirantis.com
mailto:slukja...@mirantis.com__ wrote:

 Hi folks,

 due to the Incubator Application we should prepare
Program name
 and Mission statement for Savanna, so, I want to start
mailing
 thread about it.

 Please, provide any ideas here.

 P.S. List of existing programs:
https://wiki.openstack.org/__wiki/Programs
https://wiki.openstack.org/wiki/Programs
 P.P.S.
https://wiki.openstack.org/__wiki/Governance/NewPrograms
https://wiki.openstack.org/wiki/Governance/NewPrograms

 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.


 _
 OpenStack-dev mailing list
OpenStack-dev@lists.openstack.__org
mailto:OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.__openstack.org
mailto:OpenStack-dev@lists.openstack.org


http://lists.openstack.org/__cgi-bin/mailman/listinfo/__openstack-dev
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


_
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.__org
mailto:OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.__openstack.org
mailto:OpenStack-dev@lists.openstack.org

http://lists.openstack.org/__cgi-bin/mailman/listinfo/__openstack-dev
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




_
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.__org
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/__cgi-bin/mailman/listinfo/__openstack-dev
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



_
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.__org
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/__cgi-bin/mailman/listinfo/__openstack-dev 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
*Glen Campbell*
http://glenc.io • @glenc


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Program name and Mission statement

2013-09-11 Thread Matthew Farrellee

That sounds quite good.

Best,


matt

On 09/11/2013 11:42 AM, Andrei Savu wrote:

+1

I guess this will also clarify how Savanna relates to other projects
like OpenStack Trove.

-- Andrei Savu

On Wed, Sep 11, 2013 at 5:16 PM, Mike Spreitzer mspre...@us.ibm.com
mailto:mspre...@us.ibm.com wrote:

  To provide a simple, reliable and repeatable mechanism by which to
  deploy Hadoop and related Big Data projects, including management,
  monitoring and processing mechanisms driving further adoption of
OpenStack.

That sounds like it is at about the right level of specificity.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] How the autoscale API should control scaling in Heat

2013-09-11 Thread Joshua Harlow
Sure,

I was thinking that since heat would do autoscaling persay, then heat would say 
ask trove to make more databases (autoscale policy here) then this would cause 
trove to actually callback into heat to make more instances.

Just feels a little weird, idk.

Why didn't heat just make those instances on behalf of trove to begin with 
and then tell trove make these instances into databases. Then trove doesn't 
really need to worry about calling into heat to do the instance creation 
work, and trove can just worry about converting those blank instances  into 
databases (for example).

But maybe I am missing other context also :)

Sent from my really tiny device...

On Sep 11, 2013, at 8:04 AM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Joshua Harlow's message of 2013-09-11 01:00:37 -0700:
 +1
 
 The assertions are not just applicable to autoscaling but to software in 
 general. I hope we can make autoscaling just enough simple to work.
 
 The circular heat=trove example is one of those that does worry me a 
 little. It feels like something is not structured right if that it is needed 
 (rube goldberg like). I am not sure what could be done differently, just my 
 gut feeling that something is off.
 
 Joshua, can you elaborate on the circular heat=trove example?
 
 I don't see Heat and Trove's relationship as circular. Heat has a Trove
 resource, and (soon? now?) Trove can use Heat to simplify its control
 of underlying systems. This is a stack, not a circle, or did I miss
 something?
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Keystone and Multiple Identity Sources

2013-09-11 Thread Dolph Mathews
On Wed, Sep 11, 2013 at 10:25 AM, Adam Young ayo...@redhat.com wrote:

 David Chadwick wrote up an in depth API extension for Federation:
 https://review.openstack.org/#**/c/39499https://review.openstack.org/#/c/39499
 There is an abfab API proposal as well: https://review.openstack.org/#**
 /c/42221/ https://review.openstack.org/#/c/42221/

 After discussing this for a while, it dawned on me that Federation should
 not be something bolted on to Keystone, but rather that it was already
 central to the design.

 The SQL Identity backend is a simple password store that collects users
 into groups.  This makes it an identity provider (IdP).
 Now Keystone can register multiple LDAP servers as Identity backends.

 There are requests for SAML and ABFAB integration into Keystone as well.

 Instead of a Federation API  Keystone should take the key concepts from
 the API and make them core concepts.  What would this mean:

 1.  Instead of method: federation protocol: abfab  it would be
 method: abfab,
 2.  The rules about multiple round trips (phase)  would go under the
 abfab section.
 3.  There would not be a protocol_data section but rather that would be
 the abfab section as well.
 4.  Provider ID would be standard in the method specific section.


That sounds like it fits with the original intention of the method
portion of the auth API.



 One question that has come up has been about Providers, and whether they
 should be considered endpoints in the Catalog.  THere is a couple issues
 wiuth this:  one is that they are not something managed by OpenStack, and
 two is that they are not necessarily Web Protocols.


What's the use case for including providers in the service catalog? i.e.
why do Identity API clients need to be aware of the Identity Providers?

As such, Provider should probably be First class citizen.  We already have
 LDAP  handled this way, although not as an enumerated entity.


Can you be more specific? What does it mean to be a first class citizen in
this context? The fact that identity is backed by LDAP today is abstracted
away from Identity API clients, for example.


 For the first iteration, I would like to see ABFAB, SAML, and any other
 protocols we support done the same way as LDAP:  a deliberate configuration
 option for Keystone that will require a config file change.

 David and I have discussed this in a side conversation, and agree that it
 requires wider input.




 __**_
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.**org OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/**cgi-bin/mailman/listinfo/**openstack-devhttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [pci device passthrough] fails with NameError: global name '_' is not defined

2013-09-11 Thread Jiang, Yunhong
Sorry for slow response, I'm out of office to IDF, I will have a look on it 
today.

Thanks
--jyh

 -Original Message-
 From: David Kang [mailto:dk...@isi.edu]
 Sent: Wednesday, September 11, 2013 6:11 AM
 To: OpenStack Development Mailing List
 Subject: Re: [openstack-dev] [nova] [pci device passthrough] fails with
 NameError: global name '_' is not defined
 
 
 
 - Original Message -
  From: yongli he yongli...@intel.com
  To: OpenStack Development Mailing List
 openstack-dev@lists.openstack.org
  Sent: Wednesday, September 11, 2013 4:41:13 AM
  Subject: Re: [openstack-dev] [nova] [pci device passthrough] fails with
 NameError: global name '_' is not defined
  于 2013年09月11日 05:38, David Kang 写道:
  
   - Original Message -
   From: Russell Bryant rbry...@redhat.com
   To: David Kang dk...@isi.edu
   Cc: OpenStack Development Mailing List
   openstack-dev@lists.openstack.org
   Sent: Tuesday, September 10, 2013 5:17:15 PM
   Subject: Re: [openstack-dev] [nova] [pci device passthrough] fails
   with NameError: global name '_' is not defined
   On 09/10/2013 05:03 PM, David Kang wrote:
   - Original Message -
   From: Russell Bryant rbry...@redhat.com
   To: OpenStack Development Mailing List
   openstack-dev@lists.openstack.org
   Cc: David Kang dk...@isi.edu
   Sent: Tuesday, September 10, 2013 4:42:41 PM
   Subject: Re: [openstack-dev] [nova] [pci device passthrough]
   fails
   with NameError: global name '_' is not defined
   On 09/10/2013 03:56 PM, David Kang wrote:
 Hi,
  
  I'm trying to test pci device passthrough feature.
   Havana3 is installed using Packstack on CentOS 6.4.
   Nova-compute dies right after start with error NameError:
   global
   name '_' is not defined.
   I'm not sure if it is due to misconfiguration of nova.conf or
   bug.
   Any help will be appreciated.
  
   Here is the info:
  
   /etc/nova/nova.conf:
   pci_alias={name:test, product_id:7190,
   vendor_id:8086,
   device_type:ACCEL}
  
  
 pci_passthrough_whitelist=[{vendor_id:8086,product_id:7190}]
  
 With that configuration, nova-compute fails with the following
 log:
  
  File
  
 /usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py
 ,
  line 461, in _process_data
**args)
  
  File
  
 /usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatch
 er.py,
  line 172, in dispatch
result = getattr(proxyobj, method)(ctxt, **kwargs)
  
  File
  
 /usr/lib/python2.6/site-packages/nova/conductor/manager.py,
  line 567, in object_action
result = getattr(objinst, objmethod)(context, *args,
**kwargs)
  
  File /usr/lib/python2.6/site-packages/nova/objects/base.py,
  line
  141, in wrapper
return fn(self, ctxt, *args, **kwargs)
  
  File
  
 /usr/lib/python2.6/site-packages/nova/objects/pci_device.py,
  line 242, in save
self._from_db_object(context, self, db_pci)
  
   NameError: global name '_' is not defined
   2013-09-10 12:52:23.774 14749 TRACE
   nova.openstack.common.threadgroup Traceback (most recent
 call
   last):
   2013-09-10 12:52:23.774 14749 TRACE
   nova.openstack.common.threadgroup File
  
 /usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.
 py,
   line 117, in wait
   2013-09-10 12:52:23.774 14749 TRACE
   nova.openstack.common.threadgroup x.wait()
   2013-09-10 12:52:23.774 14749 TRACE
   nova.openstack.common.threadgroup File
  
 /usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.
 py,
   line 49, in wait
   2013-09-10 12:52:23.774 14749 TRACE
   nova.openstack.common.threadgroup return self.thread.wait()
   2013-09-10 12:52:23.774 14749 TRACE
   nova.openstack.common.threadgroup File
   /usr/lib/python2.6/site-packages/eventlet/greenthread.py, line
   166, in wait
   2013-09-10 12:52:23.774 14749 TRACE
   nova.openstack.common.threadgroup return
 self._exit_event.wait()
   2013-09-10 12:52:23.774 14749 TRACE
   nova.openstack.common.threadgroup File
   /usr/lib/python2.6/site-packages/eventlet/event.py, line 116,
   in
   wait
   2013-09-10 12:52:23.774 14749 TRACE
   nova.openstack.common.threadgroup return
 hubs.get_hub().switch()
   2013-09-10 12:52:23.774 14749 TRACE
   nova.openstack.common.threadgroup File
   /usr/lib/python2.6/site-packages/eventlet/hubs/hub.py, line
   177,
   in switch
   2013-09-10 12:52:23.774 14749 TRACE
   nova.openstack.common.threadgroup return
 self.greenlet.switch()
   2013-09-10 12:52:23.774 14749 TRACE
   nova.openstack.common.threadgroup File
   /usr/lib/python2.6/site-packages/eventlet/greenthread.py, line
   192, in main
   2013-09-10 12:52:23.774 14749 TRACE
   nova.openstack.common.threadgroup result = function(*args,
   **kwargs)
   2013-09-10 12:52:23.774 14749 TRACE
   nova.openstack.common.threadgroup File
  
 /usr/lib/python2.6/site-packages/nova/openstack/common/service.py,
   line 65, in run_service
   2013-09-10 12:52:23.774 14749 TRACE
   

Re: [openstack-dev] [Neutron] Need some clarity on security group protocol numbers vs names

2013-09-11 Thread Justin Hammond
As it seems the review is no longer the place for this discussion, I will
copy/paste my inline comments here:

I dislike the idea of passing magical numbers around to define protocols
(defined or otherwise). I believe there should be a common set of
protocols with their numbers mapped (such as this constants business) and
a well defined way to validate/list said common constants. If a plugin
wishes to add support for a protocol outside of the common case, it should
be added to the list in a pluggable manner.
Ex: common defines the constants 1, 6, 17 to be valid but my_cool_plugin
wants to support 42. It should be my plugin's responsibility to add 42 to
the list of valid protocols by appending to the list given a pluggable
interface to do so. I do not believe plugins should continue to update the
common.constants file with new protocols, but I do believe explicitly
stating which protocols are valid is better than allowing users to
possibly submit protocols erroneously.
If the plugins use a system such as this, it is possible that new, common,
protocols can be found to be core. See NETWORK_TYPE constants.

tl;dr: magic constants are no good, but values should be validated in a
pluggable and explicit manner.



On 9/11/13 10:40 AM, Akihiro Motoki amot...@gmail.com wrote:

Hi all,

Arvind, thank you for initiate the discussion about the ip protocol in
security group rules.
I think the discussion point can be broken down into:

(a) how to specify ip protocol : by name, number, or both
(b) what ip protocols can be specified: known protocols only, all
protocols (or some subset of protocols including unknown protocols)
 where known protocols is defined as a list in Neutron (a list
of constants or a configurable list)

--
(b) is the main topic Arvind and I discussed in the review.
If only known protocols are allowed, we cannot allow protocols which
are not listed in the known protocol list.
For instance, if tcp, udp and icmp are registered as known
protocols (this is the current neutron implementation),
a tenant cannot allow stcp or gre.

Pros of known protocols only is the infrastructure provider can
control which protocols are allowed.
Cons is that users cannot use ip protocols not listed in a known list
and a provider needs to maintain a known protocol list.
Pros and cons of all protocols allowed is vice versa.

If a list of known protocols is configurable, we can cover both cases,
e.g., an empty list or a list [ANY] means all protocols are allowed.
The question in this case is what is the best default value.

My preference is to allow all protocols. At least a list of known
protocols needs to be configurable.
In my principle, a virtual network should be able to convery any type
of IP protocols in a virtual network. This is the reason of my
preference.

-
Regarding (a), if a name and a number refer to a same protocol, it
should be considered as identical.
For example, ip protocol number 6 is tcp, so ip protocol number 6
and protocol name tcp should be regarded as same.
My preference is to allow both name and number of IP protocol. This
will be achieved by Arvind's patch under the review.
name representation is easy to understand in general, but
maintaining all protocol names is a tough work.
This is the reason of my preference.


I understand there is a topic whether a list of known protocols should
contain name only or accepts both name and number.
I don't discuss it here because it is a simple question once we have a
consensus on the above two topic.

Thanks,
Akihiro

On Wed, Sep 11, 2013 at 11:15 PM, Arvind Somya (asomya)
aso...@cisco.com wrote:
 Hello all

 I have a patch in review where  Akihiro made some comments about only
 restricting protocols by names and allowing all protocol numbers when
 creating security group rules. I personally disagree with this approach
as
 names and numbers are just a textual/integer representation of a common
 protocol. The end result is going to be the same in both cases.

 https://review.openstack.org/#/c/43725/

 Akihiro suggested a community discussion around this issue before the
patch
 is accepted upstream. I hope this e-mail gets the ball rolling on that.
I
 would like to hear the community's opinion on this issue and any
 pros/cons/pitfalls of either approach.

 Thanks
 Arvind

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Akihiro MOTOKI amot...@gmail.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Need some clarity on security group protocol numbers vs names

2013-09-11 Thread Akihiro Motoki
Hi all,

Arvind, thank you for initiate the discussion about the ip protocol in
security group rules.
I think the discussion point can be broken down into:

(a) how to specify ip protocol : by name, number, or both
(b) what ip protocols can be specified: known protocols only, all
protocols (or some subset of protocols including unknown protocols)
 where known protocols is defined as a list in Neutron (a list
of constants or a configurable list)

--
(b) is the main topic Arvind and I discussed in the review.
If only known protocols are allowed, we cannot allow protocols which
are not listed in the known protocol list.
For instance, if tcp, udp and icmp are registered as known
protocols (this is the current neutron implementation),
a tenant cannot allow stcp or gre.

Pros of known protocols only is the infrastructure provider can
control which protocols are allowed.
Cons is that users cannot use ip protocols not listed in a known list
and a provider needs to maintain a known protocol list.
Pros and cons of all protocols allowed is vice versa.

If a list of known protocols is configurable, we can cover both cases,
e.g., an empty list or a list [ANY] means all protocols are allowed.
The question in this case is what is the best default value.

My preference is to allow all protocols. At least a list of known
protocols needs to be configurable.
In my principle, a virtual network should be able to convery any type
of IP protocols in a virtual network. This is the reason of my
preference.

-
Regarding (a), if a name and a number refer to a same protocol, it
should be considered as identical.
For example, ip protocol number 6 is tcp, so ip protocol number 6
and protocol name tcp should be regarded as same.
My preference is to allow both name and number of IP protocol. This
will be achieved by Arvind's patch under the review.
name representation is easy to understand in general, but
maintaining all protocol names is a tough work.
This is the reason of my preference.


I understand there is a topic whether a list of known protocols should
contain name only or accepts both name and number.
I don't discuss it here because it is a simple question once we have a
consensus on the above two topic.

Thanks,
Akihiro

On Wed, Sep 11, 2013 at 11:15 PM, Arvind Somya (asomya)
aso...@cisco.com wrote:
 Hello all

 I have a patch in review where  Akihiro made some comments about only
 restricting protocols by names and allowing all protocol numbers when
 creating security group rules. I personally disagree with this approach as
 names and numbers are just a textual/integer representation of a common
 protocol. The end result is going to be the same in both cases.

 https://review.openstack.org/#/c/43725/

 Akihiro suggested a community discussion around this issue before the patch
 is accepted upstream. I hope this e-mail gets the ball rolling on that. I
 would like to hear the community's opinion on this issue and any
 pros/cons/pitfalls of either approach.

 Thanks
 Arvind

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Akihiro MOTOKI amot...@gmail.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][oslo] mysql, sqlalchemy and sql_mode

2013-09-11 Thread David Ripton

On 09/11/2013 12:28 PM, Monty Taylor wrote:



On 09/11/2013 11:09 AM, David Ripton wrote:

On 09/11/2013 06:37 AM, Steven Hardy wrote:


I'm investigating some issues, where data stored to a text column in
mysql
is silently truncated if it's too big.

It appears that the default configuration of mysql, and the sessions
established via sqlalchemy is to simply warn on truncation rather than
raise an error.

This seems to me to be almost never what you want, since on retrieval the
data is corrupt and bad/unexpected stuff is likely.

This AFAICT is a mysql specific issue[1], which can be resolved by
setting
sql_mode to traditional[2,3], after which an error is raised on
truncation,
allowing us to catch the error before the data is stored.

My question is, how do other projects, or oslo.db, handle this atm?

It seems we either have to make sure the DB enforces the schema/model, or
validate every single value before attempting to store, which seems
like an
unreasonable burden given that the schema changes pretty regularly.

Can any mysql, sqlalchemy and oslo.db experts pitch in with opinions on
this?


Nova has a PostgreSQL devstack gate, which occasionally catches errors
that MySQL lets through.  For example,
https://bugs.launchpad.net/nova/+bug/1217167

Unfortunately we have some MySQL-only code, and PostgreSQL obviously
can't catch such errors there.

I think we should consider turning off auto-truncation for MySQL on our
CI boxes.


Should turn it off everywhere - same as how we auto-configure to use
InnoDB and not MyISAM, we should definitely set strict sql_modes
strings. There is not an operational concern - sql_modes affect app
developers, of which we are they. :)


If it's our DB, we can configure it however we want.  If it's a user's 
DB, and it's potentially also used by other programs, then we need to be 
careful.


We can set strict mode either globally for the DB server, or 
per-session.  My gut says we should do it per-session, even though it's 
a bit annoying to run the code every time we start a session rather than 
once at setup, Just In Case someone is running OpenStack on a MySQL 
server that also does other things, and might not appreciate excessive 
global meddling.


Anyway, I'll propose a patch for this in Icehouse.

--
David Ripton   Red Hat   drip...@redhat.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Program name and Mission statement

2013-09-11 Thread John Speidel

+1

On 9/11/13 1:13 PM, Matthew Farrellee wrote:

That sounds quite good.

Best,


matt

On 09/11/2013 11:42 AM, Andrei Savu wrote:

+1

I guess this will also clarify how Savanna relates to other projects
like OpenStack Trove.

-- Andrei Savu

On Wed, Sep 11, 2013 at 5:16 PM, Mike Spreitzer mspre...@us.ibm.com
mailto:mspre...@us.ibm.com wrote:

  To provide a simple, reliable and repeatable mechanism by 
which to
  deploy Hadoop and related Big Data projects, including 
management,

  monitoring and processing mechanisms driving further adoption of
OpenStack.

That sounds like it is at about the right level of specificity.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Keystone and Multiple Identity Sources

2013-09-11 Thread David Chadwick
Further supplementary information to Adam's email below, is that there 
are already one further federation protocol profiles that has been 
published:

for an external Keystone acting as an IdP at
https://review.openstack.org/#/c/42107/

and another for SAML has been prepared and is ready for publication.

I would expect several additional federation profiles to be published in 
the future, for example, for OpenID Connect and what ever else might be 
just around the corner.


Given the fact that the number of federation protocols is not fixed, and 
will evolve with time, then I would prefer their method of integration 
into Keystone to be common, so that one federation module can handle 
all the non-protocol specific federation features, such as policy and 
trust checking, and this module can have multiple different protocol 
handling modules plugged into it that deal with the protocol specific 
features only. This is the method we have adopted in our current 
implementation of federation, and have shown that it is a viable and 
efficient way of implementation as we currently support three protocol 
profiles (SAML, ABFAB and External Keystone).


Thus I prefer

method: federation protocol: abfab

in which the abfab part would be replaced by the particular protocol, 
and there are common parameters to be used by the federation module


instead of method: abfab

as the latter removes the common parameters from federation, and also 
means that common code wont be used, unless it is cut and paste into 
each protocol specific module.


Comments?

David


On 11/09/2013 16:25, Adam Young wrote:

David Chadwick wrote up an in depth API extension for Federation:
https://review.openstack.org/#/c/39499
There is an abfab API proposal as well:
https://review.openstack.org/#/c/42221/

After discussing this for a while, it dawned on me that Federation
should not be something bolted on to Keystone, but rather that it was
already central to the design.

The SQL Identity backend is a simple password store that collects users
into groups.  This makes it an identity provider (IdP).
Now Keystone can register multiple LDAP servers as Identity backends.

There are requests for SAML and ABFAB integration into Keystone as well.

Instead of a Federation API  Keystone should take the key concepts
from the API and make them core concepts.  What would this mean:

1.  Instead of method: federation protocol: abfab  it would be
method: abfab,
2.  The rules about multiple round trips (phase)  would go under the
abfab section.
3.  There would not be a protocol_data section but rather that would
be the abfab section as well.
4.  Provider ID would be standard in the method specific section.

One question that has come up has been about Providers, and whether they
should be considered endpoints in the Catalog.  THere is a couple issues
wiuth this:  one is that they are not something managed by OpenStack,
and two is that they are not necessarily Web Protocols.  As such,
Provider should probably be First class citizen.  We already have LDAP
handled this way, although not as an enumerated entity.  For the first
iteration, I would like to see ABFAB, SAML, and any other protocols we
support done the same way as LDAP:  a deliberate configuration option
for Keystone that will require a config file change.

David and I have discussed this in a side conversation, and agree that
it requires wider input.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] How the autoscale API should control scaling in Heat

2013-09-11 Thread Clint Byrum
Excerpts from Joshua Harlow's message of 2013-09-11 09:11:06 -0700:
 Sure,
 
 I was thinking that since heat would do autoscaling persay, then heat would 
 say ask trove to make more databases (autoscale policy here) then this would 
 cause trove to actually callback into heat to make more instances.
 
 Just feels a little weird, idk.
 
 Why didn't heat just make those instances on behalf of trove to begin with 
 and then tell trove make these instances into databases. Then trove doesn't 
 really need to worry about calling into heat to do the instance creation 
 work, and trove can just worry about converting those blank instances  
 into databases (for example).
 
 But maybe I am missing other context also :)
 

That sort of optimization would violate encapsulation and make the system
more complex.

Heat doing Trove's provisioning and coordinating Trove's interaction with
other pieces of the system is an implementation detail, safely hidden
behind Trove. Interaction between other pieces of the end user's stack
and Trove is limited to what Trove wants to expose.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Correct way to disable specific event collection by the collector

2013-09-11 Thread Doug Hellmann
You can configure the collector's pipeline to only listen to certain
events, but you shouldn't need to worry about which plugins it actually
loads. See etc/pipeline.yaml in the source tree for an example file. I
don't see any docs for that file, but I might be looking in the wrong
place. If you add the meters you want to the meters list, replacing the
*, then ceilometer should only collect data for the meters you care about.


On Wed, Sep 11, 2013 at 12:17 PM, Neal, Phil phil.n...@hp.com wrote:

 Greetings team,
 I'm working on getting a very streamlined set of collections running and
 I'd like to disable all notifications except Glance. It's clear that the
 desired event types are defined in the plugins, but I can't seem to work
 out how to force the collector service to load only specific handlers in
 the ceilometer.collector namespace. I *thought* it could be accomplished
 by editing /ceilometer/setup.cfg, but removing the entry points there
 didn't seem to work (the extensions manager still picks them up).

 Can someone give me a rough idea of how to do this?

 - Phil


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Need some clarity on security group protocol numbers vs names

2013-09-11 Thread Justin Hammond
I agree with you. Plugin was a mere example and it does make sense to
allow the provider to define custom protocols.

+1

On 9/11/13 12:46 PM, Akihiro Motoki amot...@gmail.com wrote:

Hi Justin,

My point is what

On Thu, Sep 12, 2013 at 12:46 AM, Justin Hammond
justin.hamm...@rackspace.com wrote:
 As it seems the review is no longer the place for this discussion, I
will
 copy/paste my inline comments here:

 I dislike the idea of passing magical numbers around to define protocols
 (defined or otherwise). I believe there should be a common set of
 protocols with their numbers mapped (such as this constants business)
and
 a well defined way to validate/list said common constants.

I agree that value should be validated appropriately in general.
A configurable list of allowed protocols looks good to me.

 wishes to add support for a protocol outside of the common case, it
should
 be added to the list in a pluggable manner.
 Ex: common defines the constants 1, 6, 17 to be valid but my_cool_plugin
 wants to support 42. It should be my plugin's responsibility to add 42
to
 the list of valid protocols by appending to the list given a pluggable
 interface to do so. I do not believe plugins should continue to update
the
 common.constants file with new protocols, but I do believe explicitly
 stating which protocols are valid is better than allowing users to
 possibly submit protocols erroneously.

I think this is just a case a backend plugin defines allowed protocols.

I also see a different case: a cloud provider defines allowed protocols.
For example VLAN network type of OVS plugin can convey any type of packets
including GRE, STCP and so on if a provider wants to do so.
We need to allow a provider to configure the list.

Considering the above, what we need to do looks:
(a) to validate values properly,
(b) to allow a plugin to define what protocols should be allowed
(I think we need two types of lists: possible protocols and
default allowed protocols)
(c) to allow a cloud provider (deployer) to customize allow protocols.
(Of course (c) is a subnet of possible protocols in (b))

Does it make sense?
The above is just a start point of the discussion and some list can be
omitted.

# Whether (c) is needed or not depends on the default list of (b).
# If it is wide enough (c) is not needed. The current list of (b) is
[tcp, udp, icmp]
# and it looks too small set to me, so it is better to have (c) too.

 If the plugins use a system such as this, it is possible that new,
common,
 protocols can be found to be core. See NETWORK_TYPE constants.

I think the situation is a bit different. What network types are
allowed is tightly
coupled with a plugin implementation, and a cloud provider choose a plugin
based on their needs. Thus the mechanism of NETWORK_TYPE constants
make sense to me too.

 tl;dr: magic constants are no good, but values should be validated in a
 pluggable and explicit manner.

As I said above, I agree it is important to validate values properly in
general.

Thanks,
Akihiro




 On 9/11/13 10:40 AM, Akihiro Motoki amot...@gmail.com wrote:

Hi all,

Arvind, thank you for initiate the discussion about the ip protocol in
security group rules.
I think the discussion point can be broken down into:

(a) how to specify ip protocol : by name, number, or both
(b) what ip protocols can be specified: known protocols only, all
protocols (or some subset of protocols including unknown protocols)
 where known protocols is defined as a list in Neutron (a list
of constants or a configurable list)

--
(b) is the main topic Arvind and I discussed in the review.
If only known protocols are allowed, we cannot allow protocols which
are not listed in the known protocol list.
For instance, if tcp, udp and icmp are registered as known
protocols (this is the current neutron implementation),
a tenant cannot allow stcp or gre.

Pros of known protocols only is the infrastructure provider can
control which protocols are allowed.
Cons is that users cannot use ip protocols not listed in a known list
and a provider needs to maintain a known protocol list.
Pros and cons of all protocols allowed is vice versa.

If a list of known protocols is configurable, we can cover both cases,
e.g., an empty list or a list [ANY] means all protocols are allowed.
The question in this case is what is the best default value.

My preference is to allow all protocols. At least a list of known
protocols needs to be configurable.
In my principle, a virtual network should be able to convery any type
of IP protocols in a virtual network. This is the reason of my
preference.

-
Regarding (a), if a name and a number refer to a same protocol, it
should be considered as identical.
For example, ip protocol number 6 is tcp, so ip protocol number 6
and protocol name tcp should be regarded as same.
My preference is to allow both name and number of IP protocol. This
will be achieved by Arvind's patch under the review.
name representation is easy to 

Re: [openstack-dev] Keystone and Multiple Identity Sources

2013-09-11 Thread David Chadwick



On 11/09/2013 19:05, Dolph Mathews wrote:


On Wed, Sep 11, 2013 at 12:31 PM, David Chadwick
d.w.chadw...@kent.ac.uk mailto:d.w.chadw...@kent.ac.uk wrote:

Further supplementary information to Adam's email below, is that
there are already one further federation protocol profiles that has
been published:
for an external Keystone acting as an IdP at
https://review.openstack.org/#__/c/42107/
https://review.openstack.org/#/c/42107/

and another for SAML has been prepared and is ready for publication.

I would expect several additional federation profiles to be
published in the future, for example, for OpenID Connect and what
ever else might be just around the corner.

Given the fact that the number of federation protocols is not fixed,
and will evolve with time, then I would prefer their method of
integration into Keystone to be common, so that one federation
module can handle all the non-protocol specific federation features,
such as policy and trust checking, and this module can have multiple
different protocol handling modules plugged into it that deal with
the protocol specific features only. This is the method we have
adopted in our current implementation of federation, and have shown
that it is a viable and efficient way of implementation as we
currently support three protocol profiles (SAML, ABFAB and External
Keystone).

Thus I prefer

method: federation protocol: abfab

in which the abfab part would be replaced by the particular
protocol, and there are common parameters to be used by the
federation module


instead of method: abfab

as the latter removes the common parameters from federation, and
also means that common code wont be used, unless it is cut and paste
into each protocol specific module.


That sounds like a pretty strong argument in favor of the current
design, assuming the abfab parameters are children of the common
federation parameters (rather than a sibling of the federation
parameters)... which does appear to be the case the current patchset-
https://review.openstack.org/#__/c/42221/
https://review.openstack.org/#/c/42221/


this would require protocol_data to become a child of the other three 
parameters, which can easily be done. The protocol_data is an array of 
any parameters that the protocol specific code wants to put in there. 
The protocol specific profile document specifies what these are.


regards

David





Comments?

David



On 11/09/2013 16:25, Adam Young wrote:

David Chadwick wrote up an in depth API extension for Federation:
https://review.openstack.org/#__/c/39499
https://review.openstack.org/#/c/39499
There is an abfab API proposal as well:
https://review.openstack.org/#__/c/42221/
https://review.openstack.org/#/c/42221/

After discussing this for a while, it dawned on me that Federation
should not be something bolted on to Keystone, but rather that
it was
already central to the design.

The SQL Identity backend is a simple password store that
collects users
into groups.  This makes it an identity provider (IdP).
Now Keystone can register multiple LDAP servers as Identity
backends.

There are requests for SAML and ABFAB integration into Keystone
as well.

Instead of a Federation API  Keystone should take the key concepts
from the API and make them core concepts.  What would this mean:

1.  Instead of method: federation protocol: abfab  it
would be
method: abfab,
2.  The rules about multiple round trips (phase)  would go under the
abfab section.
3.  There would not be a protocol_data section but rather that
would
be the abfab section as well.
4.  Provider ID would be standard in the method specific section.

One question that has come up has been about Providers, and
whether they
should be considered endpoints in the Catalog.  THere is a
couple issues
wiuth this:  one is that they are not something managed by
OpenStack,
and two is that they are not necessarily Web Protocols.  As such,
Provider should probably be First class citizen.  We already
have LDAP
handled this way, although not as an enumerated entity.  For the
first
iteration, I would like to see ABFAB, SAML, and any other
protocols we
support done the same way as LDAP:  a deliberate configuration
option
for Keystone that will require a config file change.

David and I have discussed this in a side conversation, and
agree that
it requires wider input.




_
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.__org

Re: [openstack-dev] Keystone and Multiple Identity Sources

2013-09-11 Thread Dolph Mathews
On Wed, Sep 11, 2013 at 12:31 PM, David Chadwick d.w.chadw...@kent.ac.ukwrote:

 Further supplementary information to Adam's email below, is that there are
 already one further federation protocol profiles that has been published:
 for an external Keystone acting as an IdP at
 https://review.openstack.org/#**/c/42107/https://review.openstack.org/#/c/42107/

 and another for SAML has been prepared and is ready for publication.

 I would expect several additional federation profiles to be published in
 the future, for example, for OpenID Connect and what ever else might be
 just around the corner.

 Given the fact that the number of federation protocols is not fixed, and
 will evolve with time, then I would prefer their method of integration into
 Keystone to be common, so that one federation module can handle all the
 non-protocol specific federation features, such as policy and trust
 checking, and this module can have multiple different protocol handling
 modules plugged into it that deal with the protocol specific features only.
 This is the method we have adopted in our current implementation of
 federation, and have shown that it is a viable and efficient way of
 implementation as we currently support three protocol profiles (SAML, ABFAB
 and External Keystone).

 Thus I prefer

 method: federation protocol: abfab

 in which the abfab part would be replaced by the particular protocol, and
 there are common parameters to be used by the federation module


 instead of method: abfab

 as the latter removes the common parameters from federation, and also
 means that common code wont be used, unless it is cut and paste into each
 protocol specific module.


That sounds like a pretty strong argument in favor of the current design,
assuming the abfab parameters are children of the common federation
parameters (rather than a sibling of the federation parameters)... which
does appear to be the case the current patchset-
https://review.openstack.org/#**/c/42221/https://review.openstack.org/#/c/42221/



 Comments?

 David



 On 11/09/2013 16:25, Adam Young wrote:

 David Chadwick wrote up an in depth API extension for Federation:
 https://review.openstack.org/#**/c/39499https://review.openstack.org/#/c/39499
 There is an abfab API proposal as well:
 https://review.openstack.org/#**/c/42221/https://review.openstack.org/#/c/42221/

 After discussing this for a while, it dawned on me that Federation
 should not be something bolted on to Keystone, but rather that it was
 already central to the design.

 The SQL Identity backend is a simple password store that collects users
 into groups.  This makes it an identity provider (IdP).
 Now Keystone can register multiple LDAP servers as Identity backends.

 There are requests for SAML and ABFAB integration into Keystone as well.

 Instead of a Federation API  Keystone should take the key concepts
 from the API and make them core concepts.  What would this mean:

 1.  Instead of method: federation protocol: abfab  it would be
 method: abfab,
 2.  The rules about multiple round trips (phase)  would go under the
 abfab section.
 3.  There would not be a protocol_data section but rather that would
 be the abfab section as well.
 4.  Provider ID would be standard in the method specific section.

 One question that has come up has been about Providers, and whether they
 should be considered endpoints in the Catalog.  THere is a couple issues
 wiuth this:  one is that they are not something managed by OpenStack,
 and two is that they are not necessarily Web Protocols.  As such,
 Provider should probably be First class citizen.  We already have LDAP
 handled this way, although not as an enumerated entity.  For the first
 iteration, I would like to see ABFAB, SAML, and any other protocols we
 support done the same way as LDAP:  a deliberate configuration option
 for Keystone that will require a config file change.

 David and I have discussed this in a side conversation, and agree that
 it requires wider input.




 __**_
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.**org OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/**cgi-bin/mailman/listinfo/**openstack-devhttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __**_
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.**org OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/**cgi-bin/mailman/listinfo/**openstack-devhttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Need some clarity on security group protocol numbers vs names

2013-09-11 Thread Akihiro Motoki
Hi Justin,

My point is what

On Thu, Sep 12, 2013 at 12:46 AM, Justin Hammond
justin.hamm...@rackspace.com wrote:
 As it seems the review is no longer the place for this discussion, I will
 copy/paste my inline comments here:

 I dislike the idea of passing magical numbers around to define protocols
 (defined or otherwise). I believe there should be a common set of
 protocols with their numbers mapped (such as this constants business) and
 a well defined way to validate/list said common constants.

I agree that value should be validated appropriately in general.
A configurable list of allowed protocols looks good to me.

 wishes to add support for a protocol outside of the common case, it should
 be added to the list in a pluggable manner.
 Ex: common defines the constants 1, 6, 17 to be valid but my_cool_plugin
 wants to support 42. It should be my plugin's responsibility to add 42 to
 the list of valid protocols by appending to the list given a pluggable
 interface to do so. I do not believe plugins should continue to update the
 common.constants file with new protocols, but I do believe explicitly
 stating which protocols are valid is better than allowing users to
 possibly submit protocols erroneously.

I think this is just a case a backend plugin defines allowed protocols.

I also see a different case: a cloud provider defines allowed protocols.
For example VLAN network type of OVS plugin can convey any type of packets
including GRE, STCP and so on if a provider wants to do so.
We need to allow a provider to configure the list.

Considering the above, what we need to do looks:
(a) to validate values properly,
(b) to allow a plugin to define what protocols should be allowed
(I think we need two types of lists: possible protocols and
default allowed protocols)
(c) to allow a cloud provider (deployer) to customize allow protocols.
(Of course (c) is a subnet of possible protocols in (b))

Does it make sense?
The above is just a start point of the discussion and some list can be omitted.

# Whether (c) is needed or not depends on the default list of (b).
# If it is wide enough (c) is not needed. The current list of (b) is
[tcp, udp, icmp]
# and it looks too small set to me, so it is better to have (c) too.

 If the plugins use a system such as this, it is possible that new, common,
 protocols can be found to be core. See NETWORK_TYPE constants.

I think the situation is a bit different. What network types are
allowed is tightly
coupled with a plugin implementation, and a cloud provider choose a plugin
based on their needs. Thus the mechanism of NETWORK_TYPE constants
make sense to me too.

 tl;dr: magic constants are no good, but values should be validated in a
 pluggable and explicit manner.

As I said above, I agree it is important to validate values properly in general.

Thanks,
Akihiro




 On 9/11/13 10:40 AM, Akihiro Motoki amot...@gmail.com wrote:

Hi all,

Arvind, thank you for initiate the discussion about the ip protocol in
security group rules.
I think the discussion point can be broken down into:

(a) how to specify ip protocol : by name, number, or both
(b) what ip protocols can be specified: known protocols only, all
protocols (or some subset of protocols including unknown protocols)
 where known protocols is defined as a list in Neutron (a list
of constants or a configurable list)

--
(b) is the main topic Arvind and I discussed in the review.
If only known protocols are allowed, we cannot allow protocols which
are not listed in the known protocol list.
For instance, if tcp, udp and icmp are registered as known
protocols (this is the current neutron implementation),
a tenant cannot allow stcp or gre.

Pros of known protocols only is the infrastructure provider can
control which protocols are allowed.
Cons is that users cannot use ip protocols not listed in a known list
and a provider needs to maintain a known protocol list.
Pros and cons of all protocols allowed is vice versa.

If a list of known protocols is configurable, we can cover both cases,
e.g., an empty list or a list [ANY] means all protocols are allowed.
The question in this case is what is the best default value.

My preference is to allow all protocols. At least a list of known
protocols needs to be configurable.
In my principle, a virtual network should be able to convery any type
of IP protocols in a virtual network. This is the reason of my
preference.

-
Regarding (a), if a name and a number refer to a same protocol, it
should be considered as identical.
For example, ip protocol number 6 is tcp, so ip protocol number 6
and protocol name tcp should be regarded as same.
My preference is to allow both name and number of IP protocol. This
will be achieved by Arvind's patch under the review.
name representation is easy to understand in general, but
maintaining all protocol names is a tough work.
This is the reason of my preference.


I understand there is a topic whether a list of known protocols should

Re: [openstack-dev] [Heat] How the autoscale API should control scaling in Heat

2013-09-11 Thread Joshua Harlow
I just have this idea that if u imagine a factory. Heat is the 'robot' in
an assembly line that ensures the 'assembly line' is done correctly. At
different stages heat makes sure the 'person/thing' putting a part on does
it correctly and heat verifies that the part is in the right place (for
example, nova didn't put the wheel on backwards). The 'robot' then moves
the partially completed part to the next person and repeats the same
checks.

So to me, autoscaling say a database would be like going through the
stages of that assembly line via a non-user triggered system (the
autoscaler) and then the final 'paint job' on the vms would be done by the
handoff from heat - trove. Then trove doesn't need to call back into heat
to make vms that it uses, heat does this for trove as part of the assembly
line.

+2 for factory example, ha.

On 9/11/13 9:11 AM, Joshua Harlow harlo...@yahoo-inc.com wrote:

Sure,

I was thinking that since heat would do autoscaling persay, then heat
would say ask trove to make more databases (autoscale policy here) then
this would cause trove to actually callback into heat to make more
instances.

Just feels a little weird, idk.

Why didn't heat just make those instances on behalf of trove to begin
with and then tell trove make these instances into databases. Then
trove doesn't really need to worry about calling into heat to do the
instance creation work, and trove can just worry about converting those
blank instances  into databases (for example).

But maybe I am missing other context also :)

Sent from my really tiny device...

On Sep 11, 2013, at 8:04 AM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Joshua Harlow's message of 2013-09-11 01:00:37 -0700:
 +1
 
 The assertions are not just applicable to autoscaling but to software
in general. I hope we can make autoscaling just enough simple to work.
 
 The circular heat=trove example is one of those that does worry me a
little. It feels like something is not structured right if that it is
needed (rube goldberg like). I am not sure what could be done
differently, just my gut feeling that something is off.
 
 Joshua, can you elaborate on the circular heat=trove example?
 
 I don't see Heat and Trove's relationship as circular. Heat has a Trove
 resource, and (soon? now?) Trove can use Heat to simplify its control
 of underlying systems. This is a stack, not a circle, or did I miss
 something?
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] TC Meeting / Savanna Incubation Follow-Up

2013-09-11 Thread Sergey Lukjanov
Hi folks,

Initial discussions of Savanna Incubation request have been started yesterday. 
Two major topics being discussed were Heat integration and “clustering library” 
[1].

To start with let me give a brief overview of key Savanna features:
1. Provisioning of underlying OpenStack resources (like compute, volume, 
network) required for Hadoop cluster.
2. Hadoop cluster deployment and configuration.
3. Integration with different Hadoop distributions through plugin mechanism 
with single control plan for all of them. In future can be used to integrate 
with other Data Processing frameworks, for example, Twitter Storm.
4. Reliability and performance optimizations to ensure Hadoop cluster 
performance on top of OpenStack, like enabling Swift to be used as underlying 
HDFS and exposing information on Swift data locality to Hadoop scheduler.
5. Set of Elastic Data Processing features:
  * Hadoop jobs on-demand execution
  * Pool of different external data sources, like Swift, external Hadoop 
cluster, NoSQL and traditional databases
  * Pig and Hive integration
6. OpenStack Dashboard plugin for all above.

I highly recommend to view our screencast about Savanna 0.2 release (mid July) 
[2] to better understand Savanna functionality. 

As you can see, resources provisioning is just one of the features and the 
implementation details are not critical for overall architecture. It performs 
only the first step of the cluster setup. We’ve been considering Heat for a 
while, but ended up direct API calls in favor of speed and simplicity. Going 
forward Heat integration will be done by implementing extension mechanism [3] 
and [4] as part of Icehouse release.

The next part, Hadoop cluster configuration, already extensible and we have 
several plugins - Vanilla, Hortonworks Data Platform and Cloudera plugin 
started too. This allow to unify management of different Hadoop distributions 
under single control plane. The plugins are responsible for correct Hadoop 
ecosystem configuration at already provisioned resources and use different 
Hadoop management tools like Ambari to setup and configure all cluster  
services, so, there are no actual provisioning configs on Savanna side in this 
case. Savanna and its plugins encapsulate the knowledge of Hadoop internals and 
default configuration for Hadoop services.



The next topic is “Cluster API”.

The concern that was raised is how to extract general clustering functionality 
to the common library. Cluster provisioning and management topic currently 
relevant for a number of projects within OpenStack ecosystem: Savanna, Trove, 
TripleO, Heat, Taskflow.

Still each of the projects has their own understanding of what the cluster 
provisioning is. The idea of extracting common functionality sounds reasonable, 
but details still need to be worked out. 

I’ll try to highlight Savanna team current perspective on this question. Notion 
of “Cluster management” in my perspective has several levels:
1. Resources provisioning and configuration (like instances, networks, 
storages). Heat is the main tool with possibly additional support from 
underlying services. For example, instance grouping API extension [5] in Nova 
would be very useful. 
2. Distributed communication/task execution. There is a project in OpenStack 
ecosystem with the mission to provide a framework for distributed task 
execution - TaskFlow [6]. It’s been started quite recently. In Savanna we are 
really looking forward to use more and more of its functionality in I and J 
cycles as TaskFlow itself getting more mature.
3. Higher level clustering - management of the actual services working on top 
of the infrastructure. For example, in Savanna configuring HDFS data nodes or 
in Trove setting up MySQL cluster with Percona or Galera. This operations are 
typical very specific for the project domain. As for Savanna specifically, we 
use lots of benefits of Hadoop internals knowledge to deploy and configure it 
properly.

Overall conclusion it seems to be that it make sense to enhance Heat 
capabilities and invest in Taskflow development, leaving domain-specific 
operations to the individual projects.

I also would like to emphasize that in Savanna Hadoop cluster management is 
already implemented including scaling support.

With all this I do believe Savanna fills an important gap in OpenStack by 
providing Data Processing capabilities in cloud environment in general and 
integration with Hadoop ecosystem as the first particular step. 

Hadoop ecosystem on its own is huge and integration will add significant value 
to OpenStack community and users [7].


[1] http://eavesdrop.openstack.org/meetings/tc/2013/tc.2013-09-10-20.02.log.html
[2] http://www.youtube.com/watch?v=SrlHM0-q5zI
[3] https://blueprints.launchpad.net/savanna/+spec/infra-provisioning-extensions
[4] 
https://blueprints.launchpad.net/savanna/+spec/heat-backed-resources-provisioning
[5] 

[openstack-dev] [qa] nominations for tempest-core

2013-09-11 Thread Sean Dague
We're in Feature Freeze for the Open Stack projects, which actually 
means we're starting the busy cycle for Tempest in people landing 
additional tests for verification of features that hadn't gone in until 
recently. As such, I think now is a good time to consider some new core 
members. There are two people I think have been doing an exceptional job 
that we should include in the core group


Mark Koderer has been spear heading the stress testing in Tempest, 
completing the new stress testing for the H3 milestone, and has gotten 
very active in reviews over the last three months.


You can see his contributions here: 
https://review.openstack.org/#/q/project:openstack/tempest+owner:m.koderer%2540telekom.de,n,z


And his code reviews here: his reviews here - 
https://review.openstack.org/#/q/project:openstack/tempest+reviewer:m.koderer%2540telekom.de,n,z



Giulio Fidente did a lot of great work bringing our volumes testing up 
to par early in the cycle, and has been very active in reviews since the 
Havana cycle opened up.


You can see his contributions here: 
https://review.openstack.org/#/q/project:openstack/tempest+owner:gfidente%2540redhat.com,n,z


And his code reviews here: his reviews here - 
https://review.openstack.org/#/q/project:openstack/tempest+reviewer:gfidente%2540redhat.com,n,z



Both have been active in blueprints and the openstack-qa meetings all 
summer long, and I think would make excellent additions to the Tempest 
core team.


Current QA core members, please vote +1 or -1 to these nominations when 
you get a chance. We'll keep the polls open for 5 days or until everyone 
has voiced their votes.


For reference here are the 90 day review stats for Tempest as of today:

Reviews for the last 90 days in tempest
** -- tempest-core team member
+--+---+
|   Reviewer   | Reviews (-2|-1|+1|+2) (+/- ratio) |
+--+---+
| afazekas **  | 275 (1|29|18|227) (89.1%) |
|  sdague **   |  198 (4|60|0|134) (67.7%) |
|   gfidente   |  130 (0|55|75|0) (57.7%)  |
|david-kranz **|  112 (1|24|0|87) (77.7%)  |
| treinish **  |  109 (5|32|0|72) (66.1%)  |
|  cyeoh-0 **  |   87 (0|19|4|64) (78.2%)  |
|   mkoderer   |   69 (0|20|49|0) (71.0%)  |
| jaypipes **  |   65 (0|22|0|43) (66.2%)  |
|igawa |   49 (0|10|39|0) (79.6%)  |
|   oomichi|   30 (0|9|21|0) (70.0%)   |
| jogo |   26 (0|12|14|0) (53.8%)  |
|   adalbas|   22 (0|4|18|0) (81.8%)   |
| ravikumar-venkatesan |   22 (0|2|20|0) (90.9%)   |
|   ivan-zhu   |   21 (0|10|11|0) (52.4%)  |
|   mriedem|13 (0|4|9|0) (69.2%)   |
|   andrea-frittoli|12 (0|4|8|0) (66.7%)   |
|   mkollaro   |10 (0|5|5|0) (50.0%)   |
|  zhikunliu   |10 (0|4|6|0) (60.0%)   |
|Anju5 |9 (0|0|9|0) (100.0%)   |
|   anteaya|7 (0|3|4|0) (57.1%)|
| Anju |7 (0|0|7|0) (100.0%)   |
|   steve-stevebaker   |6 (0|3|3|0) (50.0%)|
|   prekarat   |5 (0|3|2|0) (40.0%)|
|rahmu |5 (0|2|3|0) (60.0%)|
|   psedlak|4 (0|3|1|0) (25.0%)|
|minsel|4 (0|3|1|0) (25.0%)|
|zhiteng-huang |3 (0|2|1|0) (33.3%)|
| maru |3 (0|1|2|0) (66.7%)|
|   iwienand   |3 (0|1|2|0) (66.7%)|
|FujiokaYuuichi|3 (0|1|2|0) (66.7%)|
|dolph |3 (0|0|3|0) (100.0%)   |
| cthiel-suse  |3 (0|0|3|0) (100.0%)   |
|walter-boring | 2 (0|2|0|0) (0.0%)|
|bnemec| 2 (0|2|0|0) (0.0%)|
|   lifeless   |2 (0|1|1|0) (50.0%)|
|fabien-boucher|2 (0|1|1|0) (50.0%)|
| alex_gaynor  |2 (0|1|1|0) (50.0%)|
|alaski|2 (0|1|1|0) (50.0%)|
|   krtaylor   |2 (0|0|2|0) (100.0%)   |
|   cbehrens   |2 (0|0|2|0) (100.0%)   |
|   Sumanth|2 (0|0|2|0) (100.0%)   |
| ttx  | 1 (0|1|0|0) (0.0%)|
|   rvaknin| 1 (0|1|0|0) (0.0%)|
| rohitkarajgi | 1 (0|1|0|0) (0.0%)|
|   ndipanov   | 1 (0|1|0|0) (0.0%)|
|   michaeltchapman| 1 (0|1|0|0) (0.0%)|
|   maurosr| 1 (0|1|0|0) (0.0%)|
|  mate-lakat  | 1 (0|1|0|0) (0.0%)|
|   jecarey| 1 (0|1|0|0) (0.0%)|
| jdc  |

Re: [openstack-dev] [qa] nominations for tempest-core

2013-09-11 Thread Matthew Treinish
+1 for both of them. They've both done great work.

-Matt Treinish

On Wed, Sep 11, 2013 at 04:32:11PM -0400, Sean Dague wrote:
 We're in Feature Freeze for the Open Stack projects, which actually
 means we're starting the busy cycle for Tempest in people landing
 additional tests for verification of features that hadn't gone in
 until recently. As such, I think now is a good time to consider some
 new core members. There are two people I think have been doing an
 exceptional job that we should include in the core group
 
 Mark Koderer has been spear heading the stress testing in Tempest,
 completing the new stress testing for the H3 milestone, and has
 gotten very active in reviews over the last three months.
 
 You can see his contributions here: 
 https://review.openstack.org/#/q/project:openstack/tempest+owner:m.koderer%2540telekom.de,n,z
 
 And his code reviews here: his reviews here - 
 https://review.openstack.org/#/q/project:openstack/tempest+reviewer:m.koderer%2540telekom.de,n,z
 
 
 Giulio Fidente did a lot of great work bringing our volumes testing
 up to par early in the cycle, and has been very active in reviews
 since the Havana cycle opened up.
 
 You can see his contributions here: 
 https://review.openstack.org/#/q/project:openstack/tempest+owner:gfidente%2540redhat.com,n,z
 
 And his code reviews here: his reviews here - 
 https://review.openstack.org/#/q/project:openstack/tempest+reviewer:gfidente%2540redhat.com,n,z
 
 
 Both have been active in blueprints and the openstack-qa meetings
 all summer long, and I think would make excellent additions to the
 Tempest core team.
 
 Current QA core members, please vote +1 or -1 to these nominations
 when you get a chance. We'll keep the polls open for 5 days or until
 everyone has voiced their votes.
 
 For reference here are the 90 day review stats for Tempest as of today:
 
 Reviews for the last 90 days in tempest
 ** -- tempest-core team member
 +--+---+
 |   Reviewer   | Reviews (-2|-1|+1|+2) (+/- ratio) |
 +--+---+
 | afazekas **  | 275 (1|29|18|227) (89.1%) |
 |  sdague **   |  198 (4|60|0|134) (67.7%) |
 |   gfidente   |  130 (0|55|75|0) (57.7%)  |
 |david-kranz **|  112 (1|24|0|87) (77.7%)  |
 | treinish **  |  109 (5|32|0|72) (66.1%)  |
 |  cyeoh-0 **  |   87 (0|19|4|64) (78.2%)  |
 |   mkoderer   |   69 (0|20|49|0) (71.0%)  |
 | jaypipes **  |   65 (0|22|0|43) (66.2%)  |
 |igawa |   49 (0|10|39|0) (79.6%)  |
 |   oomichi|   30 (0|9|21|0) (70.0%)   |
 | jogo |   26 (0|12|14|0) (53.8%)  |
 |   adalbas|   22 (0|4|18|0) (81.8%)   |
 | ravikumar-venkatesan |   22 (0|2|20|0) (90.9%)   |
 |   ivan-zhu   |   21 (0|10|11|0) (52.4%)  |
 |   mriedem|13 (0|4|9|0) (69.2%)   |
 |   andrea-frittoli|12 (0|4|8|0) (66.7%)   |
 |   mkollaro   |10 (0|5|5|0) (50.0%)   |
 |  zhikunliu   |10 (0|4|6|0) (60.0%)   |
 |Anju5 |9 (0|0|9|0) (100.0%)   |
 |   anteaya|7 (0|3|4|0) (57.1%)|
 | Anju |7 (0|0|7|0) (100.0%)   |
 |   steve-stevebaker   |6 (0|3|3|0) (50.0%)|
 |   prekarat   |5 (0|3|2|0) (40.0%)|
 |rahmu |5 (0|2|3|0) (60.0%)|
 |   psedlak|4 (0|3|1|0) (25.0%)|
 |minsel|4 (0|3|1|0) (25.0%)|
 |zhiteng-huang |3 (0|2|1|0) (33.3%)|
 | maru |3 (0|1|2|0) (66.7%)|
 |   iwienand   |3 (0|1|2|0) (66.7%)|
 |FujiokaYuuichi|3 (0|1|2|0) (66.7%)|
 |dolph |3 (0|0|3|0) (100.0%)   |
 | cthiel-suse  |3 (0|0|3|0) (100.0%)   |
 |walter-boring | 2 (0|2|0|0) (0.0%)|
 |bnemec| 2 (0|2|0|0) (0.0%)|
 |   lifeless   |2 (0|1|1|0) (50.0%)|
 |fabien-boucher|2 (0|1|1|0) (50.0%)|
 | alex_gaynor  |2 (0|1|1|0) (50.0%)|
 |alaski|2 (0|1|1|0) (50.0%)|
 |   krtaylor   |2 (0|0|2|0) (100.0%)   |
 |   cbehrens   |2 (0|0|2|0) (100.0%)   |
 |   Sumanth|2 (0|0|2|0) (100.0%)   |
 | ttx  | 1 (0|1|0|0) (0.0%)|
 |   rvaknin| 1 (0|1|0|0) (0.0%)|
 | rohitkarajgi | 1 (0|1|0|0) (0.0%)|
 |   ndipanov   | 1 (0|1|0|0) (0.0%)|
 |   michaeltchapman| 1 (0|1|0|0) (0.0%)|
 |   maurosr   

Re: [openstack-dev] [Neutron] New plugin

2013-09-11 Thread Salvatore Orlando
Hi Marc,

Perhaps this guide [1] might help you going through the process of signign
the CLA and pushing your code to gerrit for review.

Salvatore

[1] https://wiki.openstack.org/wiki/How_To_Contribute


On 11 September 2013 23:13, Marc PINHEDE pinhede.m...@netvirt.ca wrote:

 Hello,

 I am Marc Pinhède, working in Netvirt with professor Omar Cherkaoui.

 We started working on a Neutron plugin. A first version is now almost
 ready.
 To inform the community, we posted a blueprint:

 https://blueprints.launchpad.net/neutron/+spec/modular-adaptative-plugin

 We would like to make our code available soon. But wiki page
 https://wiki.openstack.org/wiki/NeutronDevelopment does not gives many
 clues on where and how to post code.

 As Havana is in feature-freeze stage, discussions on the blueprint and
 eventual integration in Neutron core may come once Havana would be released.

 Waiting for this, where is the better place to make our code available?

 Marc Pinhède

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Need some clarity on security group protocol numbers vs names

2013-09-11 Thread Mark McClain

On Sep 11, 2013, at 1:46 PM, Akihiro Motoki amot...@gmail.com wrote:
 
 On Thu, Sep 12, 2013 at 12:46 AM, Justin Hammond
 justin.hamm...@rackspace.com wrote:
 As it seems the review is no longer the place for this discussion, I will
 copy/paste my inline comments here:
 
 I dislike the idea of passing magical numbers around to define protocols
 (defined or otherwise). I believe there should be a common set of
 protocols with their numbers mapped (such as this constants business) and
 a well defined way to validate/list said common constants.
 
 I agree that value should be validated appropriately in general.
 A configurable list of allowed protocols looks good to me.

I'm -2.  The original bug has morphed into a mini-feature and is not allowable 
under our feature freeze rules.

There are many valid reasons for allowing 41, 47, etc to a guest and we should 
continue to allow 0=proto_num=255 in Havana.  We should also refocus on the 
original bug intent and normalize the data to prevent duplicate rules in the 
common cases (tcp, udp, icmp, icmp, icmpv6).

Any other changes should be open for discussion in Icehouse as we'll need to 
consider the deployment and backwards compatibility issues.  Feel free to 
proposal a session on this for the Hong Kong summit.

mark


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Keystone and Multiple Identity Sources

2013-09-11 Thread Adam Young

On 09/11/2013 12:35 PM, Dolph Mathews wrote:


On Wed, Sep 11, 2013 at 10:25 AM, Adam Young ayo...@redhat.com 
mailto:ayo...@redhat.com wrote:


David Chadwick wrote up an in depth API extension for Federation:
https://review.openstack.org/#/c/39499
There is an abfab API proposal as well:
https://review.openstack.org/#/c/42221/

After discussing this for a while, it dawned on me that Federation
should not be something bolted on to Keystone, but rather that it
was already central to the design.

The SQL Identity backend is a simple password store that collects
users into groups.  This makes it an identity provider (IdP).
Now Keystone can register multiple LDAP servers as Identity backends.

There are requests for SAML and ABFAB integration into Keystone as
well.

Instead of a Federation API  Keystone should take the key
concepts from the API and make them core concepts.  What would
this mean:

1.  Instead of method: federation protocol: abfab  it
would be method: abfab,
2.  The rules about multiple round trips (phase)  would go under
the abfab section.
3.  There would not be a protocol_data section but rather that
would be the abfab section as well.
4.  Provider ID would be standard in the method specific section.


That sounds like it fits with the original intention of the method 
portion of the auth API.



One question that has come up has been about Providers, and
whether they should be considered endpoints in the Catalog.  THere
is a couple issues wiuth this:  one is that they are not something
managed by OpenStack, and two is that they are not necessarily Web
Protocols.


What's the use case for including providers in the service catalog? 
i.e. why do Identity API clients need to be aware of the Identity 
Providers?
In the federation protocol API, the user can specify the IdP that they 
are using. Keystone needs to know what are the set of acceptable IdPs, 
somehow.  The first thought was reuse of the Service catalog.
It probably makes sense to let an administrator enumerate the IdPs 
registered with Keystone, and what protocol each supports.





As such, Provider should probably be First class citizen.  We
already have LDAP  handled this way, although not as an enumerated
entity.


Can you be more specific? What does it mean to be a first class 
citizen in this context? The fact that identity is backed by LDAP 
today is abstracted away from Identity API clients, for example.


For the first iteration, I would like to see ABFAB, SAML, and any
other protocols we support done the same way as LDAP:  a
deliberate configuration option for Keystone that will require a
config file change.

David and I have discussed this in a side conversation, and agree
that it requires wider input.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--

-Dolph


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] nominations for tempest-core

2013-09-11 Thread David Kranz

+1 to both!

On 09/11/2013 04:32 PM, Sean Dague wrote:
We're in Feature Freeze for the Open Stack projects, which actually 
means we're starting the busy cycle for Tempest in people landing 
additional tests for verification of features that hadn't gone in 
until recently. As such, I think now is a good time to consider some 
new core members. There are two people I think have been doing an 
exceptional job that we should include in the core group


Mark Koderer has been spear heading the stress testing in Tempest, 
completing the new stress testing for the H3 milestone, and has gotten 
very active in reviews over the last three months.


You can see his contributions here: 
https://review.openstack.org/#/q/project:openstack/tempest+owner:m.koderer%2540telekom.de,n,z


And his code reviews here: his reviews here - 
https://review.openstack.org/#/q/project:openstack/tempest+reviewer:m.koderer%2540telekom.de,n,z



Giulio Fidente did a lot of great work bringing our volumes testing up 
to par early in the cycle, and has been very active in reviews since 
the Havana cycle opened up.


You can see his contributions here: 
https://review.openstack.org/#/q/project:openstack/tempest+owner:gfidente%2540redhat.com,n,z


And his code reviews here: his reviews here - 
https://review.openstack.org/#/q/project:openstack/tempest+reviewer:gfidente%2540redhat.com,n,z



Both have been active in blueprints and the openstack-qa meetings all 
summer long, and I think would make excellent additions to the Tempest 
core team.


Current QA core members, please vote +1 or -1 to these nominations 
when you get a chance. We'll keep the polls open for 5 days or until 
everyone has voiced their votes.


For reference here are the 90 day review stats for Tempest as of today:

Reviews for the last 90 days in tempest
** -- tempest-core team member
+--+---+
|   Reviewer   | Reviews (-2|-1|+1|+2) (+/- ratio) |
+--+---+
| afazekas **  | 275 (1|29|18|227) (89.1%) |
|  sdague **   |  198 (4|60|0|134) (67.7%) |
|   gfidente   |  130 (0|55|75|0) (57.7%)  |
|david-kranz **|  112 (1|24|0|87) (77.7%)  |
| treinish **  |  109 (5|32|0|72) (66.1%)  |
|  cyeoh-0 **  |   87 (0|19|4|64) (78.2%)  |
|   mkoderer   |   69 (0|20|49|0) (71.0%)  |
| jaypipes **  |   65 (0|22|0|43) (66.2%)  |
|igawa |   49 (0|10|39|0) (79.6%)  |
|   oomichi|   30 (0|9|21|0) (70.0%)   |
| jogo |   26 (0|12|14|0) (53.8%)  |
|   adalbas|   22 (0|4|18|0) (81.8%)   |
| ravikumar-venkatesan |   22 (0|2|20|0) (90.9%)   |
|   ivan-zhu   |   21 (0|10|11|0) (52.4%)  |
|   mriedem|13 (0|4|9|0) (69.2%)   |
|   andrea-frittoli|12 (0|4|8|0) (66.7%)   |
|   mkollaro   |10 (0|5|5|0) (50.0%)   |
|  zhikunliu   |10 (0|4|6|0) (60.0%)   |
|Anju5 |9 (0|0|9|0) (100.0%)   |
|   anteaya|7 (0|3|4|0) (57.1%)|
| Anju |7 (0|0|7|0) (100.0%)   |
|   steve-stevebaker   |6 (0|3|3|0) (50.0%)|
|   prekarat   |5 (0|3|2|0) (40.0%)|
|rahmu |5 (0|2|3|0) (60.0%)|
|   psedlak|4 (0|3|1|0) (25.0%)|
|minsel|4 (0|3|1|0) (25.0%)|
|zhiteng-huang |3 (0|2|1|0) (33.3%)|
| maru |3 (0|1|2|0) (66.7%)|
|   iwienand   |3 (0|1|2|0) (66.7%)|
|FujiokaYuuichi|3 (0|1|2|0) (66.7%)|
|dolph |3 (0|0|3|0) (100.0%)   |
| cthiel-suse  |3 (0|0|3|0) (100.0%)   |
|walter-boring | 2 (0|2|0|0) (0.0%)|
|bnemec| 2 (0|2|0|0) (0.0%)|
|   lifeless   |2 (0|1|1|0) (50.0%)|
|fabien-boucher|2 (0|1|1|0) (50.0%)|
| alex_gaynor  |2 (0|1|1|0) (50.0%)|
|alaski|2 (0|1|1|0) (50.0%)|
|   krtaylor   |2 (0|0|2|0) (100.0%)   |
|   cbehrens   |2 (0|0|2|0) (100.0%)   |
|   Sumanth|2 (0|0|2|0) (100.0%)   |
| ttx  | 1 (0|1|0|0) (0.0%)|
|   rvaknin| 1 (0|1|0|0) (0.0%)|
| rohitkarajgi | 1 (0|1|0|0) (0.0%)|
|   ndipanov   | 1 (0|1|0|0) (0.0%)|
|   michaeltchapman| 1 (0|1|0|0) (0.0%)|
|   maurosr| 1 (0|1|0|0) (0.0%)|
|  mate-lakat  | 1 (0|1|0|0) (0.0%)|
|   jecarey| 1 

Re: [openstack-dev] Keystone and Multiple Identity Sources

2013-09-11 Thread Adam Young

On 09/11/2013 02:05 PM, Dolph Mathews wrote:


On Wed, Sep 11, 2013 at 12:31 PM, David Chadwick 
d.w.chadw...@kent.ac.uk mailto:d.w.chadw...@kent.ac.uk wrote:


Further supplementary information to Adam's email below, is that
there are already one further federation protocol profiles that
has been published:
for an external Keystone acting as an IdP at
https://review.openstack.org/#/c/42107/

and another for SAML has been prepared and is ready for publication.

I would expect several additional federation profiles to be
published in the future, for example, for OpenID Connect and what
ever else might be just around the corner.

Given the fact that the number of federation protocols is not
fixed, and will evolve with time, then I would prefer their method
of integration into Keystone to be common, so that one
federation module can handle all the non-protocol specific
federation features, such as policy and trust checking, and this
module can have multiple different protocol handling modules
plugged into it that deal with the protocol specific features
only. This is the method we have adopted in our current
implementation of federation, and have shown that it is a viable
and efficient way of implementation as we currently support three
protocol profiles (SAML, ABFAB and External Keystone).

Thus I prefer

method: federation protocol: abfab

in which the abfab part would be replaced by the particular
protocol, and there are common parameters to be used by the
federation module 



instead of method: abfab

as the latter removes the common parameters from federation, and
also means that common code wont be used, unless it is cut and
paste into each protocol specific module.


That sounds like a pretty strong argument in favor of the current 
design, assuming the abfab parameters are children of the common 
federation parameters (rather than a sibling of the federation 
parameters)... which does appear to be the case the current patchset- 
https://review.openstack.org/#/c/42221/


And this is where David and I disagree.  I don't think Federation is in 
addition to Keystone but rather it is fundamental to Keystone. I don't 
think method : federation  is a necessary abstraction. I think what 
David is trying to acheive is best done as a set of standards on how to 
add any provider:  we don't need a wrapper around a wrapper.




Comments?

David



On 11/09/2013 16:25, Adam Young wrote:

David Chadwick wrote up an in depth API extension for Federation:
https://review.openstack.org/#/c/39499
There is an abfab API proposal as well:
https://review.openstack.org/#/c/42221/

After discussing this for a while, it dawned on me that Federation
should not be something bolted on to Keystone, but rather that
it was
already central to the design.

The SQL Identity backend is a simple password store that
collects users
into groups.  This makes it an identity provider (IdP).
Now Keystone can register multiple LDAP servers as Identity
backends.

There are requests for SAML and ABFAB integration into
Keystone as well.

Instead of a Federation API  Keystone should take the key
concepts
from the API and make them core concepts.  What would this mean:

1.  Instead of method: federation protocol: abfab  it
would be
method: abfab,
2.  The rules about multiple round trips (phase)  would go
under the
abfab section.
3.  There would not be a protocol_data section but rather
that would
be the abfab section as well.
4.  Provider ID would be standard in the method specific section.

One question that has come up has been about Providers, and
whether they
should be considered endpoints in the Catalog.  THere is a
couple issues
wiuth this:  one is that they are not something managed by
OpenStack,
and two is that they are not necessarily Web Protocols.  As such,
Provider should probably be First class citizen.  We already
have LDAP
handled this way, although not as an enumerated entity.  For
the first
iteration, I would like to see ABFAB, SAML, and any other
protocols we
support done the same way as LDAP:  a deliberate configuration
option
for Keystone that will require a config file change.

David and I have discussed this in a side conversation, and
agree that
it requires wider input.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Heat] How the autoscale API should control scaling in Heat

2013-09-11 Thread Keith Bray
There is context missing here.  heat==trove interaction is through the
trove API.  trove==heat interaction is a _different_ instance of Heat,
internal to trove's infrastructure setup, potentially provisioning
instances.   Public Heat wouldn't be creating instances and then telling
trove to make them into databases.

At least, that's what I understand from conversations with the Trove
folks.  I could be wrong here also.

-Keith

On 9/11/13 11:11 AM, Joshua Harlow harlo...@yahoo-inc.com wrote:

Sure,

I was thinking that since heat would do autoscaling persay, then heat
would say ask trove to make more databases (autoscale policy here) then
this would cause trove to actually callback into heat to make more
instances.

Just feels a little weird, idk.

Why didn't heat just make those instances on behalf of trove to begin
with and then tell trove make these instances into databases. Then
trove doesn't really need to worry about calling into heat to do the
instance creation work, and trove can just worry about converting those
blank instances  into databases (for example).

But maybe I am missing other context also :)

Sent from my really tiny device...

On Sep 11, 2013, at 8:04 AM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Joshua Harlow's message of 2013-09-11 01:00:37 -0700:
 +1
 
 The assertions are not just applicable to autoscaling but to software
in general. I hope we can make autoscaling just enough simple to work.
 
 The circular heat=trove example is one of those that does worry me a
little. It feels like something is not structured right if that it is
needed (rube goldberg like). I am not sure what could be done
differently, just my gut feeling that something is off.
 
 Joshua, can you elaborate on the circular heat=trove example?
 
 I don't see Heat and Trove's relationship as circular. Heat has a Trove
 resource, and (soon? now?) Trove can use Heat to simplify its control
 of underlying systems. This is a stack, not a circle, or did I miss
 something?
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [pci device passthrough] fails with NameError: global name '_' is not defined

2013-09-11 Thread yongli he

于 2013年09月11日 21:27, Henry Gessau 写道:

For the TypeError: expected string or buffer I have filed Bug #1223874.

got, thanks。



On Wed, Sep 11, at 7:41 am, yongli he yongli...@intel.com wrote:


于 2013年09月11日 05:38, David Kang 写道:

- Original Message -

From: Russell Bryant rbry...@redhat.com
To: David Kang dk...@isi.edu
Cc: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Sent: Tuesday, September 10, 2013 5:17:15 PM
Subject: Re: [openstack-dev] [nova] [pci device passthrough] fails with NameError: 
global name '_' is not defined
On 09/10/2013 05:03 PM, David Kang wrote:

- Original Message -

From: Russell Bryant rbry...@redhat.com
To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org
Cc: David Kang dk...@isi.edu
Sent: Tuesday, September 10, 2013 4:42:41 PM
Subject: Re: [openstack-dev] [nova] [pci device passthrough] fails
with NameError: global name '_' is not defined
On 09/10/2013 03:56 PM, David Kang wrote:

   Hi,

I'm trying to test pci device passthrough feature.
Havana3 is installed using Packstack on CentOS 6.4.
Nova-compute dies right after start with error NameError: global
name '_' is not defined.
I'm not sure if it is due to misconfiguration of nova.conf or bug.
Any help will be appreciated.

Here is the info:

/etc/nova/nova.conf:
pci_alias={name:test, product_id:7190, vendor_id:8086,
device_type:ACCEL}

pci_passthrough_whitelist=[{vendor_id:8086,product_id:7190}]

   With that configuration, nova-compute fails with the following
   log:

File
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py,
line 461, in _process_data
  **args)

File
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py,
line 172, in dispatch
  result = getattr(proxyobj, method)(ctxt, **kwargs)

File
/usr/lib/python2.6/site-packages/nova/conductor/manager.py,
line 567, in object_action
  result = getattr(objinst, objmethod)(context, *args, **kwargs)

File /usr/lib/python2.6/site-packages/nova/objects/base.py,
line
141, in wrapper
  return fn(self, ctxt, *args, **kwargs)

File
/usr/lib/python2.6/site-packages/nova/objects/pci_device.py,
line 242, in save
  self._from_db_object(context, self, db_pci)

NameError: global name '_' is not defined
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup Traceback (most recent call
last):
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
/usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py,
line 117, in wait
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup x.wait()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
/usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py,
line 49, in wait
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup return self.thread.wait()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
/usr/lib/python2.6/site-packages/eventlet/greenthread.py, line
166, in wait
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup return self._exit_event.wait()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
/usr/lib/python2.6/site-packages/eventlet/event.py, line 116, in
wait
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup return hubs.get_hub().switch()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py, line 177,
in switch
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup return self.greenlet.switch()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
/usr/lib/python2.6/site-packages/eventlet/greenthread.py, line
192, in main
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup result = function(*args,
**kwargs)
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
/usr/lib/python2.6/site-packages/nova/openstack/common/service.py,
line 65, in run_service
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup service.start()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
/usr/lib/python2.6/site-packages/nova/service.py, line 164, in
start
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup self.manager.pre_start_hook()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line
805, in pre_start_hook
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup
self.update_available_resource(nova.context.get_admin_context())
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line
4773, in update_available_resource
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup

Re: [openstack-dev] [heat] Comments/questions on the instance-group-api-extension blueprint

2013-09-11 Thread shalz
Mike,

You mention  We are now extending that example to include storage, and we are 
also working examples with Hadoop. 

In the context of your examples / scenarios, do these placement decisions 
consider storage performance and capacity on a physical node?

For example: Based on application needs, and IOPS, latency requirements - 
carving out a SSD storage or a traditional spinning disk block volume?  Or say 
for cost-efficiency reasons using SSD caching on Hadoop name nodes? 

I'm investigating  a) Per node PCIe SSD deployment need in Openstack 
environment /  Hadoop environment and ,b) selected node SSD caching, 
specifically for OpenStack Cinder.  Hope this is the right forum to ask this 
question.

rgds,
S

On Sep 12, 2013, at 12:29 AM, Mike Spreitzer mspre...@us.ibm.com wrote:

 Yes, I've seen that material.  In my group we have worked larger and more 
 complex examples.  I have a proposed breakout session at the Hong Kong summit 
 to talk about one, you might want to vote for it.  The URL is 
 http://www.openstack.org/summit/openstack-summit-hong-kong-2013/become-a-speaker/TalkDetails/109
  and the title is Continuous Delivery of Lotus Connections on OpenStack.  
 We used our own technology to do the scheduling (make placement decisions) 
 and orchestration, calling Nova and Quantum to carry out the decisions our 
 software made.  Above the OpenStack infrastructure we used two layers of our 
 own software, one focused on infrastructure and one adding concerns for the 
 software running on that infrastructure.  Each used its own language for a 
 whole topology AKA pattern AKA application AKA cluster.  For example, our 
 pattern has 16 VMs running the WebSphere application server, organized into 
 four homogenous groups (members are interchangeable) of four each.  For each 
 group, we asked that it both (a) be spread across at least two racks, with no 
 more than half the VMs on any one rack and (b) have no two VMs on the same 
 hypervisor.  You can imagine how this would involve multiple levels of 
 grouping and relationships between groups (and you will probably be surprised 
 by the particulars).  We also included information on licensed products, so 
 that the placement decision can optimize license cost (for the IBM 
 sub-capacity licenses, placement of VMs can make a cost difference).  Thus, 
 multiple policies per thing.  We are now extending that example to include 
 storage, and we are also working examples with Hadoop. 
 
 Regards, 
 Mike 
 
 
 
 From:Gary Kotton gkot...@vmware.com 
 To:OpenStack Development Mailing List 
 openstack-dev@lists.openstack.org, 
 Date:09/11/2013 06:06 AM 
 Subject:Re: [openstack-dev] [heat] Comments/questions on the 
 instance-group-api-extension blueprint 
 
 
 
 
 
 From: Mike Spreitzer mspre...@us.ibm.com
 Reply-To: OpenStack Development Mailing List 
 openstack-dev@lists.openstack.org
 Date: Tuesday, September 10, 2013 11:58 PM
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [heat] Comments/questions on the 
 instance-group-api-extension blueprint 
 
 First, I'm a newbie here, wondering: is this the right place for 
 comments/questions on blueprints?  Supposing it is... 
 
 [Gary Kotton] Yeah, as Russel said this is the correct place 
 
 I am referring to 
 https://blueprints.launchpad.net/nova/+spec/instance-group-api-extension
 
 In my own research group we have experience with a few systems that do 
 something like that, and more (as, indeed, that blueprint explicitly states 
 that it is only the start of a longer roadmap).  I would like to highlight a 
 couple of differences that alarm me.  One is the general overlap between 
 groups.  I am not saying this is wrong, but as a matter of natural 
 conservatism we have shied away from unnecessary complexities.  The only 
 overlap we have done so far is hierarchical nesting.  As the 
 instance-group-api-extension explicitly contemplates groups of groups as a 
 later development, this would cover the overlap that we have needed.  On the 
 other hand, we already have multiple policies attached to a single group.  
 We have policies for a variety of concerns, so some can combine completely or 
 somewhat independently.  We also have relationships (of various sorts) 
 between groups (as well as between individuals, and between individuals and 
 groups).  The policies and relationships, in general, are not simply names 
 but also have parameters. 
 
 [Gary Kotton] The instance groups was meant to be the first step towards what 
 we had presented in Portland. Please look at the presentation that we gave an 
 this may highlight what the aims were: 
 https://docs.google.com/presentation/d/1oDXEab2mjxtY-cvufQ8f4cOHM0vIp4iMyfvZPqg8Ivc/edit?usp=sharing.
  Sadly for this release we did not manage to get the instance groups through 
 (it was an issue of timing and bad luck). We will hopefully get this though 
 in the first stages of the 

[openstack-dev] [Keystone] Enforcing cert validation in auth_token middleware

2013-09-11 Thread Jamie Lennox
With the aim of replacing httplib and cert validation with requests[1]
I've put forward the following review to use the requests library for
auth_token middleware. 

https://review.openstack.org/#/c/34161/

This adds 2 new config options.
- The ability to provide CAs to validate https connections against.
- The ability to set insecure to ignore https validation. 

By default request will validate connections against the system CAs by
default. So given that we currently don't verify SSL connections, do we
need to default insecure to true?

Maintaining compatibility should win here as i imagine there are a great
number of auth_token deployments using SSL with invalid/self-signed
certificates that would be broken, but defaulting to insecure just seems
wrong. 

Given that keystone isn't the only project moving away from httplib, how
are other projects handling this? How do we end up with reasonable
defaults? Is there any amount of warning that we could give to change a
default like this - or is this another one of those version 1.0 issues?


Jamie



[1] https://bugs.launchpad.net/keystone/+bug/1188189 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Enforcing cert validation in auth_token middleware

2013-09-11 Thread Dolph Mathews
On Wed, Sep 11, 2013 at 10:25 PM, Jamie Lennox jlen...@redhat.com wrote:

 With the aim of replacing httplib and cert validation with requests[1]
 I've put forward the following review to use the requests library for
 auth_token middleware.

 https://review.openstack.org/#/c/34161/

 This adds 2 new config options.
 - The ability to provide CAs to validate https connections against.
 - The ability to set insecure to ignore https validation.

 By default request will validate connections against the system CAs by
 default. So given that we currently don't verify SSL connections, do we
 need to default insecure to true?


I vote no; and yes to secure by default.



 Maintaining compatibility should win here as i imagine there are a great
 number of auth_token deployments using SSL with invalid/self-signed
 certificates that would be broken, but defaulting to insecure just seems
 wrong.

 Given that keystone isn't the only project moving away from httplib, how
 are other projects handling this?


The last time keystoneclient made this same change (thanks Dean!), we
provided no warning:

  https://review.openstack.org/#/c/17624/

Which added the --insecure flag to opt back into the old behavior.

How do we end up with reasonable
 defaults? Is there any amount of warning that we could give to change a
 default like this - or is this another one of those version 1.0 issues?


 Jamie



 [1] https://bugs.launchpad.net/keystone/+bug/1188189


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] How the autoscale API should control scaling in Heat

2013-09-11 Thread Joshua Harlow
Ah, thx keith, that seems to make a little more sense with that context.

Maybe that different instance will be doing other stuff also?

Is that the general heat 'topology' that should/is recommended for trove?

For say autoscaling trove, will trove emit a set of metrics via ceilometer
that heat (or a separate autoscaling thing) will use to analyze if
autoscaling should occur? I suppose nova would also emit its own set and
it will be up to the autoscaler to merge those together (as trove
instances are nova instances). Its a very interesting set of problems to
make an autoscaling entity that works well without making that autoscaling
entity to aware of the internals of the various projects. Making it to
aware and then the whole system is fragile but not making it aware enough
and then it will not do its job very well.

On 9/11/13 6:07 PM, Keith Bray keith.b...@rackspace.com wrote:

There is context missing here.  heat==trove interaction is through the
trove API.  trove==heat interaction is a _different_ instance of Heat,
internal to trove's infrastructure setup, potentially provisioning
instances.   Public Heat wouldn't be creating instances and then telling
trove to make them into databases.

At least, that's what I understand from conversations with the Trove
folks.  I could be wrong here also.

-Keith

On 9/11/13 11:11 AM, Joshua Harlow harlo...@yahoo-inc.com wrote:

Sure,

I was thinking that since heat would do autoscaling persay, then heat
would say ask trove to make more databases (autoscale policy here) then
this would cause trove to actually callback into heat to make more
instances.

Just feels a little weird, idk.

Why didn't heat just make those instances on behalf of trove to begin
with and then tell trove make these instances into databases. Then
trove doesn't really need to worry about calling into heat to do the
instance creation work, and trove can just worry about converting those
blank instances  into databases (for example).

But maybe I am missing other context also :)

Sent from my really tiny device...

On Sep 11, 2013, at 8:04 AM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Joshua Harlow's message of 2013-09-11 01:00:37 -0700:
 +1
 
 The assertions are not just applicable to autoscaling but to software
in general. I hope we can make autoscaling just enough simple to
work.
 
 The circular heat=trove example is one of those that does worry me a
little. It feels like something is not structured right if that it is
needed (rube goldberg like). I am not sure what could be done
differently, just my gut feeling that something is off.
 
 Joshua, can you elaborate on the circular heat=trove example?
 
 I don't see Heat and Trove's relationship as circular. Heat has a Trove
 resource, and (soon? now?) Trove can use Heat to simplify its control
 of underlying systems. This is a stack, not a circle, or did I miss
 something?
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] I met some trouble when I use DEVSTACK this morning

2013-09-11 Thread 苌智
details:
Error processing line 1 of
/usr/local/lib/python2.7/dist-packages/easy-install.pth:

  Traceback (most recent call last):
File /usr/lib/python2.7/site.py, line 161, in addpackage
  if not dircase in known_paths and os.path.exists(dir):
File /usr/lib/python2.7/genericpath.py, line 18, in exists
  os.stat(path)
  TypeError: must be encoded string without NULL bytes, not str

I created a instance yesterday but it's Status is Error.
I don't know how to fix it..
Could you tell me some advice?
Thanks a lot !!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Backwards incompatible migration changes - Discussion

2013-09-11 Thread Joshua Hesketh

On 9/4/13 6:47 AM, Michael Still wrote:

On Wed, Sep 4, 2013 at 1:54 AM, Vishvananda Ishaya
vishvana...@gmail.com wrote:

+1 I think we should be reconstructing data where we can, but keeping track of
deleted data in a backup table so that we can restore it on a downgrade seems
like overkill.

I guess it comes down to use case... Do we honestly expect admins to
regret and upgrade and downgrade instead of just restoring from
backup? If so, then we need to have backup tables for the cases where
we can't reconstruct the data (i.e. it was provided by users and
therefore not something we can calculate).


So assuming we don't keep the data in some kind of backup state is there 
a way we should be documenting which migrations are backwards 
incompatible? Perhaps there should be different classifications for 
data-backwards incompatible and schema incompatibilities.


Having given it some more thought, I think I would like to see 
migrations keep backups of obsolete data. I don't think it is 
unforeseeable that an administrator would upgrade a test instance (or 
less likely, a production) by accident or not realising their backups 
are corrupted, outdated or invalid. Being able to roll back from this 
point could be quite useful. I think potentially more useful than that 
though is that if somebody ever needs to go back and look at some data 
that would otherwise be lost it is still in the backup table.


As such I think it might be good to see all migrations be downgradable 
through the use of backup tables where necessary. To couple this I think 
it would be good to have a standard for backup table naming and maybe 
schema (similar to shadow tables) as well as an official list of backup 
tables in the documentation stating which migration they were introduced 
and how to expire them.


In regards to the backup schema, it could be exactly the same as the 
table being backed up (my preference) or the backup schema could contain 
just the lost columns/changes.


In regards to the name, I quite like backup_table-name_migration_214. 
The backup table name could also contain a description of what is backed 
up (for example, 'uuid_column').


In terms of expiry they could be dropped after a certain release/version 
or left to the administrator to clear out similar to shadow tables.


Thoughts?

Cheers,
Josh

--
Rackspace Australia



Michael




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Backwards incompatible migration changes - Discussion

2013-09-11 Thread Robert Collins
I think having backup tables adds substantial systematic complexity,
for a small use case.

Perhaps a better answer is to document in 'take a backup here' as part
of the upgrade documentation and let sysadmins make a risk assessment.
We can note that downgrades are not possible.

Even in a public cloud doing trunk deploys, taking a backup shouldn't
be a big deal: *those* situations are where you expect backups to be
well understood; and small clouds don't have data scale issues to
worry about.

-Rob

-Rob

On 12 September 2013 17:09, Joshua Hesketh joshua.hesk...@rackspace.com wrote:
 On 9/4/13 6:47 AM, Michael Still wrote:

 On Wed, Sep 4, 2013 at 1:54 AM, Vishvananda Ishaya
 vishvana...@gmail.com wrote:

 +1 I think we should be reconstructing data where we can, but keeping
 track of
 deleted data in a backup table so that we can restore it on a downgrade
 seems
 like overkill.

 I guess it comes down to use case... Do we honestly expect admins to
 regret and upgrade and downgrade instead of just restoring from
 backup? If so, then we need to have backup tables for the cases where
 we can't reconstruct the data (i.e. it was provided by users and
 therefore not something we can calculate).


 So assuming we don't keep the data in some kind of backup state is there a
 way we should be documenting which migrations are backwards incompatible?
 Perhaps there should be different classifications for data-backwards
 incompatible and schema incompatibilities.

 Having given it some more thought, I think I would like to see migrations
 keep backups of obsolete data. I don't think it is unforeseeable that an
 administrator would upgrade a test instance (or less likely, a production)
 by accident or not realising their backups are corrupted, outdated or
 invalid. Being able to roll back from this point could be quite useful. I
 think potentially more useful than that though is that if somebody ever
 needs to go back and look at some data that would otherwise be lost it is
 still in the backup table.

 As such I think it might be good to see all migrations be downgradable
 through the use of backup tables where necessary. To couple this I think it
 would be good to have a standard for backup table naming and maybe schema
 (similar to shadow tables) as well as an official list of backup tables in
 the documentation stating which migration they were introduced and how to
 expire them.

 In regards to the backup schema, it could be exactly the same as the table
 being backed up (my preference) or the backup schema could contain just the
 lost columns/changes.

 In regards to the name, I quite like backup_table-name_migration_214. The
 backup table name could also contain a description of what is backed up (for
 example, 'uuid_column').

 In terms of expiry they could be dropped after a certain release/version or
 left to the administrator to clear out similar to shadow tables.

 Thoughts?

 Cheers,
 Josh

 --
 Rackspace Australia


 Michael



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev