Re: [openstack-dev] Reply: Proposal for approving Auto HA development blueprint.

2013-08-13 Thread Alex Glikson
Agree. Some enhancements to Nova might be still required (e.g., to handle 
resource reservations, so that there is enough capacity), but the 
end-to-end framework probably should be outside of existing services, 
probably talking to Nova, Ceilometer and potentially other components 
(maybe Cinder, Neutron, Ironic), and 'orchestrating' failure detection, 
fencing and recovery.
Probably worth a discussion at the upcoming summit.


Regards,
Alex



From:   Konglingxian konglingx...@huawei.com
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org, 
Date:   13/08/2013 07:07 AM
Subject:[openstack-dev] Reply: Proposal for approving Auto HA development blueprint.



Hi yongiman:
 
Your idea is good, but I think the auto-HA operation is not OpenStack’s 
business. IMO, Ceilometer offers ‘monitoring’, Nova offers ‘evacuation’, 
and you can combine them to realize an HA operation.
 
So, I’m afraid I can’t understand the specific implementation details very 
well.
 
Any different opinions?
 
From: yongi...@gmail.com [mailto:yongi...@gmail.com]
Sent: 12 August 2013 20:52
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Proposal for approving Auto HA development blueprint.
 
 
 
Hi,
 
Now I am developing an auto-HA operation for VM high availability.

This function runs entirely automatically.

It needs another service, such as Ceilometer.

Ceilometer monitors the compute nodes.

When Ceilometer detects a broken compute node, it sends an API call to Nova,
which exposes an auto-HA API.

When Nova receives the auto-HA call, it performs the auto-HA operation.

All auto-HA-enabled VMs that were running on the broken host are migrated
to the auto-HA host, which is an extra compute node reserved exclusively for
the Auto-HA function.

Below are my blueprint and wiki page.

The wiki page is not yet complete; I am adding more information about
this function.
 
Thanks
 
https://blueprints.launchpad.net/nova/+spec/vm-auto-ha-when-host-broken
 
https://wiki.openstack.org/wiki/Autoha
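
To visualise the proposed flow, here is a rough, self-contained sketch; the
auto_ha_hosts table and all names below are illustrative assumptions taken from
the description above, not actual Nova or Ceilometer code:

    # Illustrative sketch only: stand-ins for the proposed auto-HA flow.

    # The proposed auto_ha_hosts table: reserved host name -> 'used' flag.
    auto_ha_hosts = {"spare-node-1": False}

    # Nova's view of instances: id -> current host and auto-HA flag.
    instances = {
        "vm-1": {"host": "compute-3", "auto_ha": True},
        "vm-2": {"host": "compute-3", "auto_ha": False},
    }

    def pick_unused_auto_ha_host():
        """Return a reserved host whose 'used' column is still False."""
        for host, used in auto_ha_hosts.items():
            if not used:
                return host
        raise RuntimeError("no spare auto-HA host available")

    def handle_auto_ha_call(failed_host):
        """Roughly what the proposed Nova auto-HA API handler would do."""
        spare = pick_unused_auto_ha_host()
        auto_ha_hosts[spare] = True  # mark the spare host as used
        for vm_id, vm in instances.items():
            if vm["host"] == failed_host and vm["auto_ha"]:
                # In real Nova this would be an evacuate/migrate call;
                # here we only record the new placement.
                vm["host"] = spare

    # The monitoring service (e.g. Ceilometer) would trigger this via the
    # exposed API when it detects a broken node.
    handle_auto_ha_call("compute-3")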
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




[openstack-dev] Proposal to support new Cinder driver for CloudByte's Elastistor

2013-08-13 Thread Amit Das
Hi Team,

We have implemented a CINDER driver for our QoS aware storage solution
(CloudByte Elastistor).

We would like to integrate this driver code with the next version of
OpenStack (Havana).

Please let us know the approval processes to be followed for this new
driver support.

Regards,
Amit
*CloudByte Inc.* http://www.cloudbyte.com/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] can't install devstack - nova-api did not start

2013-08-13 Thread Roman Gorodeckij
Updating devstack to the latest revision solved my problem. 

Sent from my iPhone

On 2013 Rugp. 13, at 05:00, XINYU ZHAO xyzje...@gmail.com wrote:

 Hi Sean
 I uninstalled the oslo.config 1.1.1 version and ran devstack, but this time 
 it stopped at: 
 
 2013-08-09 18:55:16 + /opt/stack/new/keystone/bin/keystone-manage db_sync
 2013-08-09 18:55:16 Traceback (most recent call last):
 2013-08-09 18:55:16   File "/opt/stack/new/keystone/bin/keystone-manage", line 16, in <module>
 2013-08-09 18:55:16     from keystone import cli
 2013-08-09 18:55:16   File "/opt/stack/new/keystone/keystone/cli.py", line 23, in <module>
 2013-08-09 18:55:16     from oslo.config import cfg
 2013-08-09 18:55:16 ImportError: No module named config
 2013-08-09 18:55:16 + [[ PKI == \P\K\I ]]
 
 An unexpected error prevented the server from fulfilling your request. 
 (ProgrammingError) (1146, "Table 'keystone.service' doesn't exist") 'INSERT 
 INTO service (id, type, extra) VALUES (%s, %s, %s)' 
 ('32578395572b4cf2a70ba70b6031cd1d', 'identity', '{"name": "keystone", 
 "description": "Keystone Identity Service"}') (HTTP 500)
 2013-08-12 18:36:45 + KEYSTONE_SERVICE=
 2013-08-12 18:36:45 + keystone endpoint-create --region RegionOne 
 --service_id --publicurl http://127.0.0.1:5000/v2.0 
 --adminurl http://127.0.0.1:35357/v2.0 --internalurl http://127.0.0.1:5000/v2.0
 
 It seems that oslo.config was not properly imported after I re-installed it, 
 but when I list the pip installations, it is there. 
 
 /usr/local/bin/pip freeze |grep oslo.config
 -e 
 git+http://10.145.81.234/openstackci/gerrit/p/oslo.config@c65d70c02494805ce50b88f343f8fafe7a521724#egg=oslo.config-master
 root@devstack-4:/# /usr/local/bin/pip search oslo.config
 oslo.config   - Oslo configuration API
   INSTALLED: 1.2.0.a192.gc65d70c
   LATEST:1.1.1
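 
 For anyone hitting the same mismatch, a quick way to see which oslo.config the
 interpreter actually resolves (independent of what pip reports) is a short check
 like the one below; this is only a diagnostic sketch for a Python 2.7 devstack
 environment:
 
     # Confirm which oslo.config wins and whether it is new enough to provide
     # DeprecatedOpt (present in 1.2.x, absent in 1.1.x).
     import oslo.config
     print(oslo.config.__file__)           # which copy of the package is imported
 
     from oslo.config import cfg           # the import keystone-manage fails on
     print(hasattr(cfg, 'DeprecatedOpt'))  # False means an old 1.1.x install shadows the git checkout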
 
 
 
 On Sat, Aug 10, 2013 at 7:07 AM, Sean Dague s...@dague.net wrote:
 Silly pip, trix are for kids.
 
 Ok, well:
 
 sudo pip install -I oslo.config==1.1.1
 
 then pip uninstall oslo.config
 
 On 08/09/2013 06:58 PM, Roman Gorodeckij wrote:
 stack@hp:~/devstack$ sudo pip install oslo.config
 Requirement already satisfied (use --upgrade to upgrade): oslo.config in 
 /opt/stack/oslo.config
 Requirement already satisfied (use --upgrade to upgrade): six in 
 /usr/local/lib/python2.7/dist-packages (from oslo.config)
 Cleaning up...
 stack@hp:~/devstack$ sudo pip uninstall oslo.config
 Can't uninstall 'oslo.config'. No files were found to uninstall.
 stack@hp:~/devstack$
 
 stack@hp:~/devstack$ cat /tmp/devstack/log//screen-n-api.log
 cd /opt/stack/nova && /usr/local/bin/nova-api || touch /opt/stack/status/stack/n-api.failure
 
 Traceback (most recent call last):
   File "/usr/local/bin/nova-api", line 6, in <module>
     from nova.cmd.api import main
   File "/opt/stack/nova/nova/cmd/api.py", line 29, in <module>
     from nova import config
   File "/opt/stack/nova/nova/config.py", line 22, in <module>
     from nova.openstack.common.db.sqlalchemy import session as db_session
   File "/opt/stack/nova/nova/openstack/common/db/sqlalchemy/session.py", line 279, in <module>
     deprecated_opts=[cfg.DeprecatedOpt('sql_connection',
 AttributeError: 'module' object has no attribute 'DeprecatedOpt'
 
 nothing changed.
 
 On Aug 9, 2013, at 6:11 PM, Sean Dague s...@dague.net wrote:
 
 This should be addressed by the latest devstack. However, because we moved 
 oslo.config out of git, some install environments might still have 
 oslo.config 1.1.0 somewhere that pip no longer sees (so it can't be uninstalled).
 
 sudo pip install oslo.config
 sudo pip uninstall oslo.config
 
 rerun devstack, see if it works.
 
 -Sean
 
 On 08/09/2013 09:14 AM, Roman Gorodeckij wrote:
 Tried to install devstack on a dedicated server; IPs are defined.
 
 Here's the output:
 
 2013-08-09 09:06:28 ++ echo -ne '\015'
 
 2013-08-09 09:06:28 + NL=$'\r'
 2013-08-09 09:06:28 + screen -S stack -p n-api -X stuff 'cd /opt/stack/nova && /usr/local/bin/nova-api || touch /opt/stack/status/stack/n-api.failure'
 2013-08-09 09:06:28 + echo 'Waiting for nova-api to start...'
 2013-08-09 09:06:28 Waiting for nova-api to start...
 2013-08-09 09:06:28 + wait_for_service 60 http://192.168.1.6:8774
 2013-08-09 09:06:28 + local timeout=60
 2013-08-09 09:06:28 + local url=http://192.168.1.6:8774
 2013-08-09 09:06:28 + timeout 60 sh -c 'while ! http_proxy= https_proxy= curl -s http://192.168.1.6:8774 > /dev/null; do sleep 1; done'
 2013-08-09 09:07:28 + die 698 'nova-api did not start'
 2013-08-09 09:07:28 + local exitcode=0
 stack@hp:~/devstack$ 2013-08-09 09:07:28 + set +o xtrace
 
 Here's the log:
 
 2013-08-09 09:07:28 [ERROR] ./stack.sh:698 nova-api did not start
 stack@hp:~/devstack$ cat /tmp/devstack/log//screen-n-api.log
 cd /opt/stack/nova && /usr/local/bin/nova-api || touch /opt/stack/status/stack/n-api.failure
 
 Traceback (most recent call last):
   File "/usr/local/bin/nova-api", line 6, in <module>
     from nova.cmd.api import main
   File "/opt/stack/nova/nova/cmd/api.py", line 29, in <module>
  from 

Re: [openstack-dev] [keystone] Pagination

2013-08-13 Thread Yee, Guang
Passing the query parameters, whatever they are, into the driver if the
given driver supports pagination and allowing the driver to override the
manager default pagination functionality seem reasonable to me.

 

 

Guang

 

 

From: Dolph Mathews [mailto:dolph.math...@gmail.com] 
Sent: Monday, August 12, 2013 8:22 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [keystone] Pagination

 

 

On Mon, Aug 12, 2013 at 7:51 PM, Jamie Lennox jlen...@redhat.com wrote:

I'm not sure where it would make sense within the API to return the name
of the page/per_page variables to the client that doesn't involve having
already issued the call (ie returning the names within the links box
means you've already issued the query).

 

I think you're missing the point (and you're right: that wouldn't make sense
at all). The API client follows links. The controller builds links. The
driver defines it's own pagination interface to build related links.

 

If the client is forced to understand the pagination interface then the
abstraction is broken.

 

If we standardize on the
page/per_page combination

 

There doesn't need to be a standard.

 

then this can be handled at the controller
level then the driver has permission to simply ignore it - or have the
controller do the slicing after the driver has returned.

 

Correct. This sort of default pagination can be implemented by the
manager, and overridden by a specific driver.
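
To make that division of labour concrete, a rough sketch of the idea might look
like this (not actual Keystone code; the list_users_paginated hook and the
returned link parameters are assumptions used for illustration):

    class Manager(object):
        """Default pagination in the manager, overridable by the driver."""

        def __init__(self, driver):
            self.driver = driver

        def list_users(self, hints):
            # A pagination-aware driver handles slicing itself and tells the
            # controller which query parameters belong in the next/prev links.
            if hasattr(self.driver, 'list_users_paginated'):
                return self.driver.list_users_paginated(hints)

            # Default behaviour: fetch everything and slice here.
            refs = self.driver.list_users(hints)
            page = int(hints.get('page', 1))
            per_page = int(hints.get('per_page', 30))
            start = per_page * (page - 1)
            link_params = {'next': {'page': page + 1, 'per_page': per_page},
                           'previous': {'page': max(page - 1, 1), 'per_page': per_page}}
            return refs[start:start + per_page], link_params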

 


To weigh in on the other question i think it should be checked that page
is an integer, unless per_page is specified in which case default to 1.

For example:

GET /v3/users?page=

I would expect to return all users as page is not set. However:

GET /v3/users?per_page=30

As per_page is useless without a page i think we can default to page=1.

As an aside are we indexing from 1?

 

Rhetorical: why not index from -1 and count in base 64? This is all
arbitrary and can vary by driver.

 


On Mon, 2013-08-12 at 19:05 -0500, Dolph Mathews wrote:
 The way paginated links are defined by the v3 API (via `next` and
 `previous` links), it can be completely up to the driver as to what
 the query parameters look like. So, the client shouldn't have (nor
 require) any knowledge of how to build query parameters for
 pagination. It just needs to follow the links it's given.


 'page' and 'per_page' are trivial for the controller to implement (as
 it's just slicing into an list... as shown)... so that's a reasonable
 default behavior (for when a driver does not support pagination).
 However, if the underlying driver DOES support pagination, it should
 provide a way for the controller to ask for the query parameters
 required to specify the next/previous links (so, one driver could
 return `marker` and `limit` parameters while another only exposes the
 `page` number, but not quantity `per_page`).
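
 For illustration, the controller side of that contract could be as small as the
 sketch below (assuming Python 2's urllib): it simply turns whatever parameters
 the driver reports into next/previous links, so the client never needs to know
 them. The 'pagination_params' structure is an assumed driver hook, not an
 existing one:

     import urllib

     def build_links(base_url, pagination_params):
         """pagination_params, e.g. {'next': {'marker': 'abc', 'limit': 30},
                                     'previous': {'page': 2}}"""
         links = {}
         for rel, params in pagination_params.items():
             links[rel] = '%s?%s' % (base_url, urllib.urlencode(params))
         return links

     # One driver exposes page/per_page, another marker/limit; the client just
     # follows whichever links come back.
     print(build_links('https://identity:5000/v3/users',
                       {'next': {'page': 3, 'per_page': 30},
                        'previous': {'page': 1, 'per_page': 30}}))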


 On Mon, Aug 12, 2013 at 4:34 PM, Henry Nash
 hen...@linux.vnet.ibm.com wrote:
 Hi


  I'm working on extending the pagination into the backends.
  Right now, we handle the pagination in the v3 controller
  class... and in fact it is disabled right now and we return
  the whole list irrespective of whether page/per-page is set in
  the query string, e.g.:


  def paginate(cls, context, refs):
      """Paginates a list of references by page & per_page query strings."""
      # FIXME(dolph): client needs to support pagination first
      return refs


      page = context['query_string'].get('page', 1)
      per_page = context['query_string'].get('per_page', 30)
      return refs[per_page * (page - 1):per_page * page]


  I wonder, both for the V3 controller (which still needs to
  handle pagination for backends that do not support it) and the
  backends that do... whether we could use whether 'page' is
  defined in the query-string as an indicator as to whether we
  should paginate or not?  That way clients who can handle it
  can ask for it; those that don't will just get everything.


 Henry






 --


 -Dolph

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev









 

-- 

 

-Dolph 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ANVIL] Missing openvswitch dependency for basic-neutron.yaml persona

2013-08-13 Thread Sylvain Bauza

Cross-posting to openstack-ops@.
Maybe someone has experienced the same issue and worked around it?

-Sylvain

Le 12/08/2013 18:10, Sylvain Bauza a écrit :

Hi,

./smithy -a install -p conf/personas/in-a-box/basic-neutron.yaml is 
failing because openvswitch is missing.

See logs here [1].

Does anyone know why openvswitch is needed when asking for 
linuxbridge in components/neutron.yaml?

Shall I update distros/rhel.yaml ?

-Sylvain



[1] : http://pastebin.com/7ZUR2TyU http://pastebin.com/TFkDrrDc




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




[openstack-dev] Can we use two nova schedulers at the same time?

2013-08-13 Thread sudheesh sk
Hi,

1) Can nova have more than one scheduler at a time? Standard scheduler + one 
custom scheduler?

2) If it's possible to add multiple schedulers - how should we configure it? 
Let's say I have a scheduler called 'Scheduler', so nova.conf may look like 
below:
scheduler_manager = nova.scheduler.filters.SchedulerManager
scheduler_driver = nova.scheduler.filter.Scheduler
Then how can I add a second scheduler?

3) If there are 2 schedulers - will both of these be called when creating a VM?


I am asking these questions based on a response I got from ask openstack forum

Thanks,
Sudheesh
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Weight normalization in scheduler

2013-08-13 Thread Álvaro López García
Hi again.

Thank you for your reply, Sandy. Some more comments inline.

On Thu 01 Aug 2013 (10:04), Sandy Walsh wrote:
 On 08/01/2013 09:51 AM, Álvaro López García wrote:
  On Thu 01 Aug 2013 (09:07), Sandy Walsh wrote:
  On 08/01/2013 04:24 AM, Álvaro López García wrote:
  Hi all.
 
  TL;DR: I've created a blueprint [1] regarding weight normalization.
  I would be very glad if somebody could examine and comment it.
 
  Something must have changed. It's been a while since I've done anything
  with the scheduler, but normalized weights is the way it was designed
  and implemented.
  
  It seems reasonable, but it is not there anymore:
  
   class RAMWeigher(weights.BaseHostWeigher):
       (...)
       def _weigh_object(self, host_state, weight_properties):
           """Higher weights win.  We want spreading to be the default."""
           return host_state.free_ram_mb
 
 Hmm, that's unfortunate. We use our own weighing functions internally,
 so perhaps we were unaffected by this change.

And that is why we spotted this. We wanted to implement our very own
functions apart from the RAMWeigher and we found that raw values were
used.

  The separate Weighing plug-ins are responsible for taking the specific
  units (cpu load, disk, ram, etc) and converting them into normalized
  0.0-1.0 weights. Internally the plug-ins can work however they like, but
  their output should be 0-1.
  
   With the current code, this is not true. Anyway, I think this responsibility
   should be implemented in the BaseWeightHandler rather than in each weigher.
   This way each weigher can return whatever it wants, but we will
   always be using a correct value.
 
 I think the problem with moving it to the base handler is that the base
 doesn't know the max range of the value ... of course, this could be
 passed down. But yeah, we wouldn't want to duplicate the normalization
 code itself in every function.

With the code in [1] the weigher can specify the maximum and minimum
values of the range a weight can take, if needed (in most cases just
taking these values from the list of returned values should be enough),
and the BaseWeightHandler will normalize the list before adding the
weights up on the objects.
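
For illustration, the kind of min-max normalization proposed for the handler
boils down to something like this (a sketch of the idea behind [1], not the
actual patch):

    def normalize(weights, minval=None, maxval=None):
        """Scale a list of raw weights into the range [0.0, 1.0].

        If the weigher does not declare its own bounds, take them from the
        returned values themselves.
        """
        if not weights:
            return []
        minval = min(weights) if minval is None else minval
        maxval = max(weights) if maxval is None else maxval
        if maxval == minval:
            # Every host weighs the same; any constant in range will do.
            return [0.0 for _ in weights]
        return [(w - minval) / float(maxval - minval) for w in weights]

    # Raw free-RAM values from three hosts become comparable 0-1 weights.
    print(normalize([512, 2048, 8192]))   # [0.0, 0.2, 1.0]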

I do not see any real advantage in doing it in each weigher. Apart
from the code duplication, it is difficult to maintain in the long term,
since any change to the normalization would have to be propagated to all the
weighers (ok, now there's only one ;-) ).

[1] https://review.openstack.org/#/c/27160

Cheers,
-- 
Álvaro López García  al...@ifca.unican.es
Instituto de Física de Cantabria http://alvarolopez.github.io
Ed. Juan Jordá, Campus UC  tel: (+34) 942 200 969
Avda. de los Castros s/n
39005 Santander (SPAIN)
_
http://xkcd.com/571/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Pagination

2013-08-13 Thread Henry Nash
Hi

So, a few comebacks to the various comments:

1) While I understand the idea that a client would follow the next/prev links 
returned in collections, I wasn't aware that we considered 'page'/'per-page' as 
not standardized.   We list these explicitly throughout the identity API spec 
(look in each List 'entity' example).  How I imagined it would work would be:

a) If a client did not include 'page' in the url we would not paginate
b) Once we are paginating, a client can either build the next/prev urls 
themselves if they want (by incrementing/decrementing the page number), or just 
follow the next/prev links (which come with the appropriate 'page=x' in them) 
returned in the collection which saves them having to do this.
c) Regarding implementation, the controller would continue to be able to 
paginate on behalf of drivers that couldn't, but those paginate-aware drivers 
would take over that capability (and indicate this to the controller the state 
of the pagination so that it can build the correct next/prev links)

2) On the subject of huge enumerates, options are:
a) Support a backend manager-scoped (i.e. identity/assignment/token) limit in 
the conf file which would be honored by drivers.  Assuming that you set this 
larger than your pagination limit, this would make sense whether your driver is 
paginating or not in terms of minimizing the delay in responding data as well 
as not messing up pagination.  In the non-paginated case when we hit the limit, 
should we indicate this to the client?  Maybe a 206 return code?  Although i) 
not quite sure that meets http standards, and ii) would we break a bunch of 
clients by doing this?
b) We scrap the whole idea of pagination, and just set a conf limit as in 2a).  
To make this work of course, we must implement any defined filters in the 
backend (otherwise we still end up with today's performance problems - remember 
that today, in general,  filtering is done in the controller on a full 
enumeration of the entities in question).  I was planning to implement this 
backend filtering anyway as part of (or on top of) my change, since we are 
holding (at least one of) our hands behind our backs right now by not doing so. 
 And our filters need to be powerful; do we support wildcards, for example, e.g. 
GET /users?name=fred* ?
 
Henry
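
As a strawman for 2b), pushing both the filter and the limit down to a SQL
backend would look roughly like this (pure illustration, not the Keystone SQL
driver; the list_limit option and the wildcard translation are assumptions):

    def list_users_filtered(session, user_model, name_filter=None, list_limit=None):
        """Filter and truncate in the backend instead of enumerating everything."""
        query = session.query(user_model)
        if name_filter:
            # Translate a 'fred*' style wildcard into a SQL LIKE pattern.
            query = query.filter(user_model.name.like(name_filter.replace('*', '%')))
        if list_limit:
            # Honour a per-backend cap from the config file so a huge directory
            # cannot stall the service; the caller can then flag truncation.
            query = query.limit(list_limit)
        return query.all()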

On 13 Aug 2013, at 04:40, Adam Young wrote:

 On 08/12/2013 09:22 PM, Miller, Mark M (EB SW Cloud - RD - Corvallis) wrote:
 The main reason I use user lists (i.e. keystone user-list) is to get the 
 list of usernames/IDs for other keystone commands. I do not see the value of 
 showing all of the users in an LDAP server when they are not part of the 
 keystone database (i.e. do not have roles assigned to them). Performing a 
 “keystone user-list” command against the HP Enterprise Directory locks up 
 keystone for about 1 ½ hours in that it will not perform any other commands 
 until it is done.  If it is decided that user lists are necessary, then at a 
 minimum they need to be paged to return control back to keystone for another 
 command.
 
 We need a way to tell HP ED to limit the number of rows, and to do filtering.
 
 We have a bug for the second part.  I'll open one for the limit.
 
  
 Mark
  
 From: Adam Young [mailto:ayo...@redhat.com] 
 Sent: Monday, August 12, 2013 5:27 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [keystone] Pagination
  
 On 08/12/2013 05:34 PM, Henry Nash wrote:
 Hi
  
 I'm working on extending the pagination into the backends. Right now, we 
 handle the pagination in the v3 controller class... and in fact it is 
 disabled right now and we return the whole list irrespective of whether 
 page/per-page is set in the query string, e.g.:
 Pagination is a broken concept. We should not be returning lists so long 
 that we need to paginate.  Instead, we should have query limits, and filters 
 to refine the queries.
 
 Some people are doing full user lists against LDAP.  I don't need to tell 
 you how broken that is.  Why do we allow user-list at the Domain (or 
 unscoped level)?  
 
 I'd argue that we should drop enumeration of objects in general, and 
 certainly limit the number of results that come back.  Pagination in LDAP 
 requires cursors, and thus continuos connections from Keystone to 
 LDAP...this is not a scalable solution.
 
 Do we really need this?
 
 
 
  
  def paginate(cls, context, refs):
      """Paginates a list of references by page & per_page query strings."""
      # FIXME(dolph): client needs to support pagination first
      return refs

      page = context['query_string'].get('page', 1)
      per_page = context['query_string'].get('per_page', 30)
      return refs[per_page * (page - 1):per_page * page]
  
 I wonder, both for the V3 controller (which still needs to handle pagination 
 for backends that do not support it) and the backends that do... whether we 
 could use whether 'page' is defined in the query-string as an indicator as to 
 

Re: [openstack-dev] Can we use two nova schedulers at the same time?

2013-08-13 Thread Alex Glikson
There are roughly three cases.
1. Multiple identical instances of the scheduler service. This is 
typically done to increase scalability, and is already supported (although 
sometimes may result in provisioning failures due to race conditions 
between scheduler instances). There is a single queue of provisioning 
requests, all the scheduler instances are subscribed, and each request 
will be processed by one of the instances (randomly, more or less). I 
think this is not the option that you referred to, though.
2. Multiple cells, each having its own scheduler. This is also supported, 
but is applicable only if you decide to use cells (e.g., in large-scale 
geo-distributed deployments).
3. Multiple scheduler configurations within a single (potentially 
heterogeneous) Nova deployment, with dynamic selection of 
configuration/policy at run time (for simplicity let's assume just one 
scheduler service/runtime). This capability is under development (
https://review.openstack.org/#/c/37407/) , targeting Havana. The current 
design is that the admin will be able to override scheduler properties 
(such as driver, filters, etc) using flavor extra specs. In some cases you 
would want to combine this capability with a mechanism that would ensure 
disjoint partitioning of the managed compute nodes between the drivers. 
This can be currently achieved by using host aggregates and 
AggregateInstanceExtraSpec filter of FilterScheduler. For example, if you 
want to apply driver_A on hosts in aggregate_X, and driver_B on hosts in 
aggregate_Y, you would have flavor AX specifying driver_A and properties 
that would map to aggregate_X, and similarly for BY.
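
To illustrate how that keeps the drivers on disjoint sets of hosts, the
aggregate matching involved boils down to something like the sketch below
(conceptual only, not the actual filter code; the 'sched_driver' key is an
invented example):

    def host_passes(aggregate_metadata, flavor_extra_specs):
        """Accept a host only if its aggregate metadata satisfies the flavor's extra specs."""
        for key, wanted in flavor_extra_specs.items():
            if aggregate_metadata.get(key) != wanted:
                return False
        return True

    # Flavor AX (driver_A) only lands on hosts in aggregate_X; flavor BY on aggregate_Y.
    print(host_passes({'sched_driver': 'driver_A'}, {'sched_driver': 'driver_A'}))  # True
    print(host_passes({'sched_driver': 'driver_B'}, {'sched_driver': 'driver_A'}))  # False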

Hope this helps.

Regards,
Alex



From:   sudheesh sk sud...@yahoo.com
To: openstack-dev@lists.openstack.org 
openstack-dev@lists.openstack.org, 
Date:   13/08/2013 10:30 AM
Subject:[openstack-dev] Can we use two nova schedulers at the same 
time?



Hi,

1) Can nova have more than one scheduler at a time? Standard Scheduler + 
one custom scheduler?

2) If it's possible to add multiple schedulers - how should we configure 
it? Let's say I have a scheduler called 'Scheduler', so nova.conf may look 
like below:
scheduler_manager = nova.scheduler.filters.SchedulerManager
scheduler_driver = nova.scheduler.filter.Scheduler
Then how can I add a second scheduler?

3) If there are 2 schedulers - will both of these be called when creating a 
VM?


I am asking these questions based on a response I got from ask openstack 
forum

Thanks,
Sudheesh
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Ceilometer] Nova_tests failing in jenkins

2013-08-13 Thread Julien Danjou
On Mon, Aug 12 2013, Herndon, John Luke (HPCS - Ft. Collins) wrote:

 The nova_tests are failing for a couple of different Ceilometer reviews,
 due to 'module' object has no attribute 'add_driver'.

 This review (https://review.openstack.org/#/c/41316/) had nothing to do
 with the nova_tests, yet they are failing. Any clue what's going on?

 Apologies if there is an obvious answer - I've never encountered this
 before.

FTR, Terri opened a bug about it:
  https://bugs.launchpad.net/ceilometer/+bug/1211532

-- 
Julien Danjou
# Free Software hacker # freelance consultant
# http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] security_groups extension in nova api v3

2013-08-13 Thread Day, Phil
Hi All,

If we really want to get clean separation between Nova and Neutron in the V3 
API, should we consider making the Nova V3 API only accept lists of port ids in 
the server create command?

That way there would be no need to ever pass security group information into 
Nova.

Any cross-project co-ordination (for example, automatically creating ports) 
could be handled in the client layer, rather than inside Nova.

Phil 
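
For comparison, the client-layer flow would look roughly like this (a sketch
following the 2013-era python-neutronclient and python-novaclient conventions;
treat the exact argument names as assumptions rather than a reference):

    # The client, not Nova, creates a port carrying the security groups and
    # then passes only the port id to the server create call.
    from neutronclient.v2_0 import client as neutron_client
    from novaclient.v1_1 import client as nova_client

    neutron = neutron_client.Client(username='demo', password='secret',
                                    tenant_name='demo',
                                    auth_url='http://keystone:5000/v2.0')
    nova = nova_client.Client('demo', 'secret', 'demo',
                              'http://keystone:5000/v2.0')

    # Cross-project coordination happens client-side: create the port with the
    # desired security groups already attached.
    port = neutron.create_port({'port': {'network_id': 'NET_ID',
                                         'security_groups': ['SG_ID']}})['port']

    # Nova only ever sees a list of port ids; no security group proxying needed.
    server = nova.servers.create(name='vm1', image='IMAGE_ID', flavor='FLAVOR_ID',
                                 nics=[{'port-id': port['id']}])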

 -Original Message-
 From: Melanie Witt [mailto:melw...@yahoo-inc.com]
 Sent: 09 August 2013 23:05
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [nova] security_groups extension in nova api v3
 
 Hi All,
 
 I did the initial port of the security_groups api extension to v3 and have 
 been
 testing it out in devstack while adding the expected_errors decorator to it.
 
 The guidance so far on network-related extensions in v3 is not to duplicate
 actions that can be accomplished through the neutron api and assuming nova-
 network deprecation is imminent. So, the only actions left in the extension 
 are
 the associate/disassociate security group with instance.
 
 However, when security_group_api = neutron, all associate/disassociate will do
 is call the neutron api to update the port for the instance (port device_id ==
 instance uuid) and append the specified security group. I'm wondering if this
 falls under the nova proxying we don't want to be doing and if
 associate/disassociate should be removed from the extension for v3.
 
 If removed, it would leave the extension only providing support for
 server_create (cyeoh has a patch up for review).
 
 Any opinions?
 
 Thanks,
 Melanie
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] Can we use two nova schedulers at the same time?

2013-08-13 Thread sudheesh sk
I have one quick question regarding the 3rd point you have mentioned (Multiple 
scheduler configurations
within a single (potentially heterogeneous) Nova deployment)

In this case ultimately when a VM is created - would it have gone through all 
the schedulers or just one scheduler which was dynamically selected?
Is there any chance of having 2 schedulers  impacting creation of one VM?


Thanks,
Sudheesh


 From: Alex Glikson glik...@il.ibm.com
To: sudheesh sk sud...@yahoo.com; OpenStack Development Mailing List 
openstack-dev@lists.openstack.org 
Sent: Tuesday, 13 August 2013 1:45 PM
Subject: Re: [openstack-dev] Can we use two nova schedulers at the same time?
 


There are roughly three cases. 
1. Multiple identical instances of the
scheduler service. This is typically done to increase scalability, and
is already supported (although sometimes may result in provisioning failures
due to race conditions between scheduler instances). There is a single
queue of provisioning requests, all the scheduler instances are subscribed,
and each request will be processed by one of the instances (randomly, more
or less). I think this is not the option that you referred to, though. 
2. Multiple cells, each having its own
scheduler. This is also supported, but is applicable only if you decide
to use cells (e.g., in large-scale geo-distributed deployments). 
3. Multiple scheduler configurations
within a single (potentially heterogeneous) Nova deployment, with dynamic
selection of configuration/policy at run time (for simplicity let's assume
just one scheduler service/runtime). This capability is under development
(https://review.openstack.org/#/c/37407/) , targeting Havana. The current design
is that the admin will be able to override scheduler properties (such as
driver, filters, etc) using flavor extra specs. In some cases you would
want to combine this capability with a mechanism that would ensure disjoint
partitioning of the managed compute nodes between the drivers. This can
be currently achieved by using host aggregates and AggregateInstanceExtraSpec
filter of FilterScheduler. For example, if you want to apply driver_A on
hosts in aggregate_X, and driver_B on hosts in aggregate_Y, you would have
flavor AX specifying driver_A and properties that would map to aggregate_X,
and similarly for BY. 

Hope this helps. 

Regards, 
Alex 



From: sudheesh sk sud...@yahoo.com
To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org,
Date: 13/08/2013 10:30 AM
Subject: [openstack-dev] Can we use two nova schedulers at the same time?

 


Hi, 

1) Can nova have more than one scheduler at a time? Standard scheduler + one custom scheduler?

2) If it's possible to add multiple schedulers - how should we configure it? Let's say I have a scheduler called 'Scheduler', so nova.conf may look like below:
scheduler_manager = nova.scheduler.filters.SchedulerManager
scheduler_driver = nova.scheduler.filter.Scheduler
Then how can I add a second scheduler?

3) If there are 2 schedulers - will both of these be called when creating a VM?


I am asking these questions based on a response
I got from ask openstack forum 

Thanks, 
Sudheesh
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Can we use two nova schedulers at the same time?

2013-08-13 Thread Russell Bryant
On 08/13/2013 05:57 AM, sudheesh sk wrote:
 I have one quick question regarding the 3rd point you have mentioned
 (Multiple scheduler configurations within a single (potentially
 heterogeneous) Nova deployment)
 
 In this case ultimately when a VM is created - would it have gone
 through all the schedulers or just one scheduler which was dynamically
 selected?

The plan has been to dynamically choose a scheduler (and its config) and
use only that.

 Is there any chance of having 2 schedulers  impacting creation of one VM?

Can you explain a bit more about your use case here and how you would
expect such a thing to work?

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Reply: Proposal for approving Auto HA development blueprint.

2013-08-13 Thread balaji patnala
This is a potential candidate for a new OpenStack service, like Ceilometer, Heat,
etc., providing high availability of VMs. A good topic to discuss at the Summit
for implementation after the Havana release.

On Tue, Aug 13, 2013 at 12:03 PM, Alex Glikson glik...@il.ibm.com wrote:

 Agree. Some enhancements to Nova might be still required (e.g., to handle
 resource reservations, so that there is enough capacity), but the
 end-to-end framework probably should be outside of existing services,
 probably talking to Nova, Ceilometer and potentially other components
 (maybe Cinder, Neutron, Ironic), and 'orchestrating' failure detection,
 fencing and recovery.
 Probably worth a discussion at the upcoming summit.


 Regards,
 Alex



 From: Konglingxian konglingx...@huawei.com
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org,
 Date: 13/08/2013 07:07 AM
 Subject: [openstack-dev] Reply: Proposal for approving Auto HA development blueprint.
 --



 Hi yongiman:

  Your idea is good, but I think the auto-HA operation is not OpenStack’s
  business. IMO, Ceilometer offers ‘monitoring’, Nova offers ‘evacuation’,
  and you can combine them to realize an HA operation.

 So, I’m afraid I can’t understand the specific implementation details very
 well.

 Any different opinions?

 From: yongi...@gmail.com [mailto:yongi...@gmail.com]
 Sent: 12 August 2013 20:52
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Proposal for approving Auto HA development
 blueprint.



 Hi,

  Now I am developing an auto-HA operation for VM high availability.

  This function runs entirely automatically.

  It needs another service, such as Ceilometer.

  Ceilometer monitors the compute nodes.

  When Ceilometer detects a broken compute node, it sends an API call to Nova,
  which exposes an auto-HA API.

  When Nova receives the auto-HA call, it performs the auto-HA operation.

  All auto-HA-enabled VMs that were running on the broken host are migrated
  to the auto-HA host, which is an extra compute node reserved exclusively for the Auto-HA function.

  Below are my blueprint and wiki page.

  The wiki page is not yet complete; I am adding more information about
  this function.

 Thanks

 https://blueprints.launchpad.net/nova/+spec/vm-auto-ha-when-host-broken

 https://wiki.openstack.org/wiki/Autoha
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] Planning to build Openstack Private Cloud...

2013-08-13 Thread Thierry Carrez
Jay Kumbhani wrote:
 I am planning to build an OpenStack private cloud for my company. We have a bunch 
 of Dell blade servers, but they are very old, so investing resources in them is 
 probably not a good idea.
 
 I am looking to acquire new servers with a high amount of CPU and memory 
 resources. Can anyone suggest the best-suited server brand and model for an 
 OpenStack deployment (basically for the OpenStack compute node)? We are looking 
 to build infrastructure where a minimum of ~100 VMs can run concurrently 
 with 1-4 GB of RAM each.
 
 It would be a great help if you could suggest a suitable server brand and model, 
 with reasons.
 
 Appreciate and Thanks in advance

This is a development mailing-list, focused on discussing the future of
OpenStack -- your question is unlikely to get the best answer here, if
any. You should post to the general openstack mailing-list instead
(openst...@lists.openstack.org). For more information, see:

https://wiki.openstack.org/wiki/Mailing_Lists

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Savanna PTL election proposal

2013-08-13 Thread Thierry Carrez
Matthew Farrellee wrote:
  2. Candidate nomination -
   a. anyone can list names in
 https://etherpad.openstack.org/savanna-ptl-candidates-0
   b. anyone mentioned during this week's IRC meeting
   c. both (a) and (b)
   - Current direction is to be inclusive and thus (c)

We do self-nomination (people who want to run nominate themselves)
because then you don't have to go through the painful step of
*confirming* candidates (people may not agree to run).

  3. Electorate -
   a. all AUTHORS on the Savanna repositories
   b. all committers (git log --author) on Savanna repos since Grizzly
 release
   c. all committers since Savanna inception
   d. savanna-core members (currently 2 people)
   e. committers w/ filter on number of commits or size of commits
   - Current direction is to be broadly inclusive (not (d) or (e)) thus
 (a), it is believed that (a) ~= (b) ~= (c).

If you want to make it like OpenStack it should be all Savanna recent
authors (last year), as given by git. Maybe the infra team could even
give you a list of emails for use in CIVS.

  4. Duration of election -
   a. 1 week (from 15 Aug meeting to 22 Aug meeting)
  5. Term -
   a. effective immediately through next full OpenStack election cycle
 (i.e. now until I release, 6 mo+)
   b. effective immediately until min(6 mo, incubation)
   c. effective immediately until end of incubation
   - Current direction is any option that aligns with the standard
 OpenStack election cycle

I think (a) would work well.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Reminder: Project release status meeting - 21:00 UTC

2013-08-13 Thread Thierry Carrez
Today in the Project & release status meeting, more havana-3 goodness.

Feel free to add extra topics to the agenda:
[1] http://wiki.openstack.org/Meetings/ProjectMeeting

All Technical Leads for integrated programs should be present (if you
can't make it, please name a substitute on [1]). Other program leads and
everyone else is very welcome to attend.

The meeting will be held at 21:00 UTC on the #openstack-meeting channel
on Freenode IRC. You can look up how this time translates locally at:
[2] http://www.timeanddate.com/worldclock/fixedtime.html?iso=20130813T21

See you there,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] On the road to v0.2: the stable branch 'release-0.2' is created

2013-08-13 Thread Denis Koryavov
Hello folks,

We are in the homestretch to a new stable release - Murano v0.2. All
planned blueprints (see [1]) are implemented or are in 'beta' state. Thus,
today we prepared a branch 'release-v0.2' which is intended to be the
stable release.

Starting today, all v0.2-related commits should be pushed to this branch.
To do this, just do the following:

git checkout release-0.2
git checkout -b MY-TOPIC-BRANCH
git commit
git review release-0.2

(for more information please see [2]).

First of all, the branch is intended for bug fixing and stabilization of
our code base, so acceptance of new code will be limited. If you want to
commit a big change, it is better to push it to the 'master' branch, which is
open for new features from today.

The final release is scheduled on 5th September.

[1] https://launchpad.net/murano/+milestone/0.2
[2] https://wiki.openstack.org/wiki/GerritJenkinsGithub#Milestones

Have a nice day.

--
Denis
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [climate] Mirantis proposal to extend Climate to support virtual resources reservation

2013-08-13 Thread Patrick Petit

Hi Nikolay,
Please see comments inline.
Thanks
Patrick
On 8/12/13 5:28 PM, Nikolay Starodubtsev wrote:


Hi, again!


Patrick, I'll try to explain why we believe in some base actions 
like instance starting/deleting in Climate. We are thinking about the 
following workflow (which will be quite comfortable and user friendly, 
and we now have more than one customer who really wants it):



1) User goes to the OpenStack dashboard and asks Heat to reserve 
several stacks.



2) Heat goes to the Climate and creates all needed leases. Also Heat 
reserves all resources for these stacks.



3) When time comes, user goes to the OpenStack cloud and here we think 
he wants to see already working stacks (ideal version) or (at least) 
already started. If no, user will have to go to the Dashboard and wake 
up all the stacks he or she reserved. This means several actions, that 
may be done for the user automatically, because it will be needed to 
do them no matter what is the aim for these stacks - if user reserves 
them, he / she needs them.



We understand, that there are situations when these actions may be 
done by some other system (like some hypothetical Jenkins). But if we 
speak about users, this will be useful. We also understand that this 
default way of behavior should be implemented in some kind of long 
term life cycle management system (which is not Heat), but we have no 
one in the OpenStack now. Because the best may to implement it is to 
use Convection, that is only proposal now...



That’s why we think that for the behavior like “user just reserves 
resources and then does anything he / she wants to” physical leases 
are better variant, when user may reserve several nodes and use it in 
different ways. For the virtual reservations it will be better to 
start / delete them as a default way (for something unusual Heat may 
be used and modified).


Okay. So let's bootstrap it this way then. There will be two different 
ways the reservation service will deal with reservations depending on 
whether it's physical or virtual. All things being equal, the future will 
tell how things settle. We will focus on the physical host reservation 
side of things. I think having this contradictory debate helped us to 
understand each other's use cases and the requirements that the initial 
design has to cope with. Francois, who already submitted a bunch of code 
for review, will not return from vacation until the end of August. So 
things on our side are a little on standby until he returns, but it 
might help if you could take a look at it. I suggest you start with your 
vision and we will iterate from there. Is that okay with you?




Do you think that this workflow is useful too and if so can you 
propose another implementation  variant for it?



Thank you.




On Mon, Aug 12, 2013 at 1:55 PM, Patrick Petit patrick.pe...@bull.net 
mailto:patrick.pe...@bull.net wrote:


On 8/9/13 3:05 PM, Nikolay Starodubtsev wrote:

Hello, Patrick!

We have several reasons to think that for the virtual resources
this possibility is interesting. If we speak about physical
resources, user may use them in the different ways, that's why it
is impossible to include base actions with them to the
reservation service. But speaking about virtual reservations,
let's imagine user wants to reserve virtual machine. He knows
everything about it - its parameters, flavor and time to be
leased for. Really, in this case user wants to have already
working (or at least starting to work) reserved virtual machine
and it would be great to include this opportunity to the
reservation service.
We are thinking about base actions for the virtual reservations
that will be supported by Climate, like boot/delete for instance,
create/delete for volume and create/delete for the stacks. The
same will be with volumes, IPs, etc. As for more complicated
behaviour, it may be implemented in Heat. This will make
reservations simpler to use for the end users.

Don't you think so?

Well yes and and no. It really depends upon what you put behind
those lease actions. The view I am trying to sustain is separation
of duties to keep the service simple, ubiquitous and non
prescriptive of a certain kind of usage pattern. In other words,
keep Climate for reservation of capacity (physical or virtual),
Heat for orchestration, and so forth. ... Consider for example the
case of reservation as a non technical act but rather as a
business enabler for wholesales activities. Don't need, and
probably don't want to start or stop any resource there. I do not
deny that there are cases where it is desirable but then how
reservations are used and composed together at the end of the day
mainly depends on exogenous factors which couldn't be anticipated
because they are driven by the business.

And so, rather than coupling reservations with wired resource
instantiation 

Re: [openstack-dev] [climate] Mirantis proposal to extend Climate to support virtual resources reservation

2013-08-13 Thread Dina Belova
Patrick, we are really glad we've found the way to deal with both use cases.


As for your patches, that are on review and were already merged, we are
thinking about the following actions to commit:


1) The Oslo code was merged, but it is a little bit of an old variant (with the
setup and version modules, which are not really used now because of the new
per-project approach). So we (Mirantis) can update it as a first step.


2) We need to implement a comfortable-to-use DB layer to allow the use of
different DB types (SQL as well as NoSQL); that's the second step. Here
we'll also create new abstractions like lease and physical or virtual
reservations (I think we can implement this before the end of August).


3) After that we'll have the opportunity to modify Francois' patches for
the physical hosts reservation so that they become part of our new common
vision together.


Thank you.


On Tue, Aug 13, 2013 at 4:23 PM, Patrick Petit patrick.pe...@bull.netwrote:

  Hi Nikolay,
 Please see comments inline.
 Thanks
 Patrick

 On 8/12/13 5:28 PM, Nikolay Starodubtsev wrote:

  Hi, again!

  Patrick, I'll try to explain why we believe in some base actions like
  instance starting/deleting in Climate. We are thinking about the following
  workflow (which will be quite comfortable and user friendly, and we now have
  more than one customer who really wants it):

  1) User goes to the OpenStack dashboard and asks Heat to reserve several
 stacks.

  2) Heat goes to the Climate and creates all needed leases. Also Heat
 reserves all resources for these stacks.

  3) When time comes, user goes to the OpenStack cloud and here we think
 he wants to see already working stacks (ideal version) or (at least)
 already started. If no, user will have to go to the Dashboard and wake up
 all the stacks he or she reserved. This means several actions, that may be
 done for the user automatically, because it will be needed to do them no
 matter what is the aim for these stacks - if user reserves them, he / she
 needs them.

  We understand, that there are situations when these actions may be done
 by some other system (like some hypothetical Jenkins). But if we speak
 about users, this will be useful. We also understand that this default way
 of behavior should be implemented in some kind of long term life cycle
 management system (which is not Heat), but we have no one in the OpenStack
 now. Because the best may to implement it is to use Convection, that is
 only proposal now...

  That’s why we think that for the behavior like “user just reserves
 resources and then does anything he / she wants to” physical leases are
 better variant, when user may reserve several nodes and use it in different
 ways. For the virtual reservations it will be better to start / delete them
 as a default way (for something unusual Heat may be used and modified).

 Okay. So let's bootstrap it this way then. There will be two different
 ways the reservation service will deal with reservations depending on
 whether it's physical or virtual. All things being equal, the future will tell
 how things settle. We will focus on the physical host reservation side of
 things. I think having this contradictory debate helped us to understand each
 other's use cases and the requirements that the initial design has to cope with.
 Francois, who already submitted a bunch of code for review, will not return
 from vacation until the end of August. So things on our side are a little
 on standby until he returns, but it might help if you could take a look
 at it. I suggest you start with your vision and we will iterate from there.
 Is that okay with you?



  Do you think that this workflow is useful too and if so can you propose
 another implementation  variant for it?

  Thank you.



  On Mon, Aug 12, 2013 at 1:55 PM, Patrick Petit patrick.pe...@bull.netwrote:

  On 8/9/13 3:05 PM, Nikolay Starodubtsev wrote:

 Hello, Patrick!

 We have several reasons to think that for the virtual resources this
 possibility is interesting. If we speak about physical resources, user may
 use them in the different ways, that's why it is impossible to include base
 actions with them to the reservation service. But speaking about virtual
 reservations, let's imagine user wants to reserve virtual machine. He knows
 everything about it - its parameters, flavor and time to be leased for.
 Really, in this case user wants to have already working (or at least
 starting to work) reserved virtual machine and it would be great to include
 this opportunity to the reservation service.

  We are thinking about base actions for the virtual reservations that
 will be supported by Climate, like boot/delete for instance, create/delete
 for volume and create/delete for the stacks. The same will be with volumes,
 IPs, etc. As for more complicated behaviour, it may be implemented in Heat.
 This will make reservations simpler to use for the end users.

 Don't you think so?

  Well yes and and no. It really depends upon what you put behind those
 lease 

[openstack-dev] [Ceilometer] Question about get_meters query using a JOIN

2013-08-13 Thread Thomas Maddox
Hey team,

I was curious about why we went for a JOIN here rather than just using the 
meter table initially? 
https://github.com/openstack/ceilometer/blob/master/ceilometer/storage/impl_sqlalchemy.py#L336-L391.
 Doug had mentioned that some performance testing had gone on with some of 
these queries, so before writing up requests to change this to the meter table 
only, I wanted to check whether this was a result of that performance testing, 
e.g. that the JOIN was less expensive than a DISTINCT.
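
For readers without the code open, the two query shapes being compared are
roughly these (generic SQLAlchemy over a hypothetical sample/resource pair of
tables, not Ceilometer's actual models):

    import sqlalchemy as sa

    metadata = sa.MetaData()
    resource = sa.Table('resource', metadata,
                        sa.Column('id', sa.String(255), primary_key=True))
    sample = sa.Table('sample', metadata,
                      sa.Column('id', sa.Integer, primary_key=True),
                      sa.Column('counter_name', sa.String(255)),
                      sa.Column('resource_id', sa.String(255),
                                sa.ForeignKey('resource.id')))

    # Option A: join against the resource table and group to get one row per
    # (resource, counter) pair.
    join_query = (sa.select([sample.c.counter_name, sample.c.resource_id])
                  .select_from(sample.join(resource))
                  .group_by(sample.c.counter_name, sample.c.resource_id))

    # Option B: stay on the meter/sample table alone and deduplicate with DISTINCT.
    distinct_query = sa.select([sample.c.counter_name, sample.c.resource_id]).distinct()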

Cheers!

-Thomas
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Reply: Proposal for approving Auto HA development blueprint.

2013-08-13 Thread yongiman
To realize the auto-HA function, we need a monitoring service like Ceilometer.

Ceilometer monitors the status of compute nodes (network interface connection, 
health check, etc.).

What I focus on is that this operation happens automatically.

Nova exposes an auto-HA API. When Nova receives an auto-HA API call, VMs 
automatically migrate to the auto-HA host (which is an extra compute node 
reserved only for auto-HA).

All auto-HA information is stored in the auto_ha_hosts table.

In this table, the 'used' column of the auto-HA host is changed to true.

The administrator checks the broken compute node and fixes (or replaces) it.

After the compute node is fixed, the VMs are migrated back to operating compute 
nodes, and the auto-HA host is empty again.

When the number of running VMs on the auto-HA host is zero, a periodic task sets 
the 'used' column back to false so the host can be used again.

Combination with a monitoring service is important; however, in this blueprint I 
want to realize Nova's auto-HA operation.
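
To make the reuse step concrete, the periodic task described above might look
roughly like this (an illustrative sketch; the table and helper names come from
this description, not from real Nova code):

    def reset_idle_auto_ha_hosts(auto_ha_hosts, running_vms_by_host):
        """Flip 'used' back to False for reserved hosts that no longer run VMs.

        auto_ha_hosts: list of dicts such as {'host': 'spare-1', 'used': True}
        running_vms_by_host: mapping of host name -> number of running VMs
        """
        for entry in auto_ha_hosts:
            if entry['used'] and running_vms_by_host.get(entry['host'], 0) == 0:
                entry['used'] = False   # the host can absorb the next failure

    # Example: spare-1 has been drained back to the repaired compute nodes.
    hosts = [{'host': 'spare-1', 'used': True}]
    reset_idle_auto_ha_hosts(hosts, {'spare-1': 0})
    print(hosts)   # [{'host': 'spare-1', 'used': False}]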

My wiki page is still under construction; I will fill it out as soon as possible.

I am looking forward to your advice. Thank you very much!
 



Sent from my iPad

On 2013. 8. 13., at 오후 8:01, balaji patnala patnala...@gmail.com wrote:

 This is a potential candidate for a new OpenStack service, like Ceilometer, Heat, 
 etc., providing high availability of VMs. A good topic to discuss at the Summit 
 for implementation after the Havana release.
 
 On Tue, Aug 13, 2013 at 12:03 PM, Alex Glikson glik...@il.ibm.com wrote:
 Agree. Some enhancements to Nova might be still required (e.g., to handle 
 resource reservations, so that there is enough capacity), but the end-to-end 
 framework probably should be outside of existing services, probably talking 
 to Nova, Ceilometer and potentially other components (maybe Cinder, Neutron, 
 Ironic), and 'orchestrating' failure detection, fencing and recovery. 
 Probably worth a discussion at the upcoming summit. 
 
 
 Regards, 
 Alex 
 
 
 
 From: Konglingxian konglingx...@huawei.com
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org,
 Date: 13/08/2013 07:07 AM
 Subject: [openstack-dev] Reply: Proposal for approving Auto HA development blueprint.
 
 
 
 Hi yongiman: 
   
 Your idea is good, but I think the auto-HA operation is not OpenStack’s 
 business. IMO, Ceilometer offers ‘monitoring’, Nova offers ‘evacuation’, 
 and you can combine them to realize an HA operation. 
   
 So, I’m afraid I can’t understand the specific implementation details very 
 well. 
   
 Any different opinions? 
   
 From: yongi...@gmail.com [mailto:yongi...@gmail.com]
 Sent: 12 August 2013 20:52
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Proposal for approving Auto HA development blueprint.
   
   
   
 Hi, 
   
 Now I am developing an auto-HA operation for VM high availability.

 This function runs entirely automatically.

 It needs another service, such as Ceilometer.

 Ceilometer monitors the compute nodes.

 When Ceilometer detects a broken compute node, it sends an API call to Nova,
 which exposes an auto-HA API.

 When Nova receives the auto-HA call, it performs the auto-HA operation.

 All auto-HA-enabled VMs that were running on the broken host are migrated to
 the auto-HA host, which is an extra compute node reserved exclusively for the Auto-HA function.

 Below are my blueprint and wiki page.

 The wiki page is not yet complete; I am adding more information about
 this function.
   
 Thanks 
   
 https://blueprints.launchpad.net/nova/+spec/vm-auto-ha-when-host-broken 
   
 https://wiki.openstack.org/wiki/Autoha
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] Swift 1.9.1 released

2013-08-13 Thread John Dickinson
Swift 1.9.1, as described below, has been released. Download links to the 
tarball are at https://launchpad.net/swift/havana/1.9.1


--John


On Aug 7, 2013, at 10:21 AM, John Dickinson m...@not.mn wrote:

 Today we have released Swift 1.9.1 (RC1).
 
 The tarball for the RC is at
 http://tarballs.openstack.org/swift/swift-milestone-proposed.tar.gz
 
 This release was initially prompted by a bug found by Peter Portante
 (https://bugs.launchpad.net/swift/+bug/1196932) and includes a patch
 for it. All clusters are recommended to upgrade to this new release.
 As always, you can upgrade to this version of Swift with no end-user
 downtime.
 
 In addition to the patch mentioned above, this release contains a few
 other important features:
 
 * The default worker count has changed from 1 to auto. With the new
  default value for workers, the proxy, container, account & object
  wsgi servers will spawn as many workers per process as you have cpu
  cores.
 
 * A reveal_sensitive_prefix config parameter was added to the
  proxy_logging config. This value allows the auth token to be
  obscured in the logs.
 
 * The Keystone middleware will now enforce that the reseller_prefix
  ends in an underscore. Previously, this was a recommendation, and
  now it is enforced.
 
 There are several other changes in this release. I'd encourage you to
 read the full changelog at
 https://github.com/openstack/swift/blob/master/CHANGELOG.
 
 On the community side, this release includes the work of 7 new
 contributors. They are:
 
 Alistair Coles (alistair.co...@hp.com)
 Thomas Leaman (thomas.lea...@hp.com)
 Dirk Mueller (d...@dmllr.de)
 Newptone (xingc...@unitedstack.com)
 Jon Snitow (other...@swiftstack.com)
 TheSriram (sri...@klusterkloud.com)
 Koert van der Veer (ko...@cloudvps.com)
 
 Thanks to everyone for your hard work. I'm very happy with where Swift
 is and where we are going together.
 
 --John
 
 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [climate] Mirantis proposal to extend Climate to support virtual resources reservation

2013-08-13 Thread Patrick Petit

Hi Dina,
Sounds great! Speaking on behalf of Francois feel free to proceed with 
points below. I don't think he would have issues with that. We'll close 
the loop when he returns. BTW, did you get a chance to take a look at 
Haizea's design and implementation?

Thanks
Patrick
On 8/13/13 3:08 PM, Dina Belova wrote:


Patrick, we are really glad we've found the way to deal with both use 
cases.



As for your patches, that are on review and were already merged, we 
are thinking about the following actions to commit:



1) Oslo was merged, but it is a little bit of an old version (with the setup and 
version modules, which are not really used now because of the new pbr 
project). So we (Mirantis) can update it as a first step.


2) We need to implement comfortable to use DB layer to allow using of 
different DB types (SQL and NoSQL as well), so that's the second step. 
Here we'll also create new abstractions like lease and physical or 
virtual reservations (I think we can implement it really before end of 
August).



3) After that we'll have the opportunity to modify Francois' patches 
for the physical hosts reservation in the way to be a part of our new 
common vision together.



Thank you.



On Tue, Aug 13, 2013 at 4:23 PM, Patrick Petit patrick.pe...@bull.net 
mailto:patrick.pe...@bull.net wrote:


Hi Nikolay,
Please see comments inline.
Thanks
Patrick

On 8/12/13 5:28 PM, Nikolay Starodubtsev wrote:


Hi, again!


Partick, I'll try to explain why do we belive in some base
actions like instance starting/deleting in Climate. We are
thinking about the following workflow (that will be quite
comfortable and user friendly, and now we have more than one
customer who really want it):


1) User goes to the OpenStack dashboard and asks Heat to reserve
several stacks.


2) Heat goes to the Climate and creates all needed leases. Also
Heat reserves all resources for these stacks.


3) When time comes, user goes to the OpenStack cloud and here we
think he wants to see already working stacks (ideal version) or
(at least) already started. If no, user will have to go to the
Dashboard and wake up all the stacks he or she reserved. This
means several actions, that may be done for the user
automatically, because it will be needed to do them no matter
what is the aim for these stacks - if user reserves them, he /
she needs them.


We understand, that there are situations when these actions may
be done by some other system (like some hypothetical Jenkins).
But if we speak about users, this will be useful. We also
understand that this default way of behavior should be
implemented in some kind of long term life cycle management
system (which is not Heat), but we have no one in the OpenStack
now. Because the best may to implement it is to use Convection,
that is only proposal now...


That's why we think that for the behavior like user just
reserves resources and then does anything he / she wants to
physical leases are better variant, when user may reserve several
nodes and use it in different ways. For the virtual reservations
it will be better to start / delete them as a default way (for
something unusual Heat may be used and modified).


Okay. So let's bootstrap it this way then. There will be two
different ways the reservation service will deal with reservations
depending on whether its physical or virtual. All things being
equal, future will tell how things settle. We will focus on the
physical host reservation side of things. It think having this
contradictory debate helped to understand each others use cases
and requirements that the initial design has to cope with.
Francois who already submitted a bunch of code for review will not
return from vacation until the end of August. So things on our
side are a little on the standby until he returns but it might
help if you could take a look at it. I suggest you start with your
vision and we will iterate from there. Is that okay with you?




Do you think that this workflow is useful too and if so can you
propose another implementation  variant for it?


Thank you.




On Mon, Aug 12, 2013 at 1:55 PM, Patrick Petit
patrick.pe...@bull.net mailto:patrick.pe...@bull.net wrote:

On 8/9/13 3:05 PM, Nikolay Starodubtsev wrote:

Hello, Patrick!

We have several reasons to think that for the virtual
resources this possibility is interesting. If we speak about
physical resources, user may use them in the different ways,
that's why it is impossible to include base actions with
them to the reservation service. But speaking about virtual
reservations, let's imagine user wants to reserve virtual
machine. He knows everything about it - its parameters,
flavor and time to be leased for. Really, in this case user

[openstack-dev] Neutron a quick qpid revert

2013-08-13 Thread Dan Prince
All of my Neutron tests are failing this morning in SmokeStack. We need a quick 
revert to fix the qpid RPC implementation:

https://review.openstack.org/41689

https://bugs.launchpad.net/neutron/+bug/1211778

I figure we may as well revert this quick and then just wait on oslo.messaging 
to fix the original RPC concern here?

Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [climate] Mirantis proposal to extend Climate to support virtual resources reservation

2013-08-13 Thread Dina Belova
Patrick, I had an opportunity to take just a quick look at the Haizea project
(only its ideas and some common things). Actually we have not had much time
to investigate it in a better way, so we'll do that this week.


On Tue, Aug 13, 2013 at 5:50 PM, Patrick Petit patrick.pe...@bull.netwrote:

  Hi Dina,
 Sounds great! Speaking on behalf of Francois feel free to proceed with
 points below. I don't think he would have issues with that. We'll close the
 loop when he returns. BTW, did you get a chance to take a look at Haizea's
 design and implementation?
 Thanks
 Patrick
 On 8/13/13 3:08 PM, Dina Belova wrote:

  Patrick, we are really glad we've found the way to deal with both use
 cases.


  As for your patches, that are on review and were already merged, we are
 thinking about the following actions to commit:


  1) Oslo was merged, but it is a little bit of an old version (with the setup and
 version modules, which are not really used now because of the new pbr project).
 So we (Mirantis) can update it as a first step.

  2) We need to implement comfortable to use DB layer to allow using of
 different DB types (SQL and NoSQL as well), so that's the second step. Here
 we'll also create new abstractions like lease and physical or virtual
 reservations (I think we can implement it really before end of August).


  3) After that we'll have the opportunity to modify Francois' patches for
 the physical hosts reservation in the way to be a part of our new common
 vision together.


  Thank you.


 On Tue, Aug 13, 2013 at 4:23 PM, Patrick Petit patrick.pe...@bull.netwrote:

  Hi Nikolay,
 Please see comments inline.
 Thanks
 Patrick

 On 8/12/13 5:28 PM, Nikolay Starodubtsev wrote:

  Hi, again!

  Partick, I’ll try to explain why do we belive in some base actions like
 instance starting/deleting in Climate. We are thinking about the following
 workflow (that will be quite comfortable and user friendly, and now we have
 more than one customer who really want it):

  1) User goes to the OpenStack dashboard and asks Heat to reserve
 several stacks.

  2) Heat goes to the Climate and creates all needed leases. Also Heat
 reserves all resources for these stacks.

  3) When time comes, user goes to the OpenStack cloud and here we think
 he wants to see already working stacks (ideal version) or (at least)
 already started. If no, user will have to go to the Dashboard and wake up
 all the stacks he or she reserved. This means several actions, that may be
 done for the user automatically, because it will be needed to do them no
 matter what is the aim for these stacks - if user reserves them, he / she
 needs them.

  We understand, that there are situations when these actions may be done
 by some other system (like some hypothetical Jenkins). But if we speak
 about users, this will be useful. We also understand that this default way
 of behavior should be implemented in some kind of long term life cycle
 management system (which is not Heat), but we have no one in the OpenStack
 now. Because the best may to implement it is to use Convection, that is
 only proposal now...

  That’s why we think that for the behavior like “user just reserves
 resources and then does anything he / she wants to” physical leases are
 better variant, when user may reserve several nodes and use it in different
 ways. For the virtual reservations it will be better to start / delete them
 as a default way (for something unusual Heat may be used and modified).

  Okay. So let's bootstrap it this way then. There will be two different
 ways the reservation service will deal with reservations depending on
 whether its physical or virtual. All things being equal, future will tell
 how things settle. We will focus on the physical host reservation side of
 things. It think having this contradictory debate helped to understand each
 others use cases and requirements that the initial design has to cope with.
 Francois who already submitted a bunch of code for review will not return
 from vacation until the end of August. So things on our side are a little
 on the standby until he returns but it might help if you could take a look
 at it. I suggest you start with your vision and we will iterate from there.
 Is that okay with you?



  Do you think that this workflow is useful too and if so can you propose
 another implementation  variant for it?

  Thank you.



  On Mon, Aug 12, 2013 at 1:55 PM, Patrick Petit 
 patrick.pe...@bull.netwrote:

  On 8/9/13 3:05 PM, Nikolay Starodubtsev wrote:

 Hello, Patrick!

 We have several reasons to think that for the virtual resources this
 possibility is interesting. If we speak about physical resources, user may
 use them in the different ways, that's why it is impossible to include base
 actions with them to the reservation service. But speaking about virtual
 reservations, let's imagine user wants to reserve virtual machine. He knows
 everything about it - its parameters, flavor and time to be leased for.
 Really, in this case user 

[openstack-dev] Hyper-V meeting agenda

2013-08-13 Thread Peter Pouliot
Hi All,

Agenda for today's meeting is as follows.


* H3 Milestones

* Current patches in for review

o   Nova

o   Cinder

* Hyper-V Puppet Module Updates

* CI Discussion

Peter J. Pouliot, CISSP
Senior SDET, OpenStack

Microsoft
New England Research & Development Center
One Memorial Drive, Cambridge, MA 02142
ppoul...@microsoft.commailto:ppoul...@microsoft.com | Tel: +1(857) 453 6436

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack Savanna issue

2013-08-13 Thread Matthew Farrellee

On 08/04/2013 12:01 PM, Linus Nova wrote:

HI,

I installed OpenStack Savanna in the OpenStack Grizzly release. As you can
see in savanna.log, the savanna-api starts and operates correctly.

When I launch the cluster, the VMs start correctly but soon after they
are removed as shown in the log file.

Do you have any ideas on what is happening?

Best regards.

Linus Nova


Linus,

I don't know if your issue has been resolved, but if it hasn't I invite 
you to ask it at -


   https://answers.launchpad.net/savanna/+addquestion

Best,


matt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal to support new Cinder driver for CloudByte's Elastistor

2013-08-13 Thread Amit Das
Thanks a lot... This should give us a head start.

Regards,
Amit
*CloudByte Inc.* http://www.cloudbyte.com/


On Tue, Aug 13, 2013 at 5:14 PM, Thierry Carrez thie...@openstack.orgwrote:

 Amit Das wrote:
  We have implemented a CINDER driver for our QoS aware storage solution
  (CloudByte Elastistor).
 
  We would like to integrate this driver code with the next version of
  OpenStack (Havana).
 
  Please let us know the approval processes to be followed for this new
  driver support.

 See https://wiki.openstack.org/wiki/Release_Cycle and
 https://wiki.openstack.org/wiki/Blueprints for the beginning of an answer.

 Note that we are pretty late in the Havana cycle with lots of features
 which have been proposed a long time ago still waiting for reviews and
 merging... so it's a bit unlikely that a new feature would be added now
 to that already-overloaded backlog.

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Pagination

2013-08-13 Thread Dolph Mathews
On Tue, Aug 13, 2013 at 3:10 AM, Henry Nash hen...@linux.vnet.ibm.comwrote:

 Hi

 So few comebacks to the various comments:

 1) While I understand the idea that a client would follow the next/prev
 links returned in collections, I wasn't aware that we considered
 'page'/'per-page' as not standardized. We list these explicitly throughout
 the identity API spec (look in each List 'entity' example).


They were essentially relics from a very early draft of the spec that were
thoughtlessly copy/pasted around (I'm guilty of this myself)... they were
recently cleaned up and removed from the spec.


 How I imagined it would work would be:

 a) If a client did not include 'page' in the url we would not paginate


Make that a deployment option? per_page could simply default to a very high
value.


 b) Once we are paginating, a client can either build the next/prevs urls
 themselves if they want (by incrementing/decrementing the page number), or
 just follow the next/prev links (which come with the appropriate 'page=x'
 in them) returned in the collection which saves them having to do this.


I'm obviously very opposed to this because it unreasonably forces a single
approach to pagination across all drivers.


 c) Regarding implementation, the controller would continue to be able to
 paginate on behalf of drivers that couldn't, but those paginate-aware
 drivers would take over that capability (and indicate this to the
 controller the state of the pagination so that it can build the correct
 next/prev links)

 2) On the subject of huge enumerates, options are:
 a) Support a backend manager scoped (i.e. identity/assignent/token) limit
 in the conf file which would be honored by drivers.  Assuming that you set
 this larger than your pagination limit, this would make sense whether your
 driver is paginating or not in terms of minimizing the delay in responding
 data as well as not messing up pagination.  In the non-paginated case when
 we hit the limit, should we indicate this to the client?  Maybe a 206
 return code?  Although i) not quite sure that meets http standards, and ii)
 would we break a bunch of clients by doing this?


I'm not clear on what kind of limit you're referring to? A 206 sounds
unexpected for this use case though.


 b) We scrap the whole idea of pagination, and just set a conf limit as in
 2a).  To make this work of course, we must implement any defined filters in
 the backend (otherwise we still end up with today's performance problems -
 remember that today, in general,  filtering is done in the controller on a
 full enumeration of the entities in question).  I was planning to implement
 this backend filtering anyway as part of (or on top of) my change, since we
 are holding (at least one of) our hands behind our backs right now by not
 doing so.  And our filters need to be powerful, do we support wildcards for
 example, e.g. GET /users?name = fred*  ?


There were some discussions on this topic from about a year ago that I'd
love to continue. I don't want to invent a new language, but we do need
to settle on an approach that we can apply across a wide variety of
backends. That probably means keeping it very simple (like your example).
Asterisks need to be URL encoded, though. One suggestion I particularly
liked (which happens to avoid claiming perfectly valid characters -
asterisks - as special characters) was to adopt the syntax used in the
django ORM's filter function:

  ?name__startswith=Fred
  ?name__istartswith=fred
  ?name__endswith=Fred
  ?name__iendswith=fred
  ?name__contains=Fred
  ?name__icontains=fred

This probably represents the immediately useful subset of parameters for
us, but for more:

  https://docs.djangoproject.com/en/dev/topics/db/queries/
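
Purely as a sketch of what I mean (not keystone code; all names here are
invented for illustration), a SQL-backed driver could translate those
suffixes into SQLAlchemy operators with something like:

    # Illustrative only -- suffix-to-operator mapping for a SQL-backed driver.
    from sqlalchemy import func

    SUFFIX_OPS = {
        'startswith': lambda col, v: col.like(v + '%'),
        'istartswith': lambda col, v: func.lower(col).like(v.lower() + '%'),
        'endswith': lambda col, v: col.like('%' + v),
        'iendswith': lambda col, v: func.lower(col).like('%' + v.lower()),
        'contains': lambda col, v: col.like('%' + v + '%'),
        'icontains': lambda col, v: func.lower(col).like('%' + v.lower() + '%'),
    }

    def apply_filter(query, model, param, value):
        # '?name__startswith=Fred' arrives here as param='name__startswith'.
        if '__' in param:
            attr, _, suffix = param.rpartition('__')
            column = getattr(model, attr)
            return query.filter(SUFFIX_OPS[suffix](column, value))
        # no suffix means plain equality
        return query.filter(getattr(model, param) == value)

Backends that can't do this natively (e.g. some LDAP setups) would just fall
back to filtering at a higher layer.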


 Henry

 On 13 Aug 2013, at 04:40, Adam Young wrote:

 On 08/12/2013 09:22 PM, Miller, Mark M (EB SW Cloud - R&D - Corvallis)
 wrote:

 The main reason I use user lists (i.e. keystone user-list) is to get the
 list of usernames/IDs for other keystone commands. I do not see the value
 of showing all of the users in an LDAP server when they are not part of the
 keystone database (i.e. do not have roles assigned to them). Performing a
 “keystone user-list” command against the HP Enterprise Directory locks up
 keystone for about 1 ½ hours in that it will not perform any other commands
 until it is done.  If it is decided that user lists are necessary, then at
 a minimum they need to be paged to return control back to keystone for
 another command.


 We need a way to tell HP ED to limit the number of rows, and to do
 filtering.

 We have a bug for the second part.  I'll open one for the limit.

  


 Mark


 *From:* Adam Young [mailto:ayo...@redhat.com ayo...@redhat.com]
 *Sent:* Monday, August 12, 2013 5:27 PM
 *To:* openstack-dev@lists.openstack.org
 *Subject:* Re: [openstack-dev] [keystone] Pagination


 On 08/12/2013 05:34 PM, Henry Nash wrote:

 Hi 


 I'm working on extending the 

Re: [openstack-dev] Proposal to support new Cinder driver for CloudByte's Elastistor

2013-08-13 Thread John Griffith
Hi Amit,

I think part of what Thierry was alluding to was the fact that feature
freeze for Havana is next week.  Also in the past we've been trying to
make sure that folks did not introduce BP's for new drivers in the last
release milestone.  There are other folks that are in this position,
however they've also proposed their BP's for their driver and sent updates
to the Cinder team since H1.

That being said, if you already have working code that you think is ready
and can be submitted we can see what the rest of the Cinder team thinks.
 No promises though that your code will make it in, there are a number of
things already in process that will take priority in terms of review time
etc.

Thanks,
John


On Tue, Aug 13, 2013 at 8:42 AM, Amit Das amit@cloudbyte.com wrote:

 Thanks a lot... This should give us a head start.

 Regards,
 Amit
 *CloudByte Inc.* http://www.cloudbyte.com/


 On Tue, Aug 13, 2013 at 5:14 PM, Thierry Carrez thie...@openstack.orgwrote:

 Amit Das wrote:
  We have implemented a CINDER driver for our QoS aware storage solution
  (CloudByte Elastistor).
 
  We would like to integrate this driver code with the next version of
  OpenStack (Havana).
 
  Please let us know the approval processes to be followed for this new
  driver support.

 See https://wiki.openstack.org/wiki/Release_Cycle and
 https://wiki.openstack.org/wiki/Blueprints for the beginning of an
 answer.

 Note that we are pretty late in the Havana cycle with lots of features
 which have been proposed a long time ago still waiting for reviews and
 merging... so it's a bit unlikely that a new feature would be added now
 to that already-overloaded backlog.

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ANVIL] Missing openvswitch dependency for basic-neutron.yaml persona

2013-08-13 Thread Joshua Harlow
It likely shouldn't be needed :)

I haven't personally messed around with the neutron persona too much and I know 
that it just underwent the great rename of 2013, so u might be hitting issues 
due to that.

Try seeing if u can adjust the yaml file and if not I am on irc to help more.

Sent from my really tiny device...

On Aug 12, 2013, at 9:14 AM, Sylvain Bauza 
sylvain.ba...@bull.netmailto:sylvain.ba...@bull.net wrote:

Hi,

./smithy -a install -p conf/personas/in-a-box/basic-neutron.yaml is failing 
because of openvswitch missing.
See logs here [1].

Does anyone knows why openvswitch is needed when asking for linuxbridge in 
components/neutron.yaml ?
Shall I update distros/rhel.yaml ?

-Sylvain



[1] : http://pastebin.com/7ZUR2TyU http://pastebin.com/TFkDrrDc


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Neutron a quick qpid revert

2013-08-13 Thread David Ripton

On 08/13/2013 09:57 AM, Dan Prince wrote:

All of my Neutron tests are failing this morning in SmokeStack. We need a quick 
revert to fix the qpid RPC implementation:

https://review.openstack.org/41689

https://bugs.launchpad.net/neutron/+bug/1211778

I figure we may as well revert this quick and then just wait on oslo.messaging 
to fix the original RPC concern here?


Thanks Dan.  That's my mistake, for pulling over the entire latest 
impl_qpid.py rather than just my tiny fix to it.  I'll redo the patch.


--
David Ripton   Red Hat   drip...@redhat.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal to support new Cinder driver for CloudByte's Elastistor

2013-08-13 Thread John Griffith
On Tue, Aug 13, 2013 at 9:01 AM, John Griffith
john.griff...@solidfire.comwrote:

 Hi Amit,

 I think part of what Thierry was alluding to was the fact that feature
 freeze for Havana is next week.  Also in the past we've been trying to
 make sure that folks did not introduce BP's for new drivers in the last
 release milestone.  There are other folks that are in this position,
 however they've also proposed their BP's for their driver and sent updates
 to the Cinder team since H1.

 That being said, if you already have working code that you think is ready
 and can be submitted we can see what the rest of the Cinder team thinks.
  No promises though that your code will make it in, there are a number of
 things already in process that will take priority in terms of review time
 etc.

 Thanks,
 John


 On Tue, Aug 13, 2013 at 8:42 AM, Amit Das amit@cloudbyte.com wrote:

 Thanks a lot... This should give us a head start.

 Regards,
 Amit
 *CloudByte Inc.* http://www.cloudbyte.com/


 On Tue, Aug 13, 2013 at 5:14 PM, Thierry Carrez thie...@openstack.orgwrote:

 Amit Das wrote:
  We have implemented a CINDER driver for our QoS aware storage solution
  (CloudByte Elastistor).
 
  We would like to integrate this driver code with the next version of
  OpenStack (Havana).
 
  Please let us know the approval processes to be followed for this new
  driver support.

 See https://wiki.openstack.org/wiki/Release_Cycle and
 https://wiki.openstack.org/wiki/Blueprints for the beginning of an
 answer.

 Note that we are pretty late in the Havana cycle with lots of features
 which have been proposed a long time ago still waiting for reviews and
 merging... so it's a bit unlikely that a new feature would be added now
 to that already-overloaded backlog.

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I should clarify my posting: next week (August 21st) is the Feature Proposal
Freeze for the Cinder project.  Further explanation here: [1]

[1] https://wiki.openstack.org/wiki/FeatureProposalFreeze
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Pagination

2013-08-13 Thread Jay Pipes

On 08/13/2013 03:05 AM, Yee, Guang wrote:

Passing the query parameters, whatever they are, into the driver if the
given driver supports pagination and allowing the driver to override the
manager default pagination functionality seem reasonable to me.


Please do use the standards that are supported in other OpenStack 
services already: limit, marker, sort_key and sort_dir.


Pagination is meaningless without a sort key and direction, so picking a 
sensible default for user/project records is good. I'd go with either 
created_at (what Glance/Nova/Cinder use..) or with the user/project UUID.


The Glance DB API pagination is well-documented and clean [1]. I highly 
recommend it as a starting point.


Nova uses the same marker/limit/sort_key/sort_dir options for queries 
that it allows pagination on. An example is the 
instance_get_all_by_filters() call [2].


Cinder uses the same marker/limit/sort_key/sort_dir options for query 
pagination as well. [3]


Finally, I'd consider supporting the standard change-since parameter for 
listing operations. Both Nova [4] and Glance [5] support the parameter, 
which is useful for tools that poll the APIs for new events/records.


In short, go with what is already a standard in the other projects...

Best,
-jay

[1] 
https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/api.py#L429
[2] 
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L1709
[3] 
https://github.com/openstack/cinder/blob/master/cinder/common/sqlalchemyutils.py#L33
[4] 
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L1766
[5] 
https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/api.py#L618





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ANVIL] Missing openvswitch dependency for basic-neutron.yaml persona

2013-08-13 Thread Joshua Harlow
Well, openvswitch is likely still needed when it's really needed, right? So I 
think there is a need for it. It just might have to be a dynamic choice (based 
on a config option) instead of a static choice. Make sense?

The other personas don't use neutron so I think that's how they work, since 
nova-network base functionality still exists.

Any patches would be great, will be on irc soon if u want to discuss more.

Josh

Sent from my really tiny device...

On Aug 13, 2013, at 9:23 AM, Sylvain Bauza 
sylvain.ba...@bull.netmailto:sylvain.ba...@bull.net wrote:

Do you confirm the basic idea would be to get rid of any openvswitch reference 
in rhel.yaml ?
If so, wouldn't it be breaking other personas ?

I can provide a patch so the team would review it.

-Sylvain

Le 13/08/2013 17:57, Joshua Harlow a écrit :
It likely shouldn't be needed :)

I haven't personally messed around with the neutron persona too much and I know 
that it just underwent the great rename of 2013, so u might be hitting issues 
due to that.

Try seeing if u can adjust the yaml file and if not I am on irc to help more.

Sent from my really tiny device...

On Aug 12, 2013, at 9:14 AM, Sylvain Bauza 
sylvain.ba...@bull.netmailto:sylvain.ba...@bull.net wrote:

Hi,

./smithy -a install -p conf/personas/in-a-box/basic-neutron.yaml is failing 
because of openvswitch missing.
See logs here [1].

Does anyone knows why openvswitch is needed when asking for linuxbridge in 
components/neutron.yaml ?
Shall I update distros/rhel.yaml ?

-Sylvain



[1] : http://pastebin.com/TFkDrrDc


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal to support new Cinder driver for CloudByte's Elastistor

2013-08-13 Thread Amit Das
Thanks John for the updates.

I shall work towards setting up the Blueprint  Gerrit. We have the code
ready but pretty late in participating with the community.

Regards,
Amit
*CloudByte Inc.* http://www.cloudbyte.com/


On Tue, Aug 13, 2013 at 8:31 PM, John Griffith
john.griff...@solidfire.comwrote:

 Hi Amit,

 I think part of what Thierry was alluding to was the fact that feature
 freeze for Havana is next week.  Also in the past we've been trying to
 make sure that folks did not introduce BP's for new drivers in the last
 release milestone.  There are other folks that are in this position,
 however they've also proposed their BP's for their driver and sent updates
 to the Cinder team since H1.

 That being said, if you already have working code that you think is ready
 and can be submitted we can see what the rest of the Cinder team thinks.
  No promises though that your code will make it in, there are a number of
 things already in process that will take priority in terms of review time
 etc.

 Thanks,
 John


 On Tue, Aug 13, 2013 at 8:42 AM, Amit Das amit@cloudbyte.com wrote:

 Thanks a lot... This should give us a head start.

 Regards,
 Amit
 *CloudByte Inc.* http://www.cloudbyte.com/


 On Tue, Aug 13, 2013 at 5:14 PM, Thierry Carrez thie...@openstack.orgwrote:

 Amit Das wrote:
  We have implemented a CINDER driver for our QoS aware storage solution
  (CloudByte Elastistor).
 
  We would like to integrate this driver code with the next version of
  OpenStack (Havana).
 
  Please let us know the approval processes to be followed for this new
  driver support.

 See https://wiki.openstack.org/wiki/Release_Cycle and
 https://wiki.openstack.org/wiki/Blueprints for the beginning of an
 answer.

 Note that we are pretty late in the Havana cycle with lots of features
 which have been proposed a long time ago still waiting for reviews and
 merging... so it's a bit unlikely that a new feature would be added now
 to that already-overloaded backlog.

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hyper-V Meeting Minutes

2013-08-13 Thread Peter Pouliot
Hi Everyone,

Here are the minutes from today's Hyper-V meeting.

Minutes:
http://eavesdrop.openstack.org/meetings/_hyper_v/2013/_hyper_v.2013-08-13-16.02.html
Minutes (text): 
http://eavesdrop.openstack.org/meetings/_hyper_v/2013/_hyper_v.2013-08-13-16.02.txt
Log:
http://eavesdrop.openstack.org/meetings/_hyper_v/2013/_hyper_v.2013-08-13-16.02.log.html


Peter J. Pouliot, CISSP
Senior SDET, OpenStack

Microsoft
New England Research & Development Center
One Memorial Drive, Cambridge, MA 02142
ppoul...@microsoft.commailto:ppoul...@microsoft.com | Tel: +1(857) 453 6436

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Pagination

2013-08-13 Thread Henry Nash
Jay,

Thanks for all the various links - most useful.

To map this into keystone context, if we were to follow this logic we would:

1) Support 'limit' and 'marker' (as opposed to 'page', 'page_size', or anything 
else).  These would be standard, independent of what backing store keystone was 
using.  If neither is included in the url, then we return the first N entries, 
where N is defined by the cloud provider.  This ensures that for at least 
smaller deployments, non-pagination aware clients still work.  If either 
'limit' or 'marker' are specified, then we paginate, passing them down into the 
driver layer wherever possible to ensure efficiency (some drivers may not be 
able to support pagination, hence we will do this, inefficiently, at a higher 
layer)
2) If we are paginating at the driver level, we must, by definition, be doing 
all the filtering down there as well (otherwise it all gets mucked)
3) We should look at supporting the other standard options (sort order etc.), 
but irrespective of that, by definition, we must ensure that any driver that 
is paginating gets its entries back in a consistent order (otherwise, again, 
pagination doesn't work reliably).  A rough sketch of what this could look 
like at the driver level follows.
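
For illustration only (invented names, not working keystone code; a real
implementation would also validate the marker and tie-break on a unique
column such as the id):

    # Sketch of marker/limit ("keyset") pagination for a SQL-backed driver.
    def paginate_query(session, model, limit, marker=None,
                       sort_key='id', sort_dir='asc'):
        sort_col = getattr(model, sort_key)
        query = session.query(model)
        query = query.order_by(sort_col.asc() if sort_dir == 'asc'
                               else sort_col.desc())
        if marker is not None:
            # Start strictly after the marker row -- no OFFSET scan needed.
            marker_row = session.query(model).get(marker)
            marker_value = getattr(marker_row, sort_key)
            if sort_dir == 'asc':
                query = query.filter(sort_col > marker_value)
            else:
                query = query.filter(sort_col < marker_value)
        return query.limit(limit).all()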

Henry
On 13 Aug 2013, at 18:10, Jay Pipes wrote:

 On 08/13/2013 12:55 PM, Lyle, David (Cloud Services) wrote:
 The marker/limit pagination scheme is inferior.
 
 A bold statement that flies in the face of experience and the work already 
 done in all the other projects.
 
 The use of page/page_size allows access to arbitrary pages, whereas 
 limit/marker only allows forward progress.
 
 I don't see this as a particularly compelling use case considering the 
 performance manifestations of using LIMIT OFFSET pagination.
 
 In Horizon's use case, with page/page_size we can provide the user access to 
 any page they have already visited, rather than just the previous page 
 (using prev/next links returned in the response).
 
 I don't see this as a particularly useful thing, but in any case, you could 
 still do this by keeping the markers for previous pages on the client 
 (Horizon) side.
 
 The point of marker/limit is to eliminate poor performance of LIMIT OFFSET 
 queries and to force proper index usage in the listing queries.
 
 You can see the original discussion about this from more than two years and 
 even see where I was originally arguing for a LIMIT OFFSET strategy and was 
 brought around to the current limit/marker strategy by the responses of 
 Justin Santa Barbara and Greg Holt:
 
 https://lists.launchpad.net/openstack/msg02548.html
 
 Best,
 -jay
 
 -David
 
 On 08/13/2013 10:29 AM, Pipes, Jay wrote:
 
 On 08/13/2013 03:05 AM, Yee, Guang wrote:
 Passing the query parameters, whatever they are, into the driver if
 the given driver supports pagination and allowing the driver to
 override the manager default pagination functionality seem reasonable to 
 me.
 
 Please do use the standards that are supported in other OpenStack services 
 already: limit, marker, sort_key and sort_dir.
 
 Pagination is meaningless without a sort key and direction, so picking a 
 sensible default for user/project records is good. I'd go with either 
 created_at (what Glance/Nova/Cinder use..) or with the user/project UUID.
 
 The Glance DB API pagination is well-documented and clean [1]. I highly 
 recommend it as a starting point.
 
 Nova uses the same marker/limit/sort_key/sort_dir options for queries that 
 it allows pagination on. An example is the
 instance_get_all_by_filters() call [2].
 
 Cinder uses the same marker/limit/sort_key/sort_dir options for query 
 pagination as well. [3]
 
 Finally, I'd consider supporting the standard change-since parameter for 
 listing operations. Both Nova [4] and Glance [5] support the parameter, 
 which is useful for tools that poll the APIs for new events/records.
 
 In short, go with what is already a standard in the other projects...
 
 Best,
 -jay
 
 [1]
 https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/api.py#L429
 [2]
 https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L1709
 [3]
 https://github.com/openstack/cinder/blob/master/cinder/common/sqlalchemyutils.py#L33
 [4]
 https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L1766
 [5]
 https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/api.py#L618
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list

Re: [openstack-dev] [keystone] Pagination

2013-08-13 Thread Jay Pipes
On 08/13/2013 01:51 PM, Miller, Mark M (EB SW Cloud - R&D - Corvallis) 
wrote:

I have been following this exchange of ideas on how to solve/implement 
pagination. I would ask you to keep in mind that a solution needs to take into 
account a split LDAP/SQL backend (you are not always dealing with a single 
Keystone SQL database). Having a split backend means that the query information 
is divided between both backends and that you may not have as much flexibility 
with the LDAP backend


Yes, absolutely understood and a good point.

For engines that don't support filtering, ordering, or other DB-like 
operations, a pagination implementation in the controller would have to be 
provided. Not efficient, but better than nothing.
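
As a rough sketch (invented names, nothing real), that fallback could be as
simple as slicing the already-filtered list the backend hands back:

    # Controller-level fallback when the driver can't paginate itself.
    # Assumes entities are dicts with an 'id' key, purely for illustration.
    def paginate_in_controller(entities, limit, marker=None):
        start = 0
        if marker is not None:
            ids = [e['id'] for e in entities]
            start = ids.index(marker) + 1   # ValueError -> invalid marker
        return entities[start:start + limit]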


-jay


Mark.

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Tuesday, August 13, 2013 10:10 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [keystone] Pagination

On 08/13/2013 12:55 PM, Lyle, David (Cloud Services) wrote:

The marker/limit pagination scheme is inferior.


A bold statement that flies in the face of experience and the work already done 
in all the other projects.

  The use of page/page_size allows access to arbitrary pages, whereas 
limit/marker only allows forward progress.

I don't see this as a particularly compelling use case considering the 
performance manifestations of using LIMIT OFFSET pagination.

  In Horizon's use case, with page/page_size we can provide the user access to 
any page they have already visited, rather than just the previous page (using 
prev/next links returned in the response).

I don't see this as a particularly useful thing, but in any case, you could 
still do this by keeping the markers for previous pages on the client (Horizon) 
side.

The point of marker/limit is to eliminate poor performance of LIMIT OFFSET 
queries and to force proper index usage in the listing queries.

You can see the original discussion about this from more than two years and 
even see where I was originally arguing for a LIMIT OFFSET strategy and was 
brought around to the current limit/marker strategy by the responses of Justin 
Santa Barbara and Greg Holt:

https://lists.launchpad.net/openstack/msg02548.html

Best,
-jay


-David

On 08/13/2013 10:29 AM, Pipes, Jay wrote:


On 08/13/2013 03:05 AM, Yee, Guang wrote:

Passing the query parameters, whatever they are, into the driver if
the given driver supports pagination and allowing the driver to
override the manager default pagination functionality seem reasonable to me.



Please do use the standards that are supported in other OpenStack services 
already: limit, marker, sort_key and sort_dir.



Pagination is meaningless without a sort key and direction, so picking a sensible 
default for user/project records is good. I'd go with either created_at (what 
Glance/Nova/Cinder use..) or with the user/project UUID.



The Glance DB API pagination is well-documented and clean [1]. I highly 
recommend it as a starting point.



Nova uses the same marker/limit/sort_key/sort_dir options for queries
that it allows pagination on. An example is the
instance_get_all_by_filters() call [2].



Cinder uses the same marker/limit/sort_key/sort_dir options for query
pagination as well. [3]



Finally, I'd consider supporting the standard change-since parameter for listing operations. 
Both Nova [4] and Glance [5] support the parameter, which is useful for tools that poll the 
APIs for new events/records.



In short, go with what is already a standard in the other projects...



Best,
-jay



[1]
https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/
api.py#L429
[2]
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.
py#L1709
[3]
https://github.com/openstack/cinder/blob/master/cinder/common/sqlalch
emyutils.py#L33
[4]
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.
py#L1766
[5]
https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/
api.py#L618





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Concerning get_resources/get_meters and the Ceilometer API

2013-08-13 Thread Thomas Maddox
Hello!

I was having some chats yesterday with both Julien and Doug regarding some 
thoughts that occurred to me while digging through CM and Doug suggested that I 
bring them up on the dev list for everyones benefit and discussion.

My bringing this up is intended to help myself and others get a better 
understanding of why it's this way, whether we're on the correct course, and, 
if not, how we get to it. I'm not expecting anything to change quickly or 
necessarily at all from this. Ultimately the question I'm asking is: are we 
addressing the correct use cases with the correct API calls; being able to 
expect certain behavior without having to know the internals? For context, this 
is mostly using the SQLAlchemy implementation for these questions, but the API 
questions apply overall.

My concerns:

  *   Driving get_resources() with the Meter table instead of the Resource 
table. This is mainly because of the additional filtering available in the 
Meter table, which allows us to satisfy a use case like getting a list of 
resources a user had during a period of time to get meters to compute billing 
with. The semantics are tripping me up a bit; the question this boiled down to 
for me was: why use a resource query to get meters to show usage by a tenant? I 
was curious about why we needed the timestamp filtering when looking at 
Resources, and why we would use Resource as a way to get at metering data, 
rather than a Meter request itself? This was answered by resources being the 
current vector to get at metering data for a tenant in terms of resources, if I 
understood correctly.
  *   With this implementation, we have to do aggregation to get at the 
discrete Resources (via the Meter table) rather than just filtering the already 
distinct ones in the Resource table.
  *   This brought up some confusion with the API for me with the major use 
cases I can think of:
 *   As a new consumer of this API, I would think that 
/resource/<resource_id> would get me details for a resource, e.g. current 
state, when it was created, last updated/used timestamp, who owns it; not the 
attributes from the first sample to come through about it
 *   I would think that /meter/?q.field=resource_id&q.value=<resource_id> 
ought to get me a list of meter(s) details for a specific resource, e.g. name, 
unit, and origin; but not a huge mixture of samples.
*   Additionally /meter/?q.field=user_id&q.value=<user_id> would get me 
a list of all meters that are currently related to the user
 *   The ultimate use case, for billing queries, I would think that 
/meter/<meter_id>/statistics?<time filters>&<user/resource_id> would get me 
the measurements for that meter to bill for.

If I understand correctly, one main intent driving this is wanting to avoid end 
users having to write a bunch of API requests themselves from the billing side 
and instead just drill down from payloads for each resource to get the billing 
information for their customers. It also looks like there's a BP to add 
grouping functionality to statistics queries to allow us this functionality 
easily (this one, I think: 
https://blueprints.launchpad.net/ceilometer/+spec/api-group-by).
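
For what it's worth, the billing-style query I have in mind would look roughly
like this against the current v2 API (illustrative only -- the endpoint, meter
name, token and resource id are all made up, and I may be misreading how the q
filters are meant to be combined):

    import requests

    # statistics for the 'instance' meter, for one resource, over August
    params = [
        ('q.field', 'timestamp'), ('q.op', 'ge'), ('q.value', '2013-08-01T00:00:00'),
        ('q.field', 'timestamp'), ('q.op', 'lt'), ('q.value', '2013-09-01T00:00:00'),
        ('q.field', 'resource_id'), ('q.op', 'eq'), ('q.value', 'some-resource-uuid'),
    ]
    resp = requests.get(
        'http://ceilometer.example.com:8777/v2/meters/instance/statistics',
        params=params, headers={'X-Auth-Token': 'ADMIN_TOKEN'})
    for bucket in resp.json():
        print('sum=%s count=%s' % (bucket['sum'], bucket['count']))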

I'm new to this project, so I'm trying to get a handle on how we got here and 
maybe offer some outside perspective, if it's needed or wanted. =]

Thank you all in advance for your time with this. I hope this is productive!

Cheers!

-Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] [ceilometer] Periodic Auditing In Glance

2013-08-13 Thread Andrew Melton
 I'm just concerned with the type of notification you'd send. It has to
 be enough fine grained so we don't lose too much information.

It's a tough situation, sending out an image.exists for each image with
the same payload as say image.upload would likely create TONS of traffic.
Personally, I'm thinking about a batch payload, with a bare minimum of the
following values:

'payload': [{'id': 'uuid1', 'owner': 'tenant1', 'created_at': 'some_date',
'size': 1},
   {'id': 'uuid2', 'owner': 'tenant2', 'created_at':
'some_date', 'deleted_at': 'some_other_date', 'size': 2}]

That way the audit job/task could be configured to emit in batches which
a deployer could tweak the settings so as to not emit too many messages.
I definitely welcome other ideas as well.

Thanks,
Andrew Melton


On Tue, Aug 13, 2013 at 4:27 AM, Julien Danjou jul...@danjou.info wrote:

 On Mon, Aug 12 2013, Andrew Melton wrote:

  So, my question to the Ceilometer community is this, does this sound like
  something Ceilometer would find value in and use? If so, would this be
  something
  we would want most deployers turning on?

 Yes. I think we would definitely be happy to have the ability to drop
 our pollster at some time.
 I'm just concerned with the type of notification you'd send. It has to
 be enough fine grained so we don't lose too much information.

 --
 Julien Danjou
 // Free Software hacker / freelance consultant
 // http://julien.danjou.info

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] [ceilometer] Periodic Auditing In Glance

2013-08-13 Thread Sandy Walsh


On 08/13/2013 04:35 PM, Andrew Melton wrote:
 I'm just concerned with the type of notification you'd send. It has to
 be enough fine grained so we don't lose too much information.
 
 It's a tough situation, sending out an image.exists for each image with
 the same payload as say image.upload would likely create TONS of traffic.
 Personally, I'm thinking about a batch payload, with a bare minimum of the
 following values:
 
 'payload': [{'id': 'uuid1', 'owner': 'tenant1', 'created_at':
 'some_date', 'size': 1},
{'id': 'uuid2', 'owner': 'tenant2', 'created_at':
 'some_date', 'deleted_at': 'some_other_date', 'size': 2}]
 
 That way the audit job/task could be configured to emit in batches which
 a deployer could tweak the settings so as to not emit too many messages.
 I definitely welcome other ideas as well.

Would it be better to group by tenant vs. image?

One .exists per tenant that contains all the images owned by that tenant?
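
e.g. (purely illustrative -- reusing the field names from Andrew's example
above, and borrowing the audit_period_* convention from nova's exists events):

    payload = {
        'owner': 'tenant1',
        'audit_period_beginning': '2013-08-01T00:00:00',
        'audit_period_ending': '2013-09-01T00:00:00',
        'images': [
            {'id': 'uuid1', 'created_at': 'some_date', 'size': 1},
            {'id': 'uuid2', 'created_at': 'some_date',
             'deleted_at': 'some_other_date', 'size': 2},
        ],
    }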

-S


 Thanks,
 Andrew Melton
 
 
 On Tue, Aug 13, 2013 at 4:27 AM, Julien Danjou jul...@danjou.info
 mailto:jul...@danjou.info wrote:
 
 On Mon, Aug 12 2013, Andrew Melton wrote:
 
  So, my question to the Ceilometer community is this, does this
 sound like
  something Ceilometer would find value in and use? If so, would this be
  something
  we would want most deployers turning on?
 
 Yes. I think we would definitely be happy to have the ability to drop
 our pollster at some time.
 I'm just concerned with the type of notification you'd send. It has to
 be enough fine grained so we don't lose too much information.
 
 --
 Julien Danjou
 // Free Software hacker / freelance consultant
 // http://julien.danjou.info
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Extension to volume creation (filesystem and label)

2013-08-13 Thread Caitlin Bestler

On 8/12/2013 9:37 AM, Greg Poirier wrote:




Oh, we don't want to get super fancy with it. We would probably only
support one filesystem type and not partitions. E.g. You request a 120GB
volume and you get a 120GB Ext4 FS mountable by label.



I'm not following something here. What is the point of dictating a
specific FS format when the compute node will be the one applying
the interpretation?

Isn't a 120 GB volume which the VM will interpret as an EXT4 FS just
a 120 GB volume that has a *hint* attached to it?

And would there be any reason to constrain in advance the set of hints
that could be offered?
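
If it really is just a hint, it could presumably ride along as ordinary volume
metadata today, with no new API surface -- e.g. (a sketch only; the key names
are invented, and nothing in Cinder or the guest would interpret them unless
taught to):

    from cinderclient.v1 import client

    cinder = client.Client('user', 'password', 'tenant',
                           'http://keystone.example.com:5000/v2.0')
    # attach the "format me as ext4, label me 'data'" hint as plain metadata
    vol = cinder.volumes.create(120,
                                display_name='data',
                                metadata={'fstype': 'ext4', 'fslabel': 'data'})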



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Pagination

2013-08-13 Thread Gabriel Hurley
I have been one of the earliest, loudest, and most consistent PITA's about 
pagination, so I probably oughta speak up. I would like to state three facts:

1. Marker + limit (e.g. forward-only) pagination is horrific for building a 
user interface.
2. Pagination doesn't scale.
3. OpenStack's APIs have historically had useless filtering capabilities.

In a world where pagination is a must-have feature we need to have page 
number + limit pagination in order to build a reasonable UI. Ironically though, 
I'm in favor of ditching pagination altogether. It's the lowest-common 
denominator, used because we as a community haven't buckled down and built 
meaningful ways for our users to get to the data they really want.

Filtering is great, but it's only 1/3 of the solution. Let me break it down 
with problems and high level solutions:

Problem 1: I know what I want and I need to find it.
Solution: filtering/search systems.

Problem 2: I don't know what I want, and it may or may not exist.
Solution: tailored discovery mechanisms.

Problem 3: I need to know something about *all* the data in my system.
Solution: reporting systems.

We've got the better part of none of that. But I'd like to solve these issues. 
I have lots of thoughts on all of those, and I think the UX and design 
communities can offer a lot in terms of the usability of the solutions we come 
up with. Even more, I think this would be an awesome working group session at 
the next summit to talk about nothing other than how can we get rid of 
pagination?

As a parting thought, what percentage of the time do you click to the second 
page of results in Google?

All the best,

- Gabriel


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Reminder: Oslo project meeting

2013-08-13 Thread Mark McLoughlin
Hi

We're having an IRC meeting on Friday to sync up again on the messaging
work going on:

  https://wiki.openstack.org/wiki/Meetings/Oslo
  https://etherpad.openstack.org/HavanaOsloMessaging

Feel free to add other topics to the wiki

See you on #openstack-meeting at 1400 UTC

Thanks,
Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] Meeting Tuesday August 13th at 19:00 UTC

2013-08-13 Thread Elizabeth Krumbach Joseph
On Mon, Aug 12, 2013 at 10:40 AM, Elizabeth Krumbach Joseph
l...@princessleia.com wrote:
 The OpenStack Infrastructure (Infra) team is hosting our weekly
 meeting tomorrow, Tuesday August 13th, at 19:00 UTC in
 #openstack-meeting

Meeting log and minutes:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-08-13-19.02.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-08-13-19.02.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-08-13-19.02.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Blueprint: launch-time configurable kernel-id, ramdisk-id, and kernel command line

2013-08-13 Thread Dennis Kliban
I have just created a new blueprint: 
https://blueprints.launchpad.net/nova/+spec/expose-ramdisk-kernel-and-command-line-via-rest-and-cli

I realize that some of this work overlaps with: 
https://blueprints.launchpad.net/nova/+spec/improve-boot-from-volume
which is an umbrella blueprint for: 
https://blueprints.launchpad.net/nova/+spec/improve-block-device-handling

I can see that a lot of work has been done for the above blueprints, but I was 
not clear on the progress with regard to exposing kernel-id and ramdisk-id.  
Perhaps I don't need to implement this?   

The second change proposed in the blueprint has not been addressed in any other 
blueprints.  Does anyone think that adding the ability to pass in the kernel 
command line at launch time would be problematic?

Thanks,
Dennis

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Swift, netifaces, PyPy, and cffi

2013-08-13 Thread Alex Gaynor
Hi all,

(This references this changeset: https://review.openstack.org/#/c/38415/)

One of the goals I've been working toward has been getting swift running on
PyPy (and from there, the rest of OpenStack). The last blocking issue in
swift is that it currently uses netifaces, which is a C extension that
doesn't work on PyPy. I've proposed replacing this dependency with a cffi-based
binding to the system.

For those not familiar, cffi is a tool for binding to C libraries, similar
to ctypes (in the stdlib), except more expressive, less error prone, and
faster; some of our downstream dependencies already use it.
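
For anyone who hasn't seen it, here is a trivial example of the ABI-level
binding style cffi allows -- unrelated to the actual netifaces replacement in
the review above, just to show the flavor:

    from cffi import FFI

    ffi = FFI()
    ffi.cdef("int gethostname(char *name, size_t len);")
    libc = ffi.dlopen(None)            # the standard C library

    buf = ffi.new("char[]", 256)
    if libc.gethostname(buf, len(buf)) == 0:
        print(ffi.string(buf))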

One of the issues that came up in this review however, is that cffi is not
packaged in the most recent Ubuntu LTS (and likely other distributions),
although it is available in raring, and in a PPA (
http://packages.ubuntu.com/raring/python-cffi and
https://launchpad.net/~pypy/+archive/ppa?field.series_filter=preciserespectively).

As a result of this, we wanted to get some feedback on which direction is
best to go:

a) cffi-only approach, this is obviously the simplest approach, and works
everywhere (assuming you can install a PPA, use pip, or similar for cffi)
b) wait until the next LTS to move to this approach (requires waiting until
2014 for PyPy support)
c) Support using either netifaces or cffi: most complex, and most code,
plus one or the other dependencies aren't well supported by most tools as
far as I know.

Thoughts?
Alex

-- 
I disapprove of what you say, but I will defend to the death your right to
say it. -- Evelyn Beatrice Hall (summarizing Voltaire)
The people's good is the highest law. -- Cicero
GPG Key fingerprint: 125F 5C67 DFE9 4084
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] GPU passthrough support blueprints for OpenStack

2013-08-13 Thread Brian Schott
Are there more recent blueprints related to adding GPU pass-through support?  
All that I can find are some stale blueprints that I created around the Cactus 
timeframe (while wearing a different hat) that are pretty out of date.

I just heard a rumor that folks are doing Nvidia GRID K2 GPU passthrough with 
KVM successfully using the Linux 3.10.6 kernel on RHEL.

In addition, Lorin and I did some GPU passthrough testing back in the spring 
with GRID K2 on HyperV, libvirt+xen, and XenServer.  Slides are here:
http://www.slideshare.net/bfschott/nimbis-schott-openstackgpustatus20130618

The virtualization support for  GPU-enabled virtual desktops and GPGPU seems to 
have stabilized this year for server deployments.  How is this going to be 
supported in OpenStack?

Brian

-
Brian Schott, CTO
Nimbis Services, Inc.
brian.sch...@nimbisservices.com
ph: 443-274-6064  fx: 443-274-6060





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] [oslo] postpone key distribution bp until icehouse?

2013-08-13 Thread Dolph Mathews
With regard to:
https://blueprints.launchpad.net/keystone/+spec/key-distribution-server

During today's project status meeting [1], the state of KDS was discussed
[2]. To quote ttx directly: we've been bitten in the past with late
security-sensitive stuff and I'm a bit worried to ship late code with
such security implications as a KDS. I share the same concern, especially
considering the API only recently went up for formal review [3], and the
WIP implementation is still failing smokestack [4].

I'm happy to see the reviews in question continue to receive their fair
share of attention over the next few weeks, but can (and should?) merging
be delayed until icehouse while more security-focused eyes have time to
review the code?

Ceilometer and nova would both be affected by a delay, as both have use
cases for consuming trusted messaging [5] (which depends on the bp in
question).

Thanks for your feedback!

[1]:
http://eavesdrop.openstack.org/irclogs/%23openstack-meeting/%23openstack-meeting.2013-08-13.log
[2]: http://paste.openstack.org/raw/44075/
[3]: https://review.openstack.org/#/c/40692/
[4]: https://review.openstack.org/#/c/37118/
[5]: https://blueprints.launchpad.net/oslo/+spec/trusted-messaging
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Meeting agenda for Wed August 14th at 2000 UTC

2013-08-13 Thread Steven Hardy
The Heat team holds a weekly meeting in #openstack-meeting, see

https://wiki.openstack.org/wiki/Meetings/HeatAgenda for more details

The next meeting is on Wed August 14th at 2000 UTC

Current topics for discussion:
- Review last weeks actions
- Reminder re Havana_Release_Schedule FeatureProposalFreeze
- h3 blueprint status
- Open discussion

If anyone has any other topic to discuss, please add to the wiki.

Thanks!

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [oslo] postpone key distribution bp until icehouse?

2013-08-13 Thread Russell Bryant
On 08/13/2013 06:20 PM, Dolph Mathews wrote:
 With regard
 to: https://blueprints.launchpad.net/keystone/+spec/key-distribution-server
 
 During today's project status meeting [1], the state of KDS was
 discussed [2]. To quote ttx directly: we've been bitten in the past
 with late security-sensitive stuff and I'm a bit worried to ship late
 code with such security implications as a KDS. I share the same
 concern, especially considering the API only recently went up for formal
 review [3], and the WIP implementation is still failing smokestack [4].
 
 I'm happy to see the reviews in question continue to receive their fair
 share of attention over the next few weeks, but can (and should?)
 merging be delayed until icehouse while more security-focused eyes have
 time to review the code?
 
 Ceilometer and nova would both be affected by a delay, as both have use
 cases for consuming trusted messaging [5] (a dependency of the bp in
 question).

The longer this takes, the longer it is until we can make use of it.
However, at this point, deferring doesn't affect Nova much.  Landing at
the end of Havana vs the beginning of Icehouse doesn't change that
Icehouse would be the earliest Nova would start making use of it.

I would really like to see this as a priority to land ASAP in Icehouse
if it gets deferred.  Otherwise, other projects such as Nova can't make
any plans to build something with it in Icehouse, pushing this out yet
another 6 months.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] security_groups extension in nova api v3

2013-08-13 Thread Melanie Witt
On Aug 13, 2013, at 2:11 AM, Day, Phil wrote:

 If we really want to get clean separation between Nova and Neutron in the V3 
 API, should we consider making the Nova V3 API only accept lists of port ids in 
 the server create command?
 
 That way there would be no need to ever pass security group information into 
 Nova.
 
 Any cross project co-ordination (for example automatically creating ports) 
 could be handled in the client layer, rather than inside Nova.

Server create is always (until there's a separate layer) going to make 
cross-project calls to other APIs like neutron and cinder while an instance is 
being provisioned. For that reason, I tend to think it's ok to give the extra 
convenience of automatically creating ports if needed, and of being able to 
specify security groups.

For the associate and disassociate, the only convenience is being able to use 
the instance display name and security group name, which is already handled at 
the client layer. It seems a clearer case of duplicating what neutron offers.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Pagination

2013-08-13 Thread Kieran Spear

On 14/08/2013, at 7:40 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 08/13/2013 05:04 PM, Gabriel Hurley wrote:
 I have been one of the earliest, loudest, and most consistent PITA's about 
 pagination, so I probably oughta speak up. I would like to state three facts:
 
 1. Marker + limit (e.g. forward-only) pagination is horrific for building a 
 user interface.
 2. Pagination doesn't scale.
 3. OpenStack's APIs have historically had useless filtering capabilities.
 
 In a world where pagination is a must-have feature we need to have page 
 number + limit pagination in order to build a reasonable UI. Ironically 
 though, I'm in favor of ditching pagination altogether. It's the 
 lowest-common denominator, used because we as a community haven't buckled 
 down and built meaningful ways for our users to get to the data they really 
 want.
 
 Filtering is great, but it's only 1/3 of the solution. Let me break it down 
 with problems and high level solutions:
 
 Problem 1: I know what I want and I need to find it.
 Solution: filtering/search systems.
 
 This is a good place to start. Glance has excellent filtering/search 
 capabilities -- built in to the API from early on in the Essex timeframe, and 
 only expanded over the last few releases.
 
 Pagination solutions should build on a solid filtering/search functionality 
 in the API, where there is a consistent sort key and direction (either 
 hard-coded or user-determined, doesn't matter).
 
 Limit/offset pagination solutions (forward and backwards paging, random 
 skip-to-a-page) are inefficient from a SQL query perspective and should be a 
 last resort, IMO, compared to limit/marker. With some smart session-storage 
 of a page's markers, backwards paging with limit/marker APIs is certainly 
 possible -- just store the previous page's marker.

Not just the previous page's marker, but the marker of every previous page 
since we would like to be able to click the previous button more than once. Any 
previous markers we store are also likely to become stale pretty quickly. And 
all this is based on the assumption that the user's session even started at the 
first 'page' - it could be they followed a link from elsewhere in Horizon or 
the greater internet.

I completely agree with Dolph that this is something the client shouldn't need 
to care about at all. The next/prev links returned with each page of results 
should hide all of this. next/prev links also make it trivial for the client to 
discover whether there's even a next page at all, since we don't want to make a 
user click a link to go to an empty page.

Having said that, I think we can improve the current marker/limit system 
without hurting performance if we split the marker into 'before' and 'after' 
parameters. That way all the information needed to go forward or backwards is 
included in the results for the current page. Supporting 'before' should be as 
simple as reversing the sort order and then flipping the order of the results.
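
To make that concrete, here is a rough sketch (not proposed code for any
particular project) of keyset paging with both markers, assuming the results
are ORM objects sorted on a unique id column and fetched through SQLAlchemy:

import sqlalchemy as sa


def get_page(query, id_column, limit, after=None, before=None):
    # Returns one page of rows plus the ids to put in the next/prev links.
    if before is not None:
        # Going backwards: reverse the sort, filter, then flip the rows back.
        rows = (query.filter(id_column < before)
                     .order_by(sa.desc(id_column))
                     .limit(limit)
                     .all())
        rows.reverse()
    else:
        q = query.order_by(sa.asc(id_column))
        if after is not None:
            q = q.filter(id_column > after)
        rows = q.limit(limit).all()

    first_id = getattr(rows[0], id_column.key) if rows else None
    last_id = getattr(rows[-1], id_column.key) if rows else None
    # first_id becomes 'before' in the prev link, last_id becomes 'after'
    # in the next link; an empty page means there is no link to emit.
    return rows, first_id, last_id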


Kieran


 
 Problem 2: I don't know what I want, and it may or may not exist.
 Solution: tailored discovery mechanisms.
 
 This should not be a use case that we spend much time on. Frankly, this use 
 case can be summarized as the window shopper scenario. Providing a quality 
 search/filtering mechanism, including the *API* itself providing REST-ful 
 discovery of the filters and search criteria the API supports, is way more 
 important...
 
 Problem 3: I need to know something about *all* the data in my system.
 Solution: reporting systems.
 
 Sure, no disagreement there.
 
 We've got the better part of none of that.
 
 I disagree. Some of the APIs have support for a good bit of search/filtering. 
 We just need to bring all the projects up to search speed, Captain.
 
 Best,
 -jay
 
 p.s. I very often go to the second and third pages of Google searches. :) But 
 I never skip to the 127th page of results.
 
  But I'd like to solve these issues. I have lots of thoughts on all of 
  those, and I think the UX and design communities can offer a lot in terms 
  of the usability of the solutions we come up with. Even more, I think this 
  would be an awesome working group session at the next summit to talk about 
  nothing other than how can we get rid of pagination?
 
 As a parting thought, what percentage of the time do you click to the second 
 page of results in Google?
 
 All the best,
 
 - Gabriel
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Allows to set the memory parameters for an Instance

2013-08-13 Thread Jae Sang Lee
Yes, there are instance resource quotas, but a memory parameter doesn't
exist. Using libvirt memtune, I'd like to be able to set the memory parameters
for a VM.
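
For reference, a small sketch of how the existing instance resource quotas are
set today through flavor extra specs with python-novaclient (quota:cpu_shares
is one of the currently documented keys; the credentials and flavor name below
are placeholders). The memtune settings proposed in the blueprint would
presumably follow the same pattern:

from novaclient.v1_1 import client

nova = client.Client('user', 'password', 'tenant',
                     'http://keystone.example.com:5000/v2.0')

# Set an existing libvirt resource quota on a flavor via extra specs.
flavor = nova.flavors.find(name='m1.small')
flavor.set_keys({'quota:cpu_shares': '1024'})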



2013/8/13 Shake Chen shake.c...@gmail.com

 Maybe use Flavor Extra Specs.

 https://wiki.openstack.org/wiki/FlavorExtraSpecsKeyList


 On Sun, Aug 11, 2013 at 3:49 PM, Jae Sang Lee hyan...@gmail.com wrote:



 I've registered a blueprint to allow setting advanced memory
 parameters for an instance

 https://blueprints.launchpad.net/nova/+spec/libvirt-memtune-for-instance


 Would it be possible to review it (and maybe get an approval or not)?

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Shake Chen


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova]Possiblility to run multiple hypervisors in a single deployment

2013-08-13 Thread Konglingxian
Hi all:

While reading the OpenStack operations guide
(http://docs.openstack.org/trunk/openstack-ops/content/compute_nodes.html),
I came across the following note:
It is also possible to run multiple hypervisors in a single deployment using 
Host Aggregates or Cells. However, an individual compute node can only run a 
single hypervisor at a time.

I think this is not entirely correct; it should be qualified with the premise 
that the multiple hypervisors all support the same Neutron plugin.

Am I right? Any hints are appreciated.


Lingxian Kong
Huawei Technologies Co.,LTD.
IT Product Line CloudOS PDU
China, Xi'an
Mobile: +86-18602962792
Email: konglingx...@huawei.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Skipping tests in tempest via config file

2013-08-13 Thread Ian Wienand
Hi,

I proposed a change to tempest that skips tests based on a config file
directive [1].  Reviews were inconclusive and it was requested the
idea be discussed more widely.

Of course issues should go upstream first.  However, sometimes test
failures are triaged to a local/platform problem and it is preferable
to keep everything else running by skipping the problematic tests
while they're being worked on.

My perspective is one of running tempest in a mixed CI environment
with RHEL, Fedora, etc.  Python 2.6 on RHEL doesn't support testr (it
doesn't do the setUpClass calls required by tempest) and nose
upstream has some quirks that make it hard to work with the tempest
test layout [2].

Having a common place in the tempest config to set these skips is
more convenient than having to deal with the multiple testing
environments.

Another proposal is to have a separate JSON file of skipped tests.  I
don't feel strongly but it does seem like another config file.
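
As a rough illustration of the idea (this is not the code in the review, and
the option and file names here are made up), a base test class could consult a
plain ini file and skip anything listed in it:

import ConfigParser

import testtools


def _load_skip_list(path='/etc/tempest/skip.conf'):
    # Read a comma-separated list of test ids from a local ini file.
    parser = ConfigParser.SafeConfigParser()
    parser.read([path])
    if not parser.has_section('skip') or not parser.has_option('skip', 'tests'):
        return frozenset()
    raw = parser.get('skip', 'tests')
    return frozenset(name.strip() for name in raw.split(',') if name.strip())

_SKIPPED = _load_skip_list()


class ConfigSkipTestCase(testtools.TestCase):
    def setUp(self):
        super(ConfigSkipTestCase, self).setUp()
        if self.id() in _SKIPPED:
            self.skipTest('listed in the local skip list')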

-i

[1] https://review.openstack.org/#/c/39417/
[2] https://github.com/nose-devs/nose/pull/717

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] FWaaS: Support for explicit commit

2013-08-13 Thread Sridar Kandaswamy (skandasw)
Hi All:

In discussing this with some more folks from a deployment perspective, managing 
rules for PCI compliance and audit requirements is quite important. As Sumit 
points out below, an explicit commit can act as a gate for audit checks before 
changes are actually applied on the backend. Another use case discussed was that 
firewall rule sets are often bloated, because admins hesitate to remove old and 
unused rules since no one wants to take a chance on the effects. A commit could 
also serve as a validation point before an update actually takes effect.

Thanks

Sridar 

-Original Message-
From: Sumit Naiksatam [mailto:sumitnaiksa...@gmail.com] 
Sent: Monday, August 12, 2013 12:24 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Neutron] FWaaS: Support for explicit commit

Hi Aaron,

I seem to have missed this email from you earlier. Compared to existing 
Neutron resources, the FWaaS firewall resource and workflow is slightly 
different, since it's a two-step process. The rules/policy creation is 
decoupled (for audit reasons) from its application on the backend firewall. 
Hence the need for the commit-like operation, which expresses the intent that 
the current state of the rules/policy be applied to the backend firewall. We 
can provide capabilities for bulk creation/update of rules/policies as well, 
but I believe that is independent of this.

I posted a patch yesterday night for this 
(https://review.openstack.org/#/c/41353/).

Thanks,
~Sumit.

On Wed, Aug 7, 2013 at 5:19 PM, Aaron Rosen aro...@nicira.com wrote:
 Hi Sumit,

 Neutron has a concept of a bulk creation where multiple things can be 
 created in one API request rather than N (and then be implemented 
 atomically on the backend). In my opinion, I think it would be better 
 to implement a bulk update/delete operation rather than a commit. I 
 think that having something like this that is generic could be useful 
 to other api's in neutron.

 I do agree that one has to keep track of the order they are 
 changing/adding/delete rules so that they don't allow two things to 
 communicate that shouldn't be allowed to. If someone wanted to perform 
 this type of bulk atomic change now could they create a new profile 
 with the rules they desire and then switch out which profile is 
 attached to the firewall?

 Best,

 Aaron


 On Wed, Aug 7, 2013 at 3:40 PM, Sumit Naiksatam 
 sumitnaiksa...@gmail.com
 wrote:

 We had some discussion on this during the Neutron IRC meeting, and 
 per that discussion I have created a blueprint for this:

 https://blueprints.launchpad.net/neutron/+spec/neutron-fwaas-explicit
 -commit

 Further comments can be posted on the blueprint whiteboard and/or the 
 design spec doc.

 Thanks,
 ~Sumit.

 On Fri, Aug 2, 2013 at 6:43 PM, Sumit Naiksatam 
 sumitnaiksa...@gmail.com wrote:
  Hi All,
 
  In Neutron Firewall as a Service (FWaaS), we currently support an 
  implicit commit mode, wherein a change made to a firewall_rule is 
  propagated immediately to all the firewalls that use this rule (via 
  the firewall_policy association), and the rule gets applied in the 
  backend firewalls. This might be acceptable, however this is 
  different from the explicit commit semantics which most firewalls support.
  Having an explicit commit operation ensures that multiple rules can 
  be applied atomically, as opposed to in the implicit case where 
  each rule is applied atomically and thus opens up the possibility 
  of security holes between two successive rule applications.
 
  So the proposal here is quite simple -
 
  * When any changes are made to the firewall_rules 
  (added/deleted/updated), no changes will happen on the firewall 
  (only the corresponding firewall_rule resources are modified).
 
  * We will support an explicit commit operation on the firewall 
  resource. Any changes made to the rules since the last commit will 
  now be applied to the firewall when this commit operation is invoked.
 
  * A show operation on the firewall will show a list of the 
  currently committed rules, and also the pending changes.
 
  Kindly respond if you have any comments on this.
 
  Thanks,
  ~Sumit.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Skipping tests in tempest via config file

2013-08-13 Thread Matt Riedemann
I have the same issue.  I run a subset of the tempest tests via nose on a 
RHEL 6.4 VM directly against the site-packages (not using virtualenv). I'm 
running on x86_64, ppc64 and s390x and have different issues on all of 
them (a mix of DB2 on x86_64 and MySQL on the others, and different 
nova/cinder drivers on each).  What I had to do was just make a nose.cfg 
for each of them and throw that into ~/ for each run of the suite.

The switch from nose to testr hasn't impacted me because I'm not using a 
venv.  However, there was a change this week that broke me on python 2.6 
and I opened this bug:

https://bugs.launchpad.net/tempest/+bug/1212071 



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Ian Wienand iwien...@redhat.com
To: openstack-dev@lists.openstack.org, 
Date:   08/13/2013 09:13 PM
Subject:[openstack-dev] Skipping tests in tempest via config file



Hi,

I proposed a change to tempest that skips tests based on a config file
directive [1].  Reviews were inconclusive and it was requested the
idea be discussed more widely.

Of course issues should go upstream first.  However, sometimes test
failures are triaged to a local/platform problem and it is preferable
to keep everything else running by skipping the problematic tests
while its being worked on.

My perspective is one of running tempest in a mixed CI environment
with RHEL, Fedora, etc.  Python 2.6 on RHEL doesn't support testr (it
doesn't do the setUpClass calls required by temptest) and nose
upstream has some quirks that make it hard to work with the tempest
test layout [2].

Having a common place in the temptest config to set these skips is
more convienent than having to deal with the multiple testing
environments.

Another proposal is to have a separate JSON file of skipped tests.  I
don't feel strongly but it does seem like another config file.

-i

[1] https://review.openstack.org/#/c/39417/
[2] https://github.com/nose-devs/nose/pull/717

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Swift, netifaces, PyPy, and cffi

2013-08-13 Thread Joe Gordon
On Tue, Aug 13, 2013 at 6:56 PM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Alex Gaynor's message of 2013-08-13 14:58:56 -0700:
  Hi all,
 
  (This references this changeset: https://review.openstack.org/#/c/38415/
 )
 
  One of the goals I've been working at has been getting swift running on
  PyPy (and from there, the rest of OpenStack). The last blocking issue in
  swift is that it currently uses netifaces, which is a C-extension that
  doesn't on PyPy. I've proposed to replace this dependency with a cffi
 based
  binding to the system.


I assume you have seen
http://vish.everyone.me/running-openstack-nova-with-pypy



 
  For those not familiar, cffi is a tool for binding to C libraries,
 similar
  to ctypes (in the stdlib), except more expressive, less error prone, and
  faster; some of our downstream dependencies already use it.
 
  One of the issues that came up in this review however, is that cffi is
 not
  packaged in the most recent Ubuntu LTS (and likely other distributions),
  although it is available in raring, and in a PPA (
  http://packages.ubuntu.com/raring/python-cffi and
 
 https://launchpad.net/~pypy/+archive/ppa?field.series_filter=preciserespectively
 ).
 
  As a result of this, we wanted to get some feedback on which direction is
  best to go:
 
  a) cffi-only approach, this is obviously the simplest approach, and works
  everywhere (assuming you can install a PPA, use pip, or similar for cffi)

 There are a lot of dependencies of Grizzly and Havana that aren't in
 the official release of Ubuntu 12.04. That is why Canonical created
 the cloud archive, so that users can keep everything that isn't
 OpenStack+Dependencies on the LTS.

 The fact that cffi is already available in a release makes it even
 more likely that it will be a straight forward backport to the cloud
 archive. However, is Ubuntu 12.04's pypy 1.8 sufficient?  Ubuntu 13.04
 and 12.10 have 1.9, and saucy (the presumed 13.10) has 2.0.2.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Proposal for approving Starting by scheduler development blueprint.

2013-08-13 Thread cosmos cosmos
Hello,

My name is Rucia, from Samsung SDS.

I am developing start logic that goes through the nova-scheduler, so that host
resources are used efficiently. This function was already implemented on the
Folsom release, where it is used with iSCSI targets such as HP SAN storage;
this proposal is slightly different from that original version.

With this change, when you start an instance after stopping it, the instance
is started on an optimal compute host, selected through the nova-scheduler.

Current logic:

1. The start logic in OpenStack Nova does not use the scheduler.

2. The instance starts on the host where it was originally created.

Proposed logic:

1. When a stopped instance is started, it starts on a host selected by the
nova-scheduler.

2. When the VM starts, resources are checked through check_resource_limit().

Pros:

- Host resources can be used efficiently.

- When you start a virtual machine, it avoids the errors caused by a lack of
resources on the original host.

Below are my blueprint and wiki page.

Thanks

https://blueprints.launchpad.net/nova/+spec/start-instance-by-scheduler

https://wiki.openstack.org/wiki/Start

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] More than one compute_driver in nova.conf

2013-08-13 Thread Alex Glikson
Jake G. dj_dark_jungl...@yahoo.com wrote on 13/08/2013 07:25:04 AM:
 On 2013/08/13, at 12:50, Robert Collins robe...@robertcollins.net 
wrote:
  
  I was wondering how to handle changing the compute_driver in 
nova.conf? I
  currently have the default
  
  compute_driver = libvirt.LibvirtDriver
  libvirt_type=kvm
  
  I want to be able to add the driver for baremetal provisioning, but I 
am
  unclear on how to do this.
  Can there be more than one compute_driver or do I have to replace
  compute_driver = libvirt.LibvirtDriver with the new driver.
  
  You need to have a dedicated nova cell/region/cloud for baremetal. We
  change the scheduler at a fairly fundamental layer - Devananda has
  plans to rectify this, but probably only after Ironic is stable.
  
  If I can only replace what will happen to my existing KVM and nova
  operations?
  
  I suggest spinning up a new host rather than modifying your existing 
one.
  
  Also, I am just using the bearmetal driver as an example. This 
 same question
  can be applied to any other feature that requires a different
  compute_driver.
  
  Baremetal is special - unlike lxc/kvm/qemu baremetal doesn't subdivide
  an existing hypervisors resources, rather it hands out entire machines
  - so it has to match exactly rather than matching a subset when
  comparing flavors.
  
  Cheers,
  Rob
  
  -- 
  Robert Collins rbtcoll...@hp.com
  Distinguished Technologist
  HP Converged Cloud
 
 Thanks for the heads up. I can imagine the damage I would have done to
 my test environment trying to get this going.
 
 Just to confirm, there is no way to add another compute node that is
 for baremetal only into an existing openstack environment? 

For the compute node itself, it seems possible just to have a separate 
host running nova-compute configured to work with the bare-metal driver, with 
a different nova.conf. Regarding the scheduler, support for multiple 
configurations will be possible once 
https://review.openstack.org/#/c/37407/ is merged (hopefully soon). In the 
latest revision, nova.conf will not change, but you will be able to 
override scheduler parameters (driver, maybe even host manager) with 
flavor extra specs. At least in theory the combination of the two should 
allow having 'regular' and bare-metal in the same deployment, IMO.

Regards,
Alex
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] Doubt regarding json filter

2013-08-13 Thread Peeyush Gupta
Hi,

I have been trying to use the JSON filter. The documentation
says that I can build my own filter expressions using JSON hints. The JSON
filter has its own pre-defined variables like $free_ram_mb, $free_disk_mb,
etc. I was wondering whether it is possible to define other variables, so
that I can build a filter suiting my needs. If yes, then where should
the new variables be defined?
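
For context, here is a minimal illustration (not from the docs, and the
credentials, image and flavor below are placeholders) of how the pre-defined
variables are used today, assuming JsonFilter is enabled in
scheduler_default_filters; the expression goes in as the 'query' scheduler
hint:

import json

from novaclient.v1_1 import client

nova = client.Client('user', 'password', 'tenant',
                     'http://keystone.example.com:5000/v2.0')

# Only consider hosts with at least 1 GB of free RAM and 20 GB of free disk.
query = json.dumps(['and',
                    ['>=', '$free_ram_mb', 1024],
                    ['>=', '$free_disk_mb', 20 * 1024]])

nova.servers.create(name='test-vm',
                    image='IMAGE_UUID',
                    flavor='1',
                    scheduler_hints={'query': query})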

Thanks
 
~Peeyush Gupta
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] nova-compute won't restart (on some nodes) after Grizzly upgrade

2013-08-13 Thread Michael Still
Jonathan, sorry for the slow reply. I had a baby on Friday last week
instead of keeping up with email. I promise it wont happen again. ;)

Did you manage these instances in virsh manually at all as part of the
upgrade? If not, I'd love you to file a bug with a log to show the
problem.

Thanks,
Michael

On Sun, Aug 11, 2013 at 10:17 PM, Jonathan Proulx j...@jonproulx.com wrote:
 Hi Michael,

 Thanks for the offer.  I'd be happy to paste up some compute logs if you
 have a interest, but I got around the issue with:

 virsh list --all

 and then 'virsh undefine' for all deleted instances on each host.  I've used
 hypervisors directly and high-level stuff like openstack (and others) but
 never spent much time at the libvirt layer, so that was a bit of new info for
 me, apparently from the operators list not long after I sent my query here.

 Thanks,
 -Jon


 On Wed, Aug 7, 2013 at 9:02 PM, Michael Still mi...@stillhq.com wrote:

 Johnathan,

 this would be easier to debug with a nova-compute log. Are you willing
 to post one somewhere that people could take a look at?

 Thanks,
 Michael

 On Thu, Aug 8, 2013 at 7:35 AM, Jonathan Proulx j...@jonproulx.com wrote:
  Hi All,
 
  Apologies to those who saw this on the operators list earlier, there is
  a
  bit of new info here  having gotten no response there thought I'd take
  it
  to a wider audience...
 
 
  I'm almost through my grizzly upgrade.  I'd upgraded everything except
  nova-compute before upgrading that (ubuntu 12.04 cloud archieve pkgs).
 
  On most nodes the nova-compute service upgraded and restarted properly,
  but
  on some it imediately exits with:
 
  CRITICAL nova [-] 'instance_type_memory_mb'
 
  It would seem like this is https://code.launchpad.net/bugs/1161022 but
  the
  fix for that was released in March and I've verified is in the packaged
  version I'm using.
 
  The referenced bug involves the DB migration only updating non-deleted
  instances in the instance-system-metatata table and the patch skips the
  lookups that are broken (and irrelevant) for deleted instances.
 
  Tracing the DB calls from the host shows it is trying to do lookups for
  instances that were deleted last October, which is a bit surprising as
  it's
  run thousands of instances since  it's not looking those up.
 
  It is note worthy that that is around the time I upgraded from Essex -
  Folsom so it's possible their state  is weirder than most having run
  through
  that update.
 
  There were directories for the instances in question in
  /var/lib/nova/instances, so I thought Aha! and moved them, but on
  restart
  I still get the same failure and same DB query for the old instances.
  Where
  is nova getting the idea it should look these up  how can I stop it?
 
  I've go so far as to generate instance_type_foo entries in the
  instance_system_metadata table  for all instances ever on my deployment
  (about 500k) but I still only have the cryptic CRITICAL nova [-]
  'instance_type_memory_mb' error and a failure to start, so clearly I'm
  casing the wrong problem some how.
 
  Help?
  -Jon
 
  ___
  Mailing list:
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
  Post to : openstack@lists.openstack.org
  Unsubscribe :
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 



 --
 Rackspace Australia





-- 
Rackspace Australia

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] Error in bare metal configuration

2013-08-13 Thread Jake G.
Hi all,

Having an issue deploying bare metal on a new packstack-created OpenStack 
Grizzly server set up solely for bare metal testing.
I have been following 
https://wiki.openstack.org/wiki/GeneralBareMetalProvisioningFramework but after 
I run:

# nova-baremetal-manage db sync

I get the error:
Command failed, please check log for more info
2013-08-13 18:06:05.056 3412 CRITICAL nova [-] 'module' object has no attribute 
'DatabaseNotControlledError'
2013-08-13 18:06:05.056 3412 TRACE nova Traceback (most recent call last):
2013-08-13 18:06:05.056 3412 TRACE nova   File 
/usr/bin/nova-baremetal-manage, line 221, in module
2013-08-13 18:06:05.056 3412 TRACE nova     main()
2013-08-13 18:06:05.056 3412 TRACE nova   File 
/usr/bin/nova-baremetal-manage, line 213, in main
2013-08-13 18:06:05.056 3412 TRACE nova     fn(*fn_args, **fn_kwargs)
2013-08-13 18:06:05.056 3412 TRACE nova   File 
/usr/bin/nova-baremetal-manage, line 101, in sync
2013-08-13 18:06:05.056 3412 TRACE nova     bmdb_migration.db_sync(version)
2013-08-13 18:06:05.056 3412 TRACE nova   File 
/usr/lib/python2.6/site-packages/nova/virt/baremetal/db/migration.py, line 
34, in db_sync
2013-08-13 18:06:05.056 3412 TRACE nova     return IMPL.db_sync(version=version)
2013-08-13 18:06:05.056 3412 TRACE nova   File 
/usr/lib/python2.6/site-packages/nova/virt/baremetal/db/sqlalchemy/migration.py,
 line 71, in db_sync
2013-08-13 18:06:05.056 3412 TRACE nova     current_version = db_version()
2013-08-13 18:06:05.056 3412 TRACE nova   File 
/usr/lib/python2.6/site-packages/nova/virt/baremetal/db/sqlalchemy/migration.py,
 line 85, in db_version
2013-08-13 18:06:05.056 3412 TRACE nova     except 
versioning_exceptions.DatabaseNotControlledError:
2013-08-13 18:06:05.056 3412 TRACE nova AttributeError: 'module' object has no 
attribute 'DatabaseNotControlledError'
2013-08-13 18:06:05.056 3412 TRACE nova 

I see a similar error reported here but the answer is not confirmed. 
https://lists.launchpad.net/openstack/msg24081.html

Thanks for your help
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] quantum l2 networks

2013-08-13 Thread Francois Deppierraz
Hi Aaron,

Thanks for the patch!

I was experiencing the same issue as the OP with grizzly installed
from the Ubuntu Cloud Archive with quantum and openvswitch. Adding
security groups to a running instance works well now.

Is there any plan to have it included in the havana release, or even
better patched in grizzly?

Cheers,

François

On 08. 06. 13 11:40, Aaron Rosen wrote:
 Hi Daniel, 
 
 Thanks for finding this! This is a bug. The code wasn't accounting for the
 case where the plugin doesn't implement port_security_enabled.  Here's a patch
 that fixes the issue in the meantime. 
 
 Best, 
 
 Aaron 
 
 --- a/nova/network/security_group/quantum_driver.py
 +++ b/nova/network/security_group/quantum_driver.py
 @@ -340,8 +340,9 @@ class
 SecurityGroupAPI(security_group_base.SecurityGroupBase):
  has_ip = port.get('fixed_ips')
  if port_security_enabled and has_ip:
  return True
 -else:
 -return False
 +elif 'port_security_enabled' not in port and has_ip:
 +return True
 +return False
  
  @wrap_check_security_groups_policy
  def add_to_instance(self, context, instance, security_group_name):
 
 
 
 On Sat, Jun 8, 2013 at 2:14 AM, daniels cai danx...@gmail.com
 mailto:danx...@gmail.com wrote:
 
  
 nova add-secgroup 24891d97-8d0e-4e99-9537-c8f8291913d0 d11
 
 ERROR: Network requires port_security_enabled and subnet associated
 in order to apply security groups. (HTTP 400) (Request-ID:
 req-94cb2d54-858b-4843-af53-b373c88bcdc0)
 
 
 security group is exists 
 
 # quantum security-group-list 
 +--+-+--+
 | id   | name| description  |
 +--+-+--+
 | 0acc8258-bd9f-4f87-b051-a94dbc1504eb | default | default  |
 | 5902febc-e793-4b09-8073-567226d83d79 | d11 | des for firewall |
 +--+-+--+
 
 
 
 Daniels Cai
 http://dnscai.com
 
 
 2013/6/8 Aaron Rosen aro...@nicira.com mailto:aro...@nicira.com
 
 You said: 
 
 it works, but when i try to attach a security group to an exist
 vm , api throw an error :Network requires
 port_security_enabled and subnet associated in order to apply
 security groups.
 
 What command are you running to generate that error? 
 
 
 
 On Sat, Jun 8, 2013 at 1:45 AM, daniels cai danx...@gmail.com
 mailto:danx...@gmail.com wrote:
 
 Aaron , thanks for you answers, i see it.
 
 we are not useing nvp in our environemnt
 yet.
 
 my vm is boot with a subnet_id specified
 . 
 i am sure about it .
 here is more info:
 
 vm has an ip 192.168.6.100 , this ip belongs to subnet
 83afd693-7e36-41e9-b896-9d8b0d89d255
 , this subnet belongs to network iaas-net, network id is
 5332f0f7-3156-4961-aa67-0b8507265fa5
 
 # nova list
 
 | 24891d97-8d0e-4e99-9537-c8f8291913d0 |
 ubuntu-1304-server-amd64 | ACTIVE  | iaas-net=192.168.6.100
 
 here is quantum network info :
 
 # quantum net-list
 
 +--+--+---+
 | id   | name |
 subnets   |
 
 +--+--+---+
 |
 5332f0f7-3156-4961-aa67-0b8507265fa5 | iaas-net |
 329ca377-6193-4a0c-9320-471cd5ff762f 192.168.202.0/24
 http://192.168.202.0/24 |
 |  |  |
 83afd693-7e36-41e9-b896-9d8b0d89d255 192.168.6.0/24
 http://192.168.6.0/24   |
 |  |  |
 bb1afb2d-ab59-4ba4-8a76-8b5b426b8e33 192.168.7.0/24
 http://192.168.7.0/24   |
 |  |  |
 d59794df-bb49-4924-a19f-cbdec0ce24df 192.168.188.0/24
 http://192.168.188.0/24 |
 |  |  |
 dca45033-e506-42e4-bf05-aaccd0591c55 192.168.193.0/24
 http://192.168.193.0/24 |
 |  |  |
 e8a9be74-2f39-4d7e-9287-c5b85b573cca 192.168.192.0/24
 http://192.168.192.0/24 |
 
 
 i enabled the following features in quantum
 1. namespace
 2. overlap ips
 
 if any more info needed for 

[Openstack] Fwd: [vmwareapi] Could we distribute wsdl files from vSphere SDK?

2013-08-13 Thread Roman Sokolkov
Hi, folks

Here
(http://docs.openstack.org/trunk/openstack-compute/admin/content/vmware.html)
it is mentioned that we need the vSphere SDK to use the vmwareapi drivers. It
can be downloaded from vmware.com only by authorized users.

As I understand it, nova uses only the WSDL files from this SDK. What are the
license terms for the WSDL files from the SDK?

Could we distribute the necessary WSDL files with our deployment packs?

   - I've found this discussion
     (https://groups.google.com/forum/#!msg/jclouds-dev/dT3MkGT2eNo/7bERFdi8HY0J),
     but it has no answer.
   - There is also the official FAQ (https://communities.vmware.com/docs/DOC-7983),
     but it is not clear to me either.

Thanks, Roman
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Fwd: Multiple Network Node Set Up

2013-08-13 Thread raghavendra.lad
Hi Girija,

Please let me know the details of the OpenStack multi-node installation that 
you have planned:

Controller node

Network node

Compute nodes

How have you planned the MySQL, Nova services and Cinder installation steps?

Regards,
Raghavendra Lad

From: Girija Sharan [mailto:girijasharansi...@gmail.com]
Sent: Tuesday, August 13, 2013 7:43 PM
To: openstack@lists.openstack.org
Subject: [Openstack] Fwd: Multiple Network Node Set Up

Hi all,

I am using the OpenStack Grizzly release. Right now I am following a multi-node 
installation which has 1 OpenStack controller, 1 network and 2 compute nodes.

I wanted to know if someone has tried with multiple network nodes as well, and 
what changes one would have to make to get multiple network nodes into the setup.
Any ideas on this will be highly appreciated.
Thanks in advance.
Cheers Openstack !!!
Regards,
Girija Sharan



This message is for the designated recipient only and may contain privileged, 
proprietary, or otherwise confidential information. If you have received it in 
error, please notify the sender immediately and delete the original. Any other 
use of the e-mail by you is prohibited.

Where allowed by local law, electronic communications with Accenture and its 
affiliates, including e-mail and instant messaging (including content), may be 
scanned by our systems for the purposes of information security and assessment 
of internal compliance with Accenture policy.

__

www.accenture.com
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Fwd: [vmwareapi] Could we distribute wsdl files from vSphere SDK?

2013-08-13 Thread Christian Berendt
On 08/13/2013 04:42 PM, Shawn Hartsock wrote:
 Could someone direct me to how to fix the official docs? I'll do that 
 *today*... it's on my to-do list anyway.

Have a look at https://wiki.openstack.org/wiki/Documentation/HowTo.

HTH, Christian.

-- 
Christian Berendt
Cloud Computing Solution Architect
Mail: bere...@b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Fwd: [vmwareapi] Could we distribute wsdl files from vSphere SDK?

2013-08-13 Thread Shawn Hartsock
Thanks. I'll sit down and do this today. That documentation has been wrong 
quite a while.

# Shawn Hartsock

- Original Message -
 From: Christian Berendt bere...@b1-systems.de
 To: Shawn Hartsock hartso...@vmware.com
 Cc: Roman Sokolkov rsokol...@gmail.com, openstack@lists.openstack.org
 Sent: Tuesday, August 13, 2013 10:48:30 AM
 Subject: Re: [Openstack] Fwd: [vmwareapi] Could we distribute wsdl files from 
 vSphere SDK?
 
 On 08/13/2013 04:42 PM, Shawn Hartsock wrote:
  Could someone direct me to how to fix the official docs? I'll do that
  *today*... it's on my to-do list anyway.
 
 Have a look at https://wiki.openstack.org/wiki/Documentation/HowTo.
 
 HTH, Christian.
 
 --
 Christian Berendt
 Cloud Computing Solution Architect
 Mail: bere...@b1-systems.de
 
 B1 Systems GmbH
 Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
 GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537
 

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Odd glance trouble after Grizzly update

2013-08-13 Thread Jonathan Proulx
I'd thought this was happening before it got to a compute host, since no host
was listed in the instances table of the database, but I did manage to hunt
down the RPC exception that happens on the compute node:

2013-08-13 12:12:09.055 AUDIT nova.compute.manager
[req-3ec8ea52-2ef7-4a38-8ccf-4d629a2a16df 0be8fa0d641a4e778b9262bd2e5f40b5
6f9adccbd03e4d2186756896957a14bf] [instance:
62f92c7a-a7cf-41e7-83c5-055ad43ad78a] Starting instance...
2013-08-13 12:12:11.279 ERROR nova.openstack.common.rpc.amqp
[req-3ec8ea52-2ef7-4a38-8ccf-4d629a2a16df 0be8fa0d641a4e778b9262bd2e5f40b5
6f9adccbd03e4d2186756896957a14bf] Exception during message handling

and then a long trace, included below.

I suspected an incorrect rabbit password somewhere, as the failure coincided
with a config management update, but I've matched the passwords in config
management and all the openstack configs I can think of, comparing both
current values and backups from a known good state, and haven't found that
to be the case yet.

Thanks,
-Jon

TRACE:

2013-08-13 12:28:41.317 9593 TRACE nova.openstack.common.rpc.amqp Traceback
(most recent call last):
2013-08-13 12:28:41.317 9593 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py, line
430, in _process_data
2013-08-13 12:28:41.317 9593 TRACE nova.openstack.common.rpc.amqp rval
= self.proxy.dispatch(ctxt, version, method, **args)
2013-08-13 12:28:41.317 9593 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py,
line 133, in dispatch
2013-08-13 12:28:41.317 9593 TRACE nova.openstack.common.rpc.amqp
return getattr(proxyobj, method)(ctxt, **kwargs)
2013-08-13 12:28:41.317 9593 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/nova/exception.py, line 117, in wrapped
2013-08-13 12:28:41.317 9593 TRACE nova.openstack.common.rpc.amqp
temp_level, payload)
2013-08-13 12:28:41.317 9593 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/contextlib.py, line 24, in __exit__
2013-08-13 12:28:41.317 9593 TRACE nova.openstack.common.rpc.amqp
self.gen.next()
2013-08-13 12:28:41.317 9593 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/nova/exception.py, line 94, in wrapped
2013-08-13 12:28:41.317 9593 TRACE nova.openstack.common.rpc.amqp
return f(self, context, *args, **kw)
2013-08-13 12:28:41.317 9593 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 209, in
decorated_function
2013-08-13 12:28:41.317 9593 TRACE nova.openstack.common.rpc.amqp pass
2013-08-13 12:28:41.317 9593 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/contextlib.py, line 24, in __exit__
2013-08-13 12:28:41.317 9593 TRACE nova.openstack.common.rpc.amqp
self.gen.next()
2013-08-13 12:28:41.317 9593 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 195, in
decorated_function
2013-08-13 12:28:41.317 9593 TRACE nova.openstack.common.rpc.amqp
return function(self, context, *args, **kwargs)
2013-08-13 12:28:41.317 9593 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 260, in
decorated_function
2013-08-13 12:28:41.317 9593 TRACE nova.openstack.common.rpc.amqp
function(self, context, *args, **kwargs)
2013-08-13 12:28:41.317 9593 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 237, in
decorated_function
2013-08-13 12:28:41.317 9593 TRACE nova.openstack.common.rpc.amqp e,
sys.exc_info())
2013-08-13 12:28:41.317 9593 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/contextlib.py, line 24, in __exit__
2013-08-13 12:28:41.317 9593 TRACE nova.openstack.common.rpc.amqp
self.gen.next()
2013-08-13 12:28:41.317 9593 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 224, in
decorated_function
2013-08-13 12:28:41.317 9593 TRACE nova.openstack.common.rpc.amqp
return function(self, context, *args, **kwargs)
2013-08-13 12:28:41.317 9593 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1224, in
run_instance
2013-08-13 12:28:41.317 9593 TRACE nova.openstack.common.rpc.amqp
do_run_instance()
2013-08-13 12:28:41.317 9593 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py, line
242, in inner
2013-08-13 12:28:41.317 9593 TRACE nova.openstack.common.rpc.amqp
retval = f(*args, **kwargs)
2013-08-13 12:28:41.317 9593 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1223, in
do_run_instance
2013-08-13 12:28:41.317 9593 TRACE nova.openstack.common.rpc.amqp
admin_password, is_first_time, node, instance)
2013-08-13 12:28:41.317 9593 TRACE nova.openstack.common.rpc.amqp   File

Re: [Openstack] Odd glance trouble after Grizzly update

2013-08-13 Thread Jonathan Proulx
D'oh, of course after the messy trace post I find my typo.

I'd accidentally defined glance_api_servers to be IP rather than IP:PORT
while refactoring my configuration management (I'd suspected it was in
there but had been looking in all the wrong places).
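
For anyone hitting the same symptom, the relevant nova.conf line on the
compute nodes looks like this (the IP is a placeholder; 9292 is the default
glance API port):

glance_api_servers = 192.168.100.10:9292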

-Jon
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] quantum l2 networks

2013-08-13 Thread Ashok Kumaran
I guess it's already been back-ported in the Grizzly 2013.1.3 cycle:

https://review.openstack.org/#/c/32679


Best
Ashok

Sent from my iPhone

On 13-Aug-2013, at 6:24 PM, Francois Deppierraz franc...@ctrlaltdel.ch wrote:

 Hi Aaron,

 Thanks for the patch!

 I was experiencing the same issue as the OP with grizzly installed
 from the Ubuntu Cloud Archive with quantum and openvswitch. Adding
 security groups to a running instance works well now.

 Is there any plan to have it included in the havana release, or even
 better patched in grizzly?

 Cheers,

 François


Re: [Openstack] nova-compute won't restart (on some nodes) after Grizzly upgrade

2013-08-13 Thread Jonathan Proulx
On Tue, Aug 13, 2013 at 3:59 AM, Michael Still mi...@stillhq.com wrote:

 Jonathan, sorry for the slow reply. I had a baby on Friday last week
 instead of keeping up with email. I promise it wont happen again. ;)


Congrats, now what the hell are you doing reading this list?


 Did you manage these instances in virsh manually at all as part of the
 upgrade? If not, I'd love you to file a bug with a log to show the
 problem.


I did end up using virsh undefine on a number of nodes.  I don't know
that I can make a useful bug report out of it, as I took some pretty coarse
sweeping actions in my rush to make it go again, but I'll try to save
what logs I have and put together a bug after I get my networking tamed.

-Jon
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] Announcing: Gluster Community Day @ SF, Aug. 27

2013-08-13 Thread John Mark Walker
NOTE: Interested in speaking? Contact me off-line.

Thanks to Rackspace for hosting our San Francisco Community Day on Tuesday,
August 27! This day will feature in-depth sessions, use cases, demos, and
developer content presented by Gluster Community experts.

Note that the agenda is subject to change.


AGENDA

9:00am - 9:30am - Light breakfast, introductions, networking

9:30am - 10:30am - The State of the Gluster Community

10:30am - 11:30am - What's New in GlusterFS 3.4

11:30am - 12:30pm - Open

12:30pm - 1:30pm - Lunch (on site)

1:30pm - 2:30pm - All about Geo-replication + demo

2:30pm - 3:30pm - Open

3:30pm - 3:45pm - Break

3:45pm - 4:45pm - Gluster for Developers

4:45pm - 5:00pm - Closing remarks

5:00pm - 6:00pm - Free-as-in-beer happy hour!


Look forward to seeing you there! Come for all or parts of the community
day, even if it's just the happy hour.

Come for the drinks, stay for the Gluster.
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack