Re: [openstack-dev] [Nova] Network stuff in Nova API v3

2013-08-12 Thread Alex Xu

On 2013-08-08 13:49, Zhu Bo wrote:

On 2013-08-07 21:42, Alex Xu wrote:

On 2013-08-07 17:38, John Garbutt wrote:

multi-nic added an extra virtual interface on a separate network, like
adding a port:
http://docs.openstack.org/trunk/openstack-compute/admin/content/using-multi-nics.html 

That just describes creating an instance with multinic, which we will
support. We still have a problem with the add_fixed_ip and remove_fixed_ip
actions in the multinic extension. Those actions invoke inject_network_info
and reset_network.


I think we need to keep a nova-network focused api extension, and a
separate neutron focused api extension, because we have not yet
removed neutron. It should probably proxy the neutron information
still, so people can more easily transition between nova-network and
neutron.

Sounds good, thanks.
The Nova v2 API will be kept alongside v3 for some time, I think. Why not
just keep the Neutron-focused API extension in v3? People will have enough
time to understand the difference between v2 and v3. If we keep an API for
nova-network in v3, we will still face the same problem when the next API
version comes along or when nova-network is removed.
Makes sense. If we add a nova-network focused API extension, we will face
the same problem again when the next API version comes along.

I agree we should probably slim down the neutron focused API extension.

However, it should probably include network-ids and port-ids for each
port, if we still support both:
 nova boot --image  --flavor  --nic net-id=
--nic net-id= 
and this:
nova boot --image  --flavor  --nic 
port-id= 
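
For reference, the python-novaclient equivalent of those two CLI forms is
roughly as follows (untested sketch; credentials and ids are placeholders):

    from novaclient.v1_1 import client

    nova = client.Client('user', 'password', 'tenant',
                         'http://keystone:5000/v2.0')
    image_id = flavor_id = 'REPLACE_ME'
    net1_id = net2_id = port_id = 'REPLACE_ME'

    # one NIC per network; Nova creates the ports in Neutron
    nova.servers.create(name='vm1', image=image_id, flavor=flavor_id,
                        nics=[{'net-id': net1_id}, {'net-id': net2_id}])

    # or hand Nova a pre-created Neutron port
    nova.servers.create(name='vm2', image=image_id, flavor=flavor_id,
                        nics=[{'port-id': port_id}])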

Yes, we still support those. But why do we need network-ids?

Longer term, we still need the metadata service to provide networking
information, so there will be a nova-api that has to proxy info from
neutron, but I agree we should reduce where we can.

Agreed. There will be a nova-api that has to proxy info from
neutron, but we should reduce that where we can.


John

On 7 August 2013 10:08, Alex Xu  wrote:

Hi, guys,

Currently we have one core API and two extensions related to networking in
Nova API v3. They are ips, attach_interface and multinic. I have two
questions about them.

The first question is about ips and attach_interface. Below are the index
responses of ips and attach_interface:
ips:
{
    "addresses": {
        "net1": [
            {
                "addr": "10.0.0.8",
                "mac_addr": "fa:16:3e:c2:0f:aa",
                "type": "fixed",
                "version": 4
            },
            {
                "addr": "30.0.0.5",
                "mac_addr": "fa:16:3e:c2:0f:aa",
                "type": "floating",
                "version": 4
            }
        ]
    }
}

attach_interface:
{
    "interface_attachments": [
        {
            "fixed_ips": [
                {
                    "ip_address": "10.0.0.8",
                    "subnet_id": "f84f7d51-758c-4a02-a4c9-171ed988a884"
                }
            ],
            "mac_addr": "fa:16:3e:c2:0f:aa",
            "net_id": "b6ba34f1-5504-4aca-825b-04511c104802",
            "port_id": "3660380b-0075-4115-be96-f08b41ccdf5d",
            "port_state": "ACTIVE"
        }
    ]
}

The problem is that the responses are similar, just with different views,
and all of the information can be obtained from Neutron directly. I think
we don't want to proxy Neutron through Nova. So how about we merge ips and
attach_interface into a new extension? The new extension would include the
following:
1. Extend the server detail view to list the uuid of each port. Users can
get more information from Neutron by port uuid (see the sketch after this
list).
2. Attach and detach interface actions, moved from the attach_interface
extension.
3. Extend server creation to support networks (the patch is already here:
https://review.openstack.org/#/c/36615/)
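
For example, getting the rest of the port details from Neutron given just
the port uuid is a couple of lines with python-neutronclient (rough sketch;
credentials are placeholders, the port uuid is taken from the sample above):

    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(username='user', password='password',
                                    tenant_name='tenant',
                                    auth_url='http://keystone:5000/v2.0')
    port = neutron.show_port('3660380b-0075-4115-be96-f08b41ccdf5d')['port']
    print(port['network_id'], port['mac_address'], port['fixed_ips'])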

The second question is about multinic. Looking into the code, multinic
just adds a fixed_ip to the server's port. That can be done through the
Neutron API directly too (see the sketch below). But there are also calls
to inject_network_info and reset_network in the code. Only the xen and
vmware drivers implement those functions. I'm not familiar with xen and
vmware; I guess they use a guest agent to update the guest network. If I am
right, I don't think we want to encourage that way of updating the guest
network. There are APIs for inject_network_info and reset_network in the
admin-actions extension as well, and I think we can keep those. But can we
delete multinic for v3?
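
For reference, adding a fixed IP to a port purely through Neutron would look
roughly like this (untested sketch; ids are taken from the samples above,
credentials are placeholders) - though the guest still has to learn about
the new address, which is exactly what inject_network_info/reset_network
try to do:

    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(username='user', password='password',
                                    tenant_name='tenant',
                                    auth_url='http://keystone:5000/v2.0')
    port_id = '3660380b-0075-4115-be96-f08b41ccdf5d'
    port = neutron.show_port(port_id)['port']
    fixed_ips = port['fixed_ips'] + [
        {'subnet_id': 'f84f7d51-758c-4a02-a4c9-171ed988a884'}]
    neutron.update_port(port_id, {'port': {'fixed_ips': fixed_ips}})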

Thanks
Alex


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev








[openstack-dev] Remove 'absolute_limit' from limits's response for v3

2013-08-12 Thread Alex Xu

Hi, guys,

While cleaning up the v3 API, I found that the limits extension returns
absolute_limit. I think that is already covered by the quota_sets
extension, and I can't see why we keep it in limits. To make sure I'm not
missing something, I'm bringing it up here. If we have no reason to keep it
in limits, I'd prefer to delete it.


https://review.openstack.org/#/c/39872/


Thanks
Alex
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] AUTO: Zhao Fang ZF Han is out of the office (returning 08/19/2013)

2013-08-12 Thread Zhao Fang ZF Han


I am out of the office until 08/19/2013.

I'll be out of the office for 5 days from 08/12 to 08/16 and will not be
able to respond to your mail very quickly.


Note: This is an automated response to your message  "Re: [openstack-dev]
[Nova] Network stuff in Nova API v3" sent on 08/12/2013 15:59:47.

This is the only notification you will receive while this person is away.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [climate] Mirantis proposal to extend Climate to support virtual resources reservation

2013-08-12 Thread Patrick Petit

On 8/9/13 3:05 PM, Nikolay Starodubtsev wrote:

Hello, Patrick!

We have several reasons to think that this possibility is interesting for
virtual resources. If we speak about physical resources, users may use them
in different ways, which is why it is impossible to include base actions on
them in the reservation service. But speaking about virtual reservations,
let's imagine a user wants to reserve a virtual machine. He knows
everything about it - its parameters, flavor and the time it should be
leased for. Really, in this case the user wants to have an already working
(or at least starting to work) reserved virtual machine, and it would be
great to include this capability in the reservation service.
We are thinking about base actions for virtual reservations that would be
supported by Climate, like boot/delete for an instance, create/delete for a
volume and create/delete for stacks. The same would apply to volumes, IPs,
etc. More complicated behaviour may be implemented in Heat. This will make
reservations simpler to use for end users.


Don't you think so?
Well yes and no. It really depends upon what you put behind those lease
actions. The view I am trying to sustain is separation of duties, to keep
the service simple, ubiquitous and non-prescriptive of a certain kind of
usage pattern. In other words, keep Climate for reservation of capacity
(physical or virtual), Heat for orchestration, and so forth. Consider for
example the case of reservation not as a technical act but rather as a
business enabler for wholesale activities. You don't need, and probably
don't want, to start or stop any resource there. I do not deny that there
are cases where it is desirable, but then how reservations are used and
composed together at the end of the day mainly depends on exogenous factors
which couldn't be anticipated because they are driven by the business.


And so, rather than coupling reservations with hard-wired resource
instantiation actions, I would rather couple them with notifications 
that everybody can subscribe to (as opposed to the Resource Manager 
only) which would let users decide what to do with the life-cycle 
events. The what to do may very well be what you advocate i.e. start a 
full stack of reserved and interwoven resources, or at the other end of 
the spectrum, do nothing at all. This approach IMO would keep things 
more open.


P.S. We also remember the problem you mentioned a few emails ago - how to
guarantee that the user will have an already working and prepared host / VM
/ stack / etc. by the time the lease actually starts, not just "the lease
begins and the preparation process begins too". We are working on it now.
Yes. I think I was explicitly referring to hosts instantiation also 
because there is no support of that in Nova API. Climate should support 
some kind of "reservation kick-in heads-up" notification whereby the 
provider and/or some automated provisioning tools could do the heavy 
lifting work of bringing physical hosts online before a hosts 
reservation lease starts. I think it doesn't have to be rocket-science 
either. It's probably sufficient to make Climate fire up a notification 
that say "Lease starting in x seconds", x being an offset value against 
T0 that could be defined by the operator based on heuristics. A 
dedicated (e.g. IPMI) module of the Resource Manager for hosts 
reservation would subscribe as listener to those events.
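
To make the heads-up idea concrete, the computation itself is trivial;
something along these lines (pure illustration with the standard library
only - the actual notification transport and payload format are left open):

    import datetime

    def lease_heads_up(lease_id, lease_start, offset_seconds):
        """Return when to emit the 'lease starting soon' event and the
        payload a provisioning listener (e.g. an IPMI module) would get."""
        fire_at = lease_start - datetime.timedelta(seconds=offset_seconds)
        payload = {
            'event_type': 'lease.heads_up',   # hypothetical event name
            'lease_id': lease_id,
            'starts_in': offset_seconds,
            'lease_start': lease_start.isoformat(),
        }
        return fire_at, payload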



On Thu, Aug 8, 2013 at 8:18 PM, Patrick Petit wrote:


Hi Nikolay,

Relying on Heat for orchestration is obviously the right thing to
do. But there is still something in your design approach that I am
having difficulties to comprehend since the beginning. Why do you
keep thinking that orchestration and reservation should be treated
together? That's adding unnecessary complexity IMHO. I just don't
get it. Wouldn't it be much simpler and sufficient to say that
there are pools of reserved resources you create through the
reservation service. Those pools could be of different types i.e.
host, instance, volume, network,.., whatever if that's really
needed. Those pools are identified by a unique id that you pass
along when the resource is created. That's it. You know, the AWS
reservation service doesn't even care about referencing a
reservation when an instance is created. The association between
the two just happens behind the scene. That would work in all
scenarios, manual, automatic, whatever... So, why do you care so
much about this in the first place?
Thanks,
Patrick

On 8/7/13 3:35 PM, Nikolay Starodubtsev wrote:

Patrick, responding to your comments:

1) Dina mentioned "start automatically" and "start manually" only as
examples of how these policies may look. It doesn't seem to be a correct
approach to put orchestration functionality (that belongs to Heat) in
Climate. That's why now we can implement the basics like starting a Heat
stack, and for more complex actions we may later utilize something like the
Convection (Task-as-a-Service) project.

[openstack-dev] XenServer - supported image download interface

2013-08-12 Thread Mate Lakat
Hi Stackers,

I would like to advertise a feature, and a blueprint, that enables the use
of sparse images with XenAPI through a supported interface.

The download part is ready, and waiting for review:

https://review.openstack.org/#/c/40708/
https://review.openstack.org/#/c/40906/
https://review.openstack.org/#/c/40907/
https://review.openstack.org/#/c/40908/
https://review.openstack.org/#/c/40909/

The blueprint can be found here:
https://blueprints.launchpad.net/nova/+spec/xenapi-supported-image-import-export

I think the changes are self-contained and small.

If you want to try it out: (assuming you have a raw disk image called
0.raw - I was using a cirros image):

cd /opt/stack/devstack/
. openrc admin
tar -czf raw-stuff.tgz 0.raw
glance image-create --name raw-in-tgz \
  --container-format=ovf --disk-format=raw < raw-stuff.tgz
nova boot --flavor m1.tiny --image raw-in-tgz raw-instance

To boot it as PV, set some metadata:

glance image-update raw-in-tgz --property vm_mode=xen

Any review is appreciated.

-- 
Mate Lakat

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [climate] Mirantis proposal to extend Climate to support virtual resources reservation

2013-08-12 Thread Patrick Petit

On 8/9/13 9:06 PM, Scott Devoid wrote:

Hi Nikolay and Patrick, thanks for your replies.

Virtual vs. Physical Resources
Ok, now I realize what you meant by "virtual resources," e.g. 
instances, volumes, networks...resources provided by existing 
OpenStack schedulers. In this case "physical resources" are actually 
more "removed" since there are no interfaces to them in the user-level 
OpenStack APIs. If you make a physical reservation on "this rack of 
machines right here", how do you supply this reservation information 
to nova-scheduler? Probably via scheduler hints + an availability zone 
or host-aggregates. At which point you're really defining an instance 
reservation that includes explicit scheduler hints. Am I missing 
something?


Hi Scott!
No, you're not missing anything. At least, that's how I see things working
for host reservations. In fact, it is already partially addressed in Havana
with https://wiki.openstack.org/wiki/WholeHostAllocation. What's missing is
the ability to automate the creation and release of those pools based on a
lease schedule.

Thanks
Patrick

Eviction:
Nikolay, to your point that we might evict something that was already 
paid for: in the design I have in mind, this would only happen if the 
policies set up by the operator caused one reservation to be weighted 
higher than another reservation. Maybe because one client paid more? 
The point is that this would be configurable and the sensible default 
is to not evict anything.



On Fri, Aug 9, 2013 at 8:05 AM, Nikolay Starodubtsev wrote:


Hello, Patrick!

We have several reasons to think that for the virtual resources
this possibility is interesting. If we speak about physical
resources, user may use them in the different ways, that's why it
is impossible to include base actions with them to the reservation
service. But speaking about virtual reservations, let's imagine
user wants to reserve virtual machine. He knows everything about
it - its parameters, flavor and time to be leased for. Really, in
this case user wants to have already working (or at least starting
to work) reserved virtual machine and it would be great to include
this opportunity to the reservation service. We are thinking about
base actions for the virtual reservations that will be supported
by Climate, like boot/delete for instance, create/delete for
volume and create/delete for the stacks. The same will be with
volumes, IPs, etc. As for more complicated behaviour, it may be
implemented in Heat. This will make reservations simpler to use
for the end users.

Don't you think so?

P.S. Also we remember about the problem you mentioned some letters
ago - how to guarantee that user will have already working and
prepared host / VM / stack / etc. by the time lease actually
starts, no just "lease begins and preparing process begins too".
We are working on it now.


On Thu, Aug 8, 2013 at 8:18 PM, Patrick Petit wrote:

Hi Nikolay,

Relying on Heat for orchestration is obviously the right thing
to do. But there is still something in your design approach
that I am having difficulties to comprehend since the
beginning. Why do you keep thinking that orchestration and
reservation should be treated together? That's adding
unnecessary complexity IMHO. I just don't get it. Wouldn't it
be much simpler and sufficient to say that there are pools of
reserved resources you create through the reservation service.
Those pools could be of different types i.e. host, instance,
volume, network,.., whatever if that's really needed. Those
pools are identified by a unique id that you pass along when
the resource is created. That's it. You know, the AWS
reservation service doesn't even care about referencing a
reservation when an instance is created. The association
between the two just happens behind the scene. That would work
in all scenarios, manual, automatic, whatever... So, why do
you care so much about this in the first place?
Thanks,
Patrick

On 8/7/13 3:35 PM, Nikolay Starodubtsev wrote:

Patrick, responding to your comments:

1) Dina mentioned "start automatically" and "start manually"
only as examples of how these politics may look like. It
doesn't seem to be a correct approach to put orchestration
functionality (that belongs to Heat) in Climate. That's why
now we can implement the basics like starting Heat stack, and
for more complex actions we may later utilize something like
Convection (Task-as-a-Service) project.

2) If we agree that Heat is the main consumer of
Reservation-as-a-Service, we can agree that lease may be
created according to one of the following scenarios (b

[openstack-dev] [savanna] Savanna PTL election proposal

2013-08-12 Thread Matthew Farrellee

This is a request for feedback from the community.

The Savanna project has been operating with a benevolent dictator. It 
wants to upgrade to an elected PTL.


There's no set process for a project that isn't incubating or 
integrated. Our goal is to mirror the standard election process as 
closely as possible, but a few options exist for components of the election.


The goal is to agree on election options during this week's (15 Aug) 
Savanna meeting 
(https://wiki.openstack.org/wiki/Meetings/SavannaAgenda), start the 
election after the meeting, and complete the election by the following 
week's meeting. (This is also open for suggestions)


The proposal w/ options -
 0. System -
  a. http://www.cs.cornell.edu/w8/~andru/civs/
 1. Candidates -
  a. members of the electorate (OpenStack standard)
 2. Candidate nomination -
  a. anyone can list names in 
https://etherpad.openstack.org/savanna-ptl-candidates-0

  b. anyone mentioned during this week's IRC meeting
  c. both (a) and (b)
  - Current direction is to be inclusive and thus (c)
 3. Electorate -
  a. all AUTHORS on the Savanna repositories
  b. all committers (git log --author) on Savanna repos since Grizzly 
release

  c. all committers since Savanna inception
  d. savanna-core members (currently 2 people)
  e. committers w/ filter on number of commits or size of commits
  - Current direction is to be broadly inclusive (not (d) or (e)) thus 
(a), it is believed that (a) ~= (b) ~= (c).

 4. Duration of election -
  a. 1 week (from 15 Aug meeting to 22 Aug meeting)
 5. Term -
  a. effective immediately through next full OpenStack election cycle 
(i.e. now until "I" release, 6 mo+)

  b. effective immediately until min(6 mo, incubation)
  c. effective immediately until end of incubation
  - Current direction is any option that aligns with the standard 
OpenStack election cycle


FYI, Savanna repositories -
 . https://github.com/stackforge/savanna - core services
 . https://github.com/stackforge/savanna-extra - DIB elements
 . https://github.com/stackforge/savanna-dashboard - horizon integration
 . https://github.com/stackforge/python-savannaclient - client library

Thanks to hub_cap and other folks on #savanna for the lively discussion 
and debate in forming this proposal.


Best,


matt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ANN] LogCAS is released

2013-08-12 Thread Akira Yoshiyama
Hello stackers,

I'm pleased to announce the first release of LogCAS, a log
collecting/analyzing system for OpenStack.

LogCAS has major features below:

* List logs: list logs from OpenStack components by time series.
* List requests: list logs per request ID.
* Display a request: list logs for a user request by time series.
* Display a log: print details of a log.

You can find more details on github (https://github.com/yosshy/logcas).

README(en) https://github.com/yosshy/logcas/blob/master/README.md
README(ja) https://github.com/yosshy/logcas/blob/master/README.ja.md
Screenshots https://github.com/yosshy/logcas/tree/master/screenshots

Comments are welcome.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] a blueprint and bugs needing attention

2013-08-12 Thread Gary Kotton


-Original Message-
From: Shawn Hartsock [mailto:hartso...@vmware.com] 
Sent: Thursday, August 08, 2013 9:56 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [nova][vmware] a blueprint and bugs needing attention

I have a couple things that need some action (they're becoming problems).

This blueprint is not accepted... but the associated patch is merged already! 
Can we just get the paperwork on this one caught up?
* https://blueprints.launchpad.net/nova/+spec/vmware-configuration-section
** https://review.openstack.org/#/c/37539/

[Gary Kotton] The BP has been marked completed and the code is accepted into 
the Havana release

This review has sat for weeks with very few -1's and it's been in review for 
months, it really is ready to go:
* https://review.openstack.org/#/c/30822/

These patches are also pretty much a slam-dunk for whoever has time to look:
* https://review.openstack.org/#/c/33504/
* https://review.openstack.org/#/c/39336/


[Gary Kotton] Yes, this one has some very good karma

Thanks for your help.

# Shawn Hartsock


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for approving Auto HA development blueprint.

2013-08-12 Thread yongiman


> Hi,
> 
> I am now developing an auto HA operation for VM high availability.
> 
> This function runs completely automatically.
> 
> It needs another service, such as Ceilometer.
> 
> Ceilometer monitors the compute nodes.
> 
> When Ceilometer detects a broken compute node, it sends an API call to Nova; 
> Nova exposes an API for auto HA.
> 
> When the auto HA call is received, Nova performs the auto HA operation.
> 
> All auto-HA-enabled VMs running on the broken host are migrated to the 
> auto HA host, which is an extra compute node used only for the Auto-HA function.
> 
> Below are my blueprint and wiki page.
> 
> The wiki page is not yet complete. I am now adding lots of information about 
> this function.
> 
> Thanks
> 
> https://blueprints.launchpad.net/nova/+spec/vm-auto-ha-when-host-broken
> 
> https://wiki.openstack.org/wiki/Autoha
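
For comparison, much of this flow can already be scripted against the
existing evacuate API while the blueprint is discussed (rough sketch;
assumes shared storage, admin credentials, and a designated spare host -
names are placeholders):

    from novaclient.v1_1 import client

    nova = client.Client('admin', 'password', 'admin_tenant',
                         'http://keystone:5000/v2.0')

    def evacuate_host(broken_host, spare_host):
        # hypothetical helper a monitoring hook (e.g. a Ceilometer alarm)
        # could call once a host is declared broken
        servers = nova.servers.list(search_opts={'host': broken_host,
                                                 'all_tenants': 1})
        for server in servers:
            nova.servers.evacuate(server, spare_host, on_shared_storage=True)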
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Need help with HBase backend

2013-08-12 Thread Thomas Maddox
Hey team,

I am working on a fix for retrieving the latest metadata on a resource rather 
than the first with the HBase implementation, and I'm running into some trouble 
when trying to get my dev environment to work with HBase. It looks like a 
concurrency issue when it tries to store the metering data. I'm getting the 
following error in my logs (summary):

013-08-11 18:52:33.980 2445 ERROR ceilometer.collector.dispatcher.database 
[req-3b3c65c9-1a1b-4b5d-bba5-8224f074b176 None None] Second simultaneous read 
on fileno 7 detected.  Unless you really know what you're doing, make sure that 
only one greenthread can read any particular socket.  Consider using a 
pools.Pool. If you do know what you're doing and want to disable this error, 
call eventlet.debug.hub_prevent_multiple_readers(False)

Full traceback: http://paste.openstack.org/show/43872/

Has anyone else run into this lovely little problem? It looks like the 
implementation needs to use happybase.ConnectionPool, unless I'm missing 
something.

Thanks in advance for help! :)

-Thomas
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Ceilometer and nova compute cells

2013-08-12 Thread Doug Hellmann
On Mon, Aug 5, 2013 at 3:49 AM, Julien Danjou  wrote:

> On Fri, Aug 02 2013, Doug Hellmann wrote:
>
> > On Fri, Aug 2, 2013 at 7:47 AM, Julien Danjou 
> wrote:
> >> That would need the RPC layer to connect to different rabbitmq server.
> >> Not sure that's supported yet.
> >>
> >
> > We'll have that problem in the cell's collector, then, too, right?
>
> If you have an AMQP server per cell and a Ceilometer installation per
> cell, that'd work. But I can't see how you can aggregate at higher
> level.
>

If ceilometer can't replicate the messages up from a cell to a "central"
location by itself, the AMQP system would have to be configured to do that.
I would expect rabbit to provide a feature like that, connecting exchanges.
Does it?

Doug


>
> --
> Julien Danjou
> ;; Free Software hacker ; freelance consultant
> ;; http://julien.danjou.info
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Autogenerating the Nova v3 API specification

2013-08-12 Thread Doug Hellmann
On Mon, Aug 5, 2013 at 10:36 AM, Christopher Yeoh  wrote:

> On Mon, 5 Aug 2013 14:55:15 +0100
> John Garbutt  wrote:
> > Given we seem to be leaning towards WSME:
> >
> http://lists.openstack.org/pipermail/openstack-dev/2013-August/012954.html
> >
> > Could we not try to make WSME give us the documentation we need?
> >
> > Not sure if its feasible, but it seems like there is a good start to
> > that already available:
> > https://wsme.readthedocs.org/en/latest/document.html
>
> Hrm, it's not clear from there how the API samples are generated.
>

Each type declared for the API has a class method to instantiate a sample
object. That object is then passed through the appropriate serializer (XML
or JSON). reST directives embedded in our Sphinx docs trigger the
conversion and render to HTML now, but we'll need to build something
similar to generate standalone files for use with the DocBook-based
documentation.
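
The pattern looks roughly like this (illustrative type, not one of the real
Nova or Ceilometer classes):

    from wsme import types as wtypes

    class Server(wtypes.Base):
        name = wtypes.text
        status = wtypes.text

        @classmethod
        def sample(cls):
            # consumed by the documentation directives to render the
            # JSON/XML sample for this type
            return cls(name='test-server', status='ACTIVE')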

Doug



> But more generally I have a concern with making the V3 API specification
> dependent on getting WSME merged - since I think it's a
> reasonably big chunk of work and certainly won't land until sometime in
> the icehouse timeframe. In the meantime, without some automation of the
> process, it's likely we won't have a V3 API spec, as there are around 60
> extensions (with all their methods) to document.
>
> Regards,
>
> Chris
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Need help with HBase backend

2013-08-12 Thread Stas Maksimov
Hi Thomas,

I definitely saw this before; iirc it was caused by monkey-patching
somewhere else in ceilometer. It was fixed in the end before I submitted
the hbase implementation.

At this moment unfortunately that's all I can recollect on the subject.
I'll get back to you if I have an 'aha' moment on this. Feel free to
contact me off-list regarding this hbase driver.

Thanks,
Stas.
 Hey team,

 I am working on a fix for retrieving the latest metadata on a resource
rather than the first with the HBase implementation, and I'm running into
some trouble when trying to get my dev environment to work with HBase. It
looks like a concurrency issue when it tries to store the metering data.
I'm getting the following error in my logs (summary):

 *013-08-11 18:52:33.980 2445 ERROR
ceilometer.collector.dispatcher.database
[req-3b3c65c9-1a1b-4b5d-bba5-8224f074b176 None None] Second simultaneous
read on fileno 7 detected.  Unless you really know what you're doing, make
sure that only one greenthread can read any particular socket.  Consider
using a pools.Pool. If you do know what you're doing and want to disable
this error, call eventlet.debug.hub_prevent_multiple_readers(False)*

 *Full traceback*: http://paste.openstack.org/show/43872/

 Has anyone else run into this lovely little problem? It looks like the
implementation needs to use happybase.ConnectionPool, unless I'm missing
something.

 Thanks in advance for help! :)

 -Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ANN] LogCAS is released

2013-08-12 Thread Akira Yoshiyama
Oh, sorry. It was sent before I finished writing it.

> Hello stackers,
>
> I'm pleased to announce the first release of LogCAS, a log
> collecting/analyzing system for OpenStack.
>
> LogCAS has major features below:
>
> * List logs: list logs from OpenStack components by time series.
> * List requests: list logs per request ID.
> * Display a request: list logs for a user request by time series.
> * Display a log: print details of a log.
>
> You can find more details on github (https://github.com/yosshy/logcas).
>
> README(en) https://github.com/yosshy/logcas/blob/master/README.md
> README(ja) https://github.com/yosshy/logcas/blob/master/README.ja.md
> Screenshots https://github.com/yosshy/logcas/tree/master/screenshots
>
> Comments are welcome.

Thank you,

Akira Yoshiyama 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Is WSME really suitable? (Was: [nova] Autogenerating the Nova v3 API specification)

2013-08-12 Thread Doug Hellmann
There's a fix for this in WSME trunk, but it only works with Pecan at the
moment. Now that I'm back from vacation, I will resume working on getting
WSME onto StackForge and a new release cut.
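
In the meantime, anyone using plain Pecan without WSME (as Kiall describes
below) can already set a non-default status per request; roughly (untested
sketch):

    import pecan
    from pecan import expose, response

    class RootController(object):
        @expose('json')
        def index(self):
            response.status = 202          # e.g. report "accepted"
            return {'status': 'queued'}

        @expose('json')
        def missing(self):
            pecan.abort(404, 'no such resource')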

Doug


On Tue, Aug 6, 2013 at 7:36 PM, Devananda van der Veen <
devananda@gmail.com> wrote:

> On Tue, Aug 6, 2013 at 2:17 PM, Mac Innes, Kiall  wrote:
>
>> On 06/08/13 21:56, Jonathan LaCour wrote:
>> > James Slagle  wrote:
>> >
>> >> WSME + pecan is being used in Tuskar:
>> >> https://github.com/tuskar/tuskar (OpenStack management API)
>> >>
>> >> We encountered the same issue discussed here.  A solution we settled
>> >> on for now was to use a custom Renderer class that could handle
>> >> different response codes.  You set the renderer in the call to
>> >> pecan.make_app.  This was meant to be a temporary solution until
>> >> there's better support in WSME.
>> >
>> > If there is anything I can do on the Pecan side, let me know! Happy to
>> build in new functionality to make this easier, in general. It does seem to
>> make sense to be fixed on the WSME side, though.
>> >
>> > Best --
>> >
>> > - Jonathan
>>
>> Nah - this is entirely on the WSME side :)
>>
>> WSME translates all exceptions that don't extend its client exception class to
>> HTTP 500, and anything that does to HTTP 400.
>>
>> Beyond that, you only have the "default status code" to work with - 401,
>> 404, 503 etc are all off limits with stock WSME. Literally you have a
>> default code for successful requests, and 400 or 500. Nothing else.
>>
>> It seems like Ceilometer has worked their way around it using the
>> "_lookup" method for 404's (but I can't find how they return any other
>> status codes..), and libra + tuskar have replaced the WSME error
>> handling for something entirely custom.
>>
>> We're not massively interested in replacing WSME's error handling with
>> something custom, so our plan is to just use Pecan and ignore WSME for
>> the Designate v2 API. When it's ready, hopefully the switch won't be too
>> painful!
>>
>> Thanks,
>> Kiall
>>
>>
> Ironic is also using WSME, and has been bitten by this issue. In certain
> situations we want to return either a 200 or a 202, which doesn't work for
> the same reasons.
>
> -Deva
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] MultiClusterZones

2013-08-12 Thread Wolfgang Richter
What is the status of this proposal:

https://wiki.openstack.org/wiki/MultiClusterZones

Has anyone worked on it?

-- 
Wolf

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Need help with HBase backend

2013-08-12 Thread Stas Maksimov
Aha, so here it goes. The problem was not caused by monkey-patching or
multithreading issues, it was caused by the DevStack VM losing its
connection and getting a new address from the DHCP server. Once I fixed the
connection issues, the problem with eventlet disappeared.

Hope this helps,
Stas

On 12 August 2013 14:49, Stas Maksimov  wrote:

> Hi Thomas,
>
> I definitely saw this before, iirc it was caused by monkey-patching
> somewhere else in ceilometer. It was fixed in the end before i submitted
> hbase implementation.
>
> At this moment unfortunately that's all I can recollect on the subject.
> I'll get back to you if I have an 'aha' moment on this. Feel free to
> contact me off-list regarding this hbase driver.
>
> Thanks,
> Stas.
>  Hey team,
>
>  I am working on a fix for retrieving the latest metadata on a resource
> rather than the first with the HBase implementation, and I'm running into
> some trouble when trying to get my dev environment to work with HBase. It
> looks like a concurrency issue when it tries to store the metering data.
> I'm getting the following error in my logs (summary):
>
>  *013-08-11 18:52:33.980 2445 ERROR
> ceilometer.collector.dispatcher.database
> [req-3b3c65c9-1a1b-4b5d-bba5-8224f074b176 None None] Second simultaneous
> read on fileno 7 detected.  Unless you really know what you're doing, make
> sure that only one greenthread can read any particular socket.  Consider
> using a pools.Pool. If you do know what you're doing and want to disable
> this error, call eventlet.debug.hub_prevent_multiple_readers(False)*
>
>  *Full traceback*: http://paste.openstack.org/show/43872/
>
>  Has anyone else run into this lovely little problem? It looks like the
> implementation needs to use happybase.ConnectionPool, unless I'm missing
> something.
>
>  Thanks in advance for help! :)
>
>  -Thomas
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Need help with HBase backend

2013-08-12 Thread Thomas Maddox
Hmmm, that's interesting.

Would that affect an all-in-one deployment? It's referencing localhost
right now; not distributed. My Thrift server is hbase://127.0.0.1:9090/. Or
would it still be affected, because it's a software-facilitated localhost
reference and I'm doing dev inside of a VM (in the cloud) rather than on a
hardware host?

I really appreciate your help!

-Thomas

From: Stas Maksimov
Reply-To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Date: Monday, August 12, 2013 9:17 AM
To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Ceilometer] Need help with HBase backend

Aha, so here it goes. The problem was not caused by monkey-patching or 
multithreading issues, it was caused by the DevStack VM losing its connection 
and getting a new address from the DHCP server. Once I fixed the connection 
issues, the problem with eventlet disappeared.

Hope this helps,
Stas

On 12 August 2013 14:49, Stas Maksimov wrote:

Hi Thomas,

I definitely saw this before, iirc it was caused by monkey-patching somewhere 
else in ceilometer. It was fixed in the end before i submitted hbase 
implementation.

At this moment unfortunately that's all I can recollect on the subject. I'll 
get back to you if I have an 'aha' moment on this. Feel free to contact me 
off-list regarding this hbase driver.

Thanks,
Stas.

Hey team,

I am working on a fix for retrieving the latest metadata on a resource rather 
than the first with the HBase implementation, and I'm running into some trouble 
when trying to get my dev environment to work with HBase. It looks like a 
concurrency issue when it tries to store the metering data. I'm getting the 
following error in my logs (summary):

013-08-11 18:52:33.980 2445 ERROR ceilometer.collector.dispatcher.database 
[req-3b3c65c9-1a1b-4b5d-bba5-8224f074b176 None None] Second simultaneous read 
on fileno 7 detected.  Unless you really know what you're doing, make sure that 
only one greenthread can read any particular socket.  Consider using a 
pools.Pool. If you do know what you're doing and want to disable this error, 
call eventlet.debug.hub_prevent_multiple_readers(False)

Full traceback: http://paste.openstack.org/show/43872/

Has anyone else run into this lovely little problem? It looks like the 
implementation needs to use happybase.ConnectionPool, unless I'm missing 
something.

Thanks in advance for help! :)

-Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Ceilometer] Need help with HBase backend

2013-08-12 Thread Stas Maksimov
Is it sporadic or happens all the time?

In my case my Ceilometer VM was different from HBase VM, so I'm not sure if
DHCP issues can affect localhost connections.

Thanks,
Stas

On 12 August 2013 15:29, Thomas Maddox  wrote:

>  Hmmm, that's interesting.
>
>  That would effect an all-in-one deployment? It's referencing localhost
> right now; not distributed. My Thrift server is hbase://127.0.0.1:9090/.
> Or would that still effect it, because it's a software facilitated
> localhost reference and I'm doing dev inside of a VM (in the cloud) rather
> than a hardware host?
>
>  I really appreciate your help!
>
>  -Thomas
>
>   From: Stas Maksimov 
> Reply-To: OpenStack Development Mailing List <
> openstack-dev@lists.openstack.org>
> Date: Monday, August 12, 2013 9:17 AM
> To: OpenStack Development Mailing List 
> Subject: Re: [openstack-dev] [Ceilometer] Need help with HBase backend
>
>  Aha, so here it goes. The problem was not caused by monkey-patching or
> multithreading issues, it was caused by the DevStack VM losing its
> connection and getting a new address from the DHCP server. Once I fixed the
> connection issues, the problem with eventlet disappeared.
>
> Hope this helps,
> Stas
>
> On 12 August 2013 14:49, Stas Maksimov  wrote:
>
>> Hi Thomas,
>>
>> I definitely saw this before, iirc it was caused by monkey-patching
>> somewhere else in ceilometer. It was fixed in the end before i submitted
>> hbase implementation.
>>
>> At this moment unfortunately that's all I can recollect on the subject.
>> I'll get back to you if I have an 'aha' moment on this. Feel free to
>> contact me off-list regarding this hbase driver.
>>
>> Thanks,
>> Stas.
>>   Hey team,
>>
>>  I am working on a fix for retrieving the latest metadata on a resource
>> rather than the first with the HBase implementation, and I'm running into
>> some trouble when trying to get my dev environment to work with HBase. It
>> looks like a concurrency issue when it tries to store the metering data.
>> I'm getting the following error in my logs (summary):
>>
>>  *013-08-11 18:52:33.980 2445 ERROR
>> ceilometer.collector.dispatcher.database
>> [req-3b3c65c9-1a1b-4b5d-bba5-8224f074b176 None None] Second simultaneous
>> read on fileno 7 detected.  Unless you really know what you're doing, make
>> sure that only one greenthread can read any particular socket.  Consider
>> using a pools.Pool. If you do know what you're doing and want to disable
>> this error, call eventlet.debug.hub_prevent_multiple_readers(False)*
>>
>>  *Full traceback*: http://paste.openstack.org/show/43872/
>>
>>  Has anyone else run into this lovely little problem? It looks like the
>> implementation needs to use happybase.ConnectionPool, unless I'm missing
>> something.
>>
>>  Thanks in advance for help! :)
>>
>>  -Thomas
>>
>>  ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] gauging feelings on a new config option for powervm_lpar_operation_timeout

2013-08-12 Thread Russell Bryant
On 08/11/2013 04:04 PM, Matt Riedemann wrote:
> While working on a patch to implement hard reboot for the powervm driver
> [1], I noticed that the stop_lpar (power_off) method has a timeout
> argument with a default value of 30 seconds but it's not overridden
> anywhere in the code and it's not configurable.  The start_lpar
> (power_on) method doesn't have a timeout at all.  I was thinking about
> creating a patch to (1) make start_lpar poll until the instance is
> running and (2) making the stop/start timeouts configurable with a new
> config option, something like powervm_lpar_operation_timeout that
> defaults to 60 seconds.

Why would someone change this?  What makes one person's environment need
a different timeout than another?  If those questions have good answers,
it's probably fine IMO.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Need help with HBase backend

2013-08-12 Thread Thomas Maddox
Happens all of the time. I haven't been able to get a single meter stored. :(

From: Stas Maksimov
Reply-To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Date: Monday, August 12, 2013 9:34 AM
To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Ceilometer] Need help with HBase backend

Is it sporadic or happens all the time?

In my case my Ceilometer VM was different from HBase VM, so I'm not sure if 
DHCP issues can affect localhost connections.

Thanks,
Stas

On 12 August 2013 15:29, Thomas Maddox wrote:
Hmmm, that's interesting.

That would effect an all-in-one deployment? It's referencing localhost right 
now; not distributed. My Thrift server is 
hbase://127.0.0.1:9090/. Or would that still effect it, 
because it's a software facilitated localhost reference and I'm doing dev 
inside of a VM (in the cloud) rather than a hardware host?

I really appreciate your help!

-Thomas

From: Stas Maksimov
Reply-To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Date: Monday, August 12, 2013 9:17 AM
To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Ceilometer] Need help with HBase backend

Aha, so here it goes. The problem was not caused by monkey-patching or 
multithreading issues, it was caused by the DevStack VM losing its connection 
and getting a new address from the DHCP server. Once I fixed the connection 
issues, the problem with eventlet disappeared.

Hope this helps,
Stas

On 12 August 2013 14:49, Stas Maksimov wrote:

Hi Thomas,

I definitely saw this before, iirc it was caused by monkey-patching somewhere 
else in ceilometer. It was fixed in the end before i submitted hbase 
implementation.

At this moment unfortunately that's all I can recollect on the subject. I'll 
get back to you if I have an 'aha' moment on this. Feel free to contact me 
off-list regarding this hbase driver.

Thanks,
Stas.

Hey team,

I am working on a fix for retrieving the latest metadata on a resource rather 
than the first with the HBase implementation, and I'm running into some trouble 
when trying to get my dev environment to work with HBase. It looks like a 
concurrency issue when it tries to store the metering data. I'm getting the 
following error in my logs (summary):

013-08-11 18:52:33.980 2445 ERROR ceilometer.collector.dispatcher.database 
[req-3b3c65c9-1a1b-4b5d-bba5-8224f074b176 None None] Second simultaneous read 
on fileno 7 detected.  Unless you really know what you're doing, make sure that 
only one greenthread can read any particular socket.  Consider using a 
pools.Pool. If you do know what you're doing and want to disable this error, 
call eventlet.debug.hub_prevent_multiple_readers(False)

Full traceback: http://paste.openstack.org/show/43872/

Has anyone else run into this lovely little problem? It looks like the 
implementation needs to use happybase.ConnectionPool, unless I'm missing 
something.

Thanks in advance for help! :)

-Thomas



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Extension to volume creation (filesystem and label)

2013-08-12 Thread Vishvananda Ishaya
This would need to happen on the cinder side on creation. I don't think it is 
safe for nova to be modifying the contents of the volume on attach. That said 
nova does currently set the serial number on attach (for libvirt at least) so 
the volume will show up as:

/dev/disk/by-id/virtio-

Although the uuid gets truncated.
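
If it does land in Cinder, the create-time step would presumably be little
more than the following (illustrative sketch, not existing Cinder code;
assumes the backend can expose the new volume as a local block device on
the cinder host):

    import subprocess

    def make_labeled_fs(device, label, fstype='ext4'):
        # create the filesystem on the freshly created volume
        subprocess.check_call(['mkfs', '-t', fstype, device])
        # ext* labels are limited to 16 characters, so a full UUID
        # would get truncated here as well
        subprocess.check_call(['tune2fs', '-L', label[:16], device])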

Vish

On Aug 10, 2013, at 10:11 PM, Greg Poirier  wrote:

> Since we can't guarantee that a volume, when attached, will become a 
> specified device name, we would like to be able to create a filesystem and 
> label it (so that we can programmatically interact with it when provisioning 
> systems, services, etc).
> 
> What we are trying to decide is whether this should be the responsibility of 
> Nova or Cinder. Since Cinder essentially has all of the information about the 
> volume and is already responsible for creating the volume (against the 
> configured backend), why not also give it the ability to mount the volume 
> (assuming support for it on the backend exists), run mkfs., 
> and then use tune2fs to label the volume with (for example) the volume's UUID?
> 
> This way we can programmatically do:
> 
> mount /dev/disk/by-label/ /mnt/point
> 
> This is more or less a functional requirement for our provisioning service, 
> and I'm wondering also:
> 
> - Is anyone else doing this already?
> - Has this been considered before?
> 
> We will gladly implement this and submit a patch against Cinder or Nova. We'd 
> just like to make sure we're heading in the right direction and making the 
> change in the appropriate part of Openstack.
> 
> Thanks,
> 
> Greg Poirier
> Opower - Systems Engineering
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] gauging feelings on a new config option for powervm_lpar_operation_timeout

2013-08-12 Thread Matt Riedemann
Russell, with powervm the hypervisor (IVM) is running on a detached system 
and depending on how many compute nodes are going through the same 
hypervisor I would think the load would vary as it processes multiple 
requests.  In my runs with Tempest I haven't seen any timeouts in the 
power_off operation, but I have seen other problems which are making me 
conscious of timeout issues with the powervm driver, mainly around taking 
a snapshot of a running instance (but that's a different issue related 
more to disk performance on the IVM and network topology - still 
investigating).

Honestly, I just don't like hard-coded timeouts which can't be configured 
if the need arises.  I don't know why there is a timeout argument in the 
code that defaults to 30 seconds if it can't be overridden (or is 
overridden in the code).  We could do like the libvirt driver and just not 
have a timeout on stop/start, but that scares me for some reason.
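
Roughly what I had in mind, for what it's worth (sketch only; the option
name comes from this thread and isn't in the tree today):

    import time

    from oslo.config import cfg

    powervm_opts = [
        cfg.IntOpt('powervm_lpar_operation_timeout',
                   default=60,
                   help='Timeout in seconds for LPAR start/stop operations'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(powervm_opts)

    def wait_for_lpar_state(get_state, wanted_state, timeout=None):
        # poll until the LPAR reaches the wanted state or the timeout expires
        timeout = timeout or CONF.powervm_lpar_operation_timeout
        deadline = time.time() + timeout
        while time.time() < deadline:
            if get_state() == wanted_state:
                return True
            time.sleep(1)
        return False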



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Russell Bryant 
To: openstack-dev@lists.openstack.org, 
Date:   08/12/2013 09:49 AM
Subject:Re: [openstack-dev] [nova] gauging feelings on a new 
config option for powervm_lpar_operation_timeout



On 08/11/2013 04:04 PM, Matt Riedemann wrote:
> While working on a patch to implement hard reboot for the powervm driver
> [1], I noticed that the stop_lpar (power_off) method has a timeout
> argument with a default value of 30 seconds but it's not overridden
> anywhere in the code and it's not configurable.  The start_lpar
> (power_on) method doesn't have a timeout at all.  I was thinking about
> creating a patch to (1) make start_lpar poll until the instance is
> running and (2) making the stop/start timeouts configurable with a new
> config option, something like powervm_lpar_operation_timeout that
> defaults to 60 seconds.

Why would someone change this?  What makes one person's environment need
a different timeout than another?  If those questions have good answers,
it's probably fine IMO.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [climate] Mirantis proposal to extend Climate to support virtual resources reservation

2013-08-12 Thread Nikolay Starodubtsev
Hi, again!

Patrick, I'll try to explain why we believe in some base actions like
instance starting/deleting in Climate. We are thinking about the following
workflow (which will be quite comfortable and user friendly; we now have
more than one customer who really wants it):

1) The user goes to the OpenStack dashboard and asks Heat to reserve several
stacks.

2) Heat goes to Climate and creates all the needed leases. Heat also
reserves all resources for these stacks.

3) When the time comes, the user goes to the OpenStack cloud, and here we
think he wants to see the stacks already working (ideal version) or at
least already starting. If not, the user will have to go to the dashboard
and wake up all the stacks he or she reserved. These are several actions
that may be done for the user automatically, because they will need to be
done no matter what the stacks are for - if the user reserves them, he /
she needs them.

We understand that there are situations where these actions may be done by
some other system (like some hypothetical Jenkins). But if we speak about
users, this will be useful. We also understand that this default behavior
should be implemented in some kind of long-term life cycle management
system (which is not Heat), but we have no such thing in OpenStack now,
because the best way to implement it would be to use Convection, which is
only a proposal now...

That's why we think that for behavior like "the user just reserves
resources and then does whatever he / she wants with them", physical leases
are the better variant, where the user may reserve several nodes and use
them in different ways. For virtual reservations it will be better to
start / delete them by default (for anything unusual Heat may be used and
modified).

Do you think this workflow is useful too, and if so, can you propose
another implementation variant for it?

Thank you.



On Mon, Aug 12, 2013 at 1:55 PM, Patrick Petit wrote:

>  On 8/9/13 3:05 PM, Nikolay Starodubtsev wrote:
>
> Hello, Patrick!
>
> We have several reasons to think that for the virtual resources this
> possibility is interesting. If we speak about physical resources, user may
> use them in the different ways, that's why it is impossible to include base
> actions with them to the reservation service. But speaking about virtual
> reservations, let's imagine user wants to reserve virtual machine. He knows
> everything about it - its parameters, flavor and time to be leased for.
> Really, in this case user wants to have already working (or at least
> starting to work) reserved virtual machine and it would be great to include
> this opportunity to the reservation service.
>
>  We are thinking about base actions for the virtual reservations that
> will be supported by Climate, like boot/delete for instance, create/delete
> for volume and create/delete for the stacks. The same will be with volumes,
> IPs, etc. As for more complicated behaviour, it may be implemented in Heat.
> This will make reservations simpler to use for the end users.
>
> Don't you think so?
>
> Well yes and and no. It really depends upon what you put behind those
> lease actions. The view I am trying to sustain is separation of duties to
> keep the service simple, ubiquitous and non prescriptive of a certain kind
> of usage pattern. In other words, keep Climate for reservation of capacity
> (physical or virtual), Heat for orchestration, and so forth. ... Consider
> for example the case of reservation as a non technical act but rather as a
> business enabler for wholesales activities. Don't need, and probably don't
> want to start or stop any resource there. I do not deny that there are
> cases where it is desirable but then how reservations are used and composed
> together at the end of the day mainly depends on exogenous factors which
> couldn't be anticipated because they are driven by the business.
>
> And so, rather than coupling reservations with wired resource
> instantiation actions, I would rather couple them with notifications that
> everybody can subscribe to (as opposed to the Resource Manager only) which
> would let users decide what to do with the life-cycle events. The what to
> do may very well be what you advocate i.e. start a full stack of reserved
> and interwoven resources, or at the other end of the spectrum, do nothing
> at all. This approach IMO would keep things more open.
>
>
> P.S. Also we remember about the problem you mentioned some letters ago -
> how to guarantee that user will have already working and prepared host / VM
> / stack / etc. by the time lease actually starts, no just "lease begins and
> preparing process begins too". We are working on it now.
>
> Yes. I think I was explicitly referring to hosts instantiation also
> because there is no support of that in Nova API. Climate should support
> some kind of "reservation kick-in heads-up" notification whereby the
> provider and/or some automated provisioning tools could do the heavy
> lifting work of bringing physical hosts on

[openstack-dev] [Murano] Meeting minutes 2013-08-12

2013-08-12 Thread Denis Koryavov
Hello,

Below, you can see the meeting minutes from today's Murano meeting.

Minutes:
http://eavesdrop.openstack.org/meetings/murano/2013/murano.2013-08-12-15.05.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/murano/2013/murano.2013-08-12-15.05.txt
Log:
http://eavesdrop.openstack.org/meetings/murano/2013/murano.2013-08-12-15.05.log.html


--
Denis
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack][Dev] Block Storage libraries and shared code

2013-08-12 Thread John Griffith
Hey,

There have been a couple of block storage related patches in Nova lately
and I wanted to get some discussion going and also maybe increase some
awareness on some efforts that were discussed at the last summit.  To catch
up a bit here's the etherpad from the summit session [1].

First off, there was a patch to move Nova's LVM code into Oslo (here [2]).
 This one is probably my fault for not having enough awareness out there
regarding our plans/goals with brick.  I'd like to hear from folks if the
brick approach is not sufficient or if there's some other reason that it's
not desirable (hopefully it's just that folks didn't know about it).

For reference/review the latest version of the brick/local_dev/lvm code is
here: [4].

One question we haven't answered on this yet is where this code should
ultimately live.  Should it be in OSLO, or should it be a separate library
that's part of Cinder and can be imported by other projects?  I'm mixed on
this for a number of reasons, but I think either approach is fine.

The next item around this topic that came up was a patch to add support for
using RBD for local volumes in Nova (here [3]).  You'll notice a number of
folks mentioned brick on this, and I think that's the correct answer.  At
the same time while I think that's the right answer long term I also would
hate to see this feature NOT go in to H just because folks weren't aware of
what was going on in Brick.  It's a bit late in the cycle so my thought on
this is that I'd like to see this resubmitted using the brick/common
approach.  If that can't be done between now and the feature freeze for H3
I'd rather see the patch go in as is than have the feature not be present
at all for another release.  We can then address this when we get a better
story in place for brick.


[1] https://etherpad.openstack.org/havana-cinder-local-storage-library
[2] https://review.openstack.org/#/c/40795/
[3] https://review.openstack.org/#/c/36042/15
[4] https://review.openstack.org/#/c/38172/11/cinder/brick/local_dev/lvm.py


[openstack-dev] [ANVIL] Missing openvswitch dependency for basic-neutron.yaml persona

2013-08-12 Thread Sylvain Bauza

Hi,

./smithy -a install -p conf/personas/in-a-box/basic-neutron.yaml is
failing because openvswitch is missing.

See logs here [1].

Does anyone know why openvswitch is needed when asking for linuxbridge
in components/neutron.yaml?

Shall I update distros/rhel.yaml ?

-Sylvain



[1] : http://pastebin.com/TFkDrrDc




Re: [openstack-dev] Extension to volume creation (filesystem and label)

2013-08-12 Thread John Griffith
On Mon, Aug 12, 2013 at 9:15 AM, Vishvananda Ishaya
wrote:

> This would need to happen on the cinder side on creation. I don't think it
> is safe for nova to be modifying the contents of the volume on attach. That
> said nova does currently set the serial number on attach (for libvirt at
> least) so the volume will show up as:
>
> /dev/disk/by-id/virtio-
>
> Although the uuid gets truncated.
>
> Vish
>
> On Aug 10, 2013, at 10:11 PM, Greg Poirier 
> wrote:
>
> > Since we can't guarantee that a volume, when attached, will become a
> specified device name, we would like to be able to create a filesystem and
> label it (so that we can programmatically interact with it when
> provisioning systems, services, etc).
> >
> > What we are trying to decide is whether this should be the
> responsibility of Nova or Cinder. Since Cinder essentially has all of the
> information about the volume and is already responsible for creating the
> volume (against the configured backend), why not also give it the ability
> to mount the volume (assuming support for it on the backend exists), run
> mkfs., and then use tune2fs to label the volume with (for
> example) the volume's UUID?
> >
> > This way we can programmatically do:
> >
> > mount /dev/disk/by-label/ /mnt/point
> >
> > This is more or less a functional requirement for our provisioning
> service, and I'm wondering also:
> >
> > - Is anyone else doing this already?
> > - Has this been considered before?
> >
> > We will gladly implement this and submit a patch against Cinder or Nova.
> We'd just like to make sure we're heading in the right direction and making
> the change in the appropriate part of Openstack.
> >
> > Thanks,
> >
> > Greg Poirier
> > Opower - Systems Engineering
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

The virtio- method Vish described has worked pretty well for me, so
hopefully that will work for you.  I also don't like the idea of doing a
partition/format on attach in compute; it seems like an easy path to
inadvertently losing your data.
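For what it's worth, a rough guest-side sketch of relying on that by-id link, assuming the serial is the Cinder volume UUID truncated to roughly 20 characters as described above; the timeout, device paths and mount invocation are just illustrative.

    import glob
    import os
    import subprocess
    import time

    def mount_cinder_volume(volume_id, mount_point, timeout=60):
        # The guest sees the attached volume as /dev/disk/by-id/virtio-<serial>,
        # where <serial> is the volume UUID truncated (~20 chars), so match on
        # the prefix rather than the full id.
        pattern = '/dev/disk/by-id/virtio-' + volume_id[:20] + '*'
        deadline = time.time() + timeout
        while time.time() < deadline:
            matches = glob.glob(pattern)
            if matches:
                device = os.path.realpath(matches[0])
                subprocess.check_call(['mount', device, mount_point])
                return device
            time.sleep(1)
        raise RuntimeError('volume %s never appeared under /dev/disk/by-id'
                           % volume_id)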

If you still want to look at adding the partition/format functionality to
Cinder it's an interesting idea, but to be honest I've discounted it in the
past because it just seemed "safer" and more flexible to leave it to the
instance rather than trying to cover all of the possible partition schemes
and FS types etc.

Thanks,
John


Re: [openstack-dev] Extension to volume creation (filesystem and label)

2013-08-12 Thread Greg Poirier
On Mon, Aug 12, 2013 at 9:18 AM, John Griffith
wrote:

>
> On Mon, Aug 12, 2013 at 9:15 AM, Vishvananda Ishaya  > wrote:
>
>> This would need to happen on the cinder side on creation. I don't think
>> it is safe for nova to be modifying the contents of the volume on attach.
>> That said nova does currently set the serial number on attach (for libvirt
>> at least) so the volume will show up as:
>>
>> /dev/disk/by-id/virtio-
>>
>> Although the uuid gets truncated.
>>
>> Vish
>
>
I missed this in my first pass through. Thanks for pointing that out.

We still like the idea of creating the filesystem (to make block storage
truly self-service for developers), but we might be able to work around
that. It seems that my initial feeling that this would be dealt with in
Cinder was correct, though.


> The virtio- method Vish described has worked pretty well for me so
> hopefully that will work for you.  I also don't like the idea of doing a
> parition/format on attach in compute, seems like an easy path to
> inadvertently loosing your data.
>

We could track the state of the filesystem somewhere in the Cinder model.
Only try to initialize it once.

If you still want to look at adding the partition/format functionality to
> Cinder it's an interesting idea, but to be honest I've discounted it in the
> past because it just seemed "safer" and more flexible to leave it to the
> instance rather than trying to cover all of the possible partition schemes
> and FS types etc.
>

Oh, we don't want to get super fancy with it. We would probably only
support one filesystem type and no partitions. E.g., you request a 120GB
volume and you get a 120GB ext4 FS mountable by label.

It may potentially not be worth the effort, ultimately. We'll have to
continue our discussions internally... particularly since now I know where
a useful identifier for the volume is under the dev fs.


Re: [openstack-dev] Allows to set the memory parameters for an Instance

2013-08-12 Thread Shake Chen
Maybe use flavor extra specs:

https://wiki.openstack.org/wiki/FlavorExtraSpecsKeyList
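A rough python-novaclient sketch of that mechanism is below; the memtune key names are placeholders only (the real ones would be whatever the blueprint defines), and the credentials are stand-ins.

    from novaclient.v1_1 import client

    # Placeholder credentials.
    USERNAME, PASSWORD, TENANT = 'user', 'secret', 'tenant'
    AUTH_URL = 'http://keystone.example.com:5000/v2.0/'

    nova = client.Client(USERNAME, PASSWORD, TENANT, AUTH_URL)
    flavor = nova.flavors.find(name='m1.medium')

    # Hypothetical extra-spec keys; these only show where such tunables would
    # live, not the actual names the blueprint would introduce.
    flavor.set_keys({'memtune:hard_limit': '4194304',
                     'memtune:soft_limit': '2097152'})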


On Sun, Aug 11, 2013 at 3:49 PM, Jae Sang Lee  wrote:

>
>
> I've registered a blueprint to allow setting the advanced memory
> parameters for an instance:
>
> https://blueprints.launchpad.net/nova/+spec/libvirt-memtune-for-instance
>
>
> Would it be possible to review it (and maybe get an approval or not)?
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Shake Chen


[openstack-dev] Issues with devstack

2013-08-12 Thread Monty Taylor
Hey all!

You may have noticed recently that there are issues running devstack on
brand new machines - and if you have noticed that, you may have been asking:

- what's the problem?
- how the heck did this make it past the gate since we run devstack so
often?

Well - this is a fun one, and it's kind of a perfect storm of three
different things.

First of all, there are a couple of packages with bad permissions in the
archive that they have on PyPI. Specifically, prettytable and httplib2.
That wasn't a problem until pip 1.4 actually started unpacking zip files
more correctly - by actually preserving the permissions of the files in
the archive.

It got past the gate because pip 1.4 _only_ does this for zip files, and
our mirror happens to return files in a different order, so the gate
jobs were getting the tarball source archives instead of the zip source
archives.

What are we doing about it?

A couple of things. Dean is adding a workaround to devstack to chmod the
bad packages appropriately [1]. We are also filing bugs against the bad
packages. And we've filed a bug against pip [2] and are working with the
upstream authors (thanks dstufft) to get the logic in pip changed to be
safer across the board (and to apply the same logic to both tar and zip
archives).

Monty

[1] https://review.openstack.org/#/c/41209/
[2] https://github.com/pypa/pip/issues/1133
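For anyone hitting this before the devstack fix lands, something along these lines loosens the permissions on the affected installs. This is only the spirit of the workaround, not the actual patch, and the site-packages path is an assumption.

    import glob
    import os
    import stat

    SITE_PACKAGES = '/usr/local/lib/python2.7/dist-packages'  # adjust as needed

    for pattern in ('prettytable*', 'httplib2*'):
        for top in glob.glob(os.path.join(SITE_PACKAGES, pattern)):
            paths = [top]
            for root, dirs, files in os.walk(top):
                paths.extend(os.path.join(root, name) for name in dirs + files)
            for path in paths:
                # Add read permission for everyone, matching what the zip
                # archives should have shipped with in the first place.
                mode = os.stat(path).st_mode
                os.chmod(path, mode | stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)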



Re: [openstack-dev] Extension to volume creation (filesystem and label)

2013-08-12 Thread Fox, Kevin M
It may make the dependency tree a bit weird, but Cinder could use Nova to do the
actual work. Make a bare-minimum image that Cinder fires up under Nova,
attach the volumes, and then do the partitioning/formatting. Once set up,
the VM can be terminated. This has the benefit of reusing a lot of code in
Cinder and Nova. It would also keep dangerous code, like formatting, from being
able to see disks it is not intended to format. The API would live under Cinder,
as the Nova side would simply be an implementation detail the user need not
know about.
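Very roughly, that flow could look like the sketch below. It is only a sketch: it uses python-novaclient from Cinder's side, assumes the worker image runs the mkfs/labeling itself at boot, and the credentials, image/flavor ids and completion-signalling mechanism are all placeholders.

    import time

    from novaclient.v1_1 import client

    # Placeholder credentials.
    USERNAME, PASSWORD, TENANT = 'cinder', 'secret', 'service'
    AUTH_URL = 'http://keystone.example.com:5000/v2.0/'

    nova = client.Client(USERNAME, PASSWORD, TENANT, AUTH_URL)

    def format_volume_via_worker(volume_id, worker_image, worker_flavor):
        # Boot a throwaway instance whose image formats/labels the attached
        # device at boot, then throw the instance away.
        worker = nova.servers.create('cinder-format-worker',
                                     worker_image, worker_flavor)
        while nova.servers.get(worker.id).status != 'ACTIVE':
            time.sleep(2)
        nova.volumes.create_server_volume(worker.id, volume_id, '/dev/vdb')
        # ... wait for the worker to report completion (mechanism TBD) ...
        nova.servers.delete(worker.id)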

Thanks,
Kevin



From: Greg Poirier [greg.poir...@opower.com]
Sent: Monday, August 12, 2013 9:37 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Extension to volume creation (filesystem and   
label)

On Mon, Aug 12, 2013 at 9:18 AM, John Griffith 
mailto:john.griff...@solidfire.com>> wrote:

On Mon, Aug 12, 2013 at 9:15 AM, Vishvananda Ishaya 
mailto:vishvana...@gmail.com>> wrote:
This would need to happen on the cinder side on creation. I don't think it is 
safe for nova to be modifying the contents of the volume on attach. That said 
nova does currently set the serial number on attach (for libvirt at least) so 
the volume will show up as:

/dev/disk/by-id/virtio-

Although the uuid gets truncated.

Vish

I missed this in my first passthrough. Thanks for pointing that out.

We still like the idea of creating the filesystem (to make block storage truly 
self-service for developers), but we might be able to work around that. It 
seems that my initial feeling that this would be dealt with in Cinder was 
correct, though.

The virtio- method Vish described has worked pretty well for me so 
hopefully that will work for you.  I also don't like the idea of doing a 
parition/format on attach in compute, seems like an easy path to inadvertently 
loosing your data.

We could track the state of the filesystem somewhere in the Cinder model. Only 
try to initialize it once.

If you still want to look at adding the partition/format functionality to 
Cinder it's an interesting idea, but to be honest I've discounted it in the 
past because it just seemed "safer" and more flexible to leave it to the 
instance rather than trying to cover all of the possible partition schemes and 
FS types etc.

Oh, we don't want to get super fancy with it. We would probably only support 
one filesystem type and not partitions. E.g. You request a 120GB volume and you 
get a 120GB Ext4 FS mountable by label.

It may potentially not be worth the effort, ultimately. We'll have to continue 
our discussions internally... particularly since now I know where a useful 
identifier for the volume is under the dev fs.



Re: [openstack-dev] Extension to volume creation (filesystem and label)

2013-08-12 Thread John Griffith
On Mon, Aug 12, 2013 at 10:52 AM, Fox, Kevin M  wrote:

> It may make the dependency tree a bit weird but Cinder could use Nova to
> do the actual work. Make a bare minimum image that Cinder fires up under
> Nova, attaches the volumes, and then does the partitioning/formatting. Once
> setup, the vm can be terminated. This has the benefit of reusing a lot of
> code in Cinder and Nova. It also would provide a lot of protection from
> dangerous code like formatting from being able to see disks not intended to
> be formatted. The API would live under Cinder as the Nova stuff would
> simply be an implementation detail the user need not know about.
>
> Thanks,
> Kevin
>

There have been a number of things folks have talked about implementing
"worker" instances in Cinder for.  What you're describing would be one of
them.  To be honest, though, I've never been crazy about the idea of
introducing a Nova dependency in Cinder like that.  It just doesn't seem to me
that in most cases the extra complexity has that great a return, but I
could be wrong.

>
>
> 
> From: Greg Poirier [greg.poir...@opower.com]
> Sent: Monday, August 12, 2013 9:37 AM
> To: OpenStack Development Mailing List
> Subject: Re: [openstack-dev] Extension to volume creation (filesystem and
>   label)
>
> On Mon, Aug 12, 2013 at 9:18 AM, John Griffith <
> john.griff...@solidfire.com> wrote:
>
> On Mon, Aug 12, 2013 at 9:15 AM, Vishvananda Ishaya  > wrote:
> This would need to happen on the cinder side on creation. I don't think it
> is safe for nova to be modifying the contents of the volume on attach. That
> said nova does currently set the serial number on attach (for libvirt at
> least) so the volume will show up as:
>
> /dev/disk/by-id/virtio-
>
> Although the uuid gets truncated.
>
> Vish
>
> I missed this in my first passthrough. Thanks for pointing that out.
>
> We still like the idea of creating the filesystem (to make block storage
> truly self-service for developers), but we might be able to work around
> that. It seems that my initial feeling that this would be dealt with in
> Cinder was correct, though.
>
> The virtio- method Vish described has worked pretty well for me so
> hopefully that will work for you.  I also don't like the idea of doing a
> parition/format on attach in compute, seems like an easy path to
> inadvertently loosing your data.
>
> We could track the state of the filesystem somewhere in the Cinder model.
> Only try to initialize it once.
>
> If you still want to look at adding the partition/format functionality to
> Cinder it's an interesting idea, but to be honest I've discounted it in the
> past because it just seemed "safer" and more flexible to leave it to the
> instance rather than trying to cover all of the possible partition schemes
> and FS types etc.
>
> Oh, we don't want to get super fancy with it. We would probably only
> support one filesystem type and not partitions. E.g. You request a 120GB
> volume and you get a 120GB Ext4 FS mountable by label.
>
> It may potentially not be worth the effort, ultimately. We'll have to
> continue our discussions internally... particularly since now I know where
> a useful identifier for the volume is under the dev fs.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] Extension to volume creation (filesystem and label)

2013-08-12 Thread Clint Byrum
Excerpts from Greg Poirier's message of 2013-08-10 22:11:06 -0700:
> Since we can't guarantee that a volume, when attached, will become a
> specified device name, we would like to be able to create a filesystem and
> label it (so that we can programmatically interact with it when
> provisioning systems, services, etc).
> 
> What we are trying to decide is whether this should be the responsibility
> of Nova or Cinder. Since Cinder essentially has all of the information
> about the volume and is already responsible for creating the volume
> (against the configured backend), why not also give it the ability to mount
> the volume (assuming support for it on the backend exists), run
> mkfs., and then use tune2fs to label the volume with (for
> example) the volume's UUID?

Like others, I am a little dubious about adding a filesystem to these
disks for a number of reasons. It feels like a violation of "it's just
a bunch of bits".

Have you considered putting a GPT on it instead?

With a GPT you have a UUID for the disk, which you can communicate to the
host via the metadata service. With that you can instruct gdisk to partition
the right disk programmatically and create the filesystem with native
in-instance tools.

This is pure meta-data, and defines a lot less than a filesystem, so it
feels like a bigger win for the general purpose case of volumes.  It will
work for any OS that supports GPT, which is likely _every_ modern PC OS.
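A guest-side sketch of that lookup, assuming the GPT disk GUID has been handed to the instance via the metadata service; the device glob and the parsing of sgdisk's output are assumptions.

    import glob
    import re
    import subprocess

    def find_disk_by_gpt_guid(guid):
        # Scan likely block devices and match sgdisk's 'Disk identifier (GUID)'
        # line against the GUID supplied through the metadata service.
        for dev in sorted(glob.glob('/dev/vd[a-z]') + glob.glob('/dev/sd[a-z]')):
            output = subprocess.check_output(['sgdisk', '--print', dev]).decode()
            match = re.search(r'Disk identifier \(GUID\): ([0-9A-Fa-f-]+)', output)
            if match and match.group(1).lower() == guid.lower():
                return dev
        return None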



[openstack-dev] [Infra] Meeting Tuesday August 13th at 19:00 UTC

2013-08-12 Thread Elizabeth Krumbach Joseph
The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting tomorrow, Tuesday August 13th, at 19:00 UTC in
#openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com



Re: [openstack-dev] Extension to volume creation (filesystem and label)

2013-08-12 Thread Greg Poirier
On Mon, Aug 12, 2013 at 10:26 AM, Clint Byrum  wrote:

> Like others, I am a little dubious about adding a filesystem to these
> disks for a number of reasons. It feels like a violation of "its just
> a bunch of bits".
>

I actually think that it's a valid concern. I've been trying to come up
with a stable, reasonable solution ever since I sent the original e-mail. :)


> Have you considered putting a GPT on it instead?


We have.


> With a GPT you have a UUID for the disk which you can communicate to the
> host via metadata service. With that you can instruct gdisk to partition
> the right disk programattically and create the filesystem with native
> in-instance tools.
>

I'm not sure that this is any different from:
- Examine current disk devices
- Attach volume
- Examine current disk devices
- Get device ID from diff
- Do something

That seems to be pretty much the pattern that everyone has used to solve
this problem. What this says to me is that this is a common problem, and
perhaps it is a failing of Cinder that it does not simply provide this
functionality. Even if it doesn't bother creating a filesystem, it seems like
it should make a best effort to ensure that the volume is identifiable within
the instance after attachment--as opposed to the current implementation of
"throw hands up in the air and have the state lie about the device name of
the volume". Currently we have metadata that says it's /dev/vdc, when in
reality it's /dev/vdb. That's a bug, imo.
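For reference, a bare-bones version of that "examine / attach / examine / diff" pattern as it tends to get implemented in-guest. This is a sketch only; the device globbing and timeout are assumptions, and the attach trigger is whatever API call the provisioning system makes.

    import glob
    import time

    def local_block_devices():
        return set(glob.glob('/dev/vd[a-z]') + glob.glob('/dev/sd[a-z]'))

    def identify_new_device(trigger_attach, timeout=60):
        # Snapshot the device list, let the caller trigger the Cinder attach,
        # then wait for a new device to appear and return it.
        before = local_block_devices()
        trigger_attach()
        deadline = time.time() + timeout
        while time.time() < deadline:
            appeared = local_block_devices() - before
            if appeared:
                return appeared.pop()
            time.sleep(1)
        raise RuntimeError('attached volume never showed up')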


> This is pure meta-data, and defines a lot less than a filesystem, so it
> feels like a bigger win for the general purpose case of volumes.  It will
> work for any OS that supports GPT, which is likely _every_ modern PC OS.
>
>
Honestly, the only reason we were considering putting the filesystem on it
was to use tune2fs to put a label (specifically the volume ID) directly
on the filesystem. If we can manage to store the state of the
volume attachment in the metadata service and ensure the validity of that
data, then we will go that route. We simply haven't been able to do that
without some kind of wonkiness.


Re: [openstack-dev] [swift] ssbench inconsistent results across identical runs

2013-08-12 Thread Clay Gerrard
Three things.

1) Make your runs last longer than a few seconds; a sustained load over a
longer period will average out some of the jitter.

2) You don't have enough log lines.  Is this a single replica cluster?  How
does that work?

3) If you're benchmarking with the background daemons on, you might try
turning them off, as they're going to introduce some unpredictability
between runs - it may be more realistic, but you'll need even *longer* runs
to smooth out the spotty load over time.
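As a starting point for digging into the slow requests, something like this pulls per-request durations out of the object-server log; the trailing field on the lines quoted below (e.g. the 5.3650 on the slow PUT) is the request duration in seconds. Field positions and the log path are assumptions based on the sample lines in this thread.

    durations = []
    with open('/var/log/swift/object.log') as logfile:
        for line in logfile:
            if '"PUT ' not in line:
                continue
            try:
                # Last whitespace-separated field is the request duration (s).
                durations.append((float(line.split()[-1]), line.strip()))
            except (ValueError, IndexError):
                continue

    # Show the ten slowest PUTs.
    for duration, line in sorted(durations, reverse=True)[:10]:
        print('%8.4f  %s' % (duration, line))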


On Fri, Aug 9, 2013 at 7:17 AM, Snider, Tim  wrote:

>  When running ssbench  I’ve consistently gotten inconsistent results –
> 67% difference between the fastest and slowest CREATE times. Nothing
> (scenario, parameters, cluster, etc…) was varied between the runs.  I’m
> using tempAuth and not Keystone authentication.
>
>
> Ssbench was run on a node  using the internal (10GbE) network that isn’t
> currently part of the Swift cluster.
>
> Is this commonly seen with Swift? 
>
> CREATE   Count:   406 (0 error; 0 retries:  0.00%)  Average
> requests per second:  79.6
>
> CREATE   Count:   389 (0 error; 0 retries:  0.00%)  Average
> requests per second:  99.8
>
> CREATE   Count:   412 (0 error; 0 retries:  0.00%)  Average
> requests per second:  65.2
>
> CREATE   Count:   416 (0 error; 0 retries:  0.00%)  Average
> requests per second:  58.6
>
> CREATE   Count:   406 (0 error; 0 retries:  0.00%)  Average
> requests per second:  41.6
>
> CREATE   Count:   409 (0 error; 0 retries:  0.00%)  Average
> requests per second:  71.3
>
> CREATE   Count:   395 (0 error; 0 retries:  0.00%)  Average
> requests per second:  32.3
>
>
> /var/log/swift entries for a PUT with a large latency doesn’t seem to
> reveal much:
>
> root@swift21:/home/swift/bin# grep txeaf22b65bbc442d091373399a931340e
> /var/log/swift/*
>
> /var/log/swift/container.log:Aug  9 06:47:40 swift21 container-server
> 192.168.10.208 - - [09/Aug/2013:13:47:40 +] "PUT
> /sdc/8738/AUTH_test/ssbench_74/100k.1_000180" 201 -
> "txeaf22b65bbc442d091373399a931340e" "-" "-" 0.0005
>
> /var/log/swift/object.log:Aug  9 06:47:40 swift21 object-server
> 192.168.10.209 - - [09/Aug/2013:13:47:40 +] "PUT
> /sdd/12336/AUTH_test/ssbench_74/100k.1_000180" 201 - "-"
> "txeaf22b65bbc442d091373399a931340e" "-" 5.3650
>
>
> I’d like to investigate the possible source of the large result variation
> and requests with large latencies. Suggestions on where / what to look
> at/for are appreciated.
>
>
> Details in: http://paste.openstack.org/show/43703 
>
>
> Thanks,
>
> Tim
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [OpenStack][Dev] Block Storage libraries and shared code

2013-08-12 Thread Vishvananda Ishaya

On Aug 12, 2013, at 8:55 AM, John Griffith  wrote:

> Hey,
> 
> There have been a couple of block storage related patches in Nova lately and 
> I wanted to get some discussion going and also maybe increase some awareness 
> on some efforts that were discussed at the last summit.  To catch up a bit 
> here's the etherpad from the summit session [1].
> 
> First off, there was a patch to move Nova's LVM code in to OSLO (here [2]).  
> This one is probably my fault for not having enough awareness out there 
> regarding our plans/goals with brick.  I'd like to hear from folks if the 
> brick approach is not sufficient or if there's some other reason that it's 
> not desirable (hopefully it's just that folks didn't know about it). 
> 
> For reference/review the latest version of the brick/local_dev/lvm code is 
> here: [4].
> 
> One question we haven't answered on this yet is where this code should 
> ultimately live.  Should it be in OSLO, or should it be a separate library 
> that's part of Cinder and can be imported by other projects.  I'm mixed on 
> this for a number of reasons but I think either approach is fine.
> 
> The next item around this topic that came up was a patch to add support for 
> using RBD for local volumes in Nova (here [3]).  You'll notice a number of 
> folks mentioned brick on this, and I think that's the correct answer.  At the 
> same time while I think that's the right answer long term I also would hate 
> to see this feature NOT go in to H just because folks weren't aware of what 
> was going on in Brick.  It's a bit late in the cycle so my thought on this is 
> that I'd like to see this resubmitted using the brick/common approach.  If 
> that can't be done between now and the feature freeze for H3 I'd rather see 
> the patch go in as is than have the feature not be present at all for another 
> release.  We can then address this when we get a better story in place for 
> brick.

It seems like the key question is whether or not the nova code is going to be 
replaced by brick by Havana. If not, then this should go in as-is.

Vish

> 
> 
> [1] https://etherpad.openstack.org/havana-cinder-local-storage-library
> [2] https://review.openstack.org/#/c/40795/
> [3] https://review.openstack.org/#/c/36042/15
> [4] https://review.openstack.org/#/c/38172/11/cinder/brick/local_dev/lvm.py
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [OpenStack][Dev] Block Storage libraries and shared code

2013-08-12 Thread Russell Bryant
On 08/12/2013 02:56 PM, Vishvananda Ishaya wrote:
> 
> On Aug 12, 2013, at 8:55 AM, John Griffith  > wrote:
> 
>> Hey,
>>
>> There have been a couple of block storage related patches in Nova
>> lately and I wanted to get some discussion going and also maybe
>> increase some awareness on some efforts that were discussed at the
>> last summit.  To catch up a bit here's the etherpad from the summit
>> session [1].
>>
>> First off, there was a patch to move Nova's LVM code in to OSLO (here
>> [2]).  This one is probably my fault for not having enough awareness
>> out there regarding our plans/goals with brick.  I'd like to hear from
>> folks if the brick approach is not sufficient or if there's some other
>> reason that it's not desirable (hopefully it's just that folks didn't
>> know about it). 
>>
>> For reference/review the latest version of the brick/local_dev/lvm
>> code is here: [4].
>>
>> One question we haven't answered on this yet is where this code should
>> ultimately live.  Should it be in OSLO, or should it be a separate
>> library that's part of Cinder and can be imported by other projects.
>>  I'm mixed on this for a number of reasons but I think either approach
>> is fine.
>>
>> The next item around this topic that came up was a patch to add
>> support for using RBD for local volumes in Nova (here [3]).  You'll
>> notice a number of folks mentioned brick on this, and I think that's
>> the correct answer.  At the same time while I think that's the right
>> answer long term I also would hate to see this feature NOT go in to H
>> just because folks weren't aware of what was going on in Brick.  It's
>> a bit late in the cycle so my thought on this is that I'd like to see
>> this resubmitted using the brick/common approach.  If that can't be
>> done between now and the feature freeze for H3 I'd rather see the
>> patch go in as is than have the feature not be present at all for
>> another release.  We can then address this when we get a better story
>> in place for brick.
> 
> It seems like the key question is whether or not the nova code is going
> to be replaced by brick by Havana. If not, then this should go in as-is.

+1.  I was still expecting that it was.  If not, I'm happy to go with this.

What's the status on this work?

https://blueprints.launchpad.net/nova/+spec/refactor-iscsi-fc-brick

-- 
Russell Bryant



Re: [openstack-dev] [Heat] event table is a ticking time bomb

2013-08-12 Thread Doug Hellmann
On Fri, Aug 9, 2013 at 11:56 AM, Clint Byrum  wrote:

> Excerpts from Sandy Walsh's message of 2013-08-09 06:16:55 -0700:
> >
> > On 08/08/2013 11:36 PM, Angus Salkeld wrote:
> > > On 08/08/13 13:16 -0700, Clint Byrum wrote:
> > >> Last night while reviewing a feature which would add more events to
> the
> > >> event table, it dawned on me that the event table really must be
> removed.
> > >
> > >
> > >>
> > >> https://bugs.launchpad.net/heat/+bug/1209492
> > >>
> > >> tl;dr: users can write an infinite number of rows to the event table
> at
> > >> a fairly alarming rate just by creating and updating a very large
> stack
> > >> that has no resources that cost any time or are even billable (like an
> > >> autoscaling launch configuration).
> > >>
> > >> The table has no purge function, so the only way to clear out old
> events
> > >> is to delete the stack, or manually remove them directly in the
> database.
> > >>
> > >> We've all been through this before, logging to a database seems great
> > >> until you actually do it.
> > >>
> > >> I have some ideas for how to solve it, but I wanted to get a wider
> > >> audience:
> > >>
> > >> 1) Make the event list a ring buffer. Have rows 0 - $MAX_BUFFER_SIZE
> in
> > >> each stack, and simply write each new event to the next open position,
> > >> wrapping at $MAX_BUFFER_SIZE. Pros: little change to current code,
> > >> just need an offset column added and code that will properly wrap to 0
> > >> at $MAX_BUFFER_SIZE. Cons: still can incur heavy transactional load on
> > >> the database server.A
> > >>
> > >> 1.b) Same, but instead of rows, just maintain a blob and append the
> rows
> > >> as json list. Lowers transactional load but would push some load onto
> > >> the API servers and such to parse these out, and would make pagination
> > >> challenging. Blobs also can be a drain on DB server performance.
> > >>
> > >> 2) Write a purge script. Delete old ones. Pros: No code change, just
> > >> new code to do purging. Cons: same as 1, plus more vulnerability to an
> > >> aggressive attacker who can fit a lot of data in between purges. Also
> > >> large scale deletes can be really painful (see: keystone sql token
> > >> backend).
> > >>
> > >> 3) Log events to Swift. I can't seem to find information on how/if
> > >> appending works there. Tons of tiny single-row files is an option,
> but I
> > >> want to hear from people with more swift knowledge if that is a
> viable,
> > >> performant option. Pros: Scale to the moon. Can charge tenant for
> usage
> > >> and let them purge events as needed. Cons: Adds swift as a requirement
> > >> of Heat.
> > >>
> > >> 4) Provide a way for users to receive logs via HTTP POST. Pros: Simple
> > >> and punts the problem to the users. Cons: users will be SoL if they
> > >> don't have a place to have logs posted to.
> > >>
> > >> 5) Provide a way for users to receive logs via messaging service like
> > >> Marconi.  Pros/Cons: same as HTTP, but perhaps a little more confusing
> > >> and ambitious given Marconi's short existence.
> > >>
> > >> 6) Provide a pluggable backend for logging. This seems like the way
> most
> > >> OpenStack projects solve these issues, which is to let the deployers
> > >> choose and/or provide their own way to handle a sticky problem. Pros:
> > >> Simple and flexible for the future. Cons: Would require writing at
> least
> > >> one backend provider that does what the previous 5 options suggest.
> > >>
> > >> To be clear: Heat cannot really exist without this, as it is the only
> way
> > >> to find out what your stack is doing or has done.
> > >
> > > btw Clint I have ditched that "Recorder" patch as Ceilometer is
> > > getting a Alarm History api soon, so we can defer to that for that
> > > functionality (alarm transitions).
> > >
> > > But we still need a better way to record events/logs for the user.
> > > So I make this blueprint a while ago:
> > > https://blueprints.launchpad.net/heat/+spec/user-visible-logs
> > >
> > > I am becomming more in favor of user options rather than deployer
> > > options if possible. So provide resources for Marconi, Meniscus and
> > > what ever...
> > > Although what is nice about Marconi is you could then hook up what
> > > ever you want to it.
> >
> > Logs are one thing (and Meniscus is a great choice for that), but events
> > are the very thing CM is designed to handle. Wouldn't it make sense to
> > push them back into there?
> >
>
> I'm not sure these events make sense in the current Ceilometer (I assume
> that is "CM" above) context. These events are:
>
> ... Creating stack A
> ... Creating stack A resource A
> ... Created stack A resource A
> ... Created stack A
>
> Users will want to be able to see all of the events for a stack, and
> likely we need to be able to paginate through them as well.
>
> They are fundamental and low level enough for Heat that I'm not sure
> putting them in Ceilometer makes much sense, but maybe I don't understand
> Ceilometer..  or "CM" is somethin

Re: [openstack-dev] [Neutron] FWaaS: Support for explicit commit

2013-08-12 Thread Sumit Naiksatam
Hi Aaron,

I seem to have missed this email from you earlier. As compared to
existing Neutron resources, the FWaaS Firewall resource and workflow
are slightly different, since it's a two-step process. The rules/policy
creation is decoupled (for audit reasons) from its application on the
backend firewall. Hence the need for the commit-like operation, which
expresses the intent that the state of the rules/policy be applied to
the backend firewall. We can provide capabilities for bulk
creation/update of rules/policies as well, but that, I believe, is
independent of this.
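To make the intended two-step workflow concrete, here is a hypothetical sketch; the endpoint paths, payloads, commit action name and ids are illustrative only, not the API proposed in the review.

    import json
    import requests

    BASE = 'http://neutron.example.com:9696/v2.0'      # placeholder endpoint
    TOKEN = 'PLACEHOLDER-TOKEN'
    RULE_ID = 'PLACEHOLDER-RULE-UUID'
    FIREWALL_ID = 'PLACEHOLDER-FIREWALL-UUID'
    HEADERS = {'X-Auth-Token': TOKEN, 'Content-Type': 'application/json'}

    # Step 1: edit rules/policies. Nothing is pushed to the backend firewall;
    # only the firewall_rule resources change (auditable, uncommitted state).
    requests.put(BASE + '/fw/firewall_rules/%s' % RULE_ID, headers=HEADERS,
                 data=json.dumps({'firewall_rule': {'action': 'deny'}}))

    # Step 2: an explicit commit on the firewall resource applies every
    # pending change to the backend atomically.
    requests.put(BASE + '/fw/firewalls/%s/commit' % FIREWALL_ID,
                 headers=HEADERS)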

I posted a patch for this last night
(https://review.openstack.org/#/c/41353/).

Thanks,
~Sumit.

On Wed, Aug 7, 2013 at 5:19 PM, Aaron Rosen  wrote:
> Hi Sumit,
>
> Neutron has a concept of a bulk creation where multiple things can be
> created in one api request rather that N (and then be implemented atomically
> on the backend). In my opinion, I think it would be better to implement a
> bulk update/delete operation rather than a commit. I think that having
> something like this that is generic could be useful to other api's in
> neutron.
>
> I do agree that one has to keep track of the order they are
> changing/adding/delete rules so that they don't allow two things to
> communicate that shouldn't be allowed to. If someone wanted to perform this
> type of bulk atomic change now could they create a new profile with the
> rules they desire and then switch out which profile is attached to the
> firewall?
>
> Best,
>
> Aaron
>
>
> On Wed, Aug 7, 2013 at 3:40 PM, Sumit Naiksatam 
> wrote:
>>
>> We had some discussion on this during the Neutron IRC meeting, and per
>> that discussion I have created a blueprint for this:
>>
>> https://blueprints.launchpad.net/neutron/+spec/neutron-fwaas-explicit-commit
>>
>> Further comments can be posted on the blueprint whiteboard and/or the
>> design spec doc.
>>
>> Thanks,
>> ~Sumit.
>>
>> On Fri, Aug 2, 2013 at 6:43 PM, Sumit Naiksatam
>>  wrote:
>> > Hi All,
>> >
>> > In Neutron Firewall as a Service (FWaaS), we currently support an
>> > implicit commit mode, wherein a change made to a firewall_rule is
>> > propagated immediately to all the firewalls that use this rule (via
>> > the firewall_policy association), and the rule gets applied in the
>> > backend firewalls. This might be acceptable, however this is different
>> > from the explicit commit semantics which most firewalls support.
>> > Having an explicit commit operation ensures that multiple rules can be
>> > applied atomically, as opposed to in the implicit case where each rule
>> > is applied atomically and thus opens up the possibility of security
>> > holes between two successive rule applications.
>> >
>> > So the proposal here is quite simple -
>> >
>> > * When any changes are made to the firewall_rules
>> > (added/deleted/updated), no changes will happen on the firewall (only
>> > the corresponding firewall_rule resources are modified).
>> >
>> > * We will support an explicit commit operation on the firewall
>> > resource. Any changes made to the rules since the last commit will now
>> > be applied to the firewall when this commit operation is invoked.
>> >
>> > * A show operation on the firewall will show a list of the currently
>> > committed rules, and also the pending changes.
>> >
>> > Kindly respond if you have any comments on this.
>> >
>> > Thanks,
>> > ~Sumit.
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



Re: [openstack-dev] [OpenStack][Dev] Block Storage libraries and shared code

2013-08-12 Thread John Griffith
On Mon, Aug 12, 2013 at 1:06 PM, Russell Bryant  wrote:

> On 08/12/2013 02:56 PM, Vishvananda Ishaya wrote:
> >
> > On Aug 12, 2013, at 8:55 AM, John Griffith  > > wrote:
> >
> >> Hey,
> >>
> >> There have been a couple of block storage related patches in Nova
> >> lately and I wanted to get some discussion going and also maybe
> >> increase some awareness on some efforts that were discussed at the
> >> last summit.  To catch up a bit here's the etherpad from the summit
> >> session [1].
> >>
> >> First off, there was a patch to move Nova's LVM code in to OSLO (here
> >> [2]).  This one is probably my fault for not having enough awareness
> >> out there regarding our plans/goals with brick.  I'd like to hear from
> >> folks if the brick approach is not sufficient or if there's some other
> >> reason that it's not desirable (hopefully it's just that folks didn't
> >> know about it).
> >>
> >> For reference/review the latest version of the brick/local_dev/lvm
> >> code is here: [4].
> >>
> >> One question we haven't answered on this yet is where this code should
> >> ultimately live.  Should it be in OSLO, or should it be a separate
> >> library that's part of Cinder and can be imported by other projects.
> >>  I'm mixed on this for a number of reasons but I think either approach
> >> is fine.
> >>
> >> The next item around this topic that came up was a patch to add
> >> support for using RBD for local volumes in Nova (here [3]).  You'll
> >> notice a number of folks mentioned brick on this, and I think that's
> >> the correct answer.  At the same time while I think that's the right
> >> answer long term I also would hate to see this feature NOT go in to H
> >> just because folks weren't aware of what was going on in Brick.  It's
> >> a bit late in the cycle so my thought on this is that I'd like to see
> >> this resubmitted using the brick/common approach.  If that can't be
> >> done between now and the feature freeze for H3 I'd rather see the
> >> patch go in as is than have the feature not be present at all for
> >> another release.  We can then address this when we get a better story
> >> in place for brick.
> >
> > It seems like the key question is whether or not the nova code is going
> > to be replaced by brick by Havana. If not, then this should go in as-is.
>
> +1.  I was still expecting that it was.  If not, I'm happy to go with this.
>
> What's the status on this work?
>
> https://blueprints.launchpad.net/nova/+spec/refactor-iscsi-fc-brick
>
> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

It's still planned to go in (hopefully in the next day or two [at least the
nova submission should be up]).  There are a couple of fixes under review
on the Cinder side right now and a Nova patch is ready to go once those
merge.  I'll see if we can't get the Nova version uploaded today at least
as a WIP pending the fixes in progress on the Cinder side.


Re: [openstack-dev] [ceilometer] IRC logging poll

2013-08-12 Thread Doug Hellmann
This poll was scheduled to close last week, but as I was traveling it was
held open until today.

The decision from the poll is to enable logging in the #openstack-metering
channel on IRC.

http://www.cs.cornell.edu/w8/~andru/cgi-perl/civs/results.pl?id=E_7d41b0ac63615dbf


On Wed, Jul 31, 2013 at 7:53 AM, Doug Hellmann
wrote:

> As agreed during our last meeting, I have created an online poll for the
> ceilometer core team to decide whether to enable logging in our IRC
> channel. The voting system sent email with the ballot information a few
> minutes ago to each of the email addresses for the core team members in
> gerrit. If you did not receive a ballot, please contact me.
>
> Doug
>
>


Re: [openstack-dev] [Heat] event table is a ticking time bomb

2013-08-12 Thread Clint Byrum
Excerpts from Doug Hellmann's message of 2013-08-12 12:08:58 -0700:
> On Fri, Aug 9, 2013 at 11:56 AM, Clint Byrum  wrote:
> 
> > Excerpts from Sandy Walsh's message of 2013-08-09 06:16:55 -0700:
> > >
> > > On 08/08/2013 11:36 PM, Angus Salkeld wrote:
> > > > On 08/08/13 13:16 -0700, Clint Byrum wrote:
> > > >> Last night while reviewing a feature which would add more events to
> > the
> > > >> event table, it dawned on me that the event table really must be
> > removed.
> > > >
> > > >
> > > >>
> > > >> https://bugs.launchpad.net/heat/+bug/1209492
> > > >>
> > > >> tl;dr: users can write an infinite number of rows to the event table
> > at
> > > >> a fairly alarming rate just by creating and updating a very large
> > stack
> > > >> that has no resources that cost any time or are even billable (like an
> > > >> autoscaling launch configuration).
> > > >>
> > > >> The table has no purge function, so the only way to clear out old
> > events
> > > >> is to delete the stack, or manually remove them directly in the
> > database.
> > > >>
> > > >> We've all been through this before, logging to a database seems great
> > > >> until you actually do it.
> > > >>
> > > >> I have some ideas for how to solve it, but I wanted to get a wider
> > > >> audience:
> > > >>
> > > >> 1) Make the event list a ring buffer. Have rows 0 - $MAX_BUFFER_SIZE
> > in
> > > >> each stack, and simply write each new event to the next open position,
> > > >> wrapping at $MAX_BUFFER_SIZE. Pros: little change to current code,
> > > >> just need an offset column added and code that will properly wrap to 0
> > > >> at $MAX_BUFFER_SIZE. Cons: still can incur heavy transactional load on
> > > >> the database server.A
> > > >>
> > > >> 1.b) Same, but instead of rows, just maintain a blob and append the
> > rows
> > > >> as json list. Lowers transactional load but would push some load onto
> > > >> the API servers and such to parse these out, and would make pagination
> > > >> challenging. Blobs also can be a drain on DB server performance.
> > > >>
> > > >> 2) Write a purge script. Delete old ones. Pros: No code change, just
> > > >> new code to do purging. Cons: same as 1, plus more vulnerability to an
> > > >> aggressive attacker who can fit a lot of data in between purges. Also
> > > >> large scale deletes can be really painful (see: keystone sql token
> > > >> backend).
> > > >>
> > > >> 3) Log events to Swift. I can't seem to find information on how/if
> > > >> appending works there. Tons of tiny single-row files is an option,
> > but I
> > > >> want to hear from people with more swift knowledge if that is a
> > viable,
> > > >> performant option. Pros: Scale to the moon. Can charge tenant for
> > usage
> > > >> and let them purge events as needed. Cons: Adds swift as a requirement
> > > >> of Heat.
> > > >>
> > > >> 4) Provide a way for users to receive logs via HTTP POST. Pros: Simple
> > > >> and punts the problem to the users. Cons: users will be SoL if they
> > > >> don't have a place to have logs posted to.
> > > >>
> > > >> 5) Provide a way for users to receive logs via messaging service like
> > > >> Marconi.  Pros/Cons: same as HTTP, but perhaps a little more confusing
> > > >> and ambitious given Marconi's short existence.
> > > >>
> > > >> 6) Provide a pluggable backend for logging. This seems like the way
> > most
> > > >> OpenStack projects solve these issues, which is to let the deployers
> > > >> choose and/or provide their own way to handle a sticky problem. Pros:
> > > >> Simple and flexible for the future. Cons: Would require writing at
> > least
> > > >> one backend provider that does what the previous 5 options suggest.
> > > >>
> > > >> To be clear: Heat cannot really exist without this, as it is the only
> > way
> > > >> to find out what your stack is doing or has done.
> > > >
> > > > btw Clint I have ditched that "Recorder" patch as Ceilometer is
> > > > getting a Alarm History api soon, so we can defer to that for that
> > > > functionality (alarm transitions).
> > > >
> > > > But we still need a better way to record events/logs for the user.
> > > > So I make this blueprint a while ago:
> > > > https://blueprints.launchpad.net/heat/+spec/user-visible-logs
> > > >
> > > > I am becomming more in favor of user options rather than deployer
> > > > options if possible. So provide resources for Marconi, Meniscus and
> > > > what ever...
> > > > Although what is nice about Marconi is you could then hook up what
> > > > ever you want to it.
> > >
> > > Logs are one thing (and Meniscus is a great choice for that), but events
> > > are the very thing CM is designed to handle. Wouldn't it make sense to
> > > push them back into there?
> > >
> >
> > I'm not sure these events make sense in the current Ceilometer (I assume
> > that is "CM" above) context. These events are:
> >
> > ... Creating stack A
> > ... Creating stack A resource A
> > ... Created stack A resource A
> > ... Created stack A
> >
> > Users will want to be abl

[openstack-dev] Keystone Apache2 Installation Question

2013-08-12 Thread Miller, Mark M (EB SW Cloud - R&D - Corvallis)
Hello,

I am looking for documentation on how to install/configure Apache2 as the
Keystone front end on "Ubuntu 12.04". I have found various documentation
snippets for a variety of applications and operating systems, but nothing for
Ubuntu. Any pointers would be greatly appreciated. I have been trying to piece
together the installation/configuration from the following URLs but have yet
to be successful.

http://docs.openstack.org/developer/keystone/apache-httpd.html#keystone-configuration
 
https://keystone-voms.readthedocs.org/en/latest/requirements.html 
https://github.com/enovance/keystone-wsgi-apache/blob/master/provision.sh
http://adam.younglogic.com/2012/04/keystone-httpd/

Regards,

Mark




Re: [openstack-dev] Keystone Apache2 Installation Question

2013-08-12 Thread Dolph Mathews
What problem(s) are you running into when following the above documentation
/ examples?


On Mon, Aug 12, 2013 at 3:32 PM, Miller, Mark M (EB SW Cloud - R&D -
Corvallis)  wrote:

> Hello,
>
> I am looking for documentation on how to install/configure Apache2 as the
> Keystone front end for "Ubuntu 12.04". I have found various documentation
> snippets for a variety of applications and operating systems, but nothing
> for Ubuntu. Any pointers would greatly be appreciated. I have been trying
> to piece the installation/configuration from the following URLs but have
> yet to be successful.
>
>
> http://docs.openstack.org/developer/keystone/apache-httpd.html#keystone-configuration
> https://keystone-voms.readthedocs.org/en/latest/requirements.html
> https://github.com/enovance/keystone-wsgi-apache/blob/master/provision.sh
> http://adam.younglogic.com/2012/04/keystone-httpd/
>
> Regards,
>
> Mark
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 

-Dolph


Re: [openstack-dev] the plan the plan the havana doc plan

2013-08-12 Thread Anne Gentle
On Thu, Aug 8, 2013 at 9:42 AM, Thierry Carrez wrote:

> Anne Gentle wrote:
> > Let me know your input on "continuous release with bug tracking" for the
> > guides in the list above. If you have additional ideas, I'd love to hear
> > them. I'm definitely pondering the integrated programs doc strategies
> > and open to ideas. We've been discussing on openstack-docs but want to
> > bring the discussion to a larger group so please reply to openstack-dev.
>
> I think it makes sense, especially in a world where the actual code you
> run may or may not be aligned with a "release" anyway. It's better than
> the previous situation, where guides were "released" a few weeks after
> the "release", creating a lot of confusion. Better designate a set that
> you release together with the release and go continuous for the others.
>
>
Thanks for the input! I've looked at the analytics data and nearly 1/4th of
visitors are going to the /trunk/ docs anyway, which change continuously
now. So I think a smaller "release" set of docs is the way to go.

Thanks,

Anne


> So +1 from me!
>
> --
> Thierry Carrez (ttx)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Keystone Apache2 Installation Question

2013-08-12 Thread Miller, Mark M (EB SW Cloud - R&D - Corvallis)
The commands/libraries do not exist for Ubuntu, Keystone no longer starts up,
the directories between the sets of documents do not match, ...

From: Dolph Mathews [mailto:dolph.math...@gmail.com]
Sent: Monday, August 12, 2013 1:41 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Keystone Apache2 Installation Question

What problem(s) are you running into when following the above documentation / 
examples?

On Mon, Aug 12, 2013 at 3:32 PM, Miller, Mark M (EB SW Cloud - R&D - Corvallis) 
mailto:mark.m.mil...@hp.com>> wrote:
Hello,

I am looking for documentation on how to install/configure Apache2 as the 
Keystone front end for "Ubuntu 12.04". I have found various documentation 
snippets for a variety of applications and operating systems, but nothing for 
Ubuntu. Any pointers would greatly be appreciated. I have been trying to piece 
the installation/configuration from the following URLs but have yet to be 
successful.

http://docs.openstack.org/developer/keystone/apache-httpd.html#keystone-configuration
https://keystone-voms.readthedocs.org/en/latest/requirements.html
https://github.com/enovance/keystone-wsgi-apache/blob/master/provision.sh
http://adam.younglogic.com/2012/04/keystone-httpd/

Regards,

Mark


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

-Dolph


Re: [openstack-dev] [Heat] event table is a ticking time bomb

2013-08-12 Thread Doug Hellmann
On Mon, Aug 12, 2013 at 4:11 PM, Clint Byrum  wrote:

> Excerpts from Doug Hellmann's message of 2013-08-12 12:08:58 -0700:
> > On Fri, Aug 9, 2013 at 11:56 AM, Clint Byrum  wrote:
> >
> > > Excerpts from Sandy Walsh's message of 2013-08-09 06:16:55 -0700:
> > > >
> > > > On 08/08/2013 11:36 PM, Angus Salkeld wrote:
> > > > > On 08/08/13 13:16 -0700, Clint Byrum wrote:
> > > > >> Last night while reviewing a feature which would add more events
> to
> > > the
> > > > >> event table, it dawned on me that the event table really must be
> > > removed.
> > > > >
> > > > >
> > > > >>
> > > > >> https://bugs.launchpad.net/heat/+bug/1209492
> > > > >>
> > > > >> tl;dr: users can write an infinite number of rows to the event
> table
> > > at
> > > > >> a fairly alarming rate just by creating and updating a very large
> > > stack
> > > > >> that has no resources that cost any time or are even billable
> (like an
> > > > >> autoscaling launch configuration).
> > > > >>
> > > > >> The table has no purge function, so the only way to clear out old
> > > events
> > > > >> is to delete the stack, or manually remove them directly in the
> > > database.
> > > > >>
> > > > >> We've all been through this before, logging to a database seems
> great
> > > > >> until you actually do it.
> > > > >>
> > > > >> I have some ideas for how to solve it, but I wanted to get a wider
> > > > >> audience:
> > > > >>
> > > > >> 1) Make the event list a ring buffer. Have rows 0 -
> $MAX_BUFFER_SIZE
> > > in
> > > > >> each stack, and simply write each new event to the next open
> position,
> > > > >> wrapping at $MAX_BUFFER_SIZE. Pros: little change to current code,
> > > > >> just need an offset column added and code that will properly wrap
> to 0
> > > > >> at $MAX_BUFFER_SIZE. Cons: still can incur heavy transactional
> load on
> > > > >> the database server.A
> > > > >>
> > > > >> 1.b) Same, but instead of rows, just maintain a blob and append
> the
> > > rows
> > > > >> as json list. Lowers transactional load but would push some load
> onto
> > > > >> the API servers and such to parse these out, and would make
> pagination
> > > > >> challenging. Blobs also can be a drain on DB server performance.
> > > > >>
> > > > >> 2) Write a purge script. Delete old ones. Pros: No code change,
> just
> > > > >> new code to do purging. Cons: same as 1, plus more vulnerability
> to an
> > > > >> aggressive attacker who can fit a lot of data in between purges.
> Also
> > > > >> large scale deletes can be really painful (see: keystone sql token
> > > > >> backend).
> > > > >>
> > > > >> 3) Log events to Swift. I can't seem to find information on how/if
> > > > >> appending works there. Tons of tiny single-row files is an option,
> > > but I
> > > > >> want to hear from people with more swift knowledge if that is a
> > > viable,
> > > > >> performant option. Pros: Scale to the moon. Can charge tenant for
> > > usage
> > > > >> and let them purge events as needed. Cons: Adds swift as a
> requirement
> > > > >> of Heat.
> > > > >>
> > > > >> 4) Provide a way for users to receive logs via HTTP POST. Pros:
> Simple
> > > > >> and punts the problem to the users. Cons: users will be SoL if
> they
> > > > >> don't have a place to have logs posted to.
> > > > >>
> > > > >> 5) Provide a way for users to receive logs via messaging service
> like
> > > > >> Marconi.  Pros/Cons: same as HTTP, but perhaps a little more
> confusing
> > > > >> and ambitious given Marconi's short existence.
> > > > >>
> > > > >> 6) Provide a pluggable backend for logging. This seems like the
> way
> > > most
> > > > >> OpenStack projects solve these issues, which is to let the
> deployers
> > > > >> choose and/or provide their own way to handle a sticky problem.
> Pros:
> > > > >> Simple and flexible for the future. Cons: Would require writing at
> > > least
> > > > >> one backend provider that does what the previous 5 options
> suggest.
> > > > >>
> > > > >> To be clear: Heat cannot really exist without this, as it is the
> only
> > > way
> > > > >> to find out what your stack is doing or has done.
> > > > >
> > > > > btw Clint I have ditched that "Recorder" patch as Ceilometer is
> > > > > getting a Alarm History api soon, so we can defer to that for that
> > > > > functionality (alarm transitions).
> > > > >
> > > > > But we still need a better way to record events/logs for the user.
> > > > > So I make this blueprint a while ago:
> > > > > https://blueprints.launchpad.net/heat/+spec/user-visible-logs
> > > > >
> > > > > I am becomming more in favor of user options rather than deployer
> > > > > options if possible. So provide resources for Marconi, Meniscus and
> > > > > what ever...
> > > > > Although what is nice about Marconi is you could then hook up what
> > > > > ever you want to it.
> > > >
> > > > Logs are one thing (and Meniscus is a great choice for that), but
> events
> > > > are the very thing CM is designed to handle. Wouldn't it make sense
> to
> > > > push them back 

[openstack-dev] [Ceilometer] Nova_tests failing in jenkins

2013-08-12 Thread Herndon, John Luke (HPCS - Ft. Collins)
Hi - 

The nova_tests are failing for a couple of different Ceilometer reviews,
due to 'module' object has no attribute 'add_driver'.

This review (https://review.openstack.org/#/c/41316/) had nothing to do
with the nova_tests, yet they are failing. Any clue what's going on?

Apologies if there is an obvious answer - I've never encountered this
before.

Thanks,
-john


smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] [ceilometer] Periodic Auditing In Glance

2013-08-12 Thread Andrew Melton
I'd like to open a discussion about bringing periodic auditing
to Glance. This would be very similar to Nova's instance usage audit,
which emits compute.instance.exists events for each instance that was
active for the given period. Likewise, Glance would emit an image.exists
for any image which was or is active for the given period.

As Ceilometer is the largest consumer of notifications in OpenStack,
we'd really like to have their input on this.

Ceilometer is currently moving from a polling approach to a
push/notification-based approach. The image.exists notification would be
useful for the 'image' metric seen in their docs here:

http://docs.openstack.org/developer/ceilometer/measurements.html#image-glance

Currently that meter tracks if the image (still) exists, and it does this
by setting the Gauge to 1 every time it gets a notification for the image.
The benefit of using a periodic image.exists notification for this meter
is that it would regularly get updated to ensure that the image is actually
still around.

As for Rackspace, the value we see in periodic image.exists is that it
provides a secondary way to identify usage. Currently, if you want to
track image usage using notifications, you must rely on instantaneous
notifications like image.create and image.delete. Going with those
notifications you'd bill for a given image until you receive an
image.delete. One issue that could arise would be a dropped image.delete.
Given that case, that image would be billed for indefinitely. With a
combination of instantaneous and periodic notification, you can detect
dropped notifications and in this case, a billing system using both
notifications would not end up over billing by charging for that image.
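
For illustration, here is a minimal sketch of what such a periodic audit task
could look like. This is not Glance code: the notify() callable, the payload
fields, and the audit-period bookkeeping are assumptions modeled loosely on
Nova's compute.instance.exists usage audit.

# Sketch only -- notify() and the payload fields are assumptions, not Glance code.
import datetime

def emit_image_exists(notify, active_images, period_start, period_end):
    # Emit one image.exists event per image active during the audit period.
    for image in active_images:
        payload = {
            'id': image['id'],
            'owner': image['owner'],
            'size': image['size'],
            'status': image['status'],
            'audit_period_beginning': period_start.isoformat(),
            'audit_period_ending': period_end.isoformat(),
        }
        notify(event_type='image.exists', payload=payload)

# Example: audit the last hour against a stubbed notifier.
if __name__ == '__main__':
    end = datetime.datetime.utcnow()
    start = end - datetime.timedelta(hours=1)
    images = [{'id': 'abc', 'owner': 'tenant-1', 'size': 1024,
               'status': 'active'}]
    emit_image_exists(lambda **kw: print(kw), images, start, end)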

So, my question to the Ceilometer community is this: does this sound like
something Ceilometer would find value in and use? If so, is this something
we would want most deployers turning on?

Thanks,
Andrew Melton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Pagination

2013-08-12 Thread Henry Nash
Hi

I'm working on extending the pagination into the backends.  Right now, we 
handle the pagination in the v3 controller class, and in fact it is disabled: 
we return the whole list irrespective of whether page/per-page is 
set in the query string, e.g.:

def paginate(cls, context, refs):
    """Paginates a list of references by page & per_page query strings."""
    # FIXME(dolph): client needs to support pagination first
    return refs

    page = context['query_string'].get('page', 1)
    per_page = context['query_string'].get('per_page', 30)
    return refs[per_page * (page - 1):per_page * page]

I wonder, both for the V3 controller (which still needs to handle pagination for 
backends that do not support it) and for the backends that do, whether we could 
use whether 'page' is defined in the query string as an indicator of whether 
we should paginate or not.  That way clients who can handle it can ask for it, 
and those that don't will just get everything.
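
To make that concrete, a minimal sketch (assuming the same context/query_string
shape as above; this is not the actual keystone code) of paginating only when
the client opts in via 'page':

# Sketch only -- not keystone's actual controller code.
def paginate(cls, context, refs):
    # Paginate only if the client explicitly asked for it via 'page'.
    query_string = context.get('query_string', {})
    if 'page' not in query_string:
        return refs  # no opt-in from the client; return everything as today

    page = int(query_string.get('page', 1))
    per_page = int(query_string.get('per_page', 30))
    return refs[per_page * (page - 1):per_page * page]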

Henry

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Keystone Apache2 Installation Question

2013-08-12 Thread Miller, Mark M (EB SW Cloud - R&D - Corvallis)
Looks like I may be ahead of the game. It doesn't look like this blueprint has 
been started yet. Am I correct?

https://blueprints.launchpad.net/devstack/+spec/devstack-setup-apache-keystone

A very valuable feature of Keystone is to configure it to leverage apache as 
its front end. As a means of demonstrating how this works, and to facilitate 
automated testing of this configuration in the future, support to devstack will 
be added to enable it to optionally install and configure keystone using apache 
as it front end. The design approach used will be that described in the 
keystone docs: 
https://github.com/openstack/keystone/blob/master/doc/source/apache-httpd.rst
Thanks,

Mark



From: Miller, Mark M (EB SW Cloud - R&D - Corvallis)
Sent: Monday, August 12, 2013 1:45 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Keystone Apache2 Installation Question

The commands/libraries  do not exist for Ubuntu, Keystone no longer starts up, 
directories between the sets of documents do not match, ...

From: Dolph Mathews [mailto:dolph.math...@gmail.com]
Sent: Monday, August 12, 2013 1:41 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Keystone Apache2 Installation Question

What problem(s) are you running into when following the above documentation / 
examples?

On Mon, Aug 12, 2013 at 3:32 PM, Miller, Mark M (EB SW Cloud - R&D - Corvallis) 
mailto:mark.m.mil...@hp.com>> wrote:
Hello,

I am looking for documentation on how to install/configure Apache2 as the 
Keystone front end for "Ubuntu 12.04". I have found various documentation 
snippets for a variety of applications and operating systems, but nothing for 
Ubuntu. Any pointers would greatly be appreciated. I have been trying to piece 
the installation/configuration from the following URLs but have yet to be 
successful.

http://docs.openstack.org/developer/keystone/apache-httpd.html#keystone-configuration
https://keystone-voms.readthedocs.org/en/latest/requirements.html
https://github.com/enovance/keystone-wsgi-apache/blob/master/provision.sh
http://adam.younglogic.com/2012/04/keystone-httpd/

Regards,

Mark


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Concern about Havana database migrations

2013-08-12 Thread Michael Still
Hi.

I have increasing levels of concern about Havana database migrations
and I want to beg for help, especially from nova core reviewers.

Specifically, I don't think it's possible to land all of the database
migrations people need before the 22 August proposal freeze. At the
moment I see 10 patches competing for migration number 207 for
example, and only one can take that number. That means the other nine
will need to rebase and go through a re-review, which takes at least a
day. Unfortunately we don't even do that well -- many of these patches
sit around for several days before getting reviewed.

So -- I'd like some help with reviewing database migrations please.
The way I do these reviews is:

 - determine what migration number is currently the next one free from
git (currently 208 because Dan just approved 207) [1] -- a quick sketch of
this step follows the list.

 - go to http://openstack.stillhq.com/ci/migrations/nova/.html
to see what patchsets have proposed a migration with that number.

 - review them

 - if you're super keen you can also check
http://openstack.stillhq.com/ci for warnings about the migration
first, but I generally keep an eye on that so it's not absolutely
required [2].
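
A quick sketch of the first step (the path and file-name pattern here are
assumptions about nova's tree layout, so treat it as illustrative only):

# Sketch only -- the versions_dir default is an assumption about nova's layout.
import os
import re

def next_migration_number(versions_dir='nova/db/sqlalchemy/migrate_repo/versions'):
    # Return 1 + the highest NNN_*.py migration number found in the tree.
    highest = 0
    for name in os.listdir(versions_dir):
        match = re.match(r'^(\d+)_.+\.py$', name)
        if match:
            highest = max(highest, int(match.group(1)))
    return highest + 1

if __name__ == '__main__':
    print(next_migration_number())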

I would have brought this up at the meeting last week, but I decided
to have a baby instead. I also apologize for my intermittent
availability for the last ten weeks -- it's been a complicated
pregnancy and things should get back to normal within the next week or
so.

Thanks,
Michael

1: that list is pretty empty at the moment because 207 was only just taken,
but if people reviewed the 207 migrations WITHOUT APPROVING THEM, that
would help produce a high-quality set of 208 proposals.

2: Joshua Hesketh is working on integrating the DB CI testing more
closely into gerrit now, and should have something to show for this
work for the Icehouse release.

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] event table is a ticking time bomb

2013-08-12 Thread Angus Salkeld

On 12/08/13 16:52 -0400, Doug Hellmann wrote:

On Mon, Aug 12, 2013 at 4:11 PM, Clint Byrum  wrote:


Excerpts from Doug Hellmann's message of 2013-08-12 12:08:58 -0700:
> On Fri, Aug 9, 2013 at 11:56 AM, Clint Byrum  wrote:
>
> > Excerpts from Sandy Walsh's message of 2013-08-09 06:16:55 -0700:
> > >
> > > On 08/08/2013 11:36 PM, Angus Salkeld wrote:
> > > > On 08/08/13 13:16 -0700, Clint Byrum wrote:
> > > >> Last night while reviewing a feature which would add more events
to
> > the
> > > >> event table, it dawned on me that the event table really must be
> > removed.
> > > >
> > > >
> > > >>
> > > >> https://bugs.launchpad.net/heat/+bug/1209492
> > > >>
> > > >> tl;dr: users can write an infinite number of rows to the event
table
> > at
> > > >> a fairly alarming rate just by creating and updating a very large
> > stack
> > > >> that has no resources that cost any time or are even billable
(like an
> > > >> autoscaling launch configuration).
> > > >>
> > > >> The table has no purge function, so the only way to clear out old
> > events
> > > >> is to delete the stack, or manually remove them directly in the
> > database.
> > > >>
> > > >> We've all been through this before, logging to a database seems
great
> > > >> until you actually do it.
> > > >>
> > > >> I have some ideas for how to solve it, but I wanted to get a wider
> > > >> audience:
> > > >>
> > > >> 1) Make the event list a ring buffer. Have rows 0 -
$MAX_BUFFER_SIZE
> > in
> > > >> each stack, and simply write each new event to the next open
position,
> > > >> wrapping at $MAX_BUFFER_SIZE. Pros: little change to current code,
> > > >> just need an offset column added and code that will properly wrap
to 0
> > > >> at $MAX_BUFFER_SIZE. Cons: still can incur heavy transactional
load on
> > > >> the database server.
> > > >>
> > > >> 1.b) Same, but instead of rows, just maintain a blob and append
the
> > rows
> > > >> as json list. Lowers transactional load but would push some load
onto
> > > >> the API servers and such to parse these out, and would make
pagination
> > > >> challenging. Blobs also can be a drain on DB server performance.
> > > >>
> > > >> 2) Write a purge script. Delete old ones. Pros: No code change,
just
> > > >> new code to do purging. Cons: same as 1, plus more vulnerability
to an
> > > >> aggressive attacker who can fit a lot of data in between purges.
Also
> > > >> large scale deletes can be really painful (see: keystone sql token
> > > >> backend).
> > > >>
> > > >> 3) Log events to Swift. I can't seem to find information on how/if
> > > >> appending works there. Tons of tiny single-row files is an option,
> > but I
> > > >> want to hear from people with more swift knowledge if that is a
> > viable,
> > > >> performant option. Pros: Scale to the moon. Can charge tenant for
> > usage
> > > >> and let them purge events as needed. Cons: Adds swift as a
requirement
> > > >> of Heat.
> > > >>
> > > >> 4) Provide a way for users to receive logs via HTTP POST. Pros:
Simple
> > > >> and punts the problem to the users. Cons: users will be SoL if
they
> > > >> don't have a place to have logs posted to.
> > > >>
> > > >> 5) Provide a way for users to receive logs via messaging service
like
> > > >> Marconi.  Pros/Cons: same as HTTP, but perhaps a little more
confusing
> > > >> and ambitious given Marconi's short existence.
> > > >>
> > > >> 6) Provide a pluggable backend for logging. This seems like the
way
> > most
> > > >> OpenStack projects solve these issues, which is to let the
deployers
> > > >> choose and/or provide their own way to handle a sticky problem.
Pros:
> > > >> Simple and flexible for the future. Cons: Would require writing at
> > least
> > > >> one backend provider that does what the previous 5 options
suggest.
> > > >>
> > > >> To be clear: Heat cannot really exist without this, as it is the
only
> > way
> > > >> to find out what your stack is doing or has done.
> > > >
> > > > btw Clint I have ditched that "Recorder" patch as Ceilometer is
> > > > getting a Alarm History api soon, so we can defer to that for that
> > > > functionality (alarm transitions).
> > > >
> > > > But we still need a better way to record events/logs for the user.
> > > > So I make this blueprint a while ago:
> > > > https://blueprints.launchpad.net/heat/+spec/user-visible-logs
> > > >
> > > > I am becomming more in favor of user options rather than deployer
> > > > options if possible. So provide resources for Marconi, Meniscus and
> > > > what ever...
> > > > Although what is nice about Marconi is you could then hook up what
> > > > ever you want to it.
> > >
> > > Logs are one thing (and Meniscus is a great choice for that), but
events
> > > are the very thing CM is designed to handle. Wouldn't it make sense
to
> > > push them back into there?
> > >
> >
> > I'm not sure these events make sense in the current Ceilometer (I
assume
> > that is "CM" above) context. These events are:
> >
> > ... Creating stack A
> > ... Creating stack A r

Re: [openstack-dev] [Heat] event table is a ticking time bomb

2013-08-12 Thread Steve Baker
On 08/13/2013 10:39 AM, Angus Salkeld wrote:
> On 12/08/13 16:52 -0400, Doug Hellmann wrote:
>> On Mon, Aug 12, 2013 at 4:11 PM, Clint Byrum  wrote:
>>
>>> Excerpts from Doug Hellmann's message of 2013-08-12 12:08:58 -0700:
>>> > On Fri, Aug 9, 2013 at 11:56 AM, Clint Byrum 
>>> wrote:
>>> >
>>> > > Excerpts from Sandy Walsh's message of 2013-08-09 06:16:55 -0700:
>>> > > >
>>> > > > On 08/08/2013 11:36 PM, Angus Salkeld wrote:
>>> > > > > On 08/08/13 13:16 -0700, Clint Byrum wrote:
>>> > > > >> Last night while reviewing a feature which would add more
>>> events
>>> to
>>> > > the
>>> > > > >> event table, it dawned on me that the event table really
>>> must be
>>> > > removed.
>>> > > > >
>>> > > > >
>>> > > > >>
>>> > > > >> https://bugs.launchpad.net/heat/+bug/1209492
>>> > > > >>
>>> > > > >> tl;dr: users can write an infinite number of rows to the event
>>> table
>>> > > at
>>> > > > >> a fairly alarming rate just by creating and updating a very
>>> large
>>> > > stack
>>> > > > >> that has no resources that cost any time or are even billable
>>> (like an
>>> > > > >> autoscaling launch configuration).
>>> > > > >>
>>> > > > >> The table has no purge function, so the only way to clear
>>> out old
>>> > > events
>>> > > > >> is to delete the stack, or manually remove them directly in
>>> the
>>> > > database.
>>> > > > >>
>>> > > > >> We've all been through this before, logging to a database
>>> seems
>>> great
>>> > > > >> until you actually do it.
>>> > > > >>
>>> > > > >> I have some ideas for how to solve it, but I wanted to get
>>> a wider
>>> > > > >> audience:
>>> > > > >>
>>> > > > >> 1) Make the event list a ring buffer. Have rows 0 -
>>> $MAX_BUFFER_SIZE
>>> > > in
>>> > > > >> each stack, and simply write each new event to the next open
>>> position,
>>> > > > >> wrapping at $MAX_BUFFER_SIZE. Pros: little change to
>>> current code,
>>> > > > >> just need an offset column added and code that will
>>> properly wrap
>>> to 0
>>> > > > >> at $MAX_BUFFER_SIZE. Cons: still can incur heavy transactional
>>> load on
>>> > > > >> the database server.
>>> > > > >>
>>> > > > >> 1.b) Same, but instead of rows, just maintain a blob and
>>> append
>>> the
>>> > > rows
>>> > > > >> as json list. Lowers transactional load but would push some
>>> load
>>> onto
>>> > > > >> the API servers and such to parse these out, and would make
>>> pagination
>>> > > > >> challenging. Blobs also can be a drain on DB server
>>> performance.
>>> > > > >>
>>> > > > >> 2) Write a purge script. Delete old ones. Pros: No code
>>> change,
>>> just
>>> > > > >> new code to do purging. Cons: same as 1, plus more
>>> vulnerability
>>> to an
>>> > > > >> aggressive attacker who can fit a lot of data in between
>>> purges.
>>> Also
>>> > > > >> large scale deletes can be really painful (see: keystone
>>> sql token
>>> > > > >> backend).
>>> > > > >>
>>> > > > >> 3) Log events to Swift. I can't seem to find information on
>>> how/if
>>> > > > >> appending works there. Tons of tiny single-row files is an
>>> option,
>>> > > but I
>>> > > > >> want to hear from people with more swift knowledge if that
>>> is a
>>> > > viable,
>>> > > > >> performant option. Pros: Scale to the moon. Can charge
>>> tenant for
>>> > > usage
>>> > > > >> and let them purge events as needed. Cons: Adds swift as a
>>> requirement
>>> > > > >> of Heat.
>>> > > > >>
>>> > > > >> 4) Provide a way for users to receive logs via HTTP POST.
>>> Pros:
>>> Simple
>>> > > > >> and punts the problem to the users. Cons: users will be SoL if
>>> they
>>> > > > >> don't have a place to have logs posted to.
>>> > > > >>
>>> > > > >> 5) Provide a way for users to receive logs via messaging
>>> service
>>> like
>>> > > > >> Marconi.  Pros/Cons: same as HTTP, but perhaps a little more
>>> confusing
>>> > > > >> and ambitious given Marconi's short existence.
>>> > > > >>
>>> > > > >> 6) Provide a pluggable backend for logging. This seems like
>>> the
>>> way
>>> > > most
>>> > > > >> OpenStack projects solve these issues, which is to let the
>>> deployers
>>> > > > >> choose and/or provide their own way to handle a sticky
>>> problem.
>>> Pros:
>>> > > > >> Simple and flexible for the future. Cons: Would require
>>> writing at
>>> > > least
>>> > > > >> one backend provider that does what the previous 5 options
>>> suggest.
>>> > > > >>
>>> > > > >> To be clear: Heat cannot really exist without this, as it
>>> is the
>>> only
>>> > > way
>>> > > > >> to find out what your stack is doing or has done.
>>> > > > >
>>> > > > > btw Clint I have ditched that "Recorder" patch as Ceilometer is
>>> > > > > getting a Alarm History api soon, so we can defer to that
>>> for that
>>> > > > > functionality (alarm transitions).
>>> > > > >
>>> > > > > But we still need a better way to record events/logs for the
>>> user.
>>> > > > > So I make this blueprint a while ago:
>>> > > > > https://blueprints.launchpad.net/heat/+spec/user-visible-logs
>>> > > > >
>>> > > > > I am becomming more in

[openstack-dev] [marconi] Queuing Service API stabilized

2013-08-12 Thread Kurt Griffiths
Hey folks,

I just wanted to send out a quick note for everyone who is interested in the 
Marconi project that our v1 API is quite stable now, and we've landed a flurry 
of bug fixes and performance optimizations lately. Now is a great time to kick 
the tires and tell us what you think. We are driving toward a solid baseline v1 
release of the service in the coming months.

I'm really proud of what the Marconi team has achieved so far, and would like 
to personally thank everyone who has contributed to the project.

Check it out here: https://launchpad.net/marconi

In just a couple minutes, you can pip install (or clone) the code, fire up a 
local instance, and take Marconi for a spin:

https://github.com/stackforge/marconi#running-a-local-marconi-server-with-mongodb

Flavio and I will be in Hong Kong later this year to discuss the past, present, 
and future of the project. In the meantime, don't hesitate to reach out to us 
on the mailing list or via #openstack-marconi.

Cheers,
Kurt
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] MultiClusterZones

2013-08-12 Thread Tom Fifield

On 13/08/13 00:03, Wolfgang Richter wrote:

What is the status of this proposal:

https://wiki.openstack.org/wiki/MultiClusterZones

Has anyone worked on it?


Hi Wolfgang,

That one is old. I think you probably want Cells, which has been 
implemented and has several sites running in production.


http://docs.openstack.org/trunk/openstack-ops/content/scaling.html#segregate_cloud

https://wiki.openstack.org/wiki/Blueprint-nova-compute-cells

http://comstud.com/GrizzlyCells.pdf

Regards,


Tom

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] event table is a ticking time bomb

2013-08-12 Thread Angus Salkeld

On 13/08/13 10:48 +1200, Steve Baker wrote:

On 08/13/2013 10:39 AM, Angus Salkeld wrote:

On 12/08/13 16:52 -0400, Doug Hellmann wrote:

On Mon, Aug 12, 2013 at 4:11 PM, Clint Byrum  wrote:


Excerpts from Doug Hellmann's message of 2013-08-12 12:08:58 -0700:
> On Fri, Aug 9, 2013 at 11:56 AM, Clint Byrum 
wrote:
>
> > Excerpts from Sandy Walsh's message of 2013-08-09 06:16:55 -0700:
> > >
> > > On 08/08/2013 11:36 PM, Angus Salkeld wrote:
> > > > On 08/08/13 13:16 -0700, Clint Byrum wrote:
> > > >> Last night while reviewing a feature which would add more
events
to
> > the
> > > >> event table, it dawned on me that the event table really
must be
> > removed.
> > > >
> > > >
> > > >>
> > > >> https://bugs.launchpad.net/heat/+bug/1209492
> > > >>
> > > >> tl;dr: users can write an infinite number of rows to the event
table
> > at
> > > >> a fairly alarming rate just by creating and updating a very
large
> > stack
> > > >> that has no resources that cost any time or are even billable
(like an
> > > >> autoscaling launch configuration).
> > > >>
> > > >> The table has no purge function, so the only way to clear
out old
> > events
> > > >> is to delete the stack, or manually remove them directly in
the
> > database.
> > > >>
> > > >> We've all been through this before, logging to a database
seems
great
> > > >> until you actually do it.
> > > >>
> > > >> I have some ideas for how to solve it, but I wanted to get
a wider
> > > >> audience:
> > > >>
> > > >> 1) Make the event list a ring buffer. Have rows 0 -
$MAX_BUFFER_SIZE
> > in
> > > >> each stack, and simply write each new event to the next open
position,
> > > >> wrapping at $MAX_BUFFER_SIZE. Pros: little change to
current code,
> > > >> just need an offset column added and code that will
properly wrap
to 0
> > > >> at $MAX_BUFFER_SIZE. Cons: still can incur heavy transactional
load on
> > > >> the database server.
> > > >>
> > > >> 1.b) Same, but instead of rows, just maintain a blob and
append
the
> > rows
> > > >> as json list. Lowers transactional load but would push some
load
onto
> > > >> the API servers and such to parse these out, and would make
pagination
> > > >> challenging. Blobs also can be a drain on DB server
performance.
> > > >>
> > > >> 2) Write a purge script. Delete old ones. Pros: No code
change,
just
> > > >> new code to do purging. Cons: same as 1, plus more
vulnerability
to an
> > > >> aggressive attacker who can fit a lot of data in between
purges.
Also
> > > >> large scale deletes can be really painful (see: keystone
sql token
> > > >> backend).
> > > >>
> > > >> 3) Log events to Swift. I can't seem to find information on
how/if
> > > >> appending works there. Tons of tiny single-row files is an
option,
> > but I
> > > >> want to hear from people with more swift knowledge if that
is a
> > viable,
> > > >> performant option. Pros: Scale to the moon. Can charge
tenant for
> > usage
> > > >> and let them purge events as needed. Cons: Adds swift as a
requirement
> > > >> of Heat.
> > > >>
> > > >> 4) Provide a way for users to receive logs via HTTP POST.
Pros:
Simple
> > > >> and punts the problem to the users. Cons: users will be SoL if
they
> > > >> don't have a place to have logs posted to.
> > > >>
> > > >> 5) Provide a way for users to receive logs via messaging
service
like
> > > >> Marconi.  Pros/Cons: same as HTTP, but perhaps a little more
confusing
> > > >> and ambitious given Marconi's short existence.
> > > >>
> > > >> 6) Provide a pluggable backend for logging. This seems like
the
way
> > most
> > > >> OpenStack projects solve these issues, which is to let the
deployers
> > > >> choose and/or provide their own way to handle a sticky
problem.
Pros:
> > > >> Simple and flexible for the future. Cons: Would require
writing at
> > least
> > > >> one backend provider that does what the previous 5 options
suggest.
> > > >>
> > > >> To be clear: Heat cannot really exist without this, as it
is the
only
> > way
> > > >> to find out what your stack is doing or has done.
> > > >
> > > > btw Clint I have ditched that "Recorder" patch as Ceilometer is
> > > > getting a Alarm History api soon, so we can defer to that
for that
> > > > functionality (alarm transitions).
> > > >
> > > > But we still need a better way to record events/logs for the
user.
> > > > So I make this blueprint a while ago:
> > > > https://blueprints.launchpad.net/heat/+spec/user-visible-logs
> > > >
> > > > I am becomming more in favor of user options rather than
deployer
> > > > options if possible. So provide resources for Marconi,
Meniscus and
> > > > what ever...
> > > > Although what is nice about Marconi is you could then hook
up what
> > > > ever you want to it.
> > >
> > > Logs are one thing (and Meniscus is a great choice for that), but
events
> > > are the very thing CM is designed to handle. Wouldn't it make
sense
to
> > > push them back into there?
> > >
> >
> > I'm not sure these events make sense in the current Ceilometer (I
assume
> > that is "CM

Re: [openstack-dev] [Nova] Concern about Havana database migrations

2013-08-12 Thread Russell Bryant
On 08/12/2013 06:16 PM, Michael Still wrote:
> Hi.
> 
> I have increasing levels of concern about Havana database migrations
> and I want to beg for help, especially from nova core reviewers.
> 
> Specifically, I don't think its possible to land all of the database
> migrations people need before the 22 August proposal freeze. At the
> moment I see 10 patches competing for migration number 207 for
> example, and only one can take that number. That means the other nine
> will need to rebase and go through a re-review, which takes at least a
> day. Unfortunately we don't even do that well -- many of these patches
> sit around for several days before getting reviewed.
> 
> So -- I'd like some help with reviewing database migrations please.
> The way I do these reviews is:
> 
>  - determine what migration number is currently the next one free from
> git (currently 208 because Dan just approved 207) [1].
> 
>  - go to http://openstack.stillhq.com/ci/migrations/nova/.html
> to see what patchsets have proposed a migration with that number.
> 
>  - review them
> 
>  - if you're super keen you can also check
> http://openstack.stillhq.com/ci for warnings about the migration
> first, but I generally keep an eye on that so its not absolutely
> required [2].

A few notes to help short term:

1) Note that they do not need to *land* by the 22nd.  Features just need
to be proposed.  The merge deadline is a couple weeks after that.

2) If something is approved, but just has to be rebased to fix the
migration conflict, feel free to re-approve without waiting for a second +2.

3) If you see something that needs a rebase, feel free to be generous
and do it for them.  That should help cut out some of the delay.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Scheduler sub-group meeting agenda 8/13

2013-08-12 Thread Dugger, Donald D
As mentioned last week, let's discuss:

1) Perspective for nova scheduler



These are Boris' ideas for the scheduler; you can read his detailed writeup at:


https://docs.google.com/document/d/1_DRv7it_mwalEZzLy5WO92TJcummpmWL4NWsWf0UWiQ/edit#heading=h.6ixj0ctv4rwu


--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Keystone Apache2 Installation Question

2013-08-12 Thread Miller, Mark M (EB SW Cloud - R&D - Corvallis)
Progress: Got Keystone working under Apache2 with HTTP based on the following 2 
URLs. HTTPS is next.

https://keystone-voms.readthedocs.org/en/latest/requirements.html
https://www.digitalocean.com/community/articles/how-to-create-a-ssl-certificate-on-apache-for-ubuntu-12-04

Mark

From: Miller, Mark M (EB SW Cloud - R&D - Corvallis)
Sent: Monday, August 12, 2013 3:10 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Keystone Apache2 Installation Question

Looks like I may be ahead of the game. It doesn't look like this blueprint has 
been started yet. Am I correct?

https://blueprints.launchpad.net/devstack/+spec/devstack-setup-apache-keystone

A very valuable feature of Keystone is to configure it to leverage apache as 
its front end. As a means of demonstrating how this works, and to facilitate 
automated testing of this configuration in the future, support to devstack will 
be added to enable it to optionally install and configure keystone using apache 
as it front end. The design approach used will be that described in the 
keystone docs: 
https://github.com/openstack/keystone/blob/master/doc/source/apache-httpd.rst
Thanks,

Mark



From: Miller, Mark M (EB SW Cloud - R&D - Corvallis)
Sent: Monday, August 12, 2013 1:45 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Keystone Apache2 Installation Question

The commands/libraries  do not exist for Ubuntu, Keystone no longer starts up, 
directories between the sets of documents do not match, ...

From: Dolph Mathews [mailto:dolph.math...@gmail.com]
Sent: Monday, August 12, 2013 1:41 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Keystone Apache2 Installation Question

What problem(s) are you running into when following the above documentation / 
examples?

On Mon, Aug 12, 2013 at 3:32 PM, Miller, Mark M (EB SW Cloud - R&D - Corvallis) 
mailto:mark.m.mil...@hp.com>> wrote:
Hello,

I am looking for documentation on how to install/configure Apache2 as the 
Keystone front end for "Ubuntu 12.04". I have found various documentation 
snippets for a variety of applications and operating systems, but nothing for 
Ubuntu. Any pointers would greatly be appreciated. I have been trying to piece 
the installation/configuration from the following URLs but have yet to be 
successful.

http://docs.openstack.org/developer/keystone/apache-httpd.html#keystone-configuration
https://keystone-voms.readthedocs.org/en/latest/requirements.html
https://github.com/enovance/keystone-wsgi-apache/blob/master/provision.sh
http://adam.younglogic.com/2012/04/keystone-httpd/

Regards,

Mark


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Pagination

2013-08-12 Thread Dolph Mathews
Given the way paginated links are defined by the v3 API (via `next` and
`previous` links), it can be completely up to the driver as to what the
query parameters look like. So, the client shouldn't have (nor require) any
knowledge of how to build query parameters for pagination. It just needs to
follow the links it's given.

'page' and 'per_page' are trivial for the controller to implement (as it's
just slicing into a list... as shown)... so that's a reasonable default
behavior (for when a driver does not support pagination). However, if the
underlying driver DOES support pagination, it should provide a way for the
controller to ask for the query parameters required to specify the
next/previous links (so, one driver could return `marker` and `limit`
parameters while another only exposes the `page` number, but not quantity
`per_page`).
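
For illustration, a default controller falling back to page/per_page slicing
could build the links itself along these lines (a hypothetical helper, not
keystone's actual controller code; the URL building is simplified):

# Sketch only -- hypothetical helper, not keystone's controller.
def paginated_response(refs, base_url, page, per_page):
    # Slice refs and wrap them with next/previous links.
    start = per_page * (page - 1)
    sliced = refs[start:start + per_page]
    links = {}
    if start + per_page < len(refs):
        links['next'] = '%s?page=%d&per_page=%d' % (base_url, page + 1, per_page)
    if page > 1:
        links['previous'] = '%s?page=%d&per_page=%d' % (base_url, page - 1, per_page)
    return {'users': sliced, 'links': links}

A driver that supports marker/limit natively would instead hand back its own
query parameters for the controller to embed in those links.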


On Mon, Aug 12, 2013 at 4:34 PM, Henry Nash wrote:

> Hi
>
> I'm working on extending the pagination into the backends.  Right now, we
> handle the pagination in the v3 controller classand in fact it is
> disabled right now and we return the whole list irrespective of whether
> page/per-page is set in the query string, e.g.:
>
> def *paginate*(cls, context, refs):
> *"""Paginates a list of references by page & per_page query
> strings."""*
> # FIXME(dolph): client needs to support pagination first
> return refs
>
> page = context[*'query_string'*].get(*'page'*, 1)
> per_page = context[*'query_string'*].get(*'per_page'*, 30)
> return refs[per_page * (page - 1):per_page * page]
>
> I wonder both for the V3 controller (which still needs to handle
> pagination for backends that do not support it) and the backends that
> dowhether we could use wether 'page' is defined in the query-string as
> an indicator as to whether we should paginate or not?  That way clients who
> can handle it can ask for it, those that don'twill just get everything.
>
> Henry
>
>


-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Pagination

2013-08-12 Thread Adam Young

On 08/12/2013 05:34 PM, Henry Nash wrote:

Hi

I'm working on extending the pagination into the backends.  Right now, 
we handle the pagination in the v3 controller classand in fact it 
is disabled right now and we return the whole list irrespective of 
whether page/per-page is set in the query string, e.g.:
Pagination is a broken concept. We should not be returning lists so long 
that we need to paginate.  Instead, we should have query limits, and 
filters to refine the queries.


Some people are doing full user lists against LDAP.  I don't need to 
tell you how broken that is.  Why do we allow user-list at the Domain 
(or unscoped level)?


I'd argue that we should drop enumeration of objects in general, and 
certainly limit the number of results that come back.  Pagination in 
LDAP requires cursors, and thus continuous connections from Keystone to 
LDAP...this is not a scalable solution.


Do we really need this?




def *paginate*(cls, context, refs):
/"""Paginates a list of references by page & per_page query strings."""/
# FIXME(dolph): client needs to support pagination first
return refs

page = context[/'query_string'/].get(/'page'/, 1)
per_page = context[/'query_string'/].get(/'per_page'/, 30)
return refs[per_page * (page - 1):per_page * page]

I wonder both for the V3 controller (which still needs to handle 
pagination for backends that do not support it) and the backends that 
dowhether we could use wether 'page' is defined in the 
query-string as an indicator as to whether we should paginate or not? 
 That way clients who can handle it can ask for it, those that 
don'twill just get everything.


Henry



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Pagination

2013-08-12 Thread Jay Pipes

On 08/12/2013 08:27 PM, Adam Young wrote:

On 08/12/2013 05:34 PM, Henry Nash wrote:

Hi

I'm working on extending the pagination into the backends.  Right now,
we handle the pagination in the v3 controller classand in fact it
is disabled right now and we return the whole list irrespective of
whether page/per-page is set in the query string, e.g.:

Pagination is a broken concept. We should not be returning lists so long
that we need to paginate.  Instead, we should have query limits, and
filters to refine the queries.

Some people are doing full user lists against LDAP.  I don't need to
tell you how broken that is.  Why do we allow user-list at the Domain
(or unscoped level)?

I'd argue that we should drop enumeration of objects in general, and
certainly limit the number of results that come back.  Pagination in
LDAP requires cursors, and thus continuos connections from Keystone to
LDAP...this is not a scalable solution.

Do we really need this?


Yes. It is very painful for operators right now to do any sort of 
administration of identity information when using the SQL backend. In 
Horizon, the users admin page takes forever and a day to load hundreds 
or thousands of user records (same for tenants). The CLI is similarly 
painful in production environments with thousands of user/tenants.


Best,
-jay


def *paginate*(cls, context, refs):
/"""Paginates a list of references by page & per_page query strings."""/
# FIXME(dolph): client needs to support pagination first
return refs

page = context[/'query_string'/].get(/'page'/, 1)
per_page = context[/'query_string'/].get(/'per_page'/, 30)
return refs[per_page * (page - 1):per_page * page]

I wonder both for the V3 controller (which still needs to handle
pagination for backends that do not support it) and the backends that
dowhether we could use wether 'page' is defined in the
query-string as an indicator as to whether we should paginate or not?
 That way clients who can handle it can ask for it, those that
don'twill just get everything.

Henry



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Pagination

2013-08-12 Thread Adam Young

On 08/12/2013 08:33 PM, Jay Pipes wrote:

On 08/12/2013 08:27 PM, Adam Young wrote:

On 08/12/2013 05:34 PM, Henry Nash wrote:

Hi

I'm working on extending the pagination into the backends. Right now,
we handle the pagination in the v3 controller classand in fact it
is disabled right now and we return the whole list irrespective of
whether page/per-page is set in the query string, e.g.:

Pagination is a broken concept. We should not be returning lists so long
that we need to paginate.  Instead, we should have query limits, and
filters to refine the queries.

Some people are doing full user lists against LDAP.  I don't need to
tell you how broken that is.  Why do we allow user-list at the Domain
(or unscoped level)?

I'd argue that we should drop enumeration of objects in general, and
certainly limit the number of results that come back. Pagination in
LDAP requires cursors, and thus continuos connections from Keystone to
LDAP...this is not a scalable solution.

Do we really need this?


Yes. It is very painful for operators right now to do any sort of 
administration of identity information when using the SQL backend. In 
Horizon, the users admin page takes forever and a day to load hundreds 
or thousands of user records (same for tenants). The CLI is similarly 
painful in production environments with thousands of user/tenants.
Not arguing that it is not broken.   I would argue that there is 
something broken with our workflows. Pagination is not the answer. Not 
asking for the entire user list is the answer.
Honestly, if the list is 100K long, you are not going to page through 
it.  You need a better search filter.  Let's limit the number of results 
shown, and figure out how to correctly filter results.
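
To illustrate the alternative (a hypothetical helper, not keystone code):
apply the filter first, cap the result size, and tell the caller when the cap
was hit so they know to refine the filter rather than page further:

# Sketch only -- hypothetical helper, not keystone code.
def list_users(all_users, name_filter=None, limit=50):
    # Apply an exact-name filter, then cap the number of results returned.
    matches = [u for u in all_users
               if name_filter is None or u.get('name') == name_filter]
    truncated = len(matches) > limit
    return matches[:limit], truncated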




Best,
-jay


def *paginate*(cls, context, refs):
/"""Paginates a list of references by page & per_page query 
strings."""/

# FIXME(dolph): client needs to support pagination first
return refs

page = context[/'query_string'/].get(/'page'/, 1)
per_page = context[/'query_string'/].get(/'per_page'/, 30)
return refs[per_page * (page - 1):per_page * page]

I wonder both for the V3 controller (which still needs to handle
pagination for backends that do not support it) and the backends that
dowhether we could use wether 'page' is defined in the
query-string as an indicator as to whether we should paginate or not?
 That way clients who can handle it can ask for it, those that
don'twill just get everything.

Henry



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Pagination

2013-08-12 Thread Jamie Lennox
I'm not sure where it would make sense within the API to return the name
of the page/per_page variables to the client that doesn't involve having
already issued the call (i.e. returning the names within the links box
means you've already issued the query). If we standardize on the
page/per_page combination then this can be handled at the controller
level, and the driver has permission to simply ignore it - or have the
controller do the slicing after the driver has returned.

To weigh in on the other question, I think it should be checked that page
is an integer, unless per_page is specified, in which case default to 1.

For example: 

GET /v3/users?page=

I would expect to return all users as page is not set. However: 

GET /v3/users?per_page=30

As per_page is useless without a page, I think we can default to page=1.

As an aside, are we indexing from 1?
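
A rough sketch of those defaulting rules (illustrative only, not merged code):

# Sketch only -- illustrative defaulting rules, not merged keystone code.
def resolve_pagination(query_string):
    # Returns (page, per_page), or None when no pagination was requested.
    raw_page = query_string.get('page')
    raw_per_page = query_string.get('per_page')

    if raw_page in (None, ''):
        if raw_per_page in (None, ''):
            return None                    # no pagination requested at all
        return 1, int(raw_per_page)        # per_page alone implies page=1

    return int(raw_page), int(raw_per_page or 30)

# e.g. GET /v3/users?page=        ->  None (return all users)
#      GET /v3/users?per_page=30  ->  (1, 30)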

On Mon, 2013-08-12 at 19:05 -0500, Dolph Mathews wrote:
> The way paginated links are defined by the v3 API (via `next` and
> `previous` links), it can be completely up to the driver as to what
> the query parameters look like. So, the client shouldn't have (nor
> require) any knowledge of how to build query parameters for
> pagination. It just needs to follow the links it's given.
> 
> 
> 'page' and 'per_page' are trivial for the controller to implement (as
> it's just slicing into an list... as shown)... so that's a reasonable
> default behavior (for when a driver does not support pagination).
> However, if the underlying driver DOES support pagination, it should
> provide a way for the controller to ask for the query parameters
> required to specify the next/previous links (so, one driver could
> return `marker` and `limit` parameters while another only exposes the
> `page` number, but not quantity `per_page`).
> 
> 
> On Mon, Aug 12, 2013 at 4:34 PM, Henry Nash
>  wrote:
> Hi
> 
> 
> I'm working on extending the pagination into the backends.
>  Right now, we handle the pagination in the v3 controller
> classand in fact it is disabled right now and we return
> the whole list irrespective of whether page/per-page is set in
> the query string, e.g.:
> 
> 
> def paginate(cls, context, refs):
> """Paginates a list of references by page & per_page
> query strings."""
> # FIXME(dolph): client needs to support pagination
> first
> return refs
> 
> 
> page = context['query_string'].get('page', 1)
> per_page = context['query_string'].get('per_page', 30)
> return refs[per_page * (page - 1):per_page * page]
> 
> 
> I wonder both for the V3 controller (which still needs to
> handle pagination for backends that do not support it) and the
> backends that dowhether we could use wether 'page' is
> defined in the query-string as an indicator as to whether we
> should paginate or not?  That way clients who can handle it
> can ask for it, those that don'twill just get everything.  
> 
> 
> Henry
> 
> 
> 
> 
> 
> 
> -- 
> 
> 
> -Dolph
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] LBaaS routed insertion implementation

2013-08-12 Thread Itsuro ODA
Hi,

I am interested in "5. Routed insertion implementation" which
is mentioned in the Neutron/LBaaS/HavanaPlan wiki page.
What is the current status of this ?

Thanks.
-- 
Itsuro ODA 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Pagination

2013-08-12 Thread Miller, Mark M (EB SW Cloud - R&D - Corvallis)
The main reason I use user lists (i.e. keystone user-list) is to get the list 
of usernames/IDs for other keystone commands. I do not see the value of showing 
all of the users in an LDAP server when they are not part of the keystone 
database (i.e. do not have roles assigned to them). Performing a "keystone 
user-list" command against the HP Enterprise Directory locks up keystone for 
about 1 ½ hours, in that it will not perform any other commands until it is 
done.  If it is decided that user lists are necessary, then at a minimum they 
need to be paged to return control back to keystone for another command.

Mark

From: Adam Young [mailto:ayo...@redhat.com]
Sent: Monday, August 12, 2013 5:27 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [keystone] Pagination

On 08/12/2013 05:34 PM, Henry Nash wrote:
Hi

I'm working on extending the pagination into the backends.  Right now, we 
handle the pagination in the v3 controller classand in fact it is disabled 
right now and we return the whole list irrespective of whether page/per-page is 
set in the query string, e.g.:
Pagination is a broken concept. We should not be returning lists so long that 
we need to paginate.  Instead, we should have query limits, and filters to 
refine the queries.

Some people are doing full user lists against LDAP.  I don't need to tell you 
how broken that is.  Why do we allow user-list at the Domain (or unscoped 
level)?

I'd argue that we should drop enumeration of objects in general, and certainly 
limit the number of results that come back.  Pagination in LDAP requires 
cursors, and thus continuos connections from Keystone to LDAP...this is not a 
scalable solution.

Do we really need this?




def paginate(cls, context, refs):
"""Paginates a list of references by page & per_page query strings."""
# FIXME(dolph): client needs to support pagination first
return refs

page = context['query_string'].get('page', 1)
per_page = context['query_string'].get('per_page', 30)
return refs[per_page * (page - 1):per_page * page]

I wonder both for the V3 controller (which still needs to handle pagination for 
backends that do not support it) and the backends that dowhether we could 
use wether 'page' is defined in the query-string as an indicator as to whether 
we should paginate or not?  That way clients who can handle it can ask for it, 
those that don'twill just get everything.

Henry





___

OpenStack-dev mailing list

OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] can't install devstack - nova-api did not start

2013-08-12 Thread XINYU ZHAO
Hi Sean
I uninstalled the oslo.config 1.1.1 version and run devstack, but this time
it stopped at

2013-08-09 18:55:16 + /opt/stack/new/keystone/bin/keystone-manage db_sync
2013-08-09 18:55:16 Traceback (most recent call last):
2013-08-09 18:55:16   File
"/opt/stack/new/keystone/bin/keystone-manage", line 16, in 
2013-08-09 18:55:16 from keystone import cli
2013-08-09 18:55:16   File "/opt/stack/new/keystone/keystone/cli.py",
line 23, in 
2013-08-09 18:55:16 from oslo.config import cfg
2013-08-09 18:55:16 ImportError: No module named config
2013-08-09 18:55:16 + [[ PKI == \P\K\I ]]


An unexpected error prevented the server from fulfilling your request.
(ProgrammingError) (1146, "Table 'keystone.service' doesn't exist") 'INSERT
INTO service (id, type, extra) VALUES (%s, %s, %s)'
('32578395572b4cf2a70ba70b6031cd1d', 'identity', '{"name": "keystone",
"description": "Keystone Identity Service"}') (HTTP 500)
2013-08-12 18:36:45 + KEYSTONE_SERVICE=
2013-08-12 18:36:45 + keystone endpoint-create --region RegionOne
--service_id --publicurl http://127.0.0.1:5000/v2.0 --adminurl
http://127.0.0.1:35357/v2.0 --internalurl http://127.0.0.1:5000/v2.0

It seems that oslo.config was not properly imported after I re-installed
it, but when I list the pip installations, it is there.

/usr/local/bin/pip freeze |grep oslo.config
-e git+
http://10.145.81.234/openstackci/gerrit/p/oslo.config@c65d70c02494805ce50b88f343f8fafe7a521724#egg=oslo.config-master
root@devstack-4:/# /usr/local/bin/pip search oslo.config
oslo.config   - Oslo configuration API
  INSTALLED: 1.2.0.a192.gc65d70c
  LATEST:1.1.1
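
For reference, one way to check which copy of oslo the interpreter actually
resolves, independent of what pip reports (a diagnostic sketch only; the
namespace-package layout is an assumption):

# Diagnostic sketch only.
import oslo
print(oslo.__path__)           # directories the 'oslo' namespace resolves to
from oslo.config import cfg    # raises ImportError if the namespace is broken
print(cfg.__file__)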




On Sat, Aug 10, 2013 at 7:07 AM, Sean Dague  wrote:

> Silly pip, trix are for kids.
>
> Ok, well:
>
> sudo pip install -I oslo.config==1.1.1
>
> then pip uninstall oslo.config
>
> On 08/09/2013 06:58 PM, Roman Gorodeckij wrote:
>
>> stack@hp:~/devstack$ sudo pip install oslo.config
>> Requirement already satisfied (use --upgrade to upgrade): oslo.config in
>> /opt/stack/oslo.config
>> Requirement already satisfied (use --upgrade to upgrade): six in
>> /usr/local/lib/python2.7/dist-packages (from oslo.config)
>> Cleaning up...
>> stack@hp:~/devstack$ sudo pip uninstall oslo.config
>> Can't uninstall 'oslo.config'. No files were found to uninstall.
>> stack@hp:~/devstack$
>>
>> stack@hp:~/devstack$ cat /tmp/devstack/log//screen-n-api.log
>> | touch "/opt/stack/status/stack/n-api.failure"nova &&
>> /usr/local/bin/nova-api |
>>
>> Traceback (most recent call last):
>>File "/usr/local/bin/nova-api", line 6, in 
>>  from nova.cmd.api import main
>>    File "/opt/stack/nova/nova/cmd/api.py", line 29, in 
>>  from nova import config
>>    File "/opt/stack/nova/nova/config.py", line 22, in 
>>  from nova.openstack.common.db.sqlalchemy import session as
>> db_session
>>    File "/opt/stack/nova/nova/openstack/common/db/sqlalchemy/session.py",
>> line 279, in 
>>  deprecated_opts=[cfg.DeprecatedOpt('sql_connection',
>> AttributeError: 'module' object has no attribute 'DeprecatedOpt'
>>
>> nothing changed.
>>
>> On Aug 9, 2013, at 6:11 PM, Sean Dague  wrote:
>>
>>  This should be addressed by the latest devstack, however because we
>>> moved to oslo.config out of git, some install environments might still have
>>> oslo.config 1.1.0 somewhere, that pip no longer sees (so can't uninstall)
>>>
>>> sudo pip install oslo.config
>>> sudo pip uninstall oslo.config
>>>
>>> rerun devstack, see if it works.
>>>
>>> -Sean
>>>
>>> On 08/09/2013 09:14 AM, Roman Gorodeckij wrote:
>>>
 Tried to install devstack to dedicated server, ip's are defined.

 Here's the output:

 13-08-09 09:06:28 ++ echo -ne '\015'

 2013-08-09 09:06:28 + NL=$'\r'
 2013-08-09 09:06:28 + screen -S stack -p n-api -X stuff 'cd
 /opt/stack/nova && /'sr/local/bin/nova-api || touch
 "/opt/stack/status/stack/n-**api.failure"
 2013-08-09 09:06:28 + echo 'Waiting for nova-api to start...'
 2013-08-09 09:06:28 Waiting for nova-api to start...
 2013-08-09 09:06:28 + wait_for_service 60http://192.168.1.6:8774
 2013-08-09 09:06:28 + local timeout=60
 2013-08-09 09:06:28 + local url=http://192.168.1.6:8774
 2013-08-09 09:06:28 + timeout 60 sh -c 'while ! http_proxy=
 https_proxy= curl -shttp://192.168.1.6:8774  >/dev/null; do sleep 1;
 done'
 2013-08-09 09:07:28 + die 698 'nova-api did not start'
 2013-08-09 09:07:28 + local exitcode=0
 stack@hp:~/devstack$ 2013-08-09 09:07:28 + set +o xtrace

 Here's the log:

 2013-08-09 09:07:28 [ERROR] ./stack.sh:698 nova-api did not start
 stack@hp:~/devstack$ cat /tmp/devstack/log//screen-n-api.log
 t/stack/status/stack/n-api.failure"nova && /usr/local/bin/nova-api
 || touch "/op

 Traceback (most recent call last):
File "/usr/local/bin/nova-api", line 6, in 
  from nova.cmd.api import main
File "/opt/stack/nova/nova/cmd/api.**py", line 29

Re: [openstack-dev] can't install devstack - nova-api did not start

2013-08-12 Thread XINYU ZHAO
Hi Sean
I uninstalled the oslo.config 1.1.1 version and run devstack, but this time
it stopped at

2013-08-09 18:55:16 + /opt/stack/new/keystone/bin/keystone-manage db_sync
2013-08-09 18:55:16 Traceback (most recent call last):
2013-08-09 18:55:16   File
"/opt/stack/new/keystone/bin/keystone-manage", line 16, in 
2013-08-09 18:55:16 from keystone import cli
2013-08-09 18:55:16   File "/opt/stack/new/keystone/keystone/cli.py",
line 23, in 
2013-08-09 18:55:16 from oslo.config import cfg
2013-08-09 18:55:16 ImportError: No module named config
2013-08-09 18:55:16 + [[ PKI == \P\K\I ]]


An unexpected error prevented the server from fulfilling your request.
(ProgrammingError) (1146, "Table 'keystone.service' doesn't exist") 'INSERT
INTO service (id, type, extra) VALUES (%s, %s, %s)'
('32578395572b4cf2a70ba70b6031cd1d', 'identity', '{"name": "keystone",
"description": "Keystone Identity Service"}') (HTTP 500)
2013-08-12 18:36:45 + KEYSTONE_SERVICE=
2013-08-12 18:36:45 + keystone endpoint-create --region RegionOne
--service_id --publicurl http://127.0.0.1:5000/v2.0 --adminurl
http://127.0.0.1:35357/v2.0 --internalurl http://127.0.0.1:5000/v2.0

it seems that  oslo.config was not properly imported after i re-installed
it.
but when i list the pip installations, it is there.

/usr/local/bin/pip freeze |grep oslo.config
-e git+
http://10.145.81.234/openstackci/gerrit/p/oslo.config@c65d70c02494805ce50b88f343f8fafe7a521724#egg=oslo.config-master
root@devstack-4:/# /usr/local/bin/pip search oslo.config
oslo.config   - Oslo configuration API
  INSTALLED: 1.2.0.a192.gc65d70c
  LATEST:1.1.1



On Sat, Aug 10, 2013 at 7:07 AM, Sean Dague  wrote:

> Silly pip, trix are for kids.
>
> Ok, well:
>
> sudo pip install -I oslo.config==1.1.1
>
> then pip uninstall oslo.config
>
> On 08/09/2013 06:58 PM, Roman Gorodeckij wrote:
>
>> stack@hp:~/devstack$ sudo pip install oslo.config
>> Requirement already satisfied (use --upgrade to upgrade): oslo.config in
>> /opt/stack/oslo.config
>> Requirement already satisfied (use --upgrade to upgrade): six in
>> /usr/local/lib/python2.7/dist-**packages (from oslo.config)
>> Cleaning up...
>> stack@hp:~/devstack$ sudo pip uninstall oslo.config
>> Can't uninstall 'oslo.config'. No files were found to uninstall.
>> stack@hp:~/devstack$
>>
>> stack@hp:~/devstack$ cat /tmp/devstack/log//screen-n-**api.log
>> | touch "/opt/stack/status/stack/n-**api.failure"nova &&
>> /usr/local/bin/nova-api |
>>
>> Traceback (most recent call last):
>>File "/usr/local/bin/nova-api", line 6, in 
>>  from nova.cmd.api import main
>>File "/opt/stack/nova/nova/cmd/api.**py", line 29, in 
>>  from nova import config
>>File "/opt/stack/nova/nova/config.**py", line 22, in 
>>  from nova.openstack.common.db.**sqlalchemy import session as
>> db_session
>>File "/opt/stack/nova/nova/**openstack/common/db/**sqlalchemy/session.py",
>> line 279, in 
>>  deprecated_opts=[cfg.**DeprecatedOpt('sql_connection'**,
>> AttributeError: 'module' object has no attribute 'DeprecatedOpt'
>>
>> nothing changed.
>>
>> On Aug 9, 2013, at 6:11 PM, Sean Dague  wrote:
>>
>>  This should be addressed by the latest devstack, however because we
>>> moved to oslo.config out of git, some install environments might still have
>>> oslo.config 1.1.0 somewhere, that pip no longer sees (so can't uninstall)
>>>
>>> sudo pip install oslo.config
>>> sudo pip uninstall oslo.config
>>>
>>> rerun devstack, see if it works.
>>>
>>> -Sean
>>>
>>> On 08/09/2013 09:14 AM, Roman Gorodeckij wrote:
>>>
 Tried to install devstack to dedicated server, ip's are defined.

 Here's the output:

 13-08-09 09:06:28 ++ echo -ne '\015'

 2013-08-09 09:06:28 + NL=$'\r'
 2013-08-09 09:06:28 + screen -S stack -p n-api -X stuff 'cd
 /opt/stack/nova && /'sr/local/bin/nova-api || touch
 "/opt/stack/status/stack/n-**api.failure"
 2013-08-09 09:06:28 + echo 'Waiting for nova-api to start...'
 2013-08-09 09:06:28 Waiting for nova-api to start...
 2013-08-09 09:06:28 + wait_for_service 60http://192.168.1.6:8774
 2013-08-09 09:06:28 + local timeout=60
 2013-08-09 09:06:28 + local url=http://192.168.1.6:8774
 2013-08-09 09:06:28 + timeout 60 sh -c 'while ! http_proxy=
 https_proxy= curl -shttp://192.168.1.6:8774  >/dev/null; do sleep 1;
 done'
 2013-08-09 09:07:28 + die 698 'nova-api did not start'
 2013-08-09 09:07:28 + local exitcode=0
 stack@hp:~/devstack$ 2013-08-09 09:07:28 + set +o xtrace

 Here's the log:

 2013-08-09 09:07:28 [ERROR] ./stack.sh:698 nova-api did not start
 stack@hp:~/devstack$ cat /tmp/devstack/log//screen-n-api.log
 t/stack/status/stack/n-api.failure"nova && /usr/local/bin/nova-api
 || touch "/op

 Traceback (most recent call last):
File "/usr/local/bin/nova-api", line 6, in 
  from nova.cmd.api import main
File "/opt/stack/nova/nova/cmd/api.**py", line 29,

Re: [openstack-dev] [Ceilometer] Nova_tests failing in jenkins

2013-08-12 Thread Clark Boylan
On Mon, Aug 12, 2013 at 1:54 PM, Herndon, John Luke (HPCS - Ft.
Collins)  wrote:
> Hi -
>
> The nova_tests are failing for a couple of different Ceilometer reviews,
> due to 'module' object has no attribute 'add_driver'.
>
> This review (https://review.openstack.org/#/c/41316/) had nothing to do
> with the nova_tests, yet they are failing. Any clue what's going on?
>
Ceilometer tests depend on nova master. The nova tests then import
nova.openstack.common.notifier.api as notifier_api. This notifier_api
does not have an add_driver method. Looks like add_driver was removed
from nova in change I5ed80458f1073d6e5185e2769eed85a49dec5d10. This
problem arises due to asymmetric testing. The change in review has
nothing to do with nova_tests; these tests are failing because nova
changed, not because Ceilometer changed.

Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vmware] VMwareAPI sub-team status update 2013-08-12

2013-08-12 Thread Shawn Hartsock

Greetings stackers!

Here's the Monday report for the VMware API subteam! We've got 4 patches ready for a 
core reviewer to take a look at. Meanwhile, there's a healthy pile of patches 
(8 in all) that need rework. I've got 5 patches that need reviews done by 
someone who knows the VMware APIs and 3 that are in need of some work. The August 
22nd deadline is coming up; if you have a Havana feature to post and propose, we 
need to get it up here. If you've got something that needs a VMware person's 
attention, make sure to add me so I can track it or pass it on to the right 
people. (For example: thanks to Dan Smith for letting me grab this patch: 
https://review.openstack.org/#/c/40682/ and have our CI team look at it.)

On that note: 
 
Merged: 2, Ready for core: 4
 

Needs one more core review/approval:
* NEW, https://review.openstack.org/#/c/37389/ ,'VMware: Ensure Neutron 
networking works with VMware drivers'
https://bugs.launchpad.net/nova/+bug/1202042
core votes,1, non-core votes,5, down votes, 0
* NEW, https://review.openstack.org/#/c/40105/ ,'VMware: use VM uuid for volume 
attach and detach'
https://bugs.launchpad.net/nova/+bug/1208173
core votes,1, non-core votes,2, down votes, 0

Ready for core reviewer:
* NEW, https://review.openstack.org/#/c/33100/ ,'Fixes host stats for 
VMWareVCDriver'
https://bugs.launchpad.net/nova/+bug/1190515
core votes,0, non-core votes,4, down votes, 0
* NEW, https://review.openstack.org/#/c/40298/ ,'Fix snapshot in 
VMWwareVCDriver'
https://bugs.launchpad.net/nova/+bug/1184807
core votes,0, non-core votes,4, down votes, 0

Needs VMware API expert review:
* NEW, https://review.openstack.org/#/c/37659/ ,'Enhance VMware Hyper instance 
disk usage'
https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage
core votes,0, non-core votes,2, down votes, 0
* NEW, https://review.openstack.org/#/c/40245/ ,'Nova support for vmware cinder 
driver'
https://blueprints.launchpad.net/nova/+spec/vmware-nova-cinder-support
core votes,0, non-core votes,1, down votes, 0
* NEW, https://review.openstack.org/#/c/30282/ ,'Multiple Clusters using single 
compute service'

https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service
core votes,0, non-core votes,2, down votes, 0
* NEW, https://review.openstack.org/#/c/40029/ ,'VMware: Config Drive Support'
https://bugs.launchpad.net/nova/+bug/1206584
core votes,0, non-core votes,2, down votes, 0
* NEW, https://review.openstack.org/#/c/37819/ ,'VMware image clone strategy 
settings and overrides'
https://blueprints.launchpad.net/nova/+spec/vmware-image-clone-strategy
core votes,0, non-core votes,0, down votes, 0

Needs discussion/work (has -1):
* NEW, https://review.openstack.org/#/c/34903/ ,'Deploy vCenter templates'

https://blueprints.launchpad.net/nova/+spec/deploy-vcenter-templates-from-vmware-nova-driver
core votes,0, non-core votes,2, down votes, -1
* NEW, https://review.openstack.org/#/c/30628/ ,'Fix VCDriver to pick the 
datastore that has capacity'
https://bugs.launchpad.net/nova/+bug/1171930
core votes,0, non-core votes,6, down votes, -1
* NEW, https://review.openstack.org/#/c/33504/ ,'VMware: nova-compute crashes 
if VC not available'
https://bugs.launchpad.net/nova/+bug/1192016
core votes,0, non-core votes,1, down votes, -1

Keep reviewing and keep up the good work.

Meeting info:
* https://wiki.openstack.org/wiki/Meetings/VMwareAPI

# Shawn Hartsock

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] can't install devstack - nova-api did not start

2013-08-12 Thread Noorul Islam K M
XINYU ZHAO  writes:

> Hi Sean
> I uninstalled the oslo.config 1.1.1 version and run devstack, but this time
> it stopped at
>
> 2013-08-09 18:55:16 + /opt/stack/new/keystone/bin/keystone-manage db_sync
> 2013-08-09 18:55:16 Traceback (most recent call last):
> 2013-08-09 18:55:16   File
> "/opt/stack/new/keystone/bin/keystone-manage", line 16, in 
> 2013-08-09 18:55:16 from keystone import cli
> 2013-08-09 18:55:16   File "/opt/stack/new/keystone/keystone/cli.py",
> line 23, in 
> 2013-08-09 18:55:16 from oslo.config import cfg
> 2013-08-09 18:55:16 ImportError: No module named config
> 2013-08-09 18:55:16 + [[ PKI == \P\K\I ]]
>
>
> An unexpected error prevented the server from fulfilling your request.
> (ProgrammingError) (1146, "Table 'keystone.service' doesn't exist") 'INSERT
> INTO service (id, type, extra) VALUES (%s, %s, %s)'
> ('32578395572b4cf2a70ba70b6031cd1d', 'identity', '{"name": "keystone",
> "description": "Keystone Identity Service"}') (HTTP 500)
> 2013-08-12 18:36:45 + KEYSTONE_SERVICE=
> 2013-08-12 18:36:45 + keystone endpoint-create --region RegionOne
> --service_id --publicurl http://127.0.0.1:5000/v2.0 --adminurl
> http://127.0.0.1:35357/v2.0 --internalurl http://127.0.0.1:5000/v2.0
>
> it seems that  oslo.config was not properly imported after i re-installed
> it.
> but when i list the pip installations, it is there.
>
> /usr/local/bin/pip freeze |grep oslo.config
> -e git+
> http://10.145.81.234/openstackci/gerrit/p/oslo.config@c65d70c02494805ce50b88f343f8fafe7a521724#egg=oslo.config-master
> root@devstack-4:/# /usr/local/bin/pip search oslo.config
> oslo.config   - Oslo configuration API
>   INSTALLED: 1.2.0.a192.gc65d70c
>   LATEST:1.1.1
>
>
>

Please paste the output of 

pip show oslo.config

Thanks and Regards
Noorul

>
> On Sat, Aug 10, 2013 at 7:07 AM, Sean Dague  wrote:
>
>> Silly pip, trix are for kids.
>>
>> Ok, well:
>>
>> sudo pip install -I oslo.config==1.1.1
>>
>> then pip uninstall oslo.config
>>
>> On 08/09/2013 06:58 PM, Roman Gorodeckij wrote:
>>
>>> stack@hp:~/devstack$ sudo pip install oslo.config
>>> Requirement already satisfied (use --upgrade to upgrade): oslo.config in
>>> /opt/stack/oslo.config
>>> Requirement already satisfied (use --upgrade to upgrade): six in
>>> /usr/local/lib/python2.7/dist-packages (from oslo.config)
>>> Cleaning up...
>>> stack@hp:~/devstack$ sudo pip uninstall oslo.config
>>> Can't uninstall 'oslo.config'. No files were found to uninstall.
>>> stack@hp:~/devstack$
>>>
>>> stack@hp:~/devstack$ cat /tmp/devstack/log//screen-n-api.log
>>> | touch "/opt/stack/status/stack/n-api.failure"nova &&
>>> /usr/local/bin/nova-api |
>>>
>>> Traceback (most recent call last):
>>>File "/usr/local/bin/nova-api", line 6, in 
>>>  from nova.cmd.api import main
>>>File "/opt/stack/nova/nova/cmd/api.**py", line 29, in 
>>>  from nova import config
>>>File "/opt/stack/nova/nova/config.**py", line 22, in 
>>>  from nova.openstack.common.db.**sqlalchemy import session as
>>> db_session
>>>File 
>>> "/opt/stack/nova/nova/**openstack/common/db/**sqlalchemy/session.py",
>>> line 279, in 
>>>  deprecated_opts=[cfg.**DeprecatedOpt('sql_connection'**,
>>> AttributeError: 'module' object has no attribute 'DeprecatedOpt'
>>>
>>> nothing changed.
>>>
>>> On Aug 9, 2013, at 6:11 PM, Sean Dague  wrote:
>>>
>>>  This should be addressed by the latest devstack, however because we
 moved to oslo.config out of git, some install environments might still have
 oslo.config 1.1.0 somewhere, that pip no longer sees (so can't uninstall)

 sudo pip install oslo.config
 sudo pip uninstall oslo.config

 rerun devstack, see if it works.

 -Sean

 On 08/09/2013 09:14 AM, Roman Gorodeckij wrote:

> Tried to install devstack to dedicated server, ip's are defined.
>
> Here's the output:
>
> 13-08-09 09:06:28 ++ echo -ne '\015'
>
> 2013-08-09 09:06:28 + NL=$'\r'
> 2013-08-09 09:06:28 + screen -S stack -p n-api -X stuff 'cd
> /opt/stack/nova && /'sr/local/bin/nova-api || touch
> "/opt/stack/status/stack/n-**api.failure"
> 2013-08-09 09:06:28 + echo 'Waiting for nova-api to start...'
> 2013-08-09 09:06:28 Waiting for nova-api to start...
> 2013-08-09 09:06:28 + wait_for_service 60http://192.168.1.6:8774
> 2013-08-09 09:06:28 + local timeout=60
> 2013-08-09 09:06:28 + local url=http://192.168.1.6:8774
> 2013-08-09 09:06:28 + timeout 60 sh -c 'while ! http_proxy=
> https_proxy= curl -shttp://192.168.1.6:8774  >/dev/null; do sleep 1;
> done'
> 2013-08-09 09:07:28 + die 698 'nova-api did not start'
> 2013-08-09 09:07:28 + local exitcode=0
> stack@hp:~/devstack$ 2013-08-09 09:07:28 + set +o xtrace
>
> Here's the log:
>
> 2013-08-09 09:07:28 [ERROR] ./stack.sh:698 nova-api did not start
> stack@hp:~/devstack$ cat /tmp/devstack/log//screen-n-api.log
> t/stack/status/stack

Re: [openstack-dev] [keystone] Pagination

2013-08-12 Thread Dolph Mathews
On Mon, Aug 12, 2013 at 7:51 PM, Jamie Lennox  wrote:

> I'm not sure where it would make sense within the API to return the name
> of the page/per_page variables to the client that doesn't involve having
> already issued the call (ie returning the names within the links box
> means you've already issued the query).


I think you're missing the point (and you're right: that wouldn't make
sense at all). The API client follows links. The controller builds links.
The driver defines its own pagination interface to build related links.

If the client is forced to understand the pagination interface then the
abstraction is broken.
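
(To make the "client just follows links" point concrete -- a rough sketch,
assuming only that v3 collections carry a 'links' dict with a 'next' URL;
this is not actual keystoneclient code:)

    # Sketch: iterate a paginated v3 collection by following 'next' links.
    import requests

    def iter_users(endpoint, token):
        url = endpoint + '/v3/users'
        headers = {'X-Auth-Token': token}
        while url:
            body = requests.get(url, headers=headers).json()
            for user in body.get('users', []):
                yield user
            # The client never builds page/per_page/marker parameters itself;
            # it just follows whatever 'next' link the server constructed.
            url = (body.get('links') or {}).get('next')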


> If we standardize on the
> page/per_page combination


There doesn't need to be a "standard."


> then this can be handled at the controller
> level then the driver has permission to simply ignore it - or have the
> controller do the slicing after the driver has returned.
>

Correct. This sort of "default" pagination can be implemented by the
manager, and overridden by a specific driver.
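
(Roughly what that could look like -- a sketch only, with hypothetical
method names, not the actual Keystone manager code:)

    # Sketch: manager-level default slicing, overridden when the driver
    # implements its own pagination.
    def list_users(self, page=1, per_page=30):
        try:
            # A driver that supports pagination returns the requested page
            # directly (hypothetical method name).
            return self.driver.list_users_paginated(page, per_page)
        except NotImplementedError:
            # Default: fetch everything and slice in the manager.
            refs = self.driver.list_users()
            return refs[per_page * (page - 1):per_page * page]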


>
> To weigh in on the other question i think it should be checked that page
> is an integer, unless per_page is specified in which case default to 1.
>
> For example:
>
> GET /v3/users?page=
>
> I would expect to return all users as page is not set. However:
>
> GET /v3/users?per_page=30
>
> As per_page is useless without a page i think we can default to page=1.
>
> As an aside are we indexing from 1?
>

Rhetorical: why not index from -1 and count in base 64? This is all
arbitrary and can vary by driver.


>
> On Mon, 2013-08-12 at 19:05 -0500, Dolph Mathews wrote:
> > The way paginated links are defined by the v3 API (via `next` and
> > `previous` links), it can be completely up to the driver as to what
> > the query parameters look like. So, the client shouldn't have (nor
> > require) any knowledge of how to build query parameters for
> > pagination. It just needs to follow the links it's given.
> >
> >
> > 'page' and 'per_page' are trivial for the controller to implement (as
> > it's just slicing into an list... as shown)... so that's a reasonable
> > default behavior (for when a driver does not support pagination).
> > However, if the underlying driver DOES support pagination, it should
> > provide a way for the controller to ask for the query parameters
> > required to specify the next/previous links (so, one driver could
> > return `marker` and `limit` parameters while another only exposes the
> > `page` number, but not quantity `per_page`).
> >
> >
> > On Mon, Aug 12, 2013 at 4:34 PM, Henry Nash
> >  wrote:
> > Hi
> >
> >
> > I'm working on extending the pagination into the backends.
> >  Right now, we handle the pagination in the v3 controller
> > class, and in fact it is disabled right now and we return
> > the whole list irrespective of whether page/per-page is set in
> > the query string, e.g.:
> >
> >
> > def paginate(cls, context, refs):
> > """Paginates a list of references by page & per_page
> > query strings."""
> > # FIXME(dolph): client needs to support pagination
> > first
> > return refs
> >
> >
> > page = context['query_string'].get('page', 1)
> > per_page = context['query_string'].get('per_page', 30)
> > return refs[per_page * (page - 1):per_page * page]
> >
> >
> > I wonder both for the V3 controller (which still needs to
> > handle pagination for backends that do not support it) and the
> > backends that do, whether we could use whether 'page' is
> > defined in the query-string as an indicator as to whether we
> > should paginate or not?  That way clients who can handle it
> > can ask for it, those that don't will just get everything.
> >
> >
> > Henry
> >
> >
> >
> >
> >
> >
> > --
> >
> >
> > -Dolph
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Pagination

2013-08-12 Thread Adam Young
On 08/12/2013 09:22 PM, Miller, Mark M (EB SW Cloud - R&D - Corvallis) 
wrote:


The main reason I use user lists (i.e. keystone user-list) is to get 
the list of usernames/IDs for other keystone commands. I do not see 
the value of showing all of the users in an LDAP server when they are 
not part of the keystone database (i.e. do not have roles assigned to 
them). Performing a "keystone user-list" command against the HP 
Enterprise Directory locks up keystone for about 1 ½ hours in that it 
will not perform any other commands until it is done.  If it is 
decided that user lists are necessary, then at a minimum they need to 
be paged to return control back to keystone for another command.




We need a way to tell HP ED to limit the number of rows, and to do 
filtering.


We have a bug for the second part.  I'll open one for the limit.


Mark

From: Adam Young [mailto:ayo...@redhat.com]
Sent: Monday, August 12, 2013 5:27 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [keystone] Pagination

On 08/12/2013 05:34 PM, Henry Nash wrote:

Hi

I'm working on extending the pagination into the backends.  Right
now, we handle the pagination in the v3 controller class, and in
fact it is disabled right now and we return the whole list
irrespective of whether page/per-page is set in the query string,
e.g.:

Pagination is a broken concept. We should not be returning lists so 
long that we need to paginate. Instead, we should have query limits, 
and filters to refine the queries.
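
(As a concrete illustration of limits and filters instead of cursors -- a
sketch using python-ldap, with placeholder DNs and credentials; not
Keystone's actual identity driver code:)

    # Sketch: bound the query server-side with a filter and a size limit
    # rather than enumerating and paginating the whole subtree.
    import ldap

    conn = ldap.initialize('ldap://ldap.example.com')
    conn.simple_bind_s('cn=admin,dc=example,dc=com', 'secret')
    results = conn.search_ext_s(
        'ou=Users,dc=example,dc=com',
        ldap.SCOPE_SUBTREE,
        filterstr='(uid=alice*)',   # refine the query instead of listing all
        sizelimit=100)              # hard cap on the number of entries returned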


Some people are doing full user lists against LDAP.  I don't need to 
tell you how broken that is.  Why do we allow user-list at the Domain 
(or unscoped level)?


I'd argue that we should drop enumeration of objects in general, and 
certainly limit the number of results that come back.  Pagination in 
LDAP requires cursors, and thus continuous connections from Keystone to 
LDAP...this is not a scalable solution.


Do we really need this?



def paginate(cls, context, refs):
    """Paginates a list of references by page & per_page query strings."""
    # FIXME(dolph): client needs to support pagination first
    return refs

    page = context['query_string'].get('page', 1)
    per_page = context['query_string'].get('per_page', 30)
    return refs[per_page * (page - 1):per_page * page]

I wonder both for the V3 controller (which still needs to handle 
pagination for backends that do not support it) and the backends that 
do, whether we could use whether 'page' is defined in the 
query-string as an indicator as to whether we should paginate or not? 
 That way clients who can handle it can ask for it, those that 
don't will just get everything.


Henry




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Any one can help on nova source hacking?

2013-08-12 Thread Zhang, Li ((Victor,ES-OCTO-HCC-CHINA-BJ))
Dear stackers,

Recently, I have been planning to hack on the source of the nova project; it is really a 
huge project in OpenStack!

I got lost on where to start. Can anybody out there give me a hint on this?

Thanks!

Vic
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Re: Proposal for approving Auto HA development blueprint.

2013-08-12 Thread Konglingxian
Hi yongiman:

Your idea is good, but I think the auto HA operation is not OpenStack’s 
business. IMO, Ceilometer offers ‘monitoring’, Nova  offers ‘evacuation’, and 
you can combine them to realize HA operation.

So, I’m afraid I can’t understand the specific implementation details very well.

Any different opinions?

From: yongi...@gmail.com [mailto:yongi...@gmail.com]
Sent: August 12, 2013 20:52
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Proposal for approving Auto HA development blueprint.



Hi,

Now I am developing an auto-HA operation for VM high availability.

This function all happens automatically.

It needs other services, such as Ceilometer.

Ceilometer monitors the compute nodes.

When Ceilometer detects a broken compute node, it sends an API call to
the auto-HA API that Nova exposes.

When Nova receives the auto-HA call, it performs the auto-HA operation.

All auto-HA-enabled VMs that were running on the broken host are migrated to 
the auto-HA host, an extra compute node reserved solely for the auto-HA function.

Below are my blueprint and wiki page.

The wiki page is not yet complete; I am adding lots of information about this 
function now.

Thanks

https://blueprints.launchpad.net/nova/+spec/vm-auto-ha-when-host-broken

https://wiki.openstack.org/wiki/Autoha
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] can't install devstack - nova-api did not start

2013-08-12 Thread XINYU ZHAO
Name: oslo.config
Version: 1.2.0.a192.gc65d70c
Location: /opt/stack/new/oslo.config
Requires: six


On Mon, Aug 12, 2013 at 7:59 PM, Noorul Islam K M  wrote:

> XINYU ZHAO  writes:
>
> > Hi Sean
> > I uninstalled the oslo.config 1.1.1 version and run devstack, but this
> time
> > it stopped at
> >
> > 2013-08-09 18:55:16 + /opt/stack/new/keystone/bin/keystone-manage db_sync
> > 2013-08-09 18:55:16 Traceback (most recent call last):
> > 2013-08-09 18:55:16   File
> > "/opt/stack/new/keystone/bin/keystone-manage", line 16, in 
> > 2013-08-09 18:55:16 from keystone import cli
> > 2013-08-09 18:55:16   File "/opt/stack/new/keystone/keystone/cli.py",
> > line 23, in 
> > 2013-08-09 18:55:16 from oslo.config import cfg
> > 2013-08-09 18:55:16 ImportError: No module named config
> > 2013-08-09 18:55:16 + [[ PKI == \P\K\I ]]
> >
> >
> > An unexpected error prevented the server from fulfilling your request.
> > (ProgrammingError) (1146, "Table 'keystone.service' doesn't exist")
> 'INSERT
> > INTO service (id, type, extra) VALUES (%s, %s, %s)'
> > ('32578395572b4cf2a70ba70b6031cd1d', 'identity', '{"name": "keystone",
> > "description": "Keystone Identity Service"}') (HTTP 500)
> > 2013-08-12 18:36:45 + KEYSTONE_SERVICE=
> > 2013-08-12 18:36:45 + keystone endpoint-create --region RegionOne
> > --service_id --publicurl http://127.0.0.1:5000/v2.0 --adminurl
> > http://127.0.0.1:35357/v2.0 --internalurl http://127.0.0.1:5000/v2.0
> >
> > it seems that  oslo.config was not properly imported after i re-installed
> > it.
> > but when i list the pip installations, it is there.
> >
> > /usr/local/bin/pip freeze |grep oslo.config
> > -e git+
> >
> http://10.145.81.234/openstackci/gerrit/p/oslo.config@c65d70c02494805ce50b88f343f8fafe7a521724#egg=oslo.config-master
> > root@devstack-4:/# /usr/local/bin/pip search oslo.config
> > oslo.config   - Oslo configuration API
> >   INSTALLED: 1.2.0.a192.gc65d70c
> >   LATEST:1.1.1
> >
> >
> >
>
> Please paste the output of
>
> pip show oslo.config
>
> Thanks and Regards
> Noorul
>
> >
> > On Sat, Aug 10, 2013 at 7:07 AM, Sean Dague  wrote:
> >
> >> Silly pip, trix are for kids.
> >>
> >> Ok, well:
> >>
> >> sudo pip install -I oslo.config==1.1.1
> >>
> >> then pip uninstall oslo.config
> >>
> >> On 08/09/2013 06:58 PM, Roman Gorodeckij wrote:
> >>
> >>> stack@hp:~/devstack$ sudo pip install oslo.config
> >>> Requirement already satisfied (use --upgrade to upgrade): oslo.config
> in
> >>> /opt/stack/oslo.config
> >>> Requirement already satisfied (use --upgrade to upgrade): six in
> >>> /usr/local/lib/python2.7/dist-packages (from oslo.config)
> >>> Cleaning up...
> >>> stack@hp:~/devstack$ sudo pip uninstall oslo.config
> >>> Can't uninstall 'oslo.config'. No files were found to uninstall.
> >>> stack@hp:~/devstack$
> >>>
> >>> stack@hp:~/devstack$ cat /tmp/devstack/log//screen-n-api.log
> >>> | touch "/opt/stack/status/stack/n-api.failure"nova &&
> >>> /usr/local/bin/nova-api |
> >>>
> >>> Traceback (most recent call last):
> >>>File "/usr/local/bin/nova-api", line 6, in 
> >>>  from nova.cmd.api import main
> >>>File "/opt/stack/nova/nova/cmd/api.**py", line 29, in 
> >>>  from nova import config
> >>>File "/opt/stack/nova/nova/config.**py", line 22, in 
> >>>  from nova.openstack.common.db.**sqlalchemy import session as
> >>> db_session
> >>>File
> "/opt/stack/nova/nova/**openstack/common/db/**sqlalchemy/session.py",
> >>> line 279, in 
> >>>  deprecated_opts=[cfg.**DeprecatedOpt('sql_connection'**,
> >>> AttributeError: 'module' object has no attribute 'DeprecatedOpt'
> >>>
> >>> nothing changed.
> >>>
> >>> On Aug 9, 2013, at 6:11 PM, Sean Dague  wrote:
> >>>
> >>>  This should be addressed by the latest devstack, however because we
>  moved to oslo.config out of git, some install environments might
> still have
>  oslo.config 1.1.0 somewhere, that pip no longer sees (so can't
> uninstall)
> 
>  sudo pip install oslo.config
>  sudo pip uninstall oslo.config
> 
>  rerun devstack, see if it works.
> 
>  -Sean
> 
>  On 08/09/2013 09:14 AM, Roman Gorodeckij wrote:
> 
> > Tried to install devstack to dedicated server, ip's are defined.
> >
> > Here's the output:
> >
> > 13-08-09 09:06:28 ++ echo -ne '\015'
> >
> > 2013-08-09 09:06:28 + NL=$'\r'
> > 2013-08-09 09:06:28 + screen -S stack -p n-api -X stuff 'cd
> > /opt/stack/nova && /'sr/local/bin/nova-api || touch
> > "/opt/stack/status/stack/n-**api.failure"
> > 2013-08-09 09:06:28 + echo 'Waiting for nova-api to start...'
> > 2013-08-09 09:06:28 Waiting for nova-api to start...
> > 2013-08-09 09:06:28 + wait_for_service 60http://192.168.1.6:8774
> > 2013-08-09 09:06:28 + local timeout=60
> > 2013-08-09 09:06:28 + local url=http://192.168.1.6:8774
> > 2013-08-09 09:06:28 + timeout 60 sh -c 'while ! http_proxy=
> > https_proxy= curl -shttp://192.168.1.6:87

Re: [openstack-dev] Any one can help on nova source hacking?

2013-08-12 Thread Noorul Islam K M
"Zhang, Li ((Victor,ES-OCTO-HCC-CHINA-BJ))"  writes:

> Dear stackers,
>
> Recently, I am planning to hack the source of the nova project, it is really 
> a huge project in openstack!
>
> I got lost in  where to start from, anybody out there can give a hint
> on this?

I also started recently.

1. The first thing I did was to go through the 'Contribute to OpenStack' section
   in the wiki page https://wiki.openstack.org/wiki/Main_Page

2. docs.openstack.org also has a lot of documentation.

3. For nova https://launchpad.net/nova page has links to more documents.
   If you are looking at low level API documentation here it is 

   http://docs.openstack.org/developer/nova/devref/index.html

4. The following links were also helpful

   http://www.sandywalsh.com/2012/04/openstack-nova-internals-pt1-overview.html

   http://www.sandywalsh.com/2012/09/openstack-nova-internals-pt2-services.html

   After this there were no follow-up articles.

5. This document is very high level but has a comprehensive diagram in
   it 

   http://www.sandywalsh.com/2012/09/openstack-nova-internals-pt2-services.html

I hope this helps. If you find anything interesting, let me know.

Thanks and Regards
Noorul

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] can't install devstack - nova-api did not start

2013-08-12 Thread Noorul Islam K M
XINYU ZHAO  writes:

> Name: oslo.config
> Version: 1.2.0.a192.gc65d70c
> Location: /opt/stack/new/oslo.config
> Requires: six
>

I had similar issues with pbr

This is what I did

pulled latest source from github

git remote update
git pull --ff-only origin master

sudo python setup.py develop

Thanks and Regards
Noorul


>
> On Mon, Aug 12, 2013 at 7:59 PM, Noorul Islam K M  wrote:
>
>> XINYU ZHAO  writes:
>>
>> > Hi Sean
>> > I uninstalled the oslo.config 1.1.1 version and run devstack, but this
>> time
>> > it stopped at
>> >
>> > 2013-08-09 18:55:16 + /opt/stack/new/keystone/bin/keystone-manage db_sync
>> > 2013-08-09 18:55:16 Traceback (most recent call last):
>> > 2013-08-09 18:55:16   File
>> > "/opt/stack/new/keystone/bin/keystone-manage", line 16, in 
>> > 2013-08-09 18:55:16 from keystone import cli
>> > 2013-08-09 18:55:16   File "/opt/stack/new/keystone/keystone/cli.py",
>> > line 23, in 
>> > 2013-08-09 18:55:16 from oslo.config import cfg
>> > 2013-08-09 18:55:16 ImportError: No module named config
>> > 2013-08-09 18:55:16 + [[ PKI == \P\K\I ]]
>> >
>> >
>> > An unexpected error prevented the server from fulfilling your request.
>> > (ProgrammingError) (1146, "Table 'keystone.service' doesn't exist")
>> 'INSERT
>> > INTO service (id, type, extra) VALUES (%s, %s, %s)'
>> > ('32578395572b4cf2a70ba70b6031cd1d', 'identity', '{"name": "keystone",
>> > "description": "Keystone Identity Service"}') (HTTP 500)
>> > 2013-08-12 18:36:45 + KEYSTONE_SERVICE=
>> > 2013-08-12 18:36:45 + keystone endpoint-create --region RegionOne
>> > --service_id --publicurl http://127.0.0.1:5000/v2.0 --adminurl
>> > http://127.0.0.1:35357/v2.0 --internalurl http://127.0.0.1:5000/v2.0
>> >
>> > it seems that  oslo.config was not properly imported after i re-installed
>> > it.
>> > but when i list the pip installations, it is there.
>> >
>> > /usr/local/bin/pip freeze |grep oslo.config
>> > -e git+
>> >
>> http://10.145.81.234/openstackci/gerrit/p/oslo.config@c65d70c02494805ce50b88f343f8fafe7a521724#egg=oslo.config-master
>> > root@devstack-4:/# /usr/local/bin/pip search oslo.config
>> > oslo.config   - Oslo configuration API
>> >   INSTALLED: 1.2.0.a192.gc65d70c
>> >   LATEST:1.1.1
>> >
>> >
>> >
>>
>> Please paste the output of
>>
>> pip show oslo.config
>>
>> Thanks and Regards
>> Noorul
>>
>> >
>> > On Sat, Aug 10, 2013 at 7:07 AM, Sean Dague  wrote:
>> >
>> >> Silly pip, trix are for kids.
>> >>
>> >> Ok, well:
>> >>
>> >> sudo pip install -I oslo.config==1.1.1
>> >>
>> >> then pip uninstall oslo.config
>> >>
>> >> On 08/09/2013 06:58 PM, Roman Gorodeckij wrote:
>> >>
>> >>> stack@hp:~/devstack$ sudo pip install oslo.config
>> >>> Requirement already satisfied (use --upgrade to upgrade): oslo.config
>> in
>> >>> /opt/stack/oslo.config
>> >>> Requirement already satisfied (use --upgrade to upgrade): six in
>> >>> /usr/local/lib/python2.7/dist-packages (from oslo.config)
>> >>> Cleaning up...
>> >>> stack@hp:~/devstack$ sudo pip uninstall oslo.config
>> >>> Can't uninstall 'oslo.config'. No files were found to uninstall.
>> >>> stack@hp:~/devstack$
>> >>>
>> >>> stack@hp:~/devstack$ cat /tmp/devstack/log//screen-n-api.log
>> >>> | touch "/opt/stack/status/stack/n-api.failure"nova &&
>> >>> /usr/local/bin/nova-api |
>> >>>
>> >>> Traceback (most recent call last):
>> >>>File "/usr/local/bin/nova-api", line 6, in 
>> >>>  from nova.cmd.api import main
>> >>>File "/opt/stack/nova/nova/cmd/api.**py", line 29, in 
>> >>>  from nova import config
>> >>>File "/opt/stack/nova/nova/config.**py", line 22, in 
>> >>>  from nova.openstack.common.db.**sqlalchemy import session as
>> >>> db_session
>> >>>File
>> "/opt/stack/nova/nova/**openstack/common/db/**sqlalchemy/session.py",
>> >>> line 279, in 
>> >>>  deprecated_opts=[cfg.**DeprecatedOpt('sql_connection'**,
>> >>> AttributeError: 'module' object has no attribute 'DeprecatedOpt'
>> >>>
>> >>> nothing changed.
>> >>>
>> >>> On Aug 9, 2013, at 6:11 PM, Sean Dague  wrote:
>> >>>
>> >>>  This should be addressed by the latest devstack, however because we
>>  moved to oslo.config out of git, some install environments might
>> still have
>>  oslo.config 1.1.0 somewhere, that pip no longer sees (so can't
>> uninstall)
>> 
>>  sudo pip install oslo.config
>>  sudo pip uninstall oslo.config
>> 
>>  rerun devstack, see if it works.
>> 
>>  -Sean
>> 
>>  On 08/09/2013 09:14 AM, Roman Gorodeckij wrote:
>> 
>> > Tried to install devstack to dedicated server, ip's are defined.
>> >
>> > Here's the output:
>> >
>> > 13-08-09 09:06:28 ++ echo -ne '\015'
>> >
>> > 2013-08-09 09:06:28 + NL=$'\r'
>> > 2013-08-09 09:06:28 + screen -S stack -p n-api -X stuff 'cd
>> > /opt/stack/nova && /'sr/local/bin/nova-api || touch
>> > "/opt/stack/status/stack/n-**api.failure"
>> > 2013-08-09 09:06:28 + echo 'Waiting for nova-api to start...'
>> 

Re: [openstack-dev] Any one can help on nova source hacking?

2013-08-12 Thread Zhang, Li ((Victor,ES-OCTO-HCC-CHINA-BJ))
Hi Noorul,

Thanks for sharing; I will take a look at the links you provided.

Sure, I will share anything interesting during my hacking.

Thanks!

Vic



-Original Message-
From: Noorul Islam K M [mailto:noo...@noorul.com] 
Sent: Tuesday, August 13, 2013 12:21
To: Zhang, Li ((Victor,ES-OCTO-HCC-CHINA-BJ))
Cc: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Any one can help on nova source hacking?

"Zhang, Li ((Victor,ES-OCTO-HCC-CHINA-BJ))"  writes:

> Dear stackers,
>
> Recently, I am planning to hack the source of the nova project, it is really 
> a huge project in openstack!
>
> I got lost in  where to start from, anybody out there can give a hint 
> on this?

I also started recently.

1. First thing I did is to go through 'Contribute to OpenStack' section
   in the wiki page https://wiki.openstack.org/wiki/Main_Page

2. docs.openstack.org also has several documentations.

3. For nova https://launchpad.net/nova page has links to more documents.
   If you are looking at low level API documentation here it is 

   http://docs.openstack.org/developer/nova/devref/index.html

4. The following links were also helpful

   http://www.sandywalsh.com/2012/04/openstack-nova-internals-pt1-overview.html

   http://www.sandywalsh.com/2012/09/openstack-nova-internals-pt2-services.html

   After this there were no follow-up articles.

5. This document is very high level but has a comprehensive diagram in
   it 

   http://www.sandywalsh.com/2012/09/openstack-nova-internals-pt2-services.html

I hope this helps. If you find anything interesting, let me know.

Thanks and Regards
Noorul

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for approving Auto HA development blueprint.

2013-08-12 Thread Alex Glikson
Agree. Some enhancements to Nova might still be required (e.g., to handle 
resource reservations, so that there is enough capacity), but the 
end-to-end framework should probably live outside the existing services, 
talking to Nova, Ceilometer and potentially other components 
(maybe Cinder, Neutron, Ironic), and 'orchestrating' failure detection, 
fencing and recovery. 
Probably worth a discussion at the upcoming summit.
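
(For illustration only -- a very rough sketch of the kind of external glue
being described, driving Nova's evacuate call from python-novaclient; the
credentials, host names and the exact evacuate signature here are
assumptions, not a tested recipe:)

    # Sketch: once an external monitor decides a compute host is dead,
    # evacuate every instance that was running on it to a spare host.
    from novaclient.v1_1 import client

    nova = client.Client('admin', 'secret', 'admin',
                         'http://keystone.example.com:5000/v2.0')

    def recover_host(failed_host, spare_host):
        servers = nova.servers.list(
            search_opts={'host': failed_host, 'all_tenants': 1})
        for server in servers:
            # Rebuild the instance on the spare host; whether storage is
            # shared depends on the deployment.
            nova.servers.evacuate(server, spare_host, on_shared_storage=True)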


Regards,
Alex



From:   Konglingxian 
To: OpenStack Development Mailing List 
, 
Date:   13/08/2013 07:07 AM
Subject:[openstack-dev] Re: Proposal for approving Auto HA 
development blueprint.



Hi yongiman:
 
Your idea is good, but I think the auto HA operation is not OpenStack’s 
business. IMO, Ceilometer offers ‘monitoring’, Nova  offers ‘evacuation’, 
and you can combine them to realize HA operation.
 
So, I’m afraid I can’t understand the specific implementation details very 
well.
 
Any different opinions?
 
From: yongi...@gmail.com [mailto:yongi...@gmail.com]
Sent: August 12, 2013 20:52
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Proposal for approving Auto HA development 
blueprint.
 
 
 
Hi,
 
Now I am developing an auto-HA operation for VM high availability.
 
This function all happens automatically.
 
It needs other services, such as Ceilometer.
 
Ceilometer monitors the compute nodes.
 
When Ceilometer detects a broken compute node, it sends an API call to 
the auto-HA API that Nova exposes.
 
When Nova receives the auto-HA call, it performs the auto-HA operation.
 
All auto-HA-enabled VMs that were running on the broken host are migrated 
to the auto-HA host, an extra compute node reserved solely for the auto-HA 
function.
 
Below are my blueprint and wiki page.
 
The wiki page is not yet complete; I am adding lots of information about 
this function now.
 
Thanks
 
https://blueprints.launchpad.net/nova/+spec/vm-auto-ha-when-host-broken
 
https://wiki.openstack.org/wiki/Autoha
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Proposal to support new Cinder driver for CloudByte's Elastistor

2013-08-12 Thread Amit Das
Hi Team,

We have implemented a Cinder driver for our QoS-aware storage solution
(CloudByte Elastistor).

We would like to integrate this driver code with the next version of
OpenStack (Havana).

Please let us know the approval process to be followed for this new
driver.

Regards,
Amit
*CloudByte Inc.* 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] can't install devstack - nova-api did not start

2013-08-12 Thread Roman Gorodeckij
Updating devstack to the latest revision solved my problem. 

Sent from my iPhone

On 2013 Rugp. 13, at 05:00, XINYU ZHAO  wrote:

> Hi Sean
> I uninstalled the oslo.config 1.1.1 version and run devstack, but this time 
> it stopped at 
> 
> 2013-08-09 18:55:16 + /opt/stack/new/keystone/bin/keystone-manage db_sync
> 2013-08-09 18:55:16 Traceback (most recent call last):
> 2013-08-09 18:55:16   File "/opt/stack/new/keystone/bin/keystone-manage", 
> line 16, in 
> 2013-08-09 18:55:16 from keystone import cli
> 2013-08-09 18:55:16   File "/opt/stack/new/keystone/keystone/cli.py", line 
> 23, in 
> 2013-08-09 18:55:16 from oslo.config import cfg
> 2013-08-09 18:55:16 ImportError: No module named config
> 2013-08-09 18:55:16 + [[ PKI == \P\K\I ]]
> 
> An unexpected error prevented the server from fulfilling your request. 
> (ProgrammingError) (1146, "Table 'keystone.service' doesn't exist") 'INSERT 
> INTO service (id, type, extra) VALUES (%s, %s, %s)' 
> ('32578395572b4cf2a70ba70b6031cd1d', 'identity', '{"name": "keystone", 
> "description": "Keystone Identity Service"}') (HTTP 500)
> 2013-08-12 18:36:45 + KEYSTONE_SERVICE=
> 2013-08-12 18:36:45 + keystone endpoint-create --region RegionOne 
> --service_id --publicurl http://127.0.0.1:5000/v2.0 
> --adminurlhttp://127.0.0.1:35357/v2.0 --internalurl http://127.0.0.1:5000/v2.0
> 
> it seems that  oslo.config was not properly imported after i re-installed it. 
> but when i list the pip installations, it is there. 
> 
> /usr/local/bin/pip freeze |grep oslo.config
> -e 
> git+http://10.145.81.234/openstackci/gerrit/p/oslo.config@c65d70c02494805ce50b88f343f8fafe7a521724#egg=oslo.config-master
> root@devstack-4:/# /usr/local/bin/pip search oslo.config
> oslo.config   - Oslo configuration API
>   INSTALLED: 1.2.0.a192.gc65d70c
>   LATEST:1.1.1
> 
> 
> 
> On Sat, Aug 10, 2013 at 7:07 AM, Sean Dague  wrote:
>> Silly pip, trix are for kids.
>> 
>> Ok, well:
>> 
>> sudo pip install -I oslo.config==1.1.1
>> 
>> then pip uninstall oslo.config
>> 
>> On 08/09/2013 06:58 PM, Roman Gorodeckij wrote:
>>> stack@hp:~/devstack$ sudo pip install oslo.config
>>> Requirement already satisfied (use --upgrade to upgrade): oslo.config in 
>>> /opt/stack/oslo.config
>>> Requirement already satisfied (use --upgrade to upgrade): six in 
>>> /usr/local/lib/python2.7/dist-packages (from oslo.config)
>>> Cleaning up...
>>> stack@hp:~/devstack$ sudo pip uninstall oslo.config
>>> Can't uninstall 'oslo.config'. No files were found to uninstall.
>>> stack@hp:~/devstack$
>>> 
>>> stack@hp:~/devstack$ cat /tmp/devstack/log//screen-n-api.log
>>> | touch "/opt/stack/status/stack/n-api.failure"nova && 
>>> /usr/local/bin/nova-api |
>>> 
>>> Traceback (most recent call last):
>>>File "/usr/local/bin/nova-api", line 6, in 
>>>  from nova.cmd.api import main
>>>File "/opt/stack/nova/nova/cmd/api.py", line 29, in 
>>>  from nova import config
>>>File "/opt/stack/nova/nova/config.py", line 22, in 
>>>  from nova.openstack.common.db.sqlalchemy import session as db_session
>>>File "/opt/stack/nova/nova/openstack/common/db/sqlalchemy/session.py", 
>>> line 279, in 
>>>  deprecated_opts=[cfg.DeprecatedOpt('sql_connection',
>>> AttributeError: 'module' object has no attribute 'DeprecatedOpt'
>>> 
>>> nothing changed.
>>> 
>>> On Aug 9, 2013, at 6:11 PM, Sean Dague  wrote:
>>> 
 This should be addressed by the latest devstack, however because we moved 
 to oslo.config out of git, some install environments might still have 
 oslo.config 1.1.0 somewhere, that pip no longer sees (so can't uninstall)
 
 sudo pip install oslo.config
 sudo pip uninstall oslo.config
 
 rerun devstack, see if it works.
 
 -Sean
 
 On 08/09/2013 09:14 AM, Roman Gorodeckij wrote:
> Tried to install devstack to dedicated server, ip's are defined.
> 
> Here's the output:
> 
> 13-08-09 09:06:28 ++ echo -ne '\015'
> 
> 2013-08-09 09:06:28 + NL=$'\r'
> 2013-08-09 09:06:28 + screen -S stack -p n-api -X stuff 'cd 
> /opt/stack/nova && /'sr/local/bin/nova-api || touch 
> "/opt/stack/status/stack/n-api.failure"
> 2013-08-09 09:06:28 + echo 'Waiting for nova-api to start...'
> 2013-08-09 09:06:28 Waiting for nova-api to start...
> 2013-08-09 09:06:28 + wait_for_service 60http://192.168.1.6:8774
> 2013-08-09 09:06:28 + local timeout=60
> 2013-08-09 09:06:28 + local url=http://192.168.1.6:8774
> 2013-08-09 09:06:28 + timeout 60 sh -c 'while ! http_proxy= https_proxy= 
> curl -shttp://192.168.1.6:8774  >/dev/null; do sleep 1; done'
> 2013-08-09 09:07:28 + die 698 'nova-api did not start'
> 2013-08-09 09:07:28 + local exitcode=0
> stack@hp:~/devstack$ 2013-08-09 09:07:28 + set +o xtrace
> 
> Here's the log:
> 
> 2013-08-09 09:07:28 [ERROR] ./stack.sh:698 nova-api did not start
> stack@hp:~/devstack$ cat /tmp/devstack/log//screen-n-api.log
> t/s