Re: [openstack-dev] [magnum] Issue on going through the quickstart guide

2015-02-22 Thread Jay Lau
Can you check the kubelet log on your minions? It seems the container failed
to start; there might be something wrong with your minion nodes. Thanks.

2015-02-22 15:08 GMT+08:00 Hongbin Lu hongbin...@gmail.com:

 Hi all,

 I tried to go through the new redis example in the quickstart guide [1],
 but was not able to get through. I was blocked when connecting to the redis
 slave container:

 $ docker exec -i -t $REDIS_ID redis-cli
 Could not connect to Redis at 127.0.0.1:6379: Connection refused

 Here is the container log:

 $ docker logs $REDIS_ID
 Error: Server closed the connection
 Failed to find master.

 It looks like the redis master disappeared at some point. I checked the
 status about every minute. Below is the output.

 $ kubectl get pod
 NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
 51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis-sentinel,redis-sentinel=true,role=sentinel  Pending
 redis-master                           kubernetes/redis:v1   10.0.0.4/   name=redis,redis-sentinel=true,role=master             Pending
                                        kubernetes/redis:v1
 512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Pending

 $ kubectl get pod
 NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
 512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Running
 51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis-sentinel,redis-sentinel=true,role=sentinel  Running
 redis-master                           kubernetes/redis:v1   10.0.0.4/   name=redis,redis-sentinel=true,role=master             Running
                                        kubernetes/redis:v1

 $ kubectl get pod
 NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
 redis-master                           kubernetes/redis:v1   10.0.0.4/   name=redis,redis-sentinel=true,role=master             Running
                                        kubernetes/redis:v1
 512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Failed
 51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis-sentinel,redis-sentinel=true,role=sentinel  Running
 233fa7d1-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Running

 $ kubectl get pod
 NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
 512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Running
 51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis-sentinel,redis-sentinel=true,role=sentinel  Running
 233fa7d1-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Running
 3b164230-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.4/   name=redis-sentinel,redis-sentinel=true,role=sentinel  Pending

 Is anyone able to reproduce the problem above? If yes, I am going to file
 a bug.

 Thanks,
 Hongbin

 [1]
 https://github.com/stackforge/magnum/blob/master/doc/source/dev/dev-quickstart.rst#exercising-the-services-using-devstack





-- 
Thanks,

Jay Lau (Guangya Liu)


Re: [openstack-dev] [all][oslo] Dealing with database connection sharing issues

2015-02-22 Thread Yuriy Taraday
On Sun Feb 22 2015 at 6:27:16 AM Michael Bayer mba...@redhat.com wrote:




  On Feb 21, 2015, at 9:49 PM, Joshua Harlow harlo...@outlook.com wrote:
 
  Some comments/questions inline...
 
  Mike Bayer wrote:
 
  Yuriy Taraday yorik@gmail.com wrote:
 
  On Fri Feb 20 2015 at 9:14:30 PM Joshua Harlow harlo...@outlook.com
 wrote:
  This feels like something we could do in the service manager base
 class,
  maybe by adding a post fork hook or something.
  +1 to that.
 
  I think it'd be nice to have the service __init__() maybe be something
 like:
 
     def __init__(self, threads=1000, prefork_callbacks=None,
                  postfork_callbacks=None):
         self.postfork_callbacks = postfork_callbacks or []
         self.prefork_callbacks = prefork_callbacks or []
         # always ensure we are closing any left-open fds last...
         self.prefork_callbacks.append(self._close_descriptors)
         ...
 
  (you must've meant postfork_callbacks.append)
 
  Note that the multiprocessing module already has a
  `multiprocessing.util.register_after_fork`
 method that allows registering a callback that will be called every time a
 Process object is run. If we remove the explicit use of `os.fork` in
 oslo.service (replace it with the Process class) we'll be able to specify any
 after-fork callbacks that libraries need.
  For example, EngineFacade could register a `pool.dispose()` callback
 there (it should have some proper finalization logic though).
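 
  As a rough sketch of that idea (assuming the fork happens via a
 multiprocessing.Process; the engine setup here is illustrative, not
 oslo.db's actual API):
 
     import multiprocessing.util
 
     import sqlalchemy
 
     def _dispose_pool(engine):
         # Drop connections inherited from the parent so the child opens
         # its own rather than sharing the parent's sockets.
         engine.pool.dispose()
 
     engine = sqlalchemy.create_engine("sqlite://")
 
     # multiprocessing runs _dispose_pool(engine) in every child Process it
     # starts; a bare os.fork() bypasses this hook, which is why replacing
     # os.fork with Process in oslo.service matters here.
     multiprocessing.util.register_after_fork(engine, _dispose_pool)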
 
  +1 to use Process and the callback system for required initialization
 steps
  and so forth, however I don’t know that an oslo lib should silently
 register
  global events on the assumption of how its constructs are to be used.
 
  I think whatever Oslo library is responsible for initiating the
 Process/fork
  should be where it ensures that resources from other Oslo libraries are
 set
  up correctly. So oslo.service might register its own event handler with
 
  Sounds like some kind of new entrypoint + discovery service that
 oslo.service (eck, can we name it something else, something that makes it
 usable for others on pypi...) would need to plug in to. It seems
 like this is a general python problem (who is to say that only oslo
 libraries use resources that need to be fixed/closed after forking); are
 there any recommendations that the python community has in general for this
 (aka, a common entrypoint *all* libraries export that allows them to do
 things when a fork is about to occur)?
 
  oslo.db such that it gets notified of new database engines so that it
 can
  associate a disposal with it; it would do something similar for
  oslo.messaging and other systems that use file handles.   The end
  result might be that it uses register_after_fork(), but the point is
 that
  oslo.db.sqlalchemy.create_engine doesn’t do this; it lets oslo.service
  apply a hook so that oslo.service can do it on behalf of oslo.db.
 
  Sounds sort of like global state/an 'open resource' pool that each
 library needs to maintain internally that tracks how
 applications/other libraries are using it; that feels sorta odd IMHO.
 
  Wouldn't that mean libraries that provide back resource objects, or
 resource-containing objects..., for others to use would now need to capture
 who is using what (weakref pools?) to track which resources are
 being used and by whom (so that they can fix/close them on fork); not every
 library has a pool (like sqlalchemy afaik does) to track these kind(s) of
 things (for better or worse...). And what if those libraries use other
 libraries that use resources (who owns what?); it seems like this just gets
 very messy/impractical pretty quickly once you start using any kind of 3rd
 party library that doesn't follow the same pattern... (which brings me back
 to the question of isn't there a common python way/entrypoint that deals
 with forks that works better than ^).
 
 
  So, instead of oslo.service cutting through and closing out the file
  descriptors from underneath other oslo libraries that opened them, we
 set up
  communication channels between oslo libs that maintain a consistent
 layer of
  abstraction, and instead of making all libraries responsible for the
 side
  effects that might be introduced from other oslo libraries, we make the
  side-effect-causing library the point at which those effects are
  ameliorated as a service to other oslo libraries.   This allows us to
 keep
  the knowledge of what it means to use “multiprocessing” in one
  place, rather than spreading out its effects.
 
  If only we didn't have all those other libraries[1] that people use too
 (that afaik highly likely also have resources they open); so even with
 getting oslo.db and oslo.messaging into this kind of pattern, we are still
 left with the other 200+ that aren't/haven't been following this pattern ;-)

 I'm only trying to solve well known points like this one between two Oslo
 libraries.   Obviously trying to multiply out this pattern times all
 libraries, 

Re: [openstack-dev] [magnum] Issue on going through the quickstart guide

2015-02-22 Thread Hongbin Lu
Thanks Jay,

I checked the kubelet log. There are a lot of "Watch closed" errors like the
ones below. Here is the full log: http://fpaste.org/188964/46261561/ .

Status:Failure, Message:unexpected end of JSON input, Reason:
Status:Failure, Message:501: All the given peers are not reachable

Please note that my environment was set up by following the quickstart
guide. It seems that all the kube components were running (checked using the
systemctl status command), and all nodes can ping each other. Any further
suggestions?

Thanks,
Hongbin


On Sun, Feb 22, 2015 at 3:58 AM, Jay Lau jay.lau@gmail.com wrote:

 Can you check the kubelet log on your minions? It seems the container failed
 to start; there might be something wrong with your minion nodes. Thanks.

 2015-02-22 15:08 GMT+08:00 Hongbin Lu hongbin...@gmail.com:

 Hi all,

 I tried to go through the new redis example in the quickstart guide [1],
 but was not able to get through. I was blocked when connecting to the redis
 slave container:

 $ docker exec -i -t $REDIS_ID redis-cli
 Could not connect to Redis at 127.0.0.1:6379: Connection refused

 Here is the container log:

 $ docker logs $REDIS_ID
 Error: Server closed the connection
 Failed to find master.

 It looks like the redis master disappeared at some point. I checked the
 status about every minute. Below is the output.

 $ kubectl get pod
 NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
 51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis-sentinel,redis-sentinel=true,role=sentinel  Pending
 redis-master                           kubernetes/redis:v1   10.0.0.4/   name=redis,redis-sentinel=true,role=master             Pending
                                        kubernetes/redis:v1
 512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Pending

 $ kubectl get pod
 NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
 512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Running
 51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis-sentinel,redis-sentinel=true,role=sentinel  Running
 redis-master                           kubernetes/redis:v1   10.0.0.4/   name=redis,redis-sentinel=true,role=master             Running
                                        kubernetes/redis:v1

 $ kubectl get pod
 NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
 redis-master                           kubernetes/redis:v1   10.0.0.4/   name=redis,redis-sentinel=true,role=master             Running
                                        kubernetes/redis:v1
 512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Failed
 51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis-sentinel,redis-sentinel=true,role=sentinel  Running
 233fa7d1-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Running

 $ kubectl get pod
 NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
 512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Running
 51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis-sentinel,redis-sentinel=true,role=sentinel  Running
 233fa7d1-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Running
 3b164230-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.4/   name=redis-sentinel,redis-sentinel=true,role=sentinel  Pending

 Is anyone able to reproduce the problem above? If yes, I am going to file
 a bug.

 Thanks,
 Hongbin

 [1]
 https://github.com/stackforge/magnum/blob/master/doc/source/dev/dev-quickstart.rst#exercising-the-services-using-devstack





 --
 Thanks,

 Jay Lau (Guangya Liu)


Re: [openstack-dev] [nova][neutron] Passthrough of PF's from SR-IOV capable devices.

2015-02-22 Thread Irena Berezovsky
Please see inline

On Thu, Feb 19, 2015 at 4:43 PM, Steve Gordon sgor...@redhat.com wrote:

 - Original Message -
  From: Irena Berezovsky irenab@gmail.com
  To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org,
 
  On Thu, Feb 5, 2015 at 9:01 PM, Steve Gordon sgor...@redhat.com wrote:
 
   - Original Message -
From: Przemyslaw Czesnowicz przemyslaw.czesnow...@intel.com
To: OpenStack Development Mailing List (not for usage questions) 
   openstack-dev@lists.openstack.org
   
Hi
   
 1) If the device is a normal PCI device, but is a network card,
 am I
 still able to
 take advantage of the advanced syntax added circa Juno to define
 the
 relationship between that card and a given physical network so
 that the
 scheduler can place accordingly (and does this still use the ML2
 mech
 driver for
 SR-IOV even though it's a normal device)?
   
Actually libvirt won't allow using normal PCI devices for network
interfaces into a VM.
The following error is thrown by libvirt 1.2.9.1:
libvirtError: unsupported configuration: Interface type hostdev is
currently supported on SR-IOV Virtual Functions only

I don't know why libvirt prohibits that. But we should prohibit it on
the OpenStack side as well.
  
   This is true for hostdev style configuration, normal PCI devices
 are
   still valid in Libvirt for passthrough using hostdev though. The
 former
   having been specifically created for handling passthrough of VFs, the
   latter being the more generic passthrough functionality and what was
 used
   with the original PCI passthrough functionality introduced circa
 Havana.
  
   I guess what I'm really asking in this particular question is what is
 the
   intersection of these two implementations - if any, as on face value it
   seems that to passthrough a physical PCI device I must use the older
 syntax
   and thus can't have the scheduler be aware of its external network
   connectivity.
  
  Support for normal PCI device passthrough for networking in an SR-IOV-like
  way will require new VIF driver support for creating hostdev-style device
  guest XML, and some call invocation to set the MAC address and VLAN tag.
 
  
 2) There is no functional reason from a Libvirt/Qemu perspective
 that I
 couldn't
 pass through a PF to a guest, and some users have expressed
 surprise
   to me
 when they have run into this check in the Nova driver. I assume in
 the
 initial
 implementation this was prevented to avoid a whole heap of fun
   additional
 logic
 that is required if this is allowed (e.g. check that no VFs from
 the PF
 being
 requested are already in use, remove all the associated VFs from
 the
   pool
 when
 assigning the PF, who gets allowed to use PFs versus VFs etc.). Am
 I
 correct here
 or is there another reason that this would be undesirable to allow
 in
 future -
 assuming such checks can also be designed - that I am missing?

I think that is correct. But even if the additional logic was
   implemented  it
wouldn't work because of how libvirt behaves currently.
  
   Again though, in the code we have a distinction between a physical
 device
   (as I was asking about in Q1) and a physical function (as I am asking
 about
   in Q2) and similarly whether libvirt allows or not depends on how you
   configure in the guest XML. Though I wouldn't be surprised on the PF
 case
   if it is in fact not allowed in Libvirt (even with hostdev) it is
 again
   important to consider this distinctly separate from passing through the
   physical device case which we DO allow currently in the code I'm asking
   about.
  
  I think what you suggest is not difficult to support, but the current
  (since Juno) PCI device passthrough for networking is all about SR-IOV PCI
  device passthrough. As I mentioned, supporting normal PCI devices will
  require a libvirt VIF driver adjustment. I think it's possible to make this
  work with the existing neutron ML2 SR-IOV mechanism driver.

 Understood, was just trying to understand if there was an explicit reason
 *not* to do this. How should we track this, keep adding to
 https://etherpad.openstack.org/p/kilo_sriov_pci_passthrough ?


I think that a new etherpad for Liberty should probably be created in order
to track SR-IOV and PCI features. Most of the features proposed for Kilo
were rejected because the nova and neutron priorities focused on other areas.
All the listed and rejected features, as well as new feature priorities,
should be evaluated and probably picked up by people willing to drive them.
For Kilo we started this work during the pci_passthrough weekly meetings and
finalized it at the summit. I think it worked well. I would suggest doing the
same for Liberty.

BR,
Irena


 Thanks,

 Steve


Re: [openstack-dev] [horizon][heat]Vote for Openstack L summit topic The Heat Orchestration Template Builder: A demonstration

2015-02-22 Thread Jay Lau
It's really a great feature for Heat which can accelerate Heat's path to
production. Wish you good luck! Thanks!

2015-02-21 2:21 GMT+08:00 Aggarwal, Nikunj nikunj.aggar...@hp.com:

  Hi,



 I have submitted a presentation for the OpenStack L summit:



 The Heat Orchestration Template Builder: A demonstration
 https://www.openstack.org/vote-vancouver/Presentation/the-heat-orchestration-template-builder-a-demonstration





 Please cast your vote if you feel it is worth presenting.



 Thanks &amp; Regards,

 Nikunj







-- 
Thanks,

Jay Lau (Guangya Liu)


[openstack-dev] [neutron] Match network topology in DB vs. network topology in nodes

2015-02-22 Thread Leo Y
How can I use the device owner value to match it against the compute node?

-- 
Regards,
Leo
-
I enjoy the massacre of ads. This sentence will slaughter ads without a
messy bloodbath


Re: [openstack-dev] [horizon][heat]Vote for Openstack L summit topic The Heat Orchestration Template Builder: A demonstration

2015-02-22 Thread Angus Salkeld
On Sat, Feb 21, 2015 at 4:21 AM, Aggarwal, Nikunj nikunj.aggar...@hp.com
wrote:

  Hi,



 I have submitted a presentation for the OpenStack L summit:



 The Heat Orchestration Template Builder: A demonstration
 https://www.openstack.org/vote-vancouver/Presentation/the-heat-orchestration-template-builder-a-demonstration




Hi

Nice to see work on a HOT builder progressing, but..
are you planning on integrating this with the other HOT builder efforts?
Is the code public (link)?

This is more of a framework to make these easier to build:
https://github.com/stackforge/merlin
https://wiki.openstack.org/wiki/Merlin

Timur (who works on Merlin) is working with Rackers to build this upstream
- I am not sure of the progress.
https://github.com/rackerlabs/hotbuilder
https://github.com/rackerlabs/foundry

It would be nice if we could all work together (I am hoping you already
are).
Hopefully some of the others that are working on this can chip in and say
where they are.

-Angus



 Please cast your vote if you feel it is worth presenting.



 Thanks &amp; Regards,

 Nikunj







Re: [openstack-dev] [nova] Ubuntu, qemu, NUMA support

2015-02-22 Thread Tom Fifield
On 17/02/15 23:32, Chris Friesen wrote:
 
 Hi all,
 
 Just thought I'd highlight here that Ubuntu 14.10 is using qemu 2.1, but
 they're not currently enabling NUMA support.
 
 I've reported it as a bug and it's been fixed for 15.04, but there is
 some pushback about fixing it in 14.10 on the grounds that it is a
 feature enhancement and not a bugfix:
 https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1417937

FYI, it looks like they're OK with it in Utopic now - the package has
been made and is awaiting SRU verification.

 Also, we currently assume that qemu can pin to NUMA nodes.  This is an
 invalid assumption since this was only added as of qemu 2.1, and there
 only if it's compiled with NUMA support.  At the very least we should
 have a version check, but if Ubuntu doesn't fix things then maybe we
 should actually verify the functionality first before trying to use it.
 
 I've opened a bug to track this issue:
 https://bugs.launchpad.net/nova/+bug/1422775

This bug might still be worthwhile, as quite a few folks will likely
stick with Trusty for Kilo. Though, did you by chance check the flag
status of the package in the Ubuntu Cloud Archive? It packages a
different qemu (version 2.2) from the main repo ...
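
For reference, the kind of minimum-version gate being discussed would look
roughly like this (a sketch only; the 2.1 boundary is from the thread, the
names are not nova's actual code):

    MIN_QEMU_NUMA_VERSION = (2, 1, 0)

    def supports_numa_pinning(qemu_version):
        # Only trust NUMA pinning when qemu is at least 2.1; ideally we would
        # also verify the feature was actually compiled in, as noted above.
        return tuple(qemu_version) >= MIN_QEMU_NUMA_VERSION

    supports_numa_pinning((2, 0, 0))   # False -> fall back or fail the request
    supports_numa_pinning((2, 2, 0))   # True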




[openstack-dev] [tempest] UUID Tagging Requirement and Big Bang Patch

2015-02-22 Thread Chris Hoge
Once the gate settles down this week I’ll be sending up a major 
“big bang” patch to Tempest that will tag all of the tests with unique
identifiers, implementing this spec: 

https://github.com/openstack/qa-specs/blob/master/specs/meta-data-and-uuid-for-tests.rst

The work in progress is here, and includes a change to the gate that
every test developer should be aware of.

https://review.openstack.org/#/c/157273/

All tests will now require a UUID metadata identifier, generated from the
uuid.uuid4 function. The form of the identifier is a decorator like:

@test.meta(uuid='12345678-1234-5678-1234-567812345678')

To aid in hacking rules, the @test.meta decorator must be directly before the
function definition and after the @test.services decorator, which itself
must appear after all other decorators.
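
For illustration, a tagged test might end up looking like the sketch below
(the test class, the other decorators, and the UUID value are hypothetical;
only the ordering is what the hacking rule checks):

    from tempest import test

    class ExampleServersTest(test.BaseTestCase):

        @test.attr(type='smoke')       # any other decorators come first
        @test.services('compute')      # then @test.services
        @test.meta(uuid='12345678-1234-5678-1234-567812345678')
        def test_list_servers(self):   # @test.meta sits directly above the test
            pass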

The gate will now require that every test have a uuid that is indeed
unique.

This work is meant to give a stable point of reference to tests that will
persist through test refactoring and moving.

Thanks,
Chris Hoge
Interop Engineer
OpenStack Foundation


Re: [openstack-dev] [nova] Ubuntu, qemu, NUMA support

2015-02-22 Thread Nikola Đipanov
On 02/23/2015 06:17 AM, Tom Fifield wrote:
 Also, we currently assume that qemu can pin to NUMA nodes.  This is an
 invalid assumption since this was only added as of qemu 2.1, and there
 only if it's compiled with NUMA support.  At the very least we should
 have a version check, but if Ubuntu doesn't fix things then maybe we
 should actually verify the functionality first before trying to use it.

 I've opened a bug to track this issue:
 https://bugs.launchpad.net/nova/+bug/1422775
 
 This bug might still be worthwhile, as quite a few folks will likely
 stick with Trusty for Kilo. Though, did you by chance check the flag
 status of the package in the Ubuntu Cloud Archive? It packages a
 different qemu (version 2.2) from the main repo ...
 

Hey,

I've responded to the bug too (tl;dr - IMHO we should be failing the
instance request).

It might be better to move any discussion that ensues there so that it's
in one place.

Cheers for reporting it though!
N.




Re: [openstack-dev] [magnum] Issue on going through the quickstart guide

2015-02-22 Thread Hongbin Lu
Hi Jay,

I tried the native k8s commands (in a fresh bay):

kubectl create -s http://192.168.1.249:8080 -f ./redis-master.yaml
kubectl create -s http://192.168.1.249:8080 -f ./redis-sentinel-service.yaml
kubectl create -s http://192.168.1.249:8080 -f ./redis-controller.yaml
kubectl create -s http://192.168.1.249:8080 -f ./redis-sentinel-controller.yaml

It still didn't work (same symptom as before). I cannot spot any difference
between the original yaml file and the parsed yaml file. Any other ideas?

Thanks,
Hongbin

On Sun, Feb 22, 2015 at 8:38 PM, Jay Lau jay.lau@gmail.com wrote:

 I suspect that there is some error after the pod/services are parsed. Can
 you please try the native k8s command first, then debug the k8s API part to
 check the difference between the original json file and the parsed json
 file? Thanks!

 kubectl create -f xxx.json



 2015-02-23 1:40 GMT+08:00 Hongbin Lu hongbin...@gmail.com:

 Thanks Jay,

 I checked the kubelet log. There are a lot of "Watch closed" errors like the
 ones below. Here is the full log: http://fpaste.org/188964/46261561/ .

 Status:Failure, Message:unexpected end of JSON input, Reason:
 Status:Failure, Message:501: All the given peers are not reachable

 Please note that my environment was set up by following the quickstart
 guide. It seems that all the kube components were running (checked using the
 systemctl status command), and all nodes can ping each other. Any further
 suggestions?

 Thanks,
 Hongbin


 On Sun, Feb 22, 2015 at 3:58 AM, Jay Lau jay.lau@gmail.com wrote:

 Can you check the kubelet log on your minions? It seems the container
 failed to start; there might be something wrong with your minion nodes.
 Thanks.

 2015-02-22 15:08 GMT+08:00 Hongbin Lu hongbin...@gmail.com:

 Hi all,

 I tried to go through the new redis example in the quickstart guide
 [1], but was not able to get through. I was blocked when connecting to the
 redis slave container:

 $ docker exec -i -t $REDIS_ID redis-cli
 Could not connect to Redis at 127.0.0.1:6379: Connection refused

 Here is the container log:

 $ docker logs $REDIS_ID
 Error: Server closed the connection
 Failed to find master.

 It looks like the redis master disappeared at some point. I checked the
 status about every minute. Below is the output.

 $ kubectl get pod
 NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
 51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis-sentinel,redis-sentinel=true,role=sentinel  Pending
 redis-master                           kubernetes/redis:v1   10.0.0.4/   name=redis,redis-sentinel=true,role=master             Pending
                                        kubernetes/redis:v1
 512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Pending

 $ kubectl get pod
 NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
 512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Running
 51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis-sentinel,redis-sentinel=true,role=sentinel  Running
 redis-master                           kubernetes/redis:v1   10.0.0.4/   name=redis,redis-sentinel=true,role=master             Running
                                        kubernetes/redis:v1

 $ kubectl get pod
 NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
 redis-master                           kubernetes/redis:v1   10.0.0.4/   name=redis,redis-sentinel=true,role=master             Running
                                        kubernetes/redis:v1
 512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Failed
 51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis-sentinel,redis-sentinel=true,role=sentinel  Running
 233fa7d1-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Running

 $ kubectl get pod
 NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
 512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Running
 51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
 

Re: [openstack-dev] A question about strange behavior of oslo.config in eclipse

2015-02-22 Thread Joshua Zhang
Hi Doug,

 I can't find the above error when using the latest code (2015-02-22), but
a new error occurs (see [1]). I feel it is not related to the code
(ConfigOpts.import_opt): the same code can be run in both bash and
pycharm, it just didn't work in eclipse+pydev. It looks like there are some
conflicts between oslo/monkey patching and pydev. You are a python expert,
could you give me some input on the following traceback?

 Another question: the 'Gevent compatible debugging' feature in both
eclipse and pycharm doesn't work, and changing 'thread=False' for the monkey
patch may cause the error in [2], so for now I have to go back to using pdb
to debug openstack. Do you have any ideas for making oslo/monkey patching
more IDE-friendly? Many thanks.
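
To be concrete, by changing 'thread=False' I mean something along the lines
of the sketch below (assuming eventlet's usual monkey_patch keywords; as the
trace in [2] suggests, it can break oslo.messaging's thread-local request
context):

    import eventlet

    # Patch everything except threading so the IDE debugger keeps control of
    # real threads.
    eventlet.monkey_patch(os=True, select=True, socket=True, thread=False,
                          time=True)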

[1] Traceback when running 'neutron-server --config-file
/etc/neutron/neutron.conf --config-file
/etc/neutron/plugins/ml2/ml2_conf.ini' in the eclipse+pydev env, while we can
run this command fine in bash and pycharm.

Traceback (most recent call last):
  File /usr/local/bin/neutron-server, line 9, in module
load_entry_point('neutron==2015.1.dev202', 'console_scripts',
'neutron-server')()
  File /usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py,
line 521, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
  File /usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py,
line 2632, in load_entry_point
return ep.load()
  File /usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py,
line 2312, in load
return self.resolve()
  File /usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py,
line 2318, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
  File /bak/openstack/neutron/neutron/cmd/eventlet/server/__init__.py,
line 13, in module
from neutron import server
  File /bak/openstack/neutron/neutron/server/__init__.py, line 26, in
module
from neutron.common import config
  File /bak/openstack/neutron/neutron/common/config.py, line 25, in
module
import oslo_messaging
  File /usr/local/lib/python2.7/dist-packages/oslo_messaging/__init__.py,
line 18, in module
from .notify import *
  File
/usr/local/lib/python2.7/dist-packages/oslo_messaging/notify/__init__.py,
line 23, in module
from .notifier import *
  File
/usr/local/lib/python2.7/dist-packages/oslo_messaging/notify/notifier.py,
line 32, in module
help='Driver or drivers to handle sending notifications.'),
  File /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py, line
1108, in __init__
item_type=types.MultiString(),
AttributeError: 'module' object has no attribute 'MultiString'


[2]

2015-02-22 14:56:24.117 ERROR oslo_messaging.rpc.dispatcher
[req-65501748-3db5-43b6-b00e-732565d2192a TestNetworkVPNaaS-1393357785
TestNetworkVPNaaS-1147259963] Exception during message handling:
_oslo_messaging_localcontext_9bb7d928d1a042e085f354eb118e98a0
2015-02-22 14:56:24.117 28042 TRACE oslo_messaging.rpc.dispatcher Traceback
(most recent call last):
2015-02-22 14:56:24.117 28042 TRACE oslo_messaging.rpc.dispatcher   File
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py,
line 142, in _dispatch_and_reply
2015-02-22 14:56:24.117 28042 TRACE oslo_messaging.rpc.dispatcher
executor_callback))
2015-02-22 14:56:24.117 28042 TRACE oslo_messaging.rpc.dispatcher   File
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py,
line 188, in _dispatch
2015-02-22 14:56:24.117 28042 TRACE oslo_messaging.rpc.dispatcher
localcontext.clear_local_context()
2015-02-22 14:56:24.117 28042 TRACE oslo_messaging.rpc.dispatcher   File
/usr/local/lib/python2.7/dist-packages/oslo_messaging/localcontext.py,
line 55, in clear_local_context
2015-02-22 14:56:24.117 28042 TRACE oslo_messaging.rpc.dispatcher
delattr(_STORE, _KEY)
2015-02-22 14:56:24.117 28042 TRACE oslo_messaging.rpc.dispatcher
AttributeError:
_oslo_messaging_localcontext_9bb7d928d1a042e085f354eb118e98a0



On Sat, Feb 14, 2015 at 7:05 AM, Doug Hellmann d...@doughellmann.com
wrote:



 On Thu, Feb 12, 2015, at 07:19 AM, Joshua Zhang wrote:
  Hi Doug,
 
 Thank you very much for your reply. I don't have any code of my own, so
  no special code is involved.
 The only things I did are:
 1. used devstack to install a fresh openstack env; all is ok.
 2. imported the neutron-vpnaas directory (without any code of my own) into
  eclipse as a pydev project and ran a unit test
  (neutron_vpnaas.tests.unit.services.vpn.test_vpn_service) in eclipse; it
  throws the following exception.
 3. but this unit test runs well in bash, see
  http://paste.openstack.org/show/172016/
 4. this unit test can also be run well in eclipse as long as I edit the
  neutron/openstack/common/policy.py file to change oslo.config into
  oslo_config.
 
 
  ==
  ERROR: test_add_nat_rule
 
 

Re: [openstack-dev] [magnum] Issue on going through the quickstart guide

2015-02-22 Thread Jay Lau
I suspect that there is some error after the pod/services are parsed. Can you
please try the native k8s command first, then debug the k8s API part to check
the difference between the original json file and the parsed json file?
Thanks!

kubectl create -f xxx.json



2015-02-23 1:40 GMT+08:00 Hongbin Lu hongbin...@gmail.com:

 Thanks Jay,

 I checked the kubelet log. There are a lot of "Watch closed" errors like the
 ones below. Here is the full log: http://fpaste.org/188964/46261561/ .

 Status:Failure, Message:unexpected end of JSON input, Reason:
 Status:Failure, Message:501: All the given peers are not reachable

 Please note that my environment was set up by following the quickstart
 guide. It seems that all the kube components were running (checked using the
 systemctl status command), and all nodes can ping each other. Any further
 suggestions?

 Thanks,
 Hongbin


 On Sun, Feb 22, 2015 at 3:58 AM, Jay Lau jay.lau@gmail.com wrote:

 Can you check the kubelet log on your minions? It seems the container failed
 to start; there might be something wrong with your minion nodes. Thanks.

 2015-02-22 15:08 GMT+08:00 Hongbin Lu hongbin...@gmail.com:

 Hi all,

 I tried to go through the new redis example in the quickstart guide [1],
 but was not able to get through. I was blocked when connecting to the redis
 slave container:

 $ docker exec -i -t $REDIS_ID redis-cli
 Could not connect to Redis at 127.0.0.1:6379: Connection refused

 Here is the container log:

 $ docker logs $REDIS_ID
 Error: Server closed the connection
 Failed to find master.

 It looks like the redis master disappeared at some point. I checked the
 status about every minute. Below is the output.

 $ kubectl get pod
 NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
 51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis-sentinel,redis-sentinel=true,role=sentinel  Pending
 redis-master                           kubernetes/redis:v1   10.0.0.4/   name=redis,redis-sentinel=true,role=master             Pending
                                        kubernetes/redis:v1
 512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Pending

 $ kubectl get pod
 NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
 512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Running
 51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis-sentinel,redis-sentinel=true,role=sentinel  Running
 redis-master                           kubernetes/redis:v1   10.0.0.4/   name=redis,redis-sentinel=true,role=master             Running
                                        kubernetes/redis:v1

 $ kubectl get pod
 NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
 redis-master                           kubernetes/redis:v1   10.0.0.4/   name=redis,redis-sentinel=true,role=master             Running
                                        kubernetes/redis:v1
 512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Failed
 51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis-sentinel,redis-sentinel=true,role=sentinel  Running
 233fa7d1-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Running

 $ kubectl get pod
 NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
 512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Running
 51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis-sentinel,redis-sentinel=true,role=sentinel  Running
 233fa7d1-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Running
 3b164230-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.4/   name=redis-sentinel,redis-sentinel=true,role=sentinel  Pending

 Is anyone able to reproduce the problem above? If yes, I am going to
 file a bug.

 Thanks,
 Hongbin

 [1]
 https://github.com/stackforge/magnum/blob/master/doc/source/dev/dev-quickstart.rst#exercising-the-services-using-devstack



Re: [openstack-dev] [all][oslo] Dealing with database connection sharing issues

2015-02-22 Thread Michael Bayer




 On Feb 22, 2015, at 10:20 AM, Yuriy Taraday yorik@gmail.com wrote:
 
 
 
 On Sun Feb 22 2015 at 6:27:16 AM Michael Bayer mba...@redhat.com wrote:
 
 
 
  On Feb 21, 2015, at 9:49 PM, Joshua Harlow harlo...@outlook.com wrote:
 
  Some comments/questions inline...
 
  Mike Bayer wrote:
 
  Yuriy Taraday yorik@gmail.com wrote:
 
  On Fri Feb 20 2015 at 9:14:30 PM Joshua Harlow harlo...@outlook.com
  wrote:
  This feels like something we could do in the service manager base class,
  maybe by adding a post fork hook or something.
  +1 to that.
 
  I think it'd be nice to have the service __init__() maybe be something 
  like:
 
     def __init__(self, threads=1000, prefork_callbacks=None,
                  postfork_callbacks=None):
         self.postfork_callbacks = postfork_callbacks or []
         self.prefork_callbacks = prefork_callbacks or []
         # always ensure we are closing any left-open fds last...
         self.prefork_callbacks.append(self._close_descriptors)
         ...
 
  (you must've meant postfork_callbacks.append)
 
  Note that the multiprocessing module already has a
  `multiprocessing.util.register_after_fork` method that allows registering a
  callback that will be called every time a Process object is
  run. If we remove the explicit use of `os.fork` in oslo.service (replace it
  with the Process class) we'll be able to specify any after-fork callbacks
  that libraries need.
  For example, EngineFacade could register a `pool.dispose()` callback there
  (it should have some proper finalization logic though).
 
  +1 to use Process and the callback system for required initialization 
  steps
  and so forth, however I don’t know that an oslo lib should silently 
  register
  global events on the assumption of how its constructs are to be used.
 
  I think whatever Oslo library is responsible for initiating the 
  Process/fork
  should be where it ensures that resources from other Oslo libraries are 
  set
  up correctly. So oslo.service might register its own event handler with
 
  Sounds like some kind of new entrypoint + discovery service that
  oslo.service (eck, can we name it something else, something that makes it
  usable for others on pypi...) would need to plug in to. It seems
  like this is a general python problem (who is to say that only oslo
  libraries use resources that need to be fixed/closed after forking); are
  there any recommendations that the python community has in general for
  this (aka, a common entrypoint *all* libraries export that allows them to
  do things when a fork is about to occur)?
 
  oslo.db such that it gets notified of new database engines so that it can
  associate a disposal with it; it would do something similar for
  oslo.messaging and other systems that use file handles.   The end
  result might be that it uses register_after_fork(), but the point is that
  oslo.db.sqlalchemy.create_engine doesn’t do this; it lets oslo.service
  apply a hook so that oslo.service can do it on behalf of oslo.db.
 
  Sounds sort of like global state/an 'open resource' pool that each library
  needs to maintain internally that tracks how applications/other
  libraries are using it; that feels sorta odd IMHO.
 
  Wouldn't that mean libraries that provide back resource objects, or
  resource-containing objects..., for others to use would now need to
  capture who is using what (weakref pools?) to track which
  resources are being used and by whom (so that they can fix/close them on
  fork); not every library has a pool (like sqlalchemy afaik does) to track
  these kind(s) of things (for better or worse...). And what if those
  libraries use other libraries that use resources (who owns what?); it seems
  like this just gets very messy/impractical pretty quickly once you start
  using any kind of 3rd party library that doesn't follow the same
  pattern... (which brings me back to the question of isn't there a common
  python way/entrypoint that deals with forks that works better than ^).
 
 
  So, instead of oslo.service cutting through and closing out the file
  descriptors from underneath other oslo libraries that opened them, we set 
  up
  communication channels between oslo libs that maintain a consistent layer 
  of
  abstraction, and instead of making all libraries responsible for the side
  effects that might be introduced from other oslo libraries, we make the
  side-effect-causing library the point at which those effects are
  ameliorated as a service to other oslo libraries.   This allows us to keep
  the knowledge of what it means to use “multiprocessing” in one
  place, rather than spreading out its effects.
 
  If only we didn't have all those other libraries[1] that people use too
  (that afaik highly likely also have resources they open); so even with
  getting oslo.db and oslo.messaging into this kind of pattern, we are still
  left with the other 200+ that aren't/haven't been following this pattern
  ;-)
 
 I'm 

Re: [openstack-dev] [api] [glance] conclusion needed on functional API

2015-02-22 Thread Jay Pipes

On 02/18/2015 06:37 PM, Brian Rosmaita wrote:

Thanks for your comment, Miguel.  Your suggestion is indeed very close
to the RESTful ideal.

However, I have a question for the entire API-WG.  Our (proposed)
mission is "To improve the developer experience of API users by
converging the OpenStack API to a consistent and pragmatic RESTful
design." [1]  My question is: what is the sense of "pragmatic" in this
sentence?  I thought it meant that we advise the designers of OpenStack
APIs to adhere to RESTful design as much as possible, but allow them to
diverge where appropriate.  The proposed functional call to deactivate
an image seems to be an appropriate place to deviate from the ideal.
  Creating a task or action object so that the POST request will create
a new resource does not seem very pragmatic.  I believe that a necessary
component of encouraging OpenStack APIs to be consistent is to allow
some pragmatism.


Hi Brian,

I'm sure you're not surprised by my lack of enthusiasm for the 
functional Glance API spec for activating/deactivating an image :)


As for the role of the API WG in this kind of thing, you're absolutely 
correct that the goal of the WG is to improve the developer experience 
of API users with a consistent and pragmatic RESTful design.


I feel the proposed `PUT /images/{image_id}/actions/deactivate` is 
neither consistent (though to be fair, the things this would be 
consistent with in the Nova API -- i.e. the os-actions API -- are 
monstrosities IMHO) nor pragmatic.


This kind of thing, IMHO, is not something that belongs in the same REST 
API as the other Glance image API calls. It's purely an administrative 
thing and belongs in a separate API, and doesn't even need to be 
RESTful. The glance-manage command would be more appropriate, with 
direct calls to backend database systems to flip the status to 
activate/deactivate.


If this functionality really does need to be in the main user RESTful 
API, I believe it should follow the existing v2 Glance API's /tasks 
resource model for consistency and design reasons.


That said, I'm only one little voice on the API WG. Happy to hear 
others' views on this topic and go with the majority's view (after 
arguing for my points of course ;)


Best,
-jay


[1] https://review.openstack.org/#/c/155911/

On 2/18/15, 4:49 PM, Miguel Grinberg miguel.s.grinb...@gmail.com wrote:

Out of all the proposals mentioned in this thread, I think Jay's (d)
option is the one closest to the REST ideal:

d) POST /images/{image_id}/tasks with payload:
{ action: deactivate|activate }

Even though I don't think this is the perfect solution, I can
recognize that at least it tries to be RESTful, unlike the other
three options suggested in the first message.

That said, I'm going to keep insisting that in a REST API state
changes are the most important thing, and actions are implicitly
derived by the server from these state changes requested by the
client. What you are trying to do is to reverse this flow, you want
the client to invoke an action, which in turn will cause an implicit
state change on the server. This isn't wrong in itself, it's just
not the way you do REST.

Jay's (d) proposal above could be improved by making the task a real
resource. Sending a POST request to the /tasks address creates a new
task resource, which gets a URI of its own, returned in the Location
header. You can then send a GET request to this URI to obtain status
info, such as whether the task completed or not. And since tasks are
now real resources, they should have a documented representation as
well.
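 
    As a rough client-side sketch of that flow (illustrative only; the
    endpoint, payload shape, and task representation here are not a settled
    Glance API):
 
        import requests
 
        GLANCE = "http://glance.example.com:9292/v2"   # illustrative endpoint
        HEADERS = {"X-Auth-Token": "<token>",
                   "Content-Type": "application/json"}
 
        # POST creates a new task resource describing the requested action.
        resp = requests.post(GLANCE + "/images/<image_id>/tasks",
                             json={"action": "deactivate"},
                             headers=HEADERS)
 
        # The new task's URI comes back in the Location header...
        task_url = resp.headers["Location"]
 
        # ...and the client polls it for status until the task completes.
        status = requests.get(task_url, headers=HEADERS).json()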

Miguel

On Wed, Feb 18, 2015 at 1:19 PM, Brian Rosmaita
brian.rosma...@rackspace.com wrote:

On 2/15/15, 2:35 PM, Jay Pipes jaypi...@gmail.com wrote:
On 02/15/2015 01:13 PM, Brian Rosmaita wrote:
 On 2/15/15, 10:10 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 02/15/2015 01:31 AM, Brian Rosmaita wrote:
 This is a follow-up to the discussion at the 12 February API-WG
 meeting [1] concerning functional API in Glance [2].  We made
 some progress, but need to close this off so the spec can be
 implemented in Kilo.

 I believe this is where we left off: 1. The general consensus was
 that POST is the correct verb.

 Yes, POST is correct (though the resource is wrong).

 2. Did not agree on what to POST.  Three options are in play: (A)
 POST /images/{image_id}?action=deactivate POST
 /images/{image_id}?action=reactivate

 (B) POST /images/{image_id}/actions with payload describing the
 action, e.g., { action: deactivate } { action: reactivate
 }

 (C) POST