Re: [Openstack-operators] Help- Errors Restarting Nova on Openstack Liberty environment

2017-04-06 Thread Gaurav Goyal
Dear Openstack Users,

Awaiting your kind response please!
Our environment is in a hung state because of the above-mentioned issue.



On Thu, Apr 6, 2017 at 1:32 PM, Gurpreet Thind 
wrote:

> Hi Openstack Operators,
>
> We wanted to install gnocchi on our openstack environment (Liberty). It was
> supposed to be a simple install, but it also updated other libraries. As a
> result, our Nova components (API, Compute, Conductor, ConsoleAuth,
> scheduler, etc.) are not working. We are facing the error "Exception:
> Versioning for this project requires either an sdist tarball, or access to
> an upstream git repository. Are you sure that git is installed?"
> Could you please help? I have attached a detailed walkthrough of the
> steps and provided the logs.
>
> We no longer want Gnocchi; we want to revert our system to its
> previous state.
>
> —Detailed Information —
>
> —— We have upgraded pip and installed Gnocchi.
>
> pip install gnocchi
>
>
> —— On installing gnocchi, the following files got updated.
>
> gnocchi in /usr/lib/python2.7/site-packages
> Requirement already satisfied (use --upgrade to upgrade): cotyledon>=1.5.0
> in /usr/lib/python2.7/site-packages (from gnocchi)
> Requirement already satisfied (use --upgrade to upgrade): PasteDeploy in
> /usr/lib/python2.7/site-packages (from gnocchi)
> Requirement already satisfied (use --upgrade to upgrade):
> oslo.middleware>=3.22.0 in /usr/lib/python2.7/site-packages (from gnocchi)
> Requirement already satisfied (use --upgrade to upgrade): pandas>=0.18.0
> in /usr/lib64/python2.7/site-packages (from gnocchi)
> Requirement already satisfied (use --upgrade to upgrade): ujson in
> /usr/lib64/python2.7/site-packages (from gnocchi)
> Requirement already satisfied (use --upgrade to upgrade): jsonpatch in
> /usr/lib/python2.7/site-packages (from gnocchi)
> Requirement already satisfied (use --upgrade to upgrade): numpy>=1.9.0 in
> /usr/lib64/python2.7/site-packages (from gnocchi)
> Requirement already satisfied (use --upgrade to upgrade): six in
> /usr/lib/python2.7/site-packages (from gnocchi)
> Requirement already satisfied (use --upgrade to upgrade):
> oslo.utils>=3.18.0 in /usr/lib/python2.7/site-packages (from gnocchi)
> Requirement already satisfied (use --upgrade to upgrade): werkzeug in
> /usr/lib/python2.7/site-packages (from gnocchi)
> Requirement already satisfied (use --upgrade to upgrade): WebOb>=1.4.1 in
> /usr/lib/python2.7/site-packages (from gnocchi)
> Requirement already satisfied (use --upgrade to upgrade): voluptuous in
> /usr/lib/python2.7/site-packages (from gnocchi)
> Requirement already satisfied (use --upgrade to upgrade): trollius;
> python_version < "3.4" in /usr/lib/python2.7/site-packages (from gnocchi)
> Requirement already satisfied (use --upgrade to upgrade):
> oslo.serialization>=1.4.0 in /usr/lib/python2.7/site-packages (from
> gnocchi)
> Requirement already satisfied (use --upgrade to upgrade): Paste in
> /usr/lib/python2.7/site-packages (from gnocchi)
> Requirement already satisfied (use --upgrade to upgrade): iso8601 in
> /usr/lib/python2.7/site-packages (from gnocchi)
> Requirement already satisfied (use --upgrade to upgrade):
> oslo.config>=2.7.0 in /usr/lib/python2.7/site-packages (from gnocchi)
> Requirement already satisfied (use --upgrade to upgrade): pbr in
> /usr/lib/python2.7/site-packages (from gnocchi)
> Requirement already satisfied (use --upgrade to upgrade): futures in
> /usr/lib/python2.7/site-packages (from gnocchi)
> Requirement already satisfied (use --upgrade to upgrade): tenacity>=3.1.0
> in /usr/lib/python2.7/site-packages (from gnocchi)
> Requirement already satisfied (use --upgrade to upgrade): stevedore in
> /usr/lib/python2.7/site-packages (from gnocchi)
> Requirement already satisfied (use --upgrade to upgrade): oslo.log>=2.3.0
> in /usr/lib/python2.7/site-packages (from gnocchi)
> Requirement already satisfied (use --upgrade to upgrade):
> oslo.policy>=0.3.0 in /usr/lib/python2.7/site-packages (from gnocchi)
> Requirement already satisfied (use --upgrade to upgrade): pecan>=0.9 in
> /usr/lib/python2.7/site-packages (from gnocchi)
> Requirement already satisfied (use --upgrade to upgrade): scipy>=0.18.1 in
> /usr/lib64/python2.7/site-packages (from gnocchi)
> Requirement already satisfied (use --upgrade to upgrade): setproctitle;
> sys_platform != "win32" in /usr/lib64/python2.7/site-packages (from
> cotyledon>=1.5.0->gnocchi)
> Requirement already satisfied (use --upgrade to upgrade): statsd>=3.2.1 in
> /usr/lib/python2.7/site-packages (from oslo.middleware>=3.22.0->gnocchi)
> Requirement already satisfied (use --upgrade to upgrade): oslo.i18n>=2.1.0
> in /usr/lib/python2.7/site-packages (from oslo.middleware>=3.22.0->
> gnocchi)
> Requirement already satisfied (use --upgrade to upgrade):
> debtcollector>=1.2.0 in /usr/lib/python2.7/site-packages (from
> oslo.middleware>=3.22.0->gnocchi)
> Requirement already satisfied (use 
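
For the pbr "Versioning for this project requires either an sdist tarball..."
error above, a minimal diagnostic sketch (assuming setuptools/pkg_resources is
available) that lists the pip-installed versions of the libraries named in the
output, so they can be compared against the Liberty RPM versions before
downgrading. The list of suspect packages is an assumption, not from the
original report.

# Hedged diagnostic sketch: show which pip-installed versions now shadow the
# distro-packaged ones. Compare against the versions the Liberty RPMs expect.
import pkg_resources

suspects = [
    'pbr', 'oslo.config', 'oslo.utils', 'oslo.serialization',
    'oslo.log', 'oslo.middleware', 'oslo.policy', 'stevedore', 'WebOb',
]

for name in suspects:
    try:
        dist = pkg_resources.get_distribution(name)
        print('%-20s %-10s %s' % (name, dist.version, dist.location))
    except pkg_resources.DistributionNotFound:
        print('%-20s not installed' % name)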

Re: [Openstack-operators] [nova] Does anyone use the TypeAffinityFilter?

2017-04-06 Thread Matt Riedemann

On 4/6/2017 7:34 PM, Matt Riedemann wrote:

While working on trying to trim some RPC traffic between compute nodes
and the scheduler [1] I came across the TypeAffinityFilter which relies
on the instance.instance_type_id field, which is the original flavor.id
(primary key) that the instance was created with on a given host. The
idea being if I have an instance with type 20 on a host, then I can't
schedule another host with type 20 on it.


Oops, "then I can't schedule another *instance* with type 20 on it (the 
same host)".




The issue with this is that flavors can't be updated, they have to be
deleted and recreated. This is why we're changing the flavor
representation in the server response details in Pike [2] because the
instance.instance_type_id can point to a flavor that no longer exists,
so you can't look up the details on the flavor that was used to create a
given instance via the API (you could figure it out in the database, but
that's no fun).

So the big question is, does anyone use this filter and if so, have you
already hit the issue described here and if so, how are you working
around it? If no one is using it, I'd like to deprecate it.

[1]
https://blueprints.launchpad.net/nova/+spec/put-host-manager-instance-info-on-a-diet

[2]
https://specs.openstack.org/openstack/nova-specs/specs/pike/approved/instance-flavor-api.html





--

Thanks,

Matt

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Operator feedback on App Keys...

2017-04-06 Thread De Rose, Ronald
Great, Edmund! We’d appreciate that.

-Ron

From: Edmund Rhudy (BLOOMBERG/ 120 PARK) [mailto:erh...@bloomberg.net]
Sent: Thursday, April 6, 2017 12:43 PM
To: De Rose, Ronald 
Cc: openstack-operators@lists.openstack.org
Subject: Re:[Openstack-operators] Operator feedback on App Keys...

We have had internal discussions about app keys in Keystone and would be very 
happy to see it implemented (and to assist with the implementation where 
possible).

From: ronald.de.r...@intel.com
Subject: Re:[Openstack-operators] Operator feedback on App Keys...
During the PTG, the keystone team had a lengthy discussion on a new security 
credential for applications/scripts/3rd party tools, to avoid having to put 
username/password in configuration files.  The following is a spec that 
addresses this feature by implementing App Keys to be used for application 
authentication.
https://review.openstack.org/#/c/450415/

I’m looking for feedback from operators and others on this idea to determine if 
there is a strong need/demand for this.

Thanks in advance.

-Ron

Ron De Rose
Keystone Core
IRC: rderose



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [nova] Does anyone use the TypeAffinityFilter?

2017-04-06 Thread Matt Riedemann
While working on trying to trim some RPC traffic between compute nodes 
and the scheduler [1] I came across the TypeAffinityFilter which relies 
on the instance.instance_type_id field, which is the original flavor.id 
(primary key) that the instance was created with on a given host. The 
idea being if I have an instance with type 20 on a host, then I can't 
schedule another host with type 20 on it.
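
For reference, a simplified sketch of the filter logic described above (not
the exact Nova source; attribute names are approximate): reject a host that
already runs an instance with the requested flavor's id.

# Simplified sketch of the behaviour described above. host_state.instances is
# assumed to be a dict of the instances currently on the host.
class TypeAffinityFilterSketch(object):

    def host_passes(self, host_state, spec_obj):
        requested_type_id = spec_obj.flavor.id  # original flavor primary key
        for instance in host_state.instances.values():
            if instance.instance_type_id == requested_type_id:
                # Host already runs an instance of this flavor; filter it out.
                return False
        return True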


The issue with this is that flavors can't be updated, they have to be 
deleted and recreated. This is why we're changing the flavor 
representation in the server response details in Pike [2] because the 
instance.instance_type_id can point to a flavor that no longer exists, 
so you can't look up the details on the flavor that was used to create a 
given instance via the API (you could figure it out in the database, but 
that's no fun).


So the big question is, does anyone use this filter and if so, have you 
already hit the issue described here and if so, how are you working 
around it? If no one is using it, I'd like to deprecate it.


[1] 
https://blueprints.launchpad.net/nova/+spec/put-host-manager-instance-info-on-a-diet
[2] 
https://specs.openstack.org/openstack/nova-specs/specs/pike/approved/instance-flavor-api.html


--

Thanks,

Matt

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Operator feedback on App Keys...

2017-04-06 Thread Edmund Rhudy (BLOOMBERG/ 120 PARK)
We have had internal discussions about app keys in Keystone and would be very 
happy to see it implemented (and to assist with the implementation where 
possible).

From: ronald.de.r...@intel.com 
Subject: Re:[Openstack-operators] Operator feedback on App Keys...

 

During the PTG, the keystone team had a lengthy discussion on a new security 
credential for applications/scripts/3rd party tools, to avoid having to put 
username/password in configuration files.  The following  is a spec that 
addresses this feature by implementing App Keys to be used for application 
authentication. 
https://review.openstack.org/#/c/450415/ 
  
I’m looking for feedback from operators and others on this idea to determine if 
there is a strong need/demand for this. 
  
Thanks in advance. 
  
-Ron 
  
Ron De Rose 
Keystone Core 
IRC: rderose 

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Fault Genes WG Meeting Summary

2017-04-06 Thread Nematollah Bidokhti
Hi All,

Below is the link to the weekly meeting summary. There is a lot of exciting ML
work being performed by the team.

https://etherpad.openstack.org/p/Fault-Genes

thanks,
Nemat



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Operator feedback on App Keys...

2017-04-06 Thread De Rose, Ronald
During the PTG, the keystone team had a lengthy discussion on a new security 
credential for applications/scripts/3rd party tools, to avoid having to put 
username/password in configuration files.  The following is a spec that 
addresses this feature by implementing App Keys to be used for application 
authentication.
https://review.openstack.org/#/c/450415/
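
For context, a minimal sketch of the status quo the spec aims to improve on: a
script authenticating with a username/password read from a config file via
keystoneauth1. The config path and option names are made up for illustration;
the App Key flow itself is still only a spec, so it is not shown here.

# Sketch of the problem the spec addresses: a script/3rd-party tool carrying a
# full username/password (read from a config file) just to talk to the APIs.
import ConfigParser  # Python 2.7, matching the environments discussed here

from keystoneauth1.identity import v3
from keystoneauth1 import session

cfg = ConfigParser.ConfigParser()
cfg.read('/etc/myapp/myapp.conf')  # hypothetical config file

auth = v3.Password(
    auth_url=cfg.get('keystone', 'auth_url'),
    username=cfg.get('keystone', 'username'),
    password=cfg.get('keystone', 'password'),  # the credential we'd rather not store
    project_name=cfg.get('keystone', 'project_name'),
    user_domain_id='default',
    project_domain_id='default',
)
sess = session.Session(auth=auth)
print(sess.get_token())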

I'm looking for feedback from operators and others on this idea to determine if 
there is a strong need/demand for this.

Thanks in advance.

-Ron

Ron De Rose
Keystone Core
IRC: rderose

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [scientific] Resource reservation requirements (Blazar) - Forum session

2017-04-06 Thread Blair Bethwaite
Hi Tim,

It does seem feasible, but imagine the aggregate juggling... it's
something of an indictment that from where we are today this seems
like a step forward. I'm not a fan of pushing that load onto operators
when it seems like what we actually need is fully-fledged workload
scheduling in Nova.

Cheers,

On 5 April 2017 at 04:48, Tim Bell  wrote:
> Some combination of spot/OPIE and Blazar would seem doable as long as the 
> resource provider reserves capacity appropriately (i.e. spot 
> resources>>blazar committed along with no non-spot requests for the same 
> aggregate).
>
> Is this feasible?
>
> Tim
>
> On 04.04.17, 19:21, "Jay Pipes"  wrote:
>
> On 04/03/2017 06:07 PM, Blair Bethwaite wrote:
> > Hi Jay,
> >
> > On 4 April 2017 at 00:20, Jay Pipes  wrote:
> >> However, implementing the above in any useful fashion requires that 
> Blazar
> >> be placed *above* Nova and essentially that the cloud operator turns 
> off
> >> access to Nova's  POST /servers API call for regular users. Because if 
> not,
> >> the information that Blazar acts upon can be simply circumvented by 
> any user
> >> at any time.
> >
> > That's something of an oversimplification. A reservation system
> > outside of Nova could manipulate Nova host-aggregates to "cordon off"
> > infrastructure from on-demand access (I believe Blazar already uses
> > this approach), and it's not much of a jump to imagine operators being
> > able to twiddle the available reserved capacity in a finite cloud so
> > that reserved capacity can be offered to the subset of users/projects
> > that need (or perhaps have paid for) it.
>
> Sure, I'm following you up until here.
>
> > Such a reservation system would even be able to backfill capacity
> > between reservations. At the end of the reservation the system
> > cleans-up any remaining instances and preps for the next
> > reservation.
>
> By "backfill capacity between reservations", do you mean consume
> resources on the compute hosts that are "reserved" by this paying
> customer at some date in the future? i.e. Spot instances that can be
> killed off as necessary by the reservation system to free resources to
> meet its reservation schedule?
>
> > There are a couple of problems with putting this outside of Nova though.
> > The main issue is that pre-emptible/spot type instances can't be
> > accommodated within the on-demand cloud capacity.
>
> Correct. The reservation system needs complete control over a subset of
> resource providers to be used for these spot instances. It would be like
> a hotel reservation system being used for a motel where cars could
> simply pull up to a room with a vacant sign outside the door. The
> reservation system would never be able to work on accurate data unless
> some part of the motel's rooms were carved out for reservation system to
> use and cars to not pull up and take.
>
>  >  You could have the
> > reservation system implementing this feature, but that would then put
> > other scheduling constraints on the cloud in order to be effective
> > (e.g., there would need to be automation changing the size of the
> > on-demand capacity so that the maximum pre-emptible capacity was
> > always available). The other issue (admittedly minor, but still a
> > consideration) is that it's another service - personally I'd love to
> > see Nova support these advanced use-cases directly.
>
> Welcome to the world of microservices. :)
>
> -jay
>
>
>



-- 
Cheers,
~Blairo

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [scientific] Resource reservation requirements (Blazar) - Forum session

2017-04-06 Thread Blair Bethwaite
Hi Jay,

On 5 April 2017 at 03:21, Jay Pipes  wrote:
> On 04/03/2017 06:07 PM, Blair Bethwaite wrote:
>> That's something of an oversimplification. A reservation system
>> outside of Nova could manipulate Nova host-aggregates to "cordon off"
>> infrastructure from on-demand access (I believe Blazar already uses
>> this approach), and it's not much of a jump to imagine operators being
>> able to twiddle the available reserved capacity in a finite cloud so
>> that reserved capacity can be offered to the subset of users/projects
>> that need (or perhaps have paid for) it.
>
>
> Sure, I'm following you up until here.
>
>> Such a reservation system would even be able to backfill capacity
>> between reservations. At the end of the reservation the system
>> cleans-up any remaining instances and preps for the next
>> reservation.
>
>
> By "backfill capacity between reservations", do you mean consume resources
> on the compute hosts that are "reserved" by this paying customer at some
> date in the future? i.e. Spot instances that can be killed off as necessary
> by the reservation system to free resources to meet its reservation
> schedule?

That is one possible use-case, but it could also backfill with other
reservations that do not overlap. This is a common feature of HPC job
schedulers that have to deal with the competing needs of large
parallel jobs (single users with temporal workload constraints) and
many small jobs (many users with throughput needs).
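
To make the backfill idea concrete, a toy sketch (with made-up datetimes) of
the gap check an external reservation system could do: a candidate lease is
backfilled onto a host only if its window overlaps none of the existing
reservations there.

# Toy sketch of "backfill between reservations". Datetimes are illustrative.
from datetime import datetime


def overlaps(a_start, a_end, b_start, b_end):
    return a_start < b_end and b_start < a_end


def can_backfill(candidate, reservations):
    """candidate and reservations are (start, end) datetime tuples."""
    return all(not overlaps(candidate[0], candidate[1], r[0], r[1])
               for r in reservations)


existing = [
    (datetime(2017, 4, 10, 9), datetime(2017, 4, 10, 17)),  # reserved block
    (datetime(2017, 4, 12, 0), datetime(2017, 4, 13, 0)),
]
candidate = (datetime(2017, 4, 11, 8), datetime(2017, 4, 11, 20))
print(can_backfill(candidate, existing))  # True: fits in the gap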

>> There are a couple of problems with putting this outside of Nova though.
>> The main issue is that pre-emptible/spot type instances can't be
>> accommodated within the on-demand cloud capacity.
>
>
> Correct. The reservation system needs complete control over a subset of
> resource providers to be used for these spot instances. It would be like a
> hotel reservation system being used for a motel where cars could simply pull
> up to a room with a vacant sign outside the door. The reservation system
> would never be able to work on accurate data unless some part of the motel's
> rooms were carved out for reservation system to use and cars to not pull up
> and take.

In order to make reservations, yes. However, preemptible instances are
a valid use-case without also assuming reservations (they just happen
to complement each other). If we want the system to be really useful
and flexible we should be considering leases and queuing, e.g.:

- Leases requiring a single VM or groups of VMs that must run in parallel.
- Best-effort leases, which will wait in a queue until resources
become available.
- Advance reservation leases, which must start at a specific time.
- Immediate leases, which must start right now, or not at all.

The above bullets are pulled from
http://haizea.cs.uchicago.edu/whatis.html (Haizea is a scheduling
framework that can plug into OpenNebula), and I believe these fit very
well with the scheduling needs of the majority of private & hybrid
clouds. It also has other notable features such as preemptible leases.

I remain perplexed by the fact that OpenStack, as the preeminent open
private cloud framework, still only deals in on-demand access as
though most cloud-deployments are infinite. Yet today users have to
keep polling the boot API until they get something: "not now... not
now... not now..." - no queuing, no fair-share, nothing. Users should
only ever see NoValidHost if they requested "an instance now or not at
all".

I do not mean to ignore the existence of Blazar here, but development
on that has only recently started up again and part of the challenge
for Blazar is that resource leases, even simple whole compute nodes,
don't seem to have ever been well supported in Nova.

-- 
Cheers,
~Blairo

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [scientific] Resource reservation requirements (Blazar) - Forum session

2017-04-06 Thread Masahito MUROI

Hi all,

I'm late to the discussion.

Some members of the Blazar team are interested in resource reservation from 
the NFV side. One use case we have is telecom operators who want to reserve 
instance slots for a specific time window because of an expected workload 
increase.


I think the current challenge for Blazar is how to satisfy both demands for 
resource reservation, the one from the scientific group and the one from NFV.


On 2017/04/05 2:21, Jay Pipes wrote:
> On 04/03/2017 06:07 PM, Blair Bethwaite wrote:
>> Hi Jay,
>>
>> On 4 April 2017 at 00:20, Jay Pipes  wrote:
>>> However, implementing the above in any useful fashion requires that
>>> Blazar
>>> be placed *above* Nova and essentially that the cloud operator 
turns off

>>> access to Nova's  POST /servers API call for regular users. Because
>>> if not,
>>> the information that Blazar acts upon can be simply circumvented by
>>> any user
>>> at any time.
>>
>> That's something of an oversimplification. A reservation system
>> outside of Nova could manipulate Nova host-aggregates to "cordon off"
>> infrastructure from on-demand access (I believe Blazar already uses
>> this approach), and it's not much of a jump to imagine operators being
>> able to twiddle the available reserved capacity in a finite cloud so
>> that reserved capacity can be offered to the subset of users/projects
>> that need (or perhaps have paid for) it.
>
> Sure, I'm following you up until here.
>
>> Such a reservation system would even be able to backfill capacity
>> between reservations. At the end of the reservation the system
>> cleans-up any remaining instances and preps for the next
>> reservation.
>
> By "backfill capacity between reservations", do you mean consume
> resources on the compute hosts that are "reserved" by this paying
> customer at some date in the future? i.e. Spot instances that can be
> killed off as necessary by the reservation system to free resources to
> meet its reservation schedule?
>
>> There are a couple of problems with putting this outside of Nova though.
>> The main issue is that pre-emptible/spot type instances can't be
>> accommodated within the on-demand cloud capacity.
>
> Correct. The reservation system needs complete control over a subset of
> resource providers to be used for these spot instances. It would be like
> a hotel reservation system being used for a motel where cars could
> simply pull up to a room with a vacant sign outside the door. The
> reservation system would never be able to work on accurate data unless
> some part of the motel's rooms were carved out for reservation system to
> use and cars to not pull up and take.
I agree the reservation system looks like a hotel reservation system, but 
Blazar provides something more like a block reservation. Operators define a 
pool used for future reservation requests. They then give an id (or a 
similar token) to a user when the user requests a reservation. The user 
creates their resources with that id, and the resources are placed inside 
the block reservation only if the user consumes the reservation within the 
specified time window.


Of course, as you mentioned above, regular users could create resources 
that violate the reservation assumptions. IIRC, however, the same situation 
can happen in other projects, for instance with Heat stacks.


What Blazar does is create/configure host aggregates (or other constructs) 
so that regular users' resources are scheduled outside of the block 
reservation. Alternatively, regular users can create their resources with a 
special flag so they land inside the block reservation, but operators can't 
guarantee that those resources remain until the users delete them, because 
Blazar may clean them up before the next reservation starts.
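
For illustration, a rough sketch of the aggregate-based cordoning mentioned
above, using python-novaclient. The endpoint, credentials, aggregate name,
metadata key and host names are assumptions, and Blazar's actual
implementation may differ.

# Rough sketch: carve hosts out of the general pool by placing them in a
# dedicated aggregate with metadata that only a matching flavor/filter can
# target (assuming the scheduler is configured to honour aggregate metadata).
from keystoneauth1.identity import v3
from keystoneauth1 import session
from novaclient import client

auth = v3.Password(auth_url='http://controller:5000/v3',  # assumed endpoint
                   username='admin', password='secret',
                   project_name='admin',
                   user_domain_id='default', project_domain_id='default')
nova = client.Client('2.1', session=session.Session(auth=auth))

# Create an aggregate representing the reserved block and tag it.
agg = nova.aggregates.create('blazar-reserved-block-1', None)
nova.aggregates.set_metadata(agg, {'reservation': 'block-1'})

# Move hosts into the reserved block so regular scheduling avoids them.
for host in ['compute-07', 'compute-08']:
    nova.aggregates.add_host(agg, host)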


>
>>  You could have the
>> reservation system implementing this feature, but that would then put
>> other scheduling constraints on the cloud in order to be effective
>> (e.g., there would need to be automation changing the size of the
>> on-demand capacity so that the maximum pre-emptible capacity was
>> always available). The other issue (admittedly minor, but still a
>> consideration) is that it's another service - personally I'd love to
>> see Nova support these advanced use-cases directly.
>
> Welcome to the world of microservices. :)
>
> -jay

best regards,
Masahito




___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators