Re: [Openstack-operators] [scientific] Resource reservation requirements (Blazar) - Forum session

2017-04-14 Thread Masahito MUROI

Hi scientific team,

As Jay mentioned in the previous mail, I drafted the instance 
reservation[1] for Blazar, and some of you have already added comments.


1. https://etherpad.openstack.org/p/new-instance-reservation

Please add your comments, concerns, and/or requests. That would help 
clarify what the draft is currently missing and what the instance 
reservation needs to include. Additionally, I think we would have a better 
discussion in the forum session if we build on the previous discussion.



best regards,
Masahito


On 2017/04/12 4:22, Jay Pipes wrote:

On 04/11/2017 02:08 PM, Pierre Riteau wrote:

On 4 Apr 2017, at 22:23, Jay Pipes <jaypi...@gmail.com> wrote:

On 04/04/2017 02:48 PM, Tim Bell wrote:

Some combination of spot/OPIE


What is OPIE?


Maybe I missed a message: I didn’t see any reply to Jay’s question about
OPIE.


Thanks!


OPIE is the OpenStack Preemptible Instances
Extension: https://github.com/indigo-dc/opie
I am sure others on this list can provide more information.


Got it.


I think running OPIE instances inside Blazar reservations would be
doable without many changes to the implementation.
We’ve talked about this idea several times, this forum session would be
an ideal place to draw up an implementation plan.


I just looked through the OPIE source code. One thing I'm wondering is
why the code for killing off pre-emptible instances is done in the
filter_scheduler module.

Why not have a separate service that simply responds to a NoValidHost
exception raised from the scheduler by terminating one or more instances
that would have allowed the original request to land on a host?

Right here is where OPIE goes and terminates pre-emptible instances:

https://github.com/indigo-dc/opie/blob/master/opie/scheduler/filter_scheduler.py#L92-L100


However, that code should actually be run when line 90 raises NoValidHost:

https://github.com/indigo-dc/opie/blob/master/opie/scheduler/filter_scheduler.py#L90


There would be no need at all for "detecting overcommit" here:

https://github.com/indigo-dc/opie/blob/master/opie/scheduler/filter_scheduler.py#L96


Simply detect a NoValidHost being returned to the conductor from the
scheduler, examine if there are pre-emptible instances currently running
that could be terminated and terminate them, and re-run the original
call to select_destinations() (the scheduler call) just like a Retry
operation normally does.

There'd be no need whatsoever for any changes to the scheduler at all.
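For what it's worth, the retry loop described above could be sketched roughly as follows. Everything here (the function names, the callable interfaces) is a hypothetical stand-in, not actual Nova or OPIE code:

```python
class NoValidHost(Exception):
    """Raised when the scheduler finds no host for a request."""


def schedule_with_preemption(select_destinations, list_preemptible,
                             terminate, request, max_attempts=3):
    """Retry a scheduling request, evicting pre-emptible instances on failure.

    select_destinations(request) -> host, raises NoValidHost on failure
    list_preemptible()           -> instance ids, ordered by eviction preference
    terminate(instance_id)       -> frees that instance's resources
    """
    for _ in range(max_attempts):
        try:
            # The normal scheduler call; nothing in it knows about preemption.
            return select_destinations(request)
        except NoValidHost:
            victims = list_preemptible()
            if not victims:
                raise  # nothing left to evict; surface the failure
            terminate(victims[0])  # free capacity, then retry the request
    raise NoValidHost("no capacity even after evicting pre-emptible instances")
```

The point of the sketch is that select_destinations() stays completely untouched; only the caller (conceptually, the conductor) knows that pre-emptible instances exist.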


and Blazar would seem doable as long as the resource provider
reserves capacity appropriately (i.e. spot resources>>blazar
committed along with no non-spot requests for the same aggregate).
Is this feasible?


No. :)

As mentioned in previous emails and on the etherpad here:

https://etherpad.openstack.org/p/new-instance-reservation

I am firmly against having the resource tracker or the placement API
represent inventory or allocations with a temporal aspect to them (i.e.
allocations in the future).

A separate system (hopefully Blazar) is needed to manage the time-based
associations to inventories of resources over a period in the future.

Best,
-jay


I'm not sure how the above is different from the constraints I mention
below about having separate sets of resource providers for preemptible
instances versus non-preemptible instances?

Best,
-jay


Tim

On 04.04.17, 19:21, "Jay Pipes" <jaypi...@gmail.com> wrote:

   On 04/03/2017 06:07 PM, Blair Bethwaite wrote:
   > Hi Jay,
   >
   > On 4 April 2017 at 00:20, Jay Pipes <jaypi...@gmail.com> wrote:
   >> However, implementing the above in any useful fashion requires
that Blazar
   >> be placed *above* Nova and essentially that the cloud operator
turns off
   >> access to Nova's  POST /servers API call for regular users.
Because if not,
   >> the information that Blazar acts upon can be simply
circumvented by any user
   >> at any time.
   >
   > That's something of an oversimplification. A reservation system
   > outside of Nova could manipulate Nova host-aggregates to "cordon
off"
   > infrastructure from on-demand access (I believe Blazar already
uses
   > this approach), and it's not much of a jump to imagine operators
being
   > able to twiddle the available reserved capacity in a finite
cloud so
   > that reserved capacity can be offered to the subset of
users/projects
   > that need (or perhaps have paid for) it.

   Sure, I'm following you up until here.

   > Such a reservation system would even be able to backfill capacity
   > between reservations. At the end of the reservation the system
   > cleans-up any remaining instances and preps for the next
   > reservation.

   By "backfill capacity between reservations", do you mean consume
   resources on the compute hosts that are "reserved" by this paying
   customer at some date in the future? i.e. Spot instances that can be
   killed off as necessary by the reservation system to free
resources to
  

Re: [Openstack-operators] [scientific] Resource reservation requirements (Blazar) - Forum session

2017-04-11 Thread Jay Pipes

On 04/11/2017 02:08 PM, Pierre Riteau wrote:

On 4 Apr 2017, at 22:23, Jay Pipes <jaypi...@gmail.com> wrote:

On 04/04/2017 02:48 PM, Tim Bell wrote:

Some combination of spot/OPIE


What is OPIE?


Maybe I missed a message: I didn’t see any reply to Jay’s question about
OPIE.


Thanks!


OPIE is the OpenStack Preemptible Instances
Extension: https://github.com/indigo-dc/opie
I am sure others on this list can provide more information.


Got it.


I think running OPIE instances inside Blazar reservations would be
doable without many changes to the implementation.
We’ve talked about this idea several times, this forum session would be
an ideal place to draw up an implementation plan.


I just looked through the OPIE source code. One thing I'm wondering is 
why the code for killing off pre-emptible instances is done in the 
filter_scheduler module.


Why not have a separate service that simply responds to a NoValidHost 
exception raised from the scheduler by terminating one or more instances 
that would have allowed the original request to land on a host?


Right here is where OPIE goes and terminates pre-emptible instances:

https://github.com/indigo-dc/opie/blob/master/opie/scheduler/filter_scheduler.py#L92-L100

However, that code should actually be run when line 90 raises NoValidHost:

https://github.com/indigo-dc/opie/blob/master/opie/scheduler/filter_scheduler.py#L90

There would be no need at all for "detecting overcommit" here:

https://github.com/indigo-dc/opie/blob/master/opie/scheduler/filter_scheduler.py#L96

Simply detect a NoValidHost being returned to the conductor from the 
scheduler, examine if there are pre-emptible instances currently running 
that could be terminated and terminate them, and re-run the original 
call to select_destinations() (the scheduler call) just like a Retry 
operation normally does.


There'd be no need whatsoever for any changes to the scheduler at all.



and Blazar would seem doable as long as the resource provider
reserves capacity appropriately (i.e. spot resources>>blazar
committed along with no non-spot requests for the same aggregate).
Is this feasible?


No. :)

As mentioned in previous emails and on the etherpad here:

https://etherpad.openstack.org/p/new-instance-reservation

I am firmly against having the resource tracker or the placement API 
represent inventory or allocations with a temporal aspect to them (i.e. 
allocations in the future).


A separate system (hopefully Blazar) is needed to manage the time-based 
associations to inventories of resources over a period in the future.


Best,
-jay


I'm not sure how the above is different from the constraints I mention
below about having separate sets of resource providers for preemptible
instances versus non-preemptible instances?

Best,
-jay


Tim

On 04.04.17, 19:21, "Jay Pipes" <jaypi...@gmail.com> wrote:

   On 04/03/2017 06:07 PM, Blair Bethwaite wrote:
   > Hi Jay,
   >
   > On 4 April 2017 at 00:20, Jay Pipes <jaypi...@gmail.com> wrote:
   >> However, implementing the above in any useful fashion requires
that Blazar
   >> be placed *above* Nova and essentially that the cloud operator
turns off
   >> access to Nova's  POST /servers API call for regular users.
Because if not,
   >> the information that Blazar acts upon can be simply
circumvented by any user
   >> at any time.
   >
   > That's something of an oversimplification. A reservation system
   > outside of Nova could manipulate Nova host-aggregates to "cordon
off"
   > infrastructure from on-demand access (I believe Blazar already uses
   > this approach), and it's not much of a jump to imagine operators
being
   > able to twiddle the available reserved capacity in a finite cloud so
   > that reserved capacity can be offered to the subset of
users/projects
   > that need (or perhaps have paid for) it.

   Sure, I'm following you up until here.

   > Such a reservation system would even be able to backfill capacity
   > between reservations. At the end of the reservation the system
   > cleans-up any remaining instances and preps for the next
   > reservation.

   By "backfill capacity between reservations", do you mean consume
   resources on the compute hosts that are "reserved" by this paying
   customer at some date in the future? i.e. Spot instances that can be
   killed off as necessary by the reservation system to free resources to
   meet its reservation schedule?

   > There are a couple of problems with putting this outside of Nova though.
   > The main issue is that pre-emptible/spot type instances can't be
   > accommodated within the on-demand cloud capacity.

   Correct. The reservation system needs complete control over a
subset of
   resource providers to be used for these spot instances. It would
be like
   a hotel reservation system being used for a motel where cars could
   simply pull up to a room with a vacant sign outside the door. The
   reservation…

Re: [Openstack-operators] [scientific] Resource reservation requirements (Blazar) - Forum session

2017-04-11 Thread Pierre Riteau
> On 4 Apr 2017, at 22:23, Jay Pipes  wrote:
> 
> On 04/04/2017 02:48 PM, Tim Bell wrote:
>> Some combination of spot/OPIE
> 
> What is OPIE?

Maybe I missed a message: I didn’t see any reply to Jay’s question about OPIE.

OPIE is the OpenStack Preemptible Instances Extension: 
https://github.com/indigo-dc/opie 
I am sure others on this list can provide more information.

I think running OPIE instances inside Blazar reservations would be doable 
without many changes to the implementation.
We’ve talked about this idea several times, this forum session would be an 
ideal place to draw up an implementation plan.

>> and Blazar would seem doable as long as the resource provider
>> reserves capacity appropriately (i.e. spot resources>>blazar
>> committed along with no non-spot requests for the same aggregate).
>> Is this feasible?
> 
> I'm not sure how the above is different from the constraints I mention below 
> about having separate sets of resource providers for preemptible instances 
> versus non-preemptible instances?
> 
> Best,
> -jay
> 
>> Tim
>> 
>> On 04.04.17, 19:21, "Jay Pipes"  wrote:
>> 
>>On 04/03/2017 06:07 PM, Blair Bethwaite wrote:
>>> Hi Jay,
>>>
>>> On 4 April 2017 at 00:20, Jay Pipes  wrote:
>>>> However, implementing the above in any useful fashion requires that 
>> Blazar
>>>> be placed *above* Nova and essentially that the cloud operator turns 
>> off
>>>> access to Nova's  POST /servers API call for regular users. Because if 
>> not,
>>>> the information that Blazar acts upon can be simply circumvented by 
>> any user
>>>> at any time.
>>>
>>> That's something of an oversimplification. A reservation system
>>> outside of Nova could manipulate Nova host-aggregates to "cordon off"
>>> infrastructure from on-demand access (I believe Blazar already uses
>>> this approach), and it's not much of a jump to imagine operators being
>>> able to twiddle the available reserved capacity in a finite cloud so
>>> that reserved capacity can be offered to the subset of users/projects
>>> that need (or perhaps have paid for) it.
>> 
>>Sure, I'm following you up until here.
>> 
>>> Such a reservation system would even be able to backfill capacity
>>> between reservations. At the end of the reservation the system
>>> cleans-up any remaining instances and preps for the next
>>> reservation.
>> 
>>By "backfill capacity between reservations", do you mean consume
>>resources on the compute hosts that are "reserved" by this paying
>>customer at some date in the future? i.e. Spot instances that can be
>>killed off as necessary by the reservation system to free resources to
>>meet its reservation schedule?
>> 
>>> There are a couple of problems with putting this outside of Nova though.
>>> The main issue is that pre-emptible/spot type instances can't be
>>> accommodated within the on-demand cloud capacity.
>> 
>>Correct. The reservation system needs complete control over a subset of
>>resource providers to be used for these spot instances. It would be like
>>a hotel reservation system being used for a motel where cars could
>>simply pull up to a room with a vacant sign outside the door. The
>>reservation system would never be able to work on accurate data unless
>>some part of the motel's rooms were carved out for the reservation system
>>to use, so that cars could not simply pull up and take them.
>> 
>> >  You could have the
>>> reservation system implementing this feature, but that would then put
>>> other scheduling constraints on the cloud in order to be effective
>>> (e.g., there would need to be automation changing the size of the
>>> on-demand capacity so that the maximum pre-emptible capacity was
>>> always available). The other issue (admittedly minor, but still a
>>> consideration) is that it's another service - personally I'd love to
>>> see Nova support these advanced use-cases directly.
>> 
>>Welcome to the world of microservices. :)
>> 
>>-jay
>> 
>>___
>>OpenStack-operators mailing list
>>OpenStack-operators@lists.openstack.org
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>> 
>> 
> 



Re: [Openstack-operators] [scientific] Resource reservation requirements (Blazar) - Forum session

2017-04-06 Thread Blair Bethwaite
Hi Tim,

It does seem feasible, but imagine the aggregate juggling... it's
something of an indictment that from where we are today this seems
like a step forward. I'm not a fan of pushing that load onto operators
when it seems like what we actually need is fully-fledged workload
scheduling in Nova.

Cheers,

On 5 April 2017 at 04:48, Tim Bell  wrote:
> Some combination of spot/OPIE and Blazar would seem doable as long as the 
> resource provider reserves capacity appropriately (i.e. spot 
> resources>>blazar committed along with no non-spot requests for the same 
> aggregate).
>
> Is this feasible?
>
> Tim
>
> On 04.04.17, 19:21, "Jay Pipes"  wrote:
>
> On 04/03/2017 06:07 PM, Blair Bethwaite wrote:
> > Hi Jay,
> >
> > On 4 April 2017 at 00:20, Jay Pipes  wrote:
> >> However, implementing the above in any useful fashion requires that 
> Blazar
> >> be placed *above* Nova and essentially that the cloud operator turns 
> off
> >> access to Nova's  POST /servers API call for regular users. Because if 
> not,
> >> the information that Blazar acts upon can be simply circumvented by 
> any user
> >> at any time.
> >
> > That's something of an oversimplification. A reservation system
> > outside of Nova could manipulate Nova host-aggregates to "cordon off"
> > infrastructure from on-demand access (I believe Blazar already uses
> > this approach), and it's not much of a jump to imagine operators being
> > able to twiddle the available reserved capacity in a finite cloud so
> > that reserved capacity can be offered to the subset of users/projects
> > that need (or perhaps have paid for) it.
>
> Sure, I'm following you up until here.
>
> > Such a reservation system would even be able to backfill capacity
> > between reservations. At the end of the reservation the system
> > cleans-up any remaining instances and preps for the next
> > reservation.
>
> By "backfill capacity between reservations", do you mean consume
> resources on the compute hosts that are "reserved" by this paying
> customer at some date in the future? i.e. Spot instances that can be
> killed off as necessary by the reservation system to free resources to
> meet its reservation schedule?
>
> > There are a couple of problems with putting this outside of Nova though.
> > The main issue is that pre-emptible/spot type instances can't be
> > accommodated within the on-demand cloud capacity.
>
> Correct. The reservation system needs complete control over a subset of
> resource providers to be used for these spot instances. It would be like
> a hotel reservation system being used for a motel where cars could
> simply pull up to a room with a vacant sign outside the door. The
> reservation system would never be able to work on accurate data unless
> some part of the motel's rooms were carved out for reservation system to
> use and cars to not pull up and take.
>
>  >  You could have the
> > reservation system implementing this feature, but that would then put
> > other scheduling constraints on the cloud in order to be effective
> > (e.g., there would need to be automation changing the size of the
> > on-demand capacity so that the maximum pre-emptible capacity was
> > always available). The other issue (admittedly minor, but still a
> > consideration) is that it's another service - personally I'd love to
> > see Nova support these advanced use-cases directly.
>
> Welcome to the world of microservices. :)
>
> -jay
>



-- 
Cheers,
~Blairo



Re: [Openstack-operators] [scientific] Resource reservation requirements (Blazar) - Forum session

2017-04-06 Thread Blair Bethwaite
Hi Jay,

On 5 April 2017 at 03:21, Jay Pipes  wrote:
> On 04/03/2017 06:07 PM, Blair Bethwaite wrote:
>> That's something of an oversimplification. A reservation system
>> outside of Nova could manipulate Nova host-aggregates to "cordon off"
>> infrastructure from on-demand access (I believe Blazar already uses
>> this approach), and it's not much of a jump to imagine operators being
>> able to twiddle the available reserved capacity in a finite cloud so
>> that reserved capacity can be offered to the subset of users/projects
>> that need (or perhaps have paid for) it.
>
>
> Sure, I'm following you up until here.
>
>> Such a reservation system would even be able to backfill capacity
>> between reservations. At the end of the reservation the system
>> cleans-up any remaining instances and preps for the next
>> reservation.
>
>
> By "backfill capacity between reservations", do you mean consume resources
> on the compute hosts that are "reserved" by this paying customer at some
> date in the future? i.e. Spot instances that can be killed off as necessary
> by the reservation system to free resources to meet its reservation
> schedule?

That is one possible use-case, but it could also backfill with other
reservations that do not overlap. This is a common feature of HPC job
schedulers that have to deal with the competing needs of large
parallel jobs (single users with temporal workload constraints) and
many small jobs (many users with throughput needs).
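The non-overlap test that backfilling rests on is just interval arithmetic; a minimal illustration (not Blazar or any real scheduler's code), using half-open time windows:

```python
def can_backfill(existing, start, end):
    """Return True if the half-open window [start, end) does not overlap
    any of the existing half-open reservation windows on this resource."""
    return all(end <= s or start >= e for s, e in existing)
```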

>> There are a couple of problems with putting this outside of Nova though.
>> The main issue is that pre-emptible/spot type instances can't be
>> accommodated within the on-demand cloud capacity.
>
>
> Correct. The reservation system needs complete control over a subset of
> resource providers to be used for these spot instances. It would be like a
> hotel reservation system being used for a motel where cars could simply pull
> up to a room with a vacant sign outside the door. The reservation system
> would never be able to work on accurate data unless some part of the motel's
> rooms were carved out for reservation system to use and cars to not pull up
> and take.

In order to make reservations, yes. However, preemptible instances are
a valid use-case without also assuming reservations (they just happen
to complement each other). If we want the system to be really useful
and flexible we should be considering leases and queuing, e.g.:

- Leases requiring a single VM or groups of VMs that must run in parallel.
- Best-effort leases, which will wait in a queue until resources
become available.
- Advance reservation leases, which must start at a specific time.
- Immediate leases, which must start right now, or not at all.

The above bullets are pulled from
http://haizea.cs.uchicago.edu/whatis.html (Haizea is a scheduling
framework that can plug into OpenNebula), and I believe these fit very
well with the scheduling needs of the majority of private & hybrid
clouds. It also has other notable features such as preemptible leases.
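As a rough illustration of that taxonomy (a toy sketch, not the Haizea or Blazar API; all names here are made up), immediate and best-effort leases differ only in what happens when capacity is short:

```python
from collections import deque
from enum import Enum


class LeaseType(Enum):
    BEST_EFFORT = "best-effort"      # waits in a queue until resources free up
    ADVANCE_RESERVATION = "advance"  # must start at a specific future time
    IMMEDIATE = "immediate"          # starts right now, or not at all


class LeaseManager:
    """Toy admission control over a fixed pool of VM slots."""

    def __init__(self, capacity):
        self.free = capacity
        self.queue = deque()  # pending best-effort leases (vm counts)

    def request(self, lease_type, vms):
        # Advance reservations would additionally need a time-indexed
        # calendar of future free capacity; this toy only handles "now".
        if self.free >= vms:
            self.free -= vms
            return "started"
        if lease_type is LeaseType.BEST_EFFORT:
            self.queue.append(vms)
            return "queued"
        return "rejected"  # immediate leases never wait

    def release(self, vms):
        """Return capacity and start queued best-effort leases that now fit."""
        self.free += vms
        while self.queue and self.queue[0] <= self.free:
            self.free -= self.queue.popleft()
```

The contrast with Nova's current behaviour is the "queued" branch: instead of NoValidHost and client-side polling, the request simply waits its turn.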

I remain perplexed by the fact that OpenStack, as the preeminent open
private cloud framework, still only deals in on-demand access as
though most cloud-deployments are infinite. Yet today users have to
keep polling the boot API until they get something: "not now... not
now... not now..." - no queuing, no fair-share, nothing. Users should
only ever see NoValidHost if they requested "an instance now or not at
all".

I do not mean to ignore the existence of Blazar here, but development
on that has only recently started up again and part of the challenge
for Blazar is that resource leases, even simple whole compute nodes,
don't seem to have ever been well supported in Nova.

-- 
Cheers,
~Blairo



Re: [Openstack-operators] [scientific] Resource reservation requirements (Blazar) - Forum session

2017-04-06 Thread Masahito MUROI

Hi all,

I'm late to the discussion.

Some members of the Blazar team have an interest in resource reservation 
from the NFV side. One use case is that telecom operators want to reserve 
instance slots for a specific time window because of expected workload 
increases.


I think the challenge for Blazar today is how to satisfy both demands for 
resource reservation: one from the scientific group and another from NFV.


On 2017/04/05 2:21, Jay Pipes wrote:
> On 04/03/2017 06:07 PM, Blair Bethwaite wrote:
>> Hi Jay,
>>
>> On 4 April 2017 at 00:20, Jay Pipes  wrote:
>>> However, implementing the above in any useful fashion requires that
>>> Blazar
>>> be placed *above* Nova and essentially that the cloud operator 
turns off

>>> access to Nova's  POST /servers API call for regular users. Because
>>> if not,
>>> the information that Blazar acts upon can be simply circumvented by
>>> any user
>>> at any time.
>>
>> That's something of an oversimplification. A reservation system
>> outside of Nova could manipulate Nova host-aggregates to "cordon off"
>> infrastructure from on-demand access (I believe Blazar already uses
>> this approach), and it's not much of a jump to imagine operators being
>> able to twiddle the available reserved capacity in a finite cloud so
>> that reserved capacity can be offered to the subset of users/projects
>> that need (or perhaps have paid for) it.
>
> Sure, I'm following you up until here.
>
>> Such a reservation system would even be able to backfill capacity
>> between reservations. At the end of the reservation the system
>> cleans-up any remaining instances and preps for the next
>> reservation.
>
> By "backfill capacity between reservations", do you mean consume
> resources on the compute hosts that are "reserved" by this paying
> customer at some date in the future? i.e. Spot instances that can be
> killed off as necessary by the reservation system to free resources to
> meet its reservation schedule?
>
>> There are a couple of problems with putting this outside of Nova though.
>> The main issue is that pre-emptible/spot type instances can't be
>> accommodated within the on-demand cloud capacity.
>
> Correct. The reservation system needs complete control over a subset of
> resource providers to be used for these spot instances. It would be like
> a hotel reservation system being used for a motel where cars could
> simply pull up to a room with a vacant sign outside the door. The
> reservation system would never be able to work on accurate data unless
> some part of the motel's rooms were carved out for reservation system to
> use and cars to not pull up and take.
I agree the reservation system looks like a hotel reservation system, but 
Blazar provides something more like a block reservation. Operators define 
a pool used for future reservation requests. When a user requests a 
reservation, they are given an id or similar token. The user then creates 
their resources with that id, and the resources are placed inside the 
block reservation only if the user consumes the reservation within the 
specified time window.
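In outline, that flow (the operator carves hosts out of the on-demand pool, a user is handed a reservation id, and can only consume it inside the agreed window) might look like the following toy sketch; none of this is Blazar's actual API:

```python
import uuid


class ReservationPool:
    """Toy model of a Blazar-style block reservation pool."""

    def __init__(self, hosts):
        self.free_hosts = set(hosts)  # capacity carved out of the on-demand cloud
        self.reservations = {}        # reservation id -> (host, start, end)

    def reserve(self, start, end):
        """Operator side: claim a host for a future window, return an id."""
        if not self.free_hosts:
            raise RuntimeError("reservation pool exhausted")
        host = self.free_hosts.pop()
        rid = str(uuid.uuid4())
        self.reservations[rid] = (host, start, end)
        return rid                    # handed to the requesting user

    def consume(self, rid, now):
        """User side: place a resource against the reservation id."""
        host, start, end = self.reservations[rid]
        if not (start <= now < end):
            raise RuntimeError("outside the reserved time window")
        return host                   # where the user's instance would land
```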


Of course, as you mentioned above, regular users could create resources 
that violate the reservation assumptions. IIRC, however, the same 
situation can happen in other projects, for instance with Heat stacks.


What Blazar does is create/configure aggregates or other mechanisms that 
steer regular users' resources to be scheduled outside of the block 
reservation. Alternatively, regular users can create their resources with 
a special flag so that they are placed inside the block reservation, but 
operators can't guarantee those resources remain until the users delete 
them, because Blazar may clean them up before another reservation starts.


>
>>  You could have the
>> reservation system implementing this feature, but that would then put
>> other scheduling constraints on the cloud in order to be effective
>> (e.g., there would need to be automation changing the size of the
>> on-demand capacity so that the maximum pre-emptible capacity was
>> always available). The other issue (admittedly minor, but still a
>> consideration) is that it's another service - personally I'd love to
>> see Nova support these advanced use-cases directly.
>
> Welcome to the world of microservices. :)
>
> -jay

best regards,
Masahito






Re: [Openstack-operators] [scientific] Resource reservation requirements (Blazar) - Forum session

2017-04-04 Thread Jay Pipes

On 04/04/2017 02:48 PM, Tim Bell wrote:

Some combination of spot/OPIE


What is OPIE?


and Blazar would seem doable as long as the resource provider
reserves capacity appropriately (i.e. spot resources>>blazar
committed along with no non-spot requests for the same aggregate).
Is this feasible?


I'm not sure how the above is different from the constraints I mention 
below about having separate sets of resource providers for preemptible 
instances versus non-preemptible instances?


Best,
-jay


Tim

On 04.04.17, 19:21, "Jay Pipes"  wrote:

On 04/03/2017 06:07 PM, Blair Bethwaite wrote:
> Hi Jay,
>
> On 4 April 2017 at 00:20, Jay Pipes  wrote:
>> However, implementing the above in any useful fashion requires that 
Blazar
>> be placed *above* Nova and essentially that the cloud operator turns off
>> access to Nova's  POST /servers API call for regular users. Because if 
not,
>> the information that Blazar acts upon can be simply circumvented by any 
user
>> at any time.
>
> That's something of an oversimplification. A reservation system
> outside of Nova could manipulate Nova host-aggregates to "cordon off"
> infrastructure from on-demand access (I believe Blazar already uses
> this approach), and it's not much of a jump to imagine operators being
> able to twiddle the available reserved capacity in a finite cloud so
> that reserved capacity can be offered to the subset of users/projects
> that need (or perhaps have paid for) it.

Sure, I'm following you up until here.

> Such a reservation system would even be able to backfill capacity
> between reservations. At the end of the reservation the system
> cleans-up any remaining instances and preps for the next
> reservation.

By "backfill capacity between reservations", do you mean consume
resources on the compute hosts that are "reserved" by this paying
customer at some date in the future? i.e. Spot instances that can be
killed off as necessary by the reservation system to free resources to
meet its reservation schedule?

> There are a couple of problems with putting this outside of Nova though.
> The main issue is that pre-emptible/spot type instances can't be
> accommodated within the on-demand cloud capacity.

Correct. The reservation system needs complete control over a subset of
resource providers to be used for these spot instances. It would be like
a hotel reservation system being used for a motel where cars could
simply pull up to a room with a vacant sign outside the door. The
reservation system would never be able to work on accurate data unless
some part of the motel's rooms were carved out for the reservation system 
to use, so that cars could not simply pull up and take them.

 >  You could have the
> reservation system implementing this feature, but that would then put
> other scheduling constraints on the cloud in order to be effective
> (e.g., there would need to be automation changing the size of the
> on-demand capacity so that the maximum pre-emptible capacity was
> always available). The other issue (admittedly minor, but still a
> consideration) is that it's another service - personally I'd love to
> see Nova support these advanced use-cases directly.

Welcome to the world of microservices. :)

-jay







Re: [Openstack-operators] [scientific] Resource reservation requirements (Blazar) - Forum session

2017-04-04 Thread Tim Bell
Some combination of spot/OPIE and Blazar would seem doable as long as the 
resource provider reserves capacity appropriately (i.e. spot resources>>blazar 
committed along with no non-spot requests for the same aggregate).

Is this feasible?

Tim

On 04.04.17, 19:21, "Jay Pipes"  wrote:

On 04/03/2017 06:07 PM, Blair Bethwaite wrote:
> Hi Jay,
>
> On 4 April 2017 at 00:20, Jay Pipes  wrote:
>> However, implementing the above in any useful fashion requires that 
Blazar
>> be placed *above* Nova and essentially that the cloud operator turns off
>> access to Nova's  POST /servers API call for regular users. Because if 
not,
>> the information that Blazar acts upon can be simply circumvented by any 
user
>> at any time.
>
> That's something of an oversimplification. A reservation system
> outside of Nova could manipulate Nova host-aggregates to "cordon off"
> infrastructure from on-demand access (I believe Blazar already uses
> this approach), and it's not much of a jump to imagine operators being
> able to twiddle the available reserved capacity in a finite cloud so
> that reserved capacity can be offered to the subset of users/projects
> that need (or perhaps have paid for) it.

Sure, I'm following you up until here.

> Such a reservation system would even be able to backfill capacity
> between reservations. At the end of the reservation the system
> cleans-up any remaining instances and preps for the next
> reservation.

By "backfill capacity between reservations", do you mean consume 
resources on the compute hosts that are "reserved" by this paying 
customer at some date in the future? i.e. Spot instances that can be 
killed off as necessary by the reservation system to free resources to 
meet its reservation schedule?
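One way to read "backfill capacity between reservations" is as interval arithmetic over a host's reservation calendar. A minimal sketch of that reading (the function name and tuple format here are invented for illustration; Blazar's actual lease model is richer):

```python
# Given a host's reservations as (start, end) tuples, find the free
# gaps inside a window where backfill / pre-emptible work could run.
# Backfill jobs must vacate (or be killed) before the next reservation.

def free_gaps(reservations, window_start, window_end):
    gaps, cursor = [], window_start
    for start, end in sorted(reservations):
        if start > cursor:
            gaps.append((cursor, min(start, window_end)))
        cursor = max(cursor, end)
    if cursor < window_end:
        gaps.append((cursor, window_end))
    return gaps

# Hours 0-24, with reservations at 4-8 and 14-18:
print(free_gaps([(4, 8), (14, 18)], 0, 24))
# → [(0, 4), (8, 14), (18, 24)]
```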

> There are a couple of problems with putting this outside of Nova, though.
> The main issue is that pre-emptible/spot type instances can't be
> accommodated within the on-demand cloud capacity.

Correct. The reservation system needs complete control over a subset of 
resource providers to be used for these spot instances. It would be like 
a hotel reservation system being used for a motel where cars could 
simply pull up to a room with a vacant sign outside the door. The 
reservation system would never be able to work on accurate data unless 
some of the motel's rooms were carved out for the reservation system to 
use, so that cars could not simply pull up and take them.
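The motel analogy can be made concrete with a toy capacity model (everything here -- Cloud, carve_out, the pool attributes -- is invented for illustration, not a Nova or Blazar API):

```python
# Toy model: hosts split into an on-demand pool and a carved-out
# reserved pool.  On-demand boots can only land on the on-demand pool,
# so the reservation system's view of its own pool can never be
# invalidated by ordinary users.

class Cloud:
    def __init__(self, hosts):
        self.on_demand = set(hosts)   # rooms with a vacant sign outside
        self.reserved = set()         # rooms held back by the front desk

    def carve_out(self, host):
        """Take the vacant sign down: move a host out of on-demand reach."""
        self.on_demand.discard(host)
        self.reserved.add(host)

    def boot_on_demand(self):
        """A regular user's POST /servers only ever sees the on-demand pool."""
        if not self.on_demand:
            raise RuntimeError("NoValidHost")
        return self.on_demand.pop()

cloud = Cloud(["cn1", "cn2", "cn3"])
cloud.carve_out("cn3")         # hold cn3 back for Blazar-style leases
cloud.boot_on_demand()         # lands on cn1 or cn2
cloud.boot_on_demand()         # lands on the other one
print(sorted(cloud.reserved))  # → ['cn3'] -- still intact for the lease
```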

 >  You could have the
> reservation system implementing this feature, but that would then put
> other scheduling constraints on the cloud in order to be effective
> (e.g., there would need to be automation changing the size of the
> on-demand capacity so that the maximum pre-emptible capacity was
> always available). The other issue (admittedly minor, but still a
> consideration) is that it's another service - personally I'd love to
> see Nova support these advanced use-cases directly.

Welcome to the world of microservices. :)

-jay

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




Re: [Openstack-operators] [scientific] Resource reservation requirements (Blazar) - Forum session

2017-04-04 Thread Jay Pipes

On 04/03/2017 06:07 PM, Blair Bethwaite wrote:

Hi Jay,

On 4 April 2017 at 00:20, Jay Pipes  wrote:

However, implementing the above in any useful fashion requires that Blazar
be placed *above* Nova and essentially that the cloud operator turns off
access to Nova's  POST /servers API call for regular users. Because if not,
the information that Blazar acts upon can be simply circumvented by any user
at any time.


That's something of an oversimplification. A reservation system
outside of Nova could manipulate Nova host-aggregates to "cordon off"
infrastructure from on-demand access (I believe Blazar already uses
this approach), and it's not much of a jump to imagine operators being
able to twiddle the available reserved capacity in a finite cloud so
that reserved capacity can be offered to the subset of users/projects
that need (or perhaps have paid for) it.


Sure, I'm following you up until here.


Such a reservation system would even be able to backfill capacity
between reservations. At the end of the reservation the system
cleans-up any remaining instances and preps for the next
reservation.


By "backfill capacity between reservations", do you mean consume 
resources on the compute hosts that are "reserved" by this paying 
customer at some date in the future? i.e. Spot instances that can be 
killed off as necessary by the reservation system to free resources to 
meet its reservation schedule?



There are a couple of problems with putting this outside of Nova, though.
The main issue is that pre-emptible/spot type instances can't be
accommodated within the on-demand cloud capacity.


Correct. The reservation system needs complete control over a subset of 
resource providers to be used for these spot instances. It would be like 
a hotel reservation system being used for a motel where cars could 
simply pull up to a room with a vacant sign outside the door. The 
reservation system would never be able to work on accurate data unless 
some of the motel's rooms were carved out for the reservation system to 
use, so that cars could not simply pull up and take them.


>  You could have the

reservation system implementing this feature, but that would then put
other scheduling constraints on the cloud in order to be effective
(e.g., there would need to be automation changing the size of the
on-demand capacity so that the maximum pre-emptible capacity was
always available). The other issue (admittedly minor, but still a
consideration) is that it's another service - personally I'd love to
see Nova support these advanced use-cases directly.


Welcome to the world of microservices. :)

-jay

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [scientific] Resource reservation requirements (Blazar) - Forum session

2017-04-04 Thread Tomáš Vondra
Hi!
Did someone mention automation changing the spot instance capacity? I did an 
article in 2013 that proposes exactly that. The model forecasts the workload 
curve of the majority traffic, which is presumed to be interactive, and the 
rest may be used for batch traffic. The forecast used is SARIMA and is usable 
up to a few days in advance. Would anybody be interested in trying the forecast 
on data from their cloud?
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.671.7397&rep=rep1&type=pdf
Tomas Vondra, dept. of Cybernetics, CTU FEE
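The paper fits SARIMA; as a stand-in, a seasonal-naive forecast illustrates the shape of the idea in a few lines (all names and numbers below are invented for this sketch -- a real deployment would fit SARIMA, e.g. with statsmodels' SARIMAX, on its own metrics):

```python
# Toy stand-in for a SARIMA workload forecast: predict the interactive
# load by repeating the value from one season ago, then treat headroom
# above the forecast (minus a safety reserve) as batch/spot capacity.

def seasonal_naive(history, season, horizon):
    """Forecast `horizon` steps by repeating values from one season back."""
    return [history[-season + (i % season)] for i in range(horizon)]

def spot_capacity(total, forecast, reserve=2):
    """Capacity left for pre-emptible work after forecast + safety reserve."""
    return [max(0, total - f - reserve) for f in forecast]

hourly_load = [10, 8, 6, 6, 9, 15, 22, 30, 34, 33, 31, 30,
               29, 30, 31, 33, 32, 28, 24, 20, 17, 14, 12, 11]  # one day
fc = seasonal_naive(hourly_load, season=24, horizon=6)
print(fc)                     # → [10, 8, 6, 6, 9, 15]
print(spot_capacity(40, fc))  # → [28, 30, 32, 32, 29, 23]
```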

-Original Message-
From: Blair Bethwaite [mailto:blair.bethwa...@gmail.com] 
Sent: Tuesday, April 04, 2017 12:08 AM
To: Jay Pipes
Cc: openstack-oper.
Subject: Re: [Openstack-operators] [scientific] Resource reservation 
requirements (Blazar) - Forum session

Hi Jay,

On 4 April 2017 at 00:20, Jay Pipes  wrote:
> However, implementing the above in any useful fashion requires that 
> Blazar be placed *above* Nova and essentially that the cloud operator 
> turns off access to Nova's  POST /servers API call for regular users. 
> Because if not, the information that Blazar acts upon can be simply 
> circumvented by any user at any time.

That's something of an oversimplification. A reservation system outside of Nova 
could manipulate Nova host-aggregates to "cordon off"
infrastructure from on-demand access (I believe Blazar already uses this 
approach), and it's not much of a jump to imagine operators being able to 
twiddle the available reserved capacity in a finite cloud so that reserved 
capacity can be offered to the subset of users/projects that need (or perhaps 
have paid for) it. Such a reservation system would even be able to backfill 
capacity between reservations. At the end of the reservation the system 
cleans-up any remaining instances and preps for the next reservation.

There are a couple of problems with putting this outside of Nova, though.
The main issue is that pre-emptible/spot type instances can't be accommodated 
within the on-demand cloud capacity. You could have the reservation system 
implementing this feature, but that would then put other scheduling constraints 
on the cloud in order to be effective (e.g., there would need to be automation 
changing the size of the on-demand capacity so that the maximum pre-emptible 
capacity was always available). The other issue (admittedly minor, but still a
consideration) is that it's another service - personally I'd love to see Nova 
support these advanced use-cases directly.

--
Cheers,
~Blairo

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




Re: [Openstack-operators] [scientific] Resource reservation requirements (Blazar) - Forum session

2017-04-03 Thread Blair Bethwaite
Hi Jay,

On 4 April 2017 at 00:20, Jay Pipes  wrote:
> However, implementing the above in any useful fashion requires that Blazar
> be placed *above* Nova and essentially that the cloud operator turns off
> access to Nova's  POST /servers API call for regular users. Because if not,
> the information that Blazar acts upon can be simply circumvented by any user
> at any time.

That's something of an oversimplification. A reservation system
outside of Nova could manipulate Nova host-aggregates to "cordon off"
infrastructure from on-demand access (I believe Blazar already uses
this approach), and it's not much of a jump to imagine operators being
able to twiddle the available reserved capacity in a finite cloud so
that reserved capacity can be offered to the subset of users/projects
that need (or perhaps have paid for) it. Such a reservation system
would even be able to backfill capacity between reservations. At the
end of the reservation the system cleans-up any remaining instances
and preps for the next reservation.

There are a couple of problems with putting this outside of Nova, though.
The main issue is that pre-emptible/spot type instances can't be
accommodated within the on-demand cloud capacity. You could have the
reservation system implementing this feature, but that would then put
other scheduling constraints on the cloud in order to be effective
(e.g., there would need to be automation changing the size of the
on-demand capacity so that the maximum pre-emptible capacity was
always available). The other issue (admittedly minor, but still a
consideration) is that it's another service - personally I'd love to
see Nova support these advanced use-cases directly.
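Fitting pre-emptible instances inside on-demand capacity boils down to: when an on-demand request does not fit, terminate enough pre-emptible instances to make room. A greedy sketch of that selection step (names invented here; OPIE's real logic lives in its filter_scheduler and accounts for more than vCPUs):

```python
# Greedy pre-emption sketch: if an on-demand request does not fit on a
# host, pick pre-emptible instances (largest first) until the freed
# vCPUs make it fit.  Returns the instances to terminate, or None if
# even killing all of them would not help.

def select_victims(free_vcpus, preemptibles, requested):
    """preemptibles: dict of instance-id -> vCPUs currently consumed."""
    if requested <= free_vcpus:
        return []                       # fits already, nothing to kill
    victims = []
    for inst, vcpus in sorted(preemptibles.items(),
                              key=lambda kv: kv[1], reverse=True):
        victims.append(inst)
        free_vcpus += vcpus
        if requested <= free_vcpus:
            return victims
    return None                         # host cannot fit this request at all

# Host with 2 free vCPUs, three spot instances, on-demand request for 8:
print(select_victims(2, {"spot-a": 4, "spot-b": 2, "spot-c": 1}, 8))
# → ['spot-a', 'spot-b']
```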

-- 
Cheers,
~Blairo

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [scientific] Resource reservation requirements (Blazar) - Forum session

2017-04-03 Thread Joe Topjian
On Mon, Apr 3, 2017 at 8:20 AM, Jay Pipes  wrote:

> On 04/01/2017 08:32 PM, Joe Topjian wrote:
>
>> On Sat, Apr 1, 2017 at 5:21 PM, Matt Riedemann wrote:
>>
>> On 4/1/2017 8:36 AM, Blair Bethwaite wrote:
>>
>> Hi all,
>>
>> The below was suggested for a Forum session but we don't yet have a
>> submission or name to chair/moderate. I, for one, would certainly be
>> interested in providing input. Do we have any owners out there?
>>
>> Resource reservation requirements:
>> ==
>> The Blazar project [https://wiki.openstack.org/wiki/Blazar] has been
>> revived following Barcelona and will soon release a new version. Now
>> is a good time to get involved and share requirements with the
>> community. Our development priorities are described through Blueprints
>> on Launchpad: https://blueprints.launchpad.net/blazar
>>
>> In particular, support for pre-emptible instances could be combined
>> with resource reservation to maximize utilization on unreserved
>> resources.
>> +1
>>
>>
>> Regarding resource reservation, please see this older Nova spec
>> which is related:
>>
>> https://review.openstack.org/#/c/389216/
>>
>> And see the points that Jay Pipes makes in that review. Before
>> spending a lot of time reviving the project, I'd encourage people to
>> read and digest the points made in that review and if there are
>> responses or other use cases then let's discuss them *before*
>> bringing a service back from the dead and assume it will be
>> integrated into the other projects.
>>
>> This is appreciated. I'll describe the way I've seen Blazar used and I
>> believe it's quite different than the above slot reservation as well as
>> spot instance support, but please let me know if I am incorrect or if
>> there have been other discussions about this use-case elsewhere:
>>
>> A research group has a finite amount of specialized hardware and there
>> are more people wanting to use this hardware than what's currently
>> available. Let's use high performance GPUs as an example. The group is
>> OK with publishing the amount of hardware they have available (normally
>> this is hidden as best as possible). By doing this, a researcher can use
>> Blazar as sort of a community calendar, see that there are 3 GPU nodes
>> available for the week of April 3, and reserve them for that time period.
>>
>
> Yeah, I totally understand this use case.
>
> However, implementing the above in any useful fashion requires that Blazar
> be placed *above* Nova and essentially that the cloud operator turns off
> access to Nova's  POST /servers API call for regular users. Because if not,
> the information that Blazar acts upon can be simply circumvented by any
> user at any time.
>
> In other words, your "3 GPU nodes available for the week of April 3" can
> change at any time by a user that goes and launches instances that consume
> those 3 GPU nodes.
>
> If you have a certain type of OpenStack deployment that isn't multi-user
> and where the only thing that launches instances is an
> automation/orchestration tool (in other words, an NFV MANO system), the
> reservation concept works great -- because you don't have pesky users that
> can sidestep the system and actually launch instances that would impact
> reserved consumables.
>
> However, if you *do* have normal users of your cloud -- as most scientific
> deployments must have -- then I'm afraid the only way to make this work is
> to have users *only* use the Blazar API to reserve instances and
> essentially shut off the normal Nova POST /servers API.
>
> Does that make sense?
>

Ah, yes, indeed it does. Thanks, Jay.
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [scientific] Resource reservation requirements (Blazar) - Forum session

2017-04-03 Thread Jay Pipes

On 04/01/2017 08:32 PM, Joe Topjian wrote:

On Sat, Apr 1, 2017 at 5:21 PM, Matt Riedemann <mriede...@gmail.com> wrote:

On 4/1/2017 8:36 AM, Blair Bethwaite wrote:

Hi all,

The below was suggested for a Forum session but we don't yet have a
submission or name to chair/moderate. I, for one, would certainly be
interested in providing input. Do we have any owners out there?

Resource reservation requirements:
==
The Blazar project [https://wiki.openstack.org/wiki/Blazar] has been
revived following Barcelona and will soon release a new version. Now
is a good time to get involved and share requirements with the
community. Our development priorities are described through
Blueprints
on Launchpad: https://blueprints.launchpad.net/blazar


In particular, support for pre-emptible instances could be combined
with resource reservation to maximize utilization on unreserved
resources.
+1


Regarding resource reservation, please see this older Nova spec
which is related:

https://review.openstack.org/#/c/389216/


And see the points that Jay Pipes makes in that review. Before
spending a lot of time reviving the project, I'd encourage people to
read and digest the points made in that review and if there are
responses or other use cases then let's discuss them *before*
bringing a service back from the dead and assume it will be
integrated into the other projects.

This is appreciated. I'll describe the way I've seen Blazar used and I
believe it's quite different than the above slot reservation as well as
spot instance support, but please let me know if I am incorrect or if
there have been other discussions about this use-case elsewhere:

A research group has a finite amount of specialized hardware and there
are more people wanting to use this hardware than what's currently
available. Let's use high performance GPUs as an example. The group is
OK with publishing the amount of hardware they have available (normally
this is hidden as best as possible). By doing this, a researcher can use
Blazar as sort of a community calendar, see that there are 3 GPU nodes
available for the week of April 3, and reserve them for that time period.


Yeah, I totally understand this use case.

However, implementing the above in any useful fashion requires that 
Blazar be placed *above* Nova and essentially that the cloud operator 
turns off access to Nova's  POST /servers API call for regular users. 
Because if not, the information that Blazar acts upon can be simply 
circumvented by any user at any time.


In other words, your "3 GPU nodes available for the week of April 3" can 
change at any time by a user that goes and launches instances that 
consume those 3 GPU nodes.


If you have a certain type of OpenStack deployment that isn't multi-user 
and where the only thing that launches instances is an 
automation/orchestration tool (in other words, an NFV MANO system), the 
reservation concept works great -- because you don't have pesky users 
that can sidestep the system and actually launch instances that would 
impact reserved consumables.


However, if you *do* have normal users of your cloud -- as most 
scientific deployments must have -- then I'm afraid the only way to make 
this work is to have users *only* use the Blazar API to reserve 
instances and essentially shut off the normal Nova POST /servers API.
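One concrete way to shut off direct server creation is Nova's policy file. A sketch, assuming the Ocata-era rule name `os_compute_api:servers:create` (exact names vary by release), restricting POST /servers to admin so that only a privileged reservation service can create instances on users' behalf:

```json
{
    "os_compute_api:servers:create": "rule:admin_api"
}
```

With this in place, regular users would have to go through the reservation API; whether that trade-off is acceptable is exactly the question Jay raises above.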


Does that make sense?

Best,
-jay

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [scientific] Resource reservation requirements (Blazar) - Forum session

2017-04-02 Thread Blair Bethwaite
Hi all,

So I've proposed a Forum session to discuss some of these issues and
use-cases: http://forumtopics.openstack.org/cfp/details/124 - there
would seem to be value in getting Nova, Blazar and OPIE folks together
to talk about advanced scheduling use-cases. In particular it looks
like there is a need (and gap?) for Nova's scheduler to model and
support (either directly or through extension/pluggability) more than
just on-demand scheduling workflows.

However, I'm not an active dev and am somewhat ignorant on the state
of nova-scheduler and related components now, so would appreciate
someone who knows what they are talking about to help chair this
session and provide some updates to the proposal that include
background reading etc.

Cheers,

On 2 April 2017 at 09:21, Matt Riedemann  wrote:
> On 4/1/2017 8:36 AM, Blair Bethwaite wrote:
>>
>> Hi all,
>>
>> The below was suggested for a Forum session but we don't yet have a
>> submission or name to chair/moderate. I, for one, would certainly be
>> interested in providing input. Do we have any owners out there?
>>
>> Resource reservation requirements:
>> ==
>> The Blazar project [https://wiki.openstack.org/wiki/Blazar] has been
>> revived following Barcelona and will soon release a new version. Now
>> is a good time to get involved and share requirements with the
>> community. Our development priorities are described through Blueprints
>> on Launchpad: https://blueprints.launchpad.net/blazar
>>
>> In particular, support for pre-emptible instances could be combined
>> with resource reservation to maximize utilization on unreserved
>> resources.
>> +1
>
>
> Regarding resource reservation, please see this older Nova spec which is
> related:
>
> https://review.openstack.org/#/c/389216/
>
> And see the points that Jay Pipes makes in that review. Before spending a
> lot of time reviving the project, I'd encourage people to read and digest
> the points made in that review and if there are responses or other use cases
> then let's discuss them *before* bringing a service back from the dead and
> assume it will be integrated into the other projects.
>
>>
>> Is Blazar the right project to discuss reservations of finite
>> consumable resources like software licenses?
>>
>>   Blazar would like to ultimately support many different kinds of
>> resources (volumes, floating IPs, etc.). Software licenses can be
>> another type.
>> ==
>> (https://etherpad.openstack.org/p/BOS-UC-brainstorming-scientific-wg)
>>
>> Cheers,
>>
>
> John Garbutt also has a WIP backlog spec in Nova related to pre-emptible
> instances:
>
> https://review.openstack.org/#/c/438640/
>
> --
>
> Thanks,
>
> Matt
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



-- 
Cheers,
~Blairo

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [scientific] Resource reservation requirements (Blazar) - Forum session

2017-04-01 Thread Joe Topjian
On Sat, Apr 1, 2017 at 5:21 PM, Matt Riedemann  wrote:

> On 4/1/2017 8:36 AM, Blair Bethwaite wrote:
>
>> Hi all,
>>
>> The below was suggested for a Forum session but we don't yet have a
>> submission or name to chair/moderate. I, for one, would certainly be
>> interested in providing input. Do we have any owners out there?
>>
>> Resource reservation requirements:
>> ==
>> The Blazar project [https://wiki.openstack.org/wiki/Blazar] has been
>> revived following Barcelona and will soon release a new version. Now
>> is a good time to get involved and share requirements with the
>> community. Our development priorities are described through Blueprints
>> on Launchpad: https://blueprints.launchpad.net/blazar
>>
>> In particular, support for pre-emptible instances could be combined
>> with resource reservation to maximize utilization on unreserved
>> resources.
>> +1
>>
>
> Regarding resource reservation, please see this older Nova spec which is
> related:
>
> https://review.openstack.org/#/c/389216/
>
> And see the points that Jay Pipes makes in that review. Before spending a
> lot of time reviving the project, I'd encourage people to read and digest
> the points made in that review and if there are responses or other use cases
> then let's discuss them *before* bringing a service back from the dead and
> assume it will be integrated into the other projects.


This is appreciated. I'll describe the way I've seen Blazar used and I
believe it's quite different than the above slot reservation as well as
spot instance support, but please let me know if I am incorrect or if there
have been other discussions about this use-case elsewhere:

A research group has a finite amount of specialized hardware and there are
more people wanting to use this hardware than what's currently available.
Let's use high performance GPUs as an example. The group is OK with
publishing the amount of hardware they have available (normally this is
hidden as best as possible). By doing this, a researcher can use Blazar as
sort of a community calendar, see that there are 3 GPU nodes available for
the week of April 3, and reserve them for that time period.
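The "community calendar" use case is essentially counting overlapping reservations against a fixed pool. A toy sketch of the availability check (names, day numbering, and pool size invented for illustration; Blazar's lease API is far richer):

```python
# Toy availability check for "3 GPU nodes for the week of April 3":
# count nodes already reserved during a window, reserve if enough of
# the pool is left.

TOTAL_GPU_NODES = 5
reservations = []  # list of (start_day, end_day, node_count)

def nodes_free(start, end):
    """Free nodes during [start, end); counts every overlapping
    reservation, which is conservative but safe."""
    taken = sum(n for s, e, n in reservations if s < end and start < e)
    return TOTAL_GPU_NODES - taken

def reserve(start, end, count):
    if nodes_free(start, end) < count:
        return False
    reservations.append((start, end, count))
    return True

print(reserve(3, 10, 3))   # week of April 3, 3 nodes → True
print(reserve(5, 8, 2))    # overlapping request for 2 more → True
print(reserve(6, 7, 1))    # pool exhausted during the overlap → False
```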


>
>> Is Blazar the right project to discuss reservations of finite
>> consumable resources like software licenses?
>>
>>   Blazar would like to ultimately support many different kinds of
>> resources (volumes, floating IPs, etc.). Software licenses can be
>> another type.
>> ==
>> (https://etherpad.openstack.org/p/BOS-UC-brainstorming-scientific-wg)
>>
>> Cheers,
>>
>>
> John Garbutt also has a WIP backlog spec in Nova related to pre-emptible
> instances:
>
> https://review.openstack.org/#/c/438640/
>
> --
>
> Thanks,
>
> Matt
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [scientific] Resource reservation requirements (Blazar) - Forum session

2017-04-01 Thread Matt Riedemann

On 4/1/2017 8:36 AM, Blair Bethwaite wrote:

Hi all,

The below was suggested for a Forum session but we don't yet have a
submission or name to chair/moderate. I, for one, would certainly be
interested in providing input. Do we have any owners out there?

Resource reservation requirements:
==
The Blazar project [https://wiki.openstack.org/wiki/Blazar] has been
revived following Barcelona and will soon release a new version. Now
is a good time to get involved and share requirements with the
community. Our development priorities are described through Blueprints
on Launchpad: https://blueprints.launchpad.net/blazar

In particular, support for pre-emptible instances could be combined
with resource reservation to maximize utilization on unreserved
resources.
+1


Regarding resource reservation, please see this older Nova spec which is 
related:


https://review.openstack.org/#/c/389216/

And see the points that Jay Pipes makes in that review. Before spending 
a lot of time reviving the project, I'd encourage people to read and 
digest the points made in that review and if there are responses or other 
use cases then let's discuss them *before* bringing a service back from 
the dead and assume it will be integrated into the other projects.




Is Blazar the right project to discuss reservations of finite
consumable resources like software licenses?

  Blazar would like to ultimately support many different kinds of
resources (volumes, floating IPs, etc.). Software licenses can be
another type.
==
(https://etherpad.openstack.org/p/BOS-UC-brainstorming-scientific-wg)

Cheers,



John Garbutt also has a WIP backlog spec in Nova related to pre-emptible 
instances:


https://review.openstack.org/#/c/438640/

--

Thanks,

Matt

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [scientific] Resource reservation requirements (Blazar) - Forum session

2017-04-01 Thread Blair Bethwaite
Hi all,

The below was suggested for a Forum session but we don't yet have a
submission or name to chair/moderate. I, for one, would certainly be
interested in providing input. Do we have any owners out there?

Resource reservation requirements:
==
The Blazar project [https://wiki.openstack.org/wiki/Blazar] has been
revived following Barcelona and will soon release a new version. Now
is a good time to get involved and share requirements with the
community. Our development priorities are described through Blueprints
on Launchpad: https://blueprints.launchpad.net/blazar

In particular, support for pre-emptible instances could be combined
with resource reservation to maximize utilization on unreserved
resources.
+1

Is Blazar the right project to discuss reservations of finite
consumable resources like software licenses?

  Blazar would like to ultimately support many different kinds of
resources (volumes, floating IPs, etc.). Software licenses can be
another type.
==
(https://etherpad.openstack.org/p/BOS-UC-brainstorming-scientific-wg)

Cheers,

-- 
Blair Bethwaite
Senior HPC Consultant

Monash eResearch Centre
Monash University
Room G26, 15 Innovation Walk, Clayton Campus
Clayton VIC 3800
Australia
Mobile: 0439-545-002
Office: +61 3-9903-2800

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators