Hi Blair,
Sorry for the late reply. Could you elaborate on the proxy driver idea?
On Mon, May 21, 2018 at 4:05 PM, Blair Bethwaite wrote:
> (Please excuse the top-posting)
>
> The other possibility is that the Cyborg managed devices are plumbed in
> via IP in guest network space. Then
(Please excuse the top-posting)
The other possibility is that the Cyborg managed devices are plumbed in via
IP in guest network space. Then "attach" isn't so much a Nova problem as a
Neutron one - probably similar to Manila.
Has the Cyborg team considered a RESTful-API proxy driver, i.e.,
@Chris yes we are actively exploring this option :)
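As a rough illustration of the proxy-driver idea being discussed, here is a minimal standalone sketch: a thin driver whose methods simply forward operations to a remote accelerator service over REST, rather than touching local hardware. The endpoint path, driver method name, and session interface below are all invented for illustration (a real driver would use Cyborg's actual API and an authenticated session); a fake session stands in for HTTP so the sketch runs anywhere.

```python
class FakeSession:
    """Stand-in for an HTTP session; records calls instead of sending them."""
    def __init__(self):
        self.calls = []

    def post(self, url, json=None):
        self.calls.append(("POST", url, json))
        return {"status": "attached"}


class RestProxyDriver:
    """Forwards driver operations to a remote service instead of local HW."""
    def __init__(self, session, base_url):
        self.session = session
        self.base_url = base_url

    def attach(self, instance_id, accelerator_id):
        # Invented path; a real driver would use the service's actual API.
        url = f"{self.base_url}/accelerators/{accelerator_id}/attach"
        return self.session.post(url, json={"instance": instance_id})


session = FakeSession()
driver = RestProxyDriver(session, "http://cyborg.example:6666/v2")
result = driver.attach("inst-1", "acc-9")
print(result["status"])  # attached
```

The appeal of this shape is that the compute host needs no device-specific code at all; attach/detach become API calls that can outlive any single instance.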
On Mon, May 21, 2018 at 2:27 PM, Chris Friesen wrote:
> On 05/19/2018 05:58 PM, Blair Bethwaite wrote:
>
>> G'day Jay,
>>
>> On 20 May 2018 at 08:37, Jay Pipes wrote:
>>
>>> If it's not the VM
On 05/19/2018 05:58 PM, Blair Bethwaite wrote:
G'day Jay,
On 20 May 2018 at 08:37, Jay Pipes wrote:
> If it's not the VM or baremetal machine that is using the accelerator, what
> is?
It will be a VM or BM, but I don't think accelerators should be tied
to the life of a single instance if that isn't technically necessary
On 5/19/2018 6:30 AM, Jay Pipes wrote:
The solution is to implement the following two specs:
https://review.openstack.org/#/c/509042/
Bunch of upgrade / data migration landmines that we have to solve with
this, not the least of which is that people using the CachingScheduler
don't have
On 05/19/2018 03:19 PM, Blair Bethwaite wrote:
Relatively Cyborg-naive question here...
I thought Cyborg was going to support a hot-plug model. So I certainly
hope it is not the expectation that accelerators will be encoded into
Nova flavors? That will severely limit its usefulness.
Hi Blair!
> Relatively Cyborg-naive question here...
> I thought Cyborg was going to support a hot-plug model. So I certainly
> hope it is not the expectation that accelerators will be encoded into
> Nova flavors? That will severely limit its usefulness.
On 19 May 2018 at 23:30, Jay Pipes wrote:
On 05/18/2018 07:58 AM, Nadathur, Sundar wrote:
Agreed. Not sure how other projects handle it, but here's the situation
for Cyborg. A request may get scheduled on a compute node with no
intervention by Cyborg. So, the earliest check that can be made today is
in the selected compute node. A
2018-05-18 19:58 GMT+08:00 Nadathur, Sundar:
> Hi Matt,
> On 5/17/2018 3:18 PM, Matt Riedemann wrote:
>
> On 5/17/2018 3:36 PM, Nadathur, Sundar wrote:
>
> This applies only to the resources that Nova handles, IIUC, which does not
> handle accelerators. The generic
On 5/18/2018 5:06 AM, Sylvain Bauza wrote:
On Fri, May 18, 2018 at 1:59 PM, Nadathur, Sundar wrote:
Hi Matt,
On 5/17/2018 3:18 PM, Matt Riedemann wrote:
On 5/17/2018 3:36 PM, Nadathur, Sundar wrote:
This
On Fri, May 18, 2018 at 1:59 PM, Nadathur, Sundar wrote:
> Hi Matt,
> On 5/17/2018 3:18 PM, Matt Riedemann wrote:
>
> On 5/17/2018 3:36 PM, Nadathur, Sundar wrote:
>
> This applies only to the resources that Nova handles, IIUC, which does not
> handle accelerators. The
Hi Matt,
On 5/17/2018 3:18 PM, Matt Riedemann wrote:
On 5/17/2018 3:36 PM, Nadathur, Sundar wrote:
This applies only to the resources that Nova handles, IIUC, which
does not handle accelerators. The generic method that Alex talks
about is obviously preferable but, if that is not available in
On 5/17/2018 3:36 PM, Nadathur, Sundar wrote:
> This applies only to the resources that Nova handles, IIUC, which does
> not handle accelerators. The generic method that Alex talks about is
> obviously preferable but, if that is not available in Rocky, is the
> filter an option?
If nova isn't
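To make the "filter" option concrete: a standalone sketch of what such a scheduler filter could look like. The shape mirrors Nova's host-filter interface, where each filter implements a `host_passes(host_state, spec_obj)` predicate, but this version fakes the host state and request objects as plain dicts so the idea is runnable anywhere; the attribute names (`free_accelerators`, `accelerators`) are assumptions, not real Nova/Cyborg fields.

```python
class AcceleratorFilter:
    """Pass only hosts whose free accelerator count covers the request."""

    def host_passes(self, host_state, spec_obj):
        requested = spec_obj.get("accelerators", 0)      # assumed request key
        free = host_state.get("free_accelerators", 0)    # assumed host key
        return free >= requested


f = AcceleratorFilter()
hosts = [
    {"name": "node1", "free_accelerators": 2},
    {"name": "node2", "free_accelerators": 0},
]
request = {"accelerators": 1}
passing = [h["name"] for h in hosts if f.host_passes(h, request)]
print(passing)  # ['node1']
```

The catch raised in the thread still applies: a filter only sees capacity, so it cannot enforce a per-project quota unless it is also fed usage data from somewhere authoritative.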
Hi all,
Thanks for all the feedback. Please see below.
2018-05-17 1:24 GMT+08:00 Jay Pipes:
Placement already stores usage information for all allocations of
resources. There is already even a /usages API endpoint that you can
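Jay's point is that usage data already lives in Placement, so a quota check could be driven off a `/usages` query rather than new accounting. A small sketch of that consumption side follows: the payload shape (`{"usages": {<resource class>: <amount>}}`) follows the Placement API, but the `FPGA` limits and amounts are made-up example values.

```python
def exceeds_quota(usages_response, limits, requested):
    """Return True if granting `requested` would break any per-class limit."""
    used = usages_response["usages"]
    for rc, amount in requested.items():
        if used.get(rc, 0) + amount > limits.get(rc, float("inf")):
            return True
    return False


# e.g. what GET /usages?project_id=<id> might have returned:
response = {"usages": {"FPGA": 3, "VCPU": 8}}
limits = {"FPGA": 4}
print(exceeds_quota(response, limits, {"FPGA": 2}))  # True: 3 + 2 > 4
print(exceeds_quota(response, limits, {"FPGA": 1}))  # False
```

This keeps the check race-prone (usage can change between the query and the allocation), which is part of why the thread keeps returning to where in the flow the check should happen.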
2018-05-17 9:38 GMT+08:00 Alex Xu:
>
>
> 2018-05-17 1:24 GMT+08:00 Jay Pipes :
>
>> On 05/16/2018 01:01 PM, Nadathur, Sundar wrote:
>>
>>> Hi,
>>> The Cyborg quota spec [1] proposes to implement a quota (maximum
>>> usage) for accelerators on a
2018-05-17 1:24 GMT+08:00 Jay Pipes:
> On 05/16/2018 01:01 PM, Nadathur, Sundar wrote:
>
>> Hi,
>> The Cyborg quota spec [1] proposes to implement a quota (maximum
>> usage) for accelerators on a per-project basis, to prevent one project
>> (tenant) from over-using some
On 5/16/2018 12:24 PM, Jay Pipes wrote:
> Quota checks happen before Nova's scheduler gets involved, so having a
> scheduler filter handle quota usage checking is pretty much a non-starter.
For server resources yeah, for things like instances quota, CPU and RAM,
etc.
Nova does an up-front quota
On 05/16/2018 01:01 PM, Nadathur, Sundar wrote:
Hi,
The Cyborg quota spec [1] proposes to implement a quota (maximum
usage) for accelerators on a per-project basis, to prevent one project
(tenant) from over-using some resources and starving other tenants.
There are separate resource
Hi,
The Cyborg quota spec [1] proposes to implement a quota (maximum
usage) for accelerators on a per-project basis, to prevent one project
(tenant) from over-using some resources and starving other tenants.
There are separate resource classes for different accelerator types
(GPUs, FPGAs,
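The intent of the quota spec described above can be sketched in a few lines: each project gets limits per resource class (e.g. GPU, FPGA), and a request is rejected once it would push that project past its limit, so one tenant cannot starve the others. The class shape and limit values here are purely illustrative, not the spec's actual design.

```python
class ProjectQuotas:
    def __init__(self, limits):
        self.limits = limits   # {project: {resource_class: max}}
        self.used = {}         # {project: {resource_class: consumed}}

    def consume(self, project, rc, amount):
        """Reserve `amount` of `rc` for `project`; return False if over quota."""
        used = self.used.setdefault(project, {}).get(rc, 0)
        limit = self.limits.get(project, {}).get(rc, 0)
        if used + amount > limit:
            return False
        self.used[project][rc] = used + amount
        return True


q = ProjectQuotas({"tenant-a": {"FPGA": 2}, "tenant-b": {"FPGA": 2}})
print(q.consume("tenant-a", "FPGA", 2))  # True
print(q.consume("tenant-a", "FPGA", 1))  # False: tenant-a is at its limit
print(q.consume("tenant-b", "FPGA", 1))  # True: tenant-b is unaffected
```

The open question in the thread is not this arithmetic but who owns it: Cyborg, Nova's up-front quota machinery, or a unified limits service backed by Placement usage data.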