On Dec 5, 2013, at 8:11 PM, Fox, Kevin M wrote:

> I think the security issue can be handled by not actually giving the 
> underlying resource to the user in the first place.
> 
> So, for example, if I wanted a bare metal node's worth of resource for my own
> container hosting, I'd ask for a bare metal node and use a "blessed" image that
> contains docker+nova bits that would hook back to the cloud. I wouldn't be
> able to log in to it, but containers started on it would be able to access my
> tenant's networks. All access to it would have to be through Nova
> suballocations. The bare resource would count against my quotas, but nothing
> run under it would.
> 
        So this would be an extremely lightweight hypervisor alternative, then?

        It's interesting because "bare-metal-to-tenant" security issues are
        tricky to overcome.

        -k
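
A minimal sketch of how that accounting and access model might hang
together, in plain Python rather than real Nova APIs; the Quota and
BareMetalNode classes and the suballocate_container() call are purely
illustrative assumptions, not existing OpenStack interfaces:

    class Quota:
        """Tracks a tenant's usage. Only the bare node is counted;
        containers started on top of it are not."""
        def __init__(self, bare_metal_limit):
            self.bare_metal_limit = bare_metal_limit
            self.bare_metal_used = 0

        def reserve_bare_metal(self):
            if self.bare_metal_used >= self.bare_metal_limit:
                raise RuntimeError("bare metal quota exceeded")
            self.bare_metal_used += 1

    class BareMetalNode:
        """A node booted from a 'blessed' docker+nova image. The owner
        cannot log in; it only takes container requests via suballocation."""
        def __init__(self, tenant, tenant_networks):
            self.tenant = tenant
            self.tenant_networks = tenant_networks  # containers join these
            self.containers = []
            self.ssh_enabled = False  # no direct login, even for the owner

        def suballocate_container(self, requesting_tenant, image):
            # Only the owning tenant may schedule containers onto this node.
            if requesting_tenant != self.tenant:
                raise PermissionError("node is private to its owning tenant")
            container = {"image": image, "networks": self.tenant_networks}
            self.containers.append(container)
            return container

    quota = Quota(bare_metal_limit=2)
    quota.reserve_bare_metal()        # the bare node counts against quota
    node = BareMetalNode(tenant="kevin", tenant_networks=["net-a"])
    node.suballocate_container("kevin", image="docker:myapp")
    # the container did not touch the quota; only the node itself did
    print(quota.bare_metal_used, len(node.containers))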

> Come to think of it, this sounds somewhat similar to what is planned for 
> Neutron service VMs. They count against the user's quota on one level, but
> not all access is directly given to the user. Maybe some of the same 
> implementation bits could be used.
> 
> Thanks,
> Kevin
> ________________________________________
> From: Mark McLoughlin [[email protected]]
> Sent: Thursday, December 05, 2013 1:53 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Nova][TripleO] Nested resources
> 
> Hi Kevin,
> 
> On Mon, 2013-12-02 at 12:39 -0800, Fox, Kevin M wrote:
>> Hi all,
>> 
>> I just want to run a crazy idea up the flagpole. TripleO has the
>> concept of an undercloud and an overcloud. In starting to experiment with
>> Docker, I see a pattern starting to emerge.
>> 
>> * As a User, I may want to allocate a BareMetal node so that it is
>> entirely mine. I may want to run multiple VMs on it to reduce my own
>> cost. Now I have to manage the BareMetal nodes myself or nest
>> OpenStack into them.
>> * As a User, I may want to allocate a VM. I then want to run multiple
>> Docker containers on it to use it more efficiently. Now I have to
>> manage the VMs myself or nest OpenStack into them.
>> * As a User, I may want to allocate a BareMetal node so that it is
>> entirely mine. I then want to run multiple Docker containers on it to
>> use it more efficiently. Now I have to manage the BareMetal nodes
>> myself or nest OpenStack into them.
>> 
>> I think this can then be generalized to:
>> As a User, I would like to ask for resources of one type (One AZ?),
>> and be able to delegate resources back to Nova so that I can use Nova
>> to subdivide and give me access to my resources as a different type.
>> (As a different AZ?)
>> 
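To make that generalization concrete, here is a toy sketch of the
allocate/delegate/subdivide flow; the NestedNova class and every method
on it are hypothetical, not part of any existing Nova API:

    class NestedNova:
        """Toy model: a parent resource is handed back to 'Nova', which
        subdivides it and exposes the pieces as a different type."""

        def __init__(self):
            self.pools = {}  # pool name -> list of delegated parents

        def allocate(self, tenant, resource_type):
            # Normal allocation path: hand the tenant a raw resource.
            return {"tenant": tenant, "type": resource_type, "children": []}

        def delegate(self, resource, exposes_as, pool):
            # The tenant gives the resource back so it can be carved up.
            resource["exposes_as"] = exposes_as
            self.pools.setdefault(pool, []).append(resource)

        def allocate_from_pool(self, tenant, pool):
            # Children land only on parents this tenant delegated.
            for parent in self.pools.get(pool, []):
                if parent["tenant"] == tenant:
                    child = {"tenant": tenant, "type": parent["exposes_as"]}
                    parent["children"].append(child)
                    return child
            raise LookupError("no delegated capacity in pool %r" % pool)

    nova = NestedNova()
    node = nova.allocate("kevin", "baremetal")       # ask for one type...
    nova.delegate(node, exposes_as="container", pool="kevin-containers")
    child = nova.allocate_from_pool("kevin", "kevin-containers")
    print(child["type"], len(node["children"]))      # ...consume it as another
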
>> I think this could potentially cover some of the TripleO stuff without
>> needing an over/under cloud. For that use case, all the BareMetal
>> nodes could be added to Nova as such, allocated by the "services"
>> tenant as running a nested VM image type resource stack, and then made
>> available to all tenants. Sysadmins could then dynamically shift
>> resources between VM-providing nodes and BareMetal nodes as
>> needed.
>> 
>> This allows a user to allocate some raw resources as a group, then
>> schedule higher level services to run only in that group, all with the
>> existing api.
>> 
>> Just how crazy an idea is this?
> 
> FWIW, I don't think it's a crazy idea at all - indeed I mumbled
> something similar a few times in conversation with random people over
> the past few months :)
> 
> With the increasing interest in containers, it makes a tonne of sense -
> you provision a number of VMs and now you want to carve them up by
> allocating containers on them. Right now, you'd need to run your own
> instance of Nova for that ... which is far too heavyweight.
> 
> It is a little crazy in the sense that it's a tonne of work, though.
> There's not a whole lot of point in discussing it too much until someone
> shows signs of wanting to implement it :)
> 
> One problem is how the API would model this nesting; another is making
> the scheduler aware that some nodes are only available to the tenant
> which owns them; but maybe the biggest problem is the security model
> around allowing a node managed by an untrusted tenant to become a
> compute node.
> 
> Mark.
> 
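For the scheduler-awareness piece, something like a per-tenant node
ownership filter might be enough. The stand-in below only shows the
intended check; a real version would be written as a Nova scheduler
filter, and the owner_tenant_id / project_id keys are assumptions about
how ownership would be recorded:

    class TenantOwnedNodeFilter(object):
        """Pass a host only if it is shared, or owned by the requesting
        tenant."""

        def host_passes(self, host_state, filter_properties):
            owner = host_state.get("owner_tenant_id")   # None => shared host
            tenant = filter_properties.get("project_id")
            return owner is None or owner == tenant

    f = TenantOwnedNodeFilter()
    shared = {"owner_tenant_id": None}
    private = {"owner_tenant_id": "kevin"}
    request = {"project_id": "alice"}
    print(f.host_passes(shared, request))   # True: anyone can use shared hosts
    print(f.host_passes(private, request))  # False: owned by another tenant
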
_______________________________________________
OpenStack-dev mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
