On 04/11/2017 05:28 PM, Jay Faulkner wrote:

On Apr 11, 2017, at 12:54 AM, Nisha Agarwal <agarwalnisha1...@gmail.com> wrote:

Hi John,

With ironic I thought everything is "passed through" by default,
because there is no virtualization in the way. (I am possibly
incorrectly assuming no BIOS tricks to turn off or re-assign PCI
devices dynamically.)

Yes, with ironic everything is passed through by default.

So I am assuming this is purely a scheduling concern. If so, why are
the new custom resource classes not good enough? "ironic_blue" could
mean two GPUs and two 10Gb nics, "ironic_yellow" could mean one GPU
and one 1Gb nic, etc.
Or is there something else that needs addressing here? Trying to
describe what you get with each flavor to end users?
Yes, this is purely from a scheduling perspective.
Currently ironic works like this: we discover server attributes and populate them 
into the node object. These attributes are then used by the nova scheduler, via 
the ComputeCapabilities filter, to schedule the node. So this is automated on 
the ironic side: we do inspection of the node properties/attributes, the user 
creates a flavor of their choice, and the node which meets the user's needs is 
scheduled for ironic deploy.
With resource class names in place in ironic, we ask the user to do a manual 
step, i.e. create a resource class name based on the hardware attributes, and 
this needs to be done on a per-node basis. For this the user needs to know the 
server hardware properties in advance, and then assign the resource class name 
manually to the node(s).
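The manual step being described might look like the following (the node, resource class, and flavor names are hypothetical):

```shell
# The operator learns the hardware properties in advance, then tags
# each matching node by hand with a resource class:
openstack baremetal node set node-1 --resource-class baremetal.with-gpu

# A flavor then requests exactly one unit of that class. Custom classes
# are exposed to placement as CUSTOM_<NAME>, with dots and dashes
# normalized to underscores and letters upper-cased:
openstack flavor set bm-gpu \
    --property resources:CUSTOM_BAREMETAL_WITH_GPU=1
```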
Broadly speaking, if we want to support scheduling based on quantity for ironic 
nodes, there is no way we can do it through the current resource class 
structure (actually just a tag) in ironic. A user may want to schedule ironic 
nodes on different resources, and each resource should be a different resource 
class (IMO).

Are you needing to aggregate similar hardware in a different way to the above
resource class approach?
I guess no, but the above resource class approach takes away the automation on 
the ironic side, and the whole purpose of inspection is defeated.


I strongly challenge the assertion made here that inspection is only useful in 
scheduling contexts. There are users who simply want to know about their 
hardware, and read the results as posted to Swift. Inspection also handles 
discovery of new nodes when given basic information about them.

Also, ironic-inspector is useful for automatically defining resource classes on nodes, so I'm not sure that purpose is defeated either.

/me makes a note to provide a few examples of such an approach in the 
ironic-inspector docs

Not sure about OOB inspection though.


-
Jay Faulkner
OSIC

Regards
Nisha


On Mon, Apr 10, 2017 at 4:29 PM, John Garbutt <j...@johngarbutt.com> wrote:
On 10 April 2017 at 11:31,  <sfinu...@redhat.com> wrote:
On Mon, 2017-04-10 at 11:50 +0530, Nisha Agarwal wrote:
Hi team,

Please could you pour in your suggestions on the mail?

I raised a blueprint in Nova for this:
https://blueprints.launchpad.net/nova/+spec/pci-passthorugh-for-ironic
and two RFEs on the ironic side:
https://bugs.launchpad.net/ironic/+bug/1680780 and
https://bugs.launchpad.net/ironic/+bug/1681320 for the discussion topic.

If I understand you correctly, you want to be able to filter ironic
hosts by available PCI device, correct? Barring any possibility that
resource providers could do this for you yet, extending the nova ironic
driver to use the PCI passthrough filter sounds like the way to go.

With ironic I thought everything is "passed through" by default,
because there is no virtualization in the way. (I am possibly
incorrectly assuming no BIOS tricks to turn off or re-assign PCI
devices dynamically.)

So I am assuming this is purely a scheduling concern. If so, why are
the new custom resource classes not good enough? "ironic_blue" could
mean two GPUs and two 10Gb nics, "ironic_yellow" could mean one GPU
and one 1Gb nic, etc.

Or is there something else that needs addressing here? Trying to
describe what you get with each flavor to end users? Are you needing
to aggregate similar hardware in a different way to the above
resource class approach?

Thanks,
johnthetubaguy

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
The Secret Of Success is learning how to use pain and pleasure, instead
of having pain and pleasure use you. If you do that, you are in control
of your life. If you don't, life controls you.





