So we wouldn't bother ourselves with whether it's a network resource,
GPU resource, or whatever else? That seems more feasible than trying
to teach or create individual objects to use PCI passthrough, although
we'd miss out on some of the specifics, like configuring the device.
Perhaps that doesn't matter. My solution was written with the mindset
of supporting any custom thing; I can see people saying 'please
include support for x, y, z' when they could add it via the XML
themselves. Hooks are a good way to do that, you're right.
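
For example (just a sketch to show the shape of it, nothing
CloudStack-specific, and the log path is arbitrary), a hook lives at
/etc/libvirt/hooks/qemu and gets the guest name plus the full domain XML
on stdin for each lifecycle event, so host-side prep for a passthrough
device can hang off of it:

#!/usr/bin/env python
# Minimal sketch of a libvirt qemu hook (/etc/libvirt/hooks/qemu).
# libvirt invokes it as: qemu <guest_name> <operation> <sub_operation> -
# and writes the guest's full domain XML to stdin.
import sys

guest, operation = sys.argv[1], sys.argv[2]
domain_xml = sys.stdin.read()

if operation == "prepare":
    # Placeholder: host-side setup for a passthrough device (driver
    # unbind, logging, etc.) could go here before the guest starts.
    with open("/var/log/qemu-hook.log", "a") as log:
        log.write("preparing %s (%d bytes of XML)\n" % (guest, len(domain_xml)))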

On Tue, Jun 11, 2013 at 3:35 PM, Edison Su <edison...@citrix.com> wrote:
>
>
>> -----Original Message-----
>> From: Marcus Sorensen [mailto:shadow...@gmail.com]
>> Sent: Tuesday, June 11, 2013 12:10 PM
>> To: dev@cloudstack.apache.org
>> Cc: Ryousei Takano; Kelven Yang
>> Subject: Re: PCI-Passthrough with CloudStack
>>
>> What we need is some sort of plugin system for the libvirt guest agent,
>> where people can inject their own additions to the xml. So we pass the VM
>> parameters (including name, os, nics, volumes etc) to your plugin, and it
>> returns either nothing, or some xml. Or perhaps an object that defines
>> additional xml for various resources.
>>
>> Or maybe we just pass the final cloudstack-generated XML to your plugin;
>> the external plugin processes it and returns it, complete with whatever
>> modifications it wants, before cloudstack starts the VM. That would actually
>> be very simple to put in. Via the KVM host's agent.properties file we could
>> point to an external script. That script could be in whatever language, as 
>> long
>> as it's executable. It filters the XML and returns new XML which is used to
>> start the VM.
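>>
>> For example (a rough sketch of the idea only -- the property name, script
>> path and PCI address below are all made up), agent.properties could point
>> at the filter script:
>>
>> # hypothetical key in agent.properties
>> vm.xml.filter=/usr/share/cloudstack-agent/xml-filter.py
>>
>> and the script would just read the generated domain XML on stdin and write
>> the (possibly modified) XML back out on stdout:
>>
>> #!/usr/bin/env python
>> # Sketch of an external XML filter: read the cloudstack-generated domain
>> # XML, append extra devices, print the result for the agent to use.
>> import sys
>> import xml.etree.ElementTree as ET
>>
>> root = ET.fromstring(sys.stdin.read())
>> devices = root.find("devices")
>>
>> # Example modification: add a <hostdev> for a PCI passthrough device.
>> hostdev = ET.SubElement(devices, "hostdev",
>>                         {"mode": "subsystem", "type": "pci", "managed": "yes"})
>> source = ET.SubElement(hostdev, "source")
>> ET.SubElement(source, "address", {"domain": "0x0000", "bus": "0x24",
>>                                   "slot": "0x00", "function": "0x1"})
>>
>> sys.stdout.write(ET.tostring(root).decode("utf-8"))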
>
> If changing the VM's xml is enough, then how about using libvirt's hook system:
> http://www.libvirt.org/hooks.html
>
> I think the issue is how to let cloudstack create only one VM per KVM
> host, or a few VMs per host (based on the available PCI devices on the host).
> If we think PCI devices are a resource CloudStack should take care of
> during resource allocation, then we need a framework:
> 1. During host discovery, the host can report whatever resources it can
> detect to the mgt server. RAM/CPU freq/local storage are the resources
> currently supported by the kvm agent; here we may need to add PCI devices as
> another resource. For example, the KVM agent host returns a
> StartupAuxiliaryDevicesReportCmd along with the other
> StartupRoutingCmd/StartupStorage*Cmd etc. during startup.
> 2. There will be a listener on the mgt server which listens for
> StartupAuxiliaryDevicesReportCmd and records the available PCI devices into
> the DB, e.g. in a host_pci_device_ref table.
> 3. Extend FirstFitAllocator to take PCI devices into account as another
> resource during allocation. We also need to find a place to mark a PCI
> device as used in the host_pci_device_ref table, so a PCI device won't be
> allocated to more than one VM.
> 4. Have an API to create a customized compute offering; the offering can
> contain info about PCI devices, such as how many PCI devices should be
> plugged into a VM.
> 5. If the user chooses the above customized compute offering during VM
> deployment, then the allocator in step 3 will be triggered and will choose
> a KVM host that has enough PCI devices to fulfill the compute offering.
> 6. The start command the mgt server sends to the kvm host should contain
> the PCI devices allocated to this VM.
> 7. In the KVM agent code, change the VM's xml file accordingly based on
> that start command (rough sketch below).
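>
> Just to illustrate step 7 (a sketch only -- in Python here rather than the
> agent's Java, and the shape of the allocated-device info is made up),
> turning the allocated PCI addresses into <hostdev> entries would look
> roughly like:
>
> # Sketch: build <hostdev> XML snippets for the PCI addresses the mgt
> # server allocated to this VM.
> def hostdev_xml(pci_addresses):
>     entries = []
>     for dev in pci_addresses:  # e.g. {"domain": "0x0000", "bus": "0x24",
>                                #       "slot": "0x00", "function": "0x1"}
>         entries.append(
>             "<hostdev mode='subsystem' type='pci' managed='yes'>"
>             "<source><address domain='%(domain)s' bus='%(bus)s' "
>             "slot='%(slot)s' function='%(function)s'/></source>"
>             "</hostdev>" % dev)
>     return "".join(entries)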
>
> What do you think?
>
>>
>> On Tue, Jun 11, 2013 at 12:59 PM, Paul Angus <paul.an...@shapeblue.com>
>> wrote:
>> > We're working with 'a very large broadcasting company' who are using
>> > cavium cards for SSL offload in all of their hosts
>> >
>> > We need to add:
>> >
>> > <hostdev mode='subsystem' type='pci' managed='yes'>
>> >         <source>
>> >                 <address domain='0x0000' bus='0x24' slot='0x00' function='0x1'/>
>> >         </source>
>> > </hostdev>
>> >
>> > Into the xml definition of the guest VMs
>> >
>> > I'm very interested in working with you guys to make this an integrated
>> > part of CloudStack
>> >
>> > Interestingly, cavium card drivers can present a number of virtual
>> > interfaces specifically designed to be passed through to guest VMs, but
>> > these must be addressed separately, so a single 'stock' xml definition
>> > wouldn't be flexible enough to fully utilise the card.
>> >
>> >
>> > Regards,
>> >
>> > Paul Angus
>> > S: +44 20 3603 0540 | M: +447711418784 paul.an...@shapeblue.com
>> >
>> > -----Original Message-----
>> > From: Kelven Yang [mailto:kelven.y...@citrix.com]
>> > Sent: 11 June 2013 18:10
>> > To: dev@cloudstack.apache.org
>> > Cc: Ryousei Takano
>> > Subject: Re: PCI-Passthrough with CloudStack
>> >
>> >
>> >
>> > On 6/11/13 12:52 AM, "Pawit Pornkitprasan" <p.pa...@gmail.com> wrote:
>> >
>> >>Hi,
>> >>
>> >>I am implementing PCI-Passthrough in CloudStack for use with
>> >>high-performance networking (10 Gigabit Ethernet/InfiniBand).
>> >>The current design is to attach a PCI ID (from lspci) to a compute
>> >>offering. (Not a network offering since from CloudStack's point of
>> >>view, the pass-through device has nothing to do with networking and may
>> >>as well be used for other things.) A host tag can be used to limit
>> >>deployment to machines with the required PCI device.
>> >
>> >
>> >>
>> >>Then, when starting the virtual machine, the PCI ID is passed into
>> >>VirtualMachineTO to the agent (currently using KVM) and the agent
>> >>creates a corresponding <hostdev>
>> >>(http://libvirt.org/guide/html/Application_Development_Guide-Device_Config-PCI_Pass.html)
>> >>tag and then libvirt will handle the rest.
>> >
>> >
>> > VirtualMachineTO.params is designed to carry generic VM-specific
>> > configurations; these configuration parameters can either be statically
>> > linked with the VM or dynamically populated based on other factors, like
>> > this one. Are you passing the PCI ID using VirtualMachineTO.params?
>> >
>> >>
>> >>For allocation, the current idea is to use CloudStack's capacity
>> >>system (at the same place where allocation of CPU and RAM is
>> >>determined) to limit 1 PCI-Passthrough VM per physical host.
>> >>
>> >>The current design has many limitations such as:
>> >>
>> >>   - One physical host can only have 1 VM with PCI-Passthrough, even if
>> >>   many PCI-cards with equivalent functions are available
>> >>   - The PCI ID is fixed inside the compute offering, so all machines have
>> >>   to be homogeneous and have the same PCI ID for the device.
>> >
>> > Anything that affects VM placement could have an impact on HA/migration;
>> > we probably need some graceful error handling in these code paths.
>> > Hopefully these have been taken care of.
>> >
>> >>
>> >>The initial implementation is working. Any suggestions and comments
>> >>are welcomed.
>> >>
>> >>Thank you,
>> >>Pawit
>> >
>> >
