A few more concerns about the proposed IRM interfaces.

1. When the material discusses the current interface limitation in 
4.1.2, why is it a problem to allow a driver to get more than *2* MSI-X 
vectors? Drivers for those integrated devices should be prepared for 
the case where no MSI-X interrupt vector can be allocated, and fall 
back to legacy INTx instead. So it should not be a problem even if all 
MSI-X vectors have already been given to earlier-attached drivers; 
late-attaching drivers will simply use legacy INTx interrupts. The 
justification for the current *hard-coded* limitation doesn't hold up.
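
For reference, that fallback path is already expressible with the 
PSARC/2004/253 interfaces. A minimal attach-time sketch follows; the 
xx_* names and the fixed table size are placeholders of mine, and error 
cleanup is omitted:

    #include <sys/ddi.h>
    #include <sys/sunddi.h>

    typedef struct xx_state {
        ddi_intr_handle_t xx_htable[8];   /* placeholder table size */
        int               xx_intr_cnt;
    } xx_state_t;

    /*
     * Try MSI-X first; if no vectors are left, fall back to a single
     * legacy fixed (INTx) interrupt.
     */
    static int
    xx_add_intrs(dev_info_t *dip, xx_state_t *sp)
    {
        int types, count, actual;

        if (ddi_intr_get_supported_types(dip, &types) != DDI_SUCCESS)
            return (DDI_FAILURE);

        if (types & DDI_INTR_TYPE_MSIX) {
            (void) ddi_intr_get_nintrs(dip, DDI_INTR_TYPE_MSIX, &count);
            if (count > 8)
                count = 8;
            if (ddi_intr_alloc(dip, sp->xx_htable, DDI_INTR_TYPE_MSIX,
                0, count, &actual, DDI_INTR_ALLOC_NORMAL) == DDI_SUCCESS &&
                actual > 0) {
                sp->xx_intr_cnt = actual;
                return (DDI_SUCCESS);
            }
        }

        /* No MSI-X left: the late-attaching driver uses INTx. */
        if (ddi_intr_alloc(dip, sp->xx_htable, DDI_INTR_TYPE_FIXED,
            0, 1, &actual, DDI_INTR_ALLOC_NORMAL) == DDI_SUCCESS &&
            actual == 1) {
            sp->xx_intr_cnt = 1;
            return (DDI_SUCCESS);
        }

        return (DDI_FAILURE);
    }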

2. How does the IRM framework decide to decrease the number of 
interrupt vectors already given to a driver? Section 4.2.1 talks about 
how a driver participates in the IRM interfaces, but it remains unclear 
how the framework can wisely move interrupt resources between drivers.
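
For concreteness, here is the sort of callback participation I read 
4.2.1 as describing. The DDI_CB_INTR_REMOVE action is named in the case 
material; everything else (the DDI_CB_INTR_ADD counterpart, the 
registration call, the callback signature, and the xx_* helpers) is my 
guess at what participation would look like, not an interface from the 
proposal:

    /*
     * Hypothetical sketch only.  xx_shrink_intrs()/xx_grow_intrs() are
     * placeholder helpers that would free/allocate handles and
     * redistribute rx/tx rings over what remains.
     */
    static void xx_shrink_intrs(xx_state_t *, int);
    static void xx_grow_intrs(xx_state_t *, int);

    static int
    xx_intr_cb(dev_info_t *dip, ddi_cb_action_t action, void *cbarg,
        void *arg1, void *arg2)
    {
        xx_state_t *sp = arg1;
        int count = (int)(uintptr_t)cbarg;

        switch (action) {
        case DDI_CB_INTR_REMOVE:
            /*
             * The framework wants `count' vectors back, but nothing
             * tells the driver why it was chosen over its neighbors
             * or how far it may be shrunk.
             */
            xx_shrink_intrs(sp, count);
            break;
        case DDI_CB_INTR_ADD:
            xx_grow_intrs(sp, count);
            break;
        default:
            return (DDI_ENOTSUP);
        }
        return (DDI_SUCCESS);
    }

    /* Registration at attach time (names assumed):            */
    /* (void) ddi_cb_register(dip, DDI_CB_FLAG_INTR, xx_intr_cb, */
    /*     sp, NULL, &sp->xx_cb_hdl);                            */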

3. How does the IRM framework make a *wise* decision about which driver 
may take more interrupt vectors than the others? For example, suppose a 
10GbE NIC and a 1GbE NIC share the box, both drivers ask for 16 
vectors, and not enough vectors are left. Giving the same number of 
interrupt vectors to the two driver instances is unreasonable. As part 
of the Crossbow project, hardware resources are allocated according to 
the actual link speed and bandwidth need, but as a low-level I/O 
framework IRM has no knowledge of that information. How do you prove 
that this "management" is reasonable?

4. What is the perimeter of IRM? In a virtualized environment, 
interrupts may have been bound to CPUs in an exclusive zone or a guest 
domain. When IRM asks the driver to give such interrupt vectors back, 
who takes care of re-targeting the interrupts? It is out of the 
driver's control, and I cannot find any relevant information in this 
document.

Thanks,

Roamer


Kais Belgaied wrote:
> I am derailing this case on grounds of non-obviousness of its 
> architectural impact, and possible incompleteness. The discussion 
> already uncovered that there is more than a minor amendment to 
> PSARC/2004/253 "Advanced DDI Interrupt Functions".
> 
> To prepare for the full review, the architecture should address the 
> impact on device drivers and on the subsystems they are part of.
> If the scope of the project is intended to remain generic enough, the 
> material needs to reflect that more than one class of 
> device drivers was considered in the architecture.
> 
> To elaborate (see Garrett's previous email), the interrupt handles that 
> a NIC driver acquired are actually exposed to the MAC layer (see 
> PSARC/2006/357 - Crossbow), for enabling/disabling the interrupts on demand.
> The proposal should be clear on how the behavior of such drivers is 
> intended to be modified when ported to the IRM interfaces.
> Should there be an extra notification event between MAC and the drivers 
> to invalidate the interrupt handles registered with MAC?
> Are drivers supposed to insulate MAC from the real interrupt handles 
> instead, and internally map them to real handles that can be 
> added/removed? Are they supposed to start "faking" polling mode in 
> software on rx rings that lost their real interrupts, for example?
> 
> Cryptographic accelerators are another class of I/O where an external 
> framework (the Solaris crypto framework) relies
> on driver notifications coming from job completion interrupts. See 
> PSARC/2001/557.
> What are such drivers supposed to do to properly handle 
> DDI_CB_INTR_REMOVE?
> Should they block until the jobs drain and they get to call 
> crypto_provider_notification(READY), or should they immediately 
> notify an error for all pending crypto requests?
> 
>    Kais.
> 

-- 

# telnet (650)-786-6759 (x86759)
Connected to Solaris.Sun.COM.
login: Lu, Yunsong
Last login: January 2, 2007 from beyond.sfbay
Yunsong.Lu at Sun.COM    v1.04    Since Mon Dec. 22, 2003
[Roamer at Solaris Networking]# cd ..
