Hello,

Thanks for the answer.

> I suppose it is OS-, device-, and driver-dependent.
You don't mention the CPU here directly. I assume the CPU and its clock
speed also have a bearing on the upper limit beyond which interrupts start
getting dropped. Am I right? From your answer it sounds as if the device
driver is the one that (solely?) decides this upper limit on the number of
interrupts.

> if the handler eats all the cycles on handling the interrupts then the
> consumers of the device (applications) won't get the CPU. Therefore the
> OS may start dropping interrupts (and yes, losing data) at some point.

A practical example, which is what drove me to post this question in the
first place: I want to know when I should use NAPI (polling) instead of
interrupts in a network card, so that I won't lose interrupts (and data as
a result). The rate of interrupts the NIC receives depends, of course, on
the number of packets sent by the application. So I assume that on a
machine with a slow CPU the maximum number of interrupts the CPU can
handle will be lower (linearly?) than on a machine with a faster CPU.

Is there a way, short of experimenting, to find out for different CPUs
(with different clock speeds) when it is better to use NAPI (polling)
instead of interrupts? I've put two short sketches after the quoted
message below: one of the NAPI pattern as I understand it, and one for
measuring the current interrupt rate.

--
RG

On 01 Dec 2005 13:42:23 +0000, Oleg Goldshmidt <[EMAIL PROTECTED]> wrote:
> Rafi Gordon <[EMAIL PROTECTED]> writes:
>
> > I assume that there is a limit on the maximum number of
> > interrupts a CPU can receive without dropping or losing
> > interrupts and not handling them.
> >
> > Is there a way in which I can determine in Linux what is this
> > limit (apart from bombing the CPU irq lines...)?
> >
> > Is this a hw detail which is constant regardless
> > of which operating system runs on that processor?
>
> I suppose it is OS-, device-, and driver-dependent. It is likely
> limited by the maximal rate at which the device driver is able to
> handle the interrupts: if the handler eats all the cycles on handling
> the interrupts then the consumers of the device (applications) won't
> get the CPU. Therefore the OS may start dropping interrupts (and yes,
> losing data) at some point.
>
> There is a variety of related mechanisms that an OS can employ.
> In some cases the OS may disable interrupts while handling an
> interrupt. In some cases it may set a timer, and if the timer
> expires and the device in question claims it is ready, it is an
> indication that an interrupt was missed, and the handler may be
> invoked manually.
>
> See, for instance, Chapter 10 of LDD3,
>
> http://www.oreilly.com/catalog/linuxdrive3/book/ch10.pdf
>
> --
> Oleg Goldshmidt | [EMAIL PROTECTED] | http://www.goldshmidt.org
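To make the question more concrete, here is roughly how I understand the
NAPI receive path from LDD3's network-driver chapter. This is only a
minimal sketch against the 2.6 interface described there, not working
driver code, and everything prefixed "mynic_" is a made-up placeholder for
the device-specific parts:

/* Sketch of the NAPI receive path -- not a complete driver. */
#include <linux/kernel.h>
#include <linux/interrupt.h>
#include <linux/netdevice.h>

/* Hardware interrupt handler: do almost no work, just switch to polling. */
static irqreturn_t mynic_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
        struct net_device *dev = dev_id;

        mynic_disable_rx_irq(dev);   /* placeholder: mask RX interrupts in hw */
        netif_rx_schedule(dev);      /* ask the kernel to call dev->poll soon */
        return IRQ_HANDLED;
}

/* Poll method: runs in softirq context with a packet budget. */
static int mynic_poll(struct net_device *dev, int *budget)
{
        int quota = min(dev->quota, *budget);
        int done  = mynic_rx_clean(dev, quota);  /* placeholder: feed skbs to
                                                    netif_receive_skb(), return
                                                    how many were processed */
        *budget    -= done;
        dev->quota -= done;

        if (done < quota) {
                /* RX queue drained: leave polled mode, re-enable interrupts. */
                netif_rx_complete(dev);
                mynic_enable_rx_irq(dev);   /* placeholder: unmask RX irqs */
                return 0;                   /* no more work */
        }
        return 1;                           /* more packets pending, poll again */
}

/* At init time the driver would register the poll method roughly like:
 *      dev->poll   = mynic_poll;
 *      dev->weight = 64;
 */

If I read that chapter right, the switch to polling is not something you
decide statically per CPU: the driver stays interrupt-driven while traffic
is light and remains in polled mode only as long as packets arrive faster
than the poll loop can drain them.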
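And to put numbers on the interrupt rate itself, I was going to sample
/proc/interrupts around a traffic burst with a small user-space program
along these lines (the NIC's IRQ number is passed on the command line; it
prints an interrupts-per-second estimate over one second):

/* irqrate.c -- estimate the current rate of one IRQ line by sampling
 * /proc/interrupts twice, one second apart. Usage: ./irqrate <irq-number>
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Sum the per-CPU counters on the /proc/interrupts line for this IRQ. */
static long long irq_count(int irq)
{
        FILE *f = fopen("/proc/interrupts", "r");
        char line[1024], label[32];
        long long total = -1;

        if (!f)
                return -1;
        snprintf(label, sizeof(label), "%d:", irq);
        while (fgets(line, sizeof(line), f)) {
                char *p = line;
                while (*p == ' ')
                        p++;
                if (strncmp(p, label, strlen(label)) != 0)
                        continue;
                p += strlen(label);
                total = 0;
                for (;;) {              /* add up the per-CPU columns */
                        char *end;
                        long long v = strtoll(p, &end, 10);
                        if (end == p)   /* reached the controller/device name */
                                break;
                        total += v;
                        p = end;
                }
                break;
        }
        fclose(f);
        return total;
}

int main(int argc, char **argv)
{
        int irq;
        long long before, after;

        if (argc < 2) {
                fprintf(stderr, "usage: %s <irq-number>\n", argv[0]);
                return 1;
        }
        irq = atoi(argv[1]);
        before = irq_count(irq);
        sleep(1);
        after = irq_count(irq);
        if (before < 0 || after < 0) {
                fprintf(stderr, "IRQ %d not found in /proc/interrupts\n", irq);
                return 1;
        }
        printf("IRQ %d: ~%lld interrupts/second\n", irq, after - before);
        return 0;
}

I would run it with whatever IRQ number /proc/interrupts shows for the NIC
while the traffic generator is going, but that is still experimenting,
which is exactly what I was hoping to avoid.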