Re: Receiving delayed packets from RTL8139 card in KVM
Hi Avi,

Thanks for your insights. Since the interrupts are level-triggered, we need to unmask them as soon as possible to see whether another interrupt is pending. In my ISR, the interrupts were originally unmasked after processing the packets, as shown below:

    if (status & PKT_RX) {
        rtl8139_rx(&rtl8139_netif);   /* receive the packet */
        rtl8139_clear_irq(status);    /* unmask the interrupt */
    } else {
        ...

In this case, if another receive interrupt is raised while we are still in the middle of receiving packets, it is not caught, because the interrupt has not yet been unmasked. If instead I unmask the interrupts immediately, before receiving the packets, as below, I receive all the interrupts properly:

    if (status & PKT_RX) {
        rtl8139_clear_irq(status);    /* unmask the interrupt */
        rtl8139_rx(&rtl8139_netif);   /* receive the packet */
    } else {
        ...

Also, I would like to know whether the QEMU RTL8139 card calculates its own checksum for packets, so that I need not perform these checks in my LWIP thread?

Please let me know.

Thanks,
karthik
--
To unsubscribe from this list: send the line "unsubscribe kvm" in the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: Receiving delayed packets from RTL8139 card in KVM
Hi Avi,

> There may be a missing wakeup in the networking path. You can try adding
> printf()s in RTL8139's receive function, and in kvm_set_irq_level() to see
> if there's a problem in interrupt flow.

Thanks for your insights. I debugged the RTL8139 receive function rtl8139_do_receive(), along with rtl8139_update_irq() (which raises the rtl8139 interrupts) and kvm_set_irq_level(). It turns out the packets were not delayed, contrary to my previous assumption. The interrupts were raised promptly by the rtl8139 via rtl8139_update_irq(), and kvm_set_irq_level() appears to inject them into the guest properly. However, inside the guest (the kitten LWK) I see fewer interrupts than were injected from KVM. This may be because the interrupts are coalesced in KVM before injection, or because the guest somehow misses some of them.

I suspect this coalescing of interrupts is what breaks my application. For example: two packets need to be received, but instead of two interrupts only one coalesced interrupt is injected into the guest. On receiving it, the guest's RTL8139 driver reads the first packet (already present in the buffer), but when it immediately tries to read the next one, that packet has not yet arrived in the guest RTL8139's buffer, so the driver assumes there is no more data and returns. The second packet is therefore never received, leading to a retransmission. If a second interrupt had occurred in this scenario, the driver would have checked the buffer again, found the data, and avoided the retransmission.

Please let me know the minimum time between interrupts for them to be considered for coalescing. Is it possible to reduce this time further? That would be useful in my case.

Please let me know your comments on this issue.
Thanks for your time,
karthik
Receiving delayed packets from RTL8139 card in KVM
Hi All,

I am running a lightweight kernel named kitten in QEMU-KVM, which uses the LWIP (lightweight TCP/IP) stack for networking. I am trying to fine-tune LWIP, but I am now facing an issue with delayed packet reception. I wrote a small ping-pong program in which a master sends data to a slave and the slave returns it, and I see some retransmissions due to missing acknowledgements. Debugging further, I found the following sequence:

1) The master sends some data to the slave.
2) The slave receives it and sends an acknowledgement to the master. The slave then returns the same data to the master.
3) The master receives both the acknowledgement and the data.

The problem occurs in step 3. The master receives a single interrupt for both the acknowledgement packet and the data packet. The driver reads the acknowledgement packet first, then checks whether there is more data in the buffer and, if so, reads the data packet. But sometimes the data packet only becomes available in the buffer 90-100 microseconds after the acknowledgement packet, so the driver concludes there are no more packets to receive and the receive fails. This causes the slave to resend the packet, which leads to a large performance loss. The debug messages confirm there is no delay on the slave side in sending the data packet.

When I run the same program under plain QEMU, everything works perfectly. I suspect this is because the emulation itself introduces a delay long enough for the packet to arrive in the buffer, so the scenario is not reproduced.

Hence I would like to know whether KVM has issues with the RTL8139 card, and how to proceed further.

Thanks in advance for your time,
karthik
Re: KVM outputs a blank screen
Hi All,

In my previous mail I did not mention my machine setup, so I am rewriting it here. I am using Ubuntu Karmic 64-bit on an Intel Core 2 Duo with the Intel VT extensions. KVM was working fine last week, but now I see only a blank screen (both directly and over VNC). If I replace KVM with QEMU, I get output. I have no clue why this happened, and I don't remember changing any configuration. I also tried removing KVM and reinstalling it, but the problem persists. I also checked with lsmod | grep kvm, and it shows that kvm_intel is loaded. It would be great if someone could give a hint on where to debug. My last resort is to reinstall the Linux box.

Thanks for your time,
karthik.
KVM outputs a blank screen
Hi All,

I started using KVM very recently for my research, and it has been great to work with. Everything was working fine until last week, when suddenly I could see nothing but a blank screen in the output window. I don't remember changing any configuration details. If I use qemu instead of KVM, I get the proper output. Please let me know if I am missing anything here.

Thanks for your time,
Karthik