On Wed, Apr 08, 2009 at 08:06:11PM +0100, Jamie Lokier wrote:
> Anthony Liguori wrote:
> > It doesn't. When an app enables events, we would start queuing them,
> > but if it didn't consume them in a timely manner (or at all), we
> > would start leaking memory badly.
> >
> > We want to be robust even in the face of poorly written management
> > apps/scripts so we need some [...]

Hollis Blanchard wrote:
> On Wed, 2009-04-08 at 16:14 -0500, Anthony Liguori wrote:
> > It has to be some finite amount. You're right, it's arbitrary, but
> > so is every other memory limitation we have in QEMU. You could make
> > it user configurable but that's just punting the problem. You have
> > to [...]

Hollis Blanchard wrote:
> On Wed, 2009-04-08 at 14:35 -0500, Anthony Liguori wrote:
> > You're basically saying that if something isn't connected, drop
> > them. If it is connected, do a monitor_printf() such that you're
> > never queuing events. Entirely reasonable and I've considered it.
> >
> > However, I do like the idea of QEMU queuing events for a certain
> > period of time. Not everyone always has something connected to a
> > monitor. I may notice that my NFS server (which runs in a VM) is not
> > responding, VNC to the system, switch to the monitor, and take [...]
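The "finite amount" being argued for above would amount to a fixed-size queue that drops old events rather than growing without bound. A minimal sketch of that idea follows; this is not QEMU's actual monitor code, and the `EventQueue` type, its functions, and the cap of 4 entries are all hypothetical names and values chosen for illustration:

```c
/* Sketch of a bounded event queue that drops the oldest event when
 * full, so memory stays capped even if no consumer ever attaches.
 * All names here are hypothetical, not QEMU's real API. */
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define EVENT_QUEUE_MAX 4   /* arbitrary finite cap, as discussed */

typedef struct {
    const char *events[EVENT_QUEUE_MAX];
    int head;               /* index of the oldest queued event */
    int count;              /* number of events currently queued */
} EventQueue;

static void event_queue_put(EventQueue *q, const char *ev)
{
    if (q->count == EVENT_QUEUE_MAX) {
        /* Full: drop the oldest event instead of allocating more. */
        q->head = (q->head + 1) % EVENT_QUEUE_MAX;
        q->count--;
    }
    q->events[(q->head + q->count) % EVENT_QUEUE_MAX] = ev;
    q->count++;
}

static const char *event_queue_get(EventQueue *q)
{
    const char *ev;

    if (q->count == 0) {
        return NULL;        /* nothing queued */
    }
    ev = q->events[q->head];
    q->head = (q->head + 1) % EVENT_QUEUE_MAX;
    q->count--;
    return ev;
}
```

A slow or absent consumer then costs at most `EVENT_QUEUE_MAX` entries; the trade-off, as noted in the thread, is that the cap is arbitrary and old events are silently lost.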