On 29.09.2011 22:11, Blue Swirl wrote:
On Thu, Sep 29, 2011 at 7:57 AM, Peter Maydell
<peter.mayd...@linaro.org> wrote:
On 29 September 2011 06:03, Peter Chubb <peter.ch...@nicta.com.au> wrote:
Stefan> That's the reason why line buffering is needed today. I
Stefan> enable log file output to see what happened last before the
Stefan> crash.
Thanks for this, I didn't think of this use-case. I don't think I've
ever seen a qemu crash that wasn't caused by something really obvious.
You don't need the logging for the obvious ones :-)
abort() already flushes all open streams. So only signals that cause
immediate death are a problem: SIGSEGV is the obvious one. If its
handler called abort() then that would flush too. abort() is
guaranteed by the POSIX spec to be callable from a signal handler.
Catching SIGSEGV is likely to interact badly with the signal
handling in linux-user mode, I expect.
Stefan> Speed is not the primary target when somebody runs qemu -d ...
It is if it takes hours to reach the problem that causes
the abort(). Speeding up by an order of magnitude is worth it.
One tactic I've found useful in these cases is to run without
logging up to nearly the point where things fail, and then
do a savevm. Then you can loadvm on a qemu with logging enabled
and only look at the section of execution that causes the problem.
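In practice that tactic might look like the session below. This is only a sketch: the image name, snapshot tag, and log items are made up, and savevm needs a snapshot-capable image format such as qcow2.

```shell
# First pass: run at full speed, no logging.
qemu-system-x86_64 -hda disk.qcow2
# In the monitor, shortly before the failure point:
#   (qemu) savevm before-crash

# Second pass: resume from the snapshot with logging enabled,
# so only the failing stretch of execution gets logged.
qemu-system-x86_64 -hda disk.qcow2 -loadvm before-crash -d in_asm,op
```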
This suggests it should be possible to enable and disable logging
at run time.
The QEMU monitor already supports setting the log level via the 'log' command.
I used this command to examine problems with some user commands
running in an emulated Linux.
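For instance (a hypothetical monitor session; the log item names are the usual ones accepted by -d):

```shell
# In the QEMU monitor (Ctrl-Alt-2 on the graphical console):
(qemu) log none          # run fast until the interesting point
(qemu) log in_asm,op     # then turn logging on at run time
(qemu) log none          # and off again once the problem has passed
```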
The performance could be improved by taking a tracepoint-like
approach, where as much processing as possible is deferred to an
external process. Guest and host code disassembly and op printout
could be left to postprocessing; the logs themselves would then
contain only binary data.