On Thu, Aug 23, 2012 at 1:21 PM, Jakub Jermar <[email protected]> wrote:
> On 23.8.2012 9:56, Adam Hraska wrote:
>> On Thu, Aug 23, 2012 at 12:45 AM, Jakub Jermar <[email protected]> wrote:
>>> Tonight, I did some testing and unfortunately it appears the branch is
>>> not as scrubbed or tested as one would have hoped. These are the
>>> issues I noticed:
>>>
>>> - test cht panics the kernel when run on real amd64 or ia32 hardware
>>> (4-core Intel i5, 1.4GHz, 4G memory)
>>> - test cht panics the kernel when run on the above machine in QEMU (-smp
>>> 4 --enable-kvm)
>>> - unfortunately the panic message is interleaved with some other
>>> concurrent output so it is completely unreadable (some synchronization
>>> should probably be added back to printf)
>>
>> That is very unfortunate. While I expected there to be some latent
>> bugs in the implementation (as in any new software), I would not have
>> expected "test cht" to fail (e.g. leak memory), and definitely not to
>> panic.
>>
>> It is, however, quite difficult to respond to this point, since I have
>> no idea what went wrong.
>
> I believe this is the same thing as the QEMU panic on UP. The same
> assertion is just hit by each CPU at about the same time, which would
> explain the blended panic message.

Hmm, good observation. That explains the panic messages very well.
If you happen to have some spare time and are in the right mood,
could you please remove the assert in adt/cht.c:872 and give it one
last try? Without holding anything back, i.e. with SMP and on real
hardware.

The assert is not really needed and I already regret adding it.
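
Just to illustrate the kind of check I mean -- the snippet below is a
made-up C11 sketch, not the actual code at adt/cht.c:872: an assert that
demands a stronger invariant than the algorithm really guarantees, so it
can trip even when the table is perfectly consistent.

    #include <assert.h>
    #include <stdatomic.h>
    #include <stdbool.h>

    /* Made-up stand-in for a concurrent hash table header. */
    typedef struct {
            /* Set while a resize of the table is in flight. */
            atomic_bool resize_in_progress;
    } cht_t;

    static void bucket_walk(cht_t *h)
    {
            /*
             * Overly protective: lookups are allowed to run while a
             * resize is in flight, so this assert can fire on a
             * perfectly healthy table as soon as another CPU starts
             * a resize at the wrong moment.
             */
            assert(!atomic_load(&h->resize_in_progress));

            /* ... walk the bucket as usual ... */
    }

Removing a check like that does not change the behaviour of the table at
all; it only stops the kernel from panicking on a state that is in fact
legal.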

If it passes the test without any errors, it means the panics are the
result of a silly mistake (the overly protective assert). If not, well,
then there are actual bugs in the code.
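
As for the interleaved panic output you mention above: serializing the
whole formatted write should make concurrent panics readable again.
Roughly along these lines (a user-space C11 sketch with a trivial
spinlock; printf_lock and locked_printf are made-up names, this is not
the actual kernel printf):

    #include <stdarg.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_flag printf_lock = ATOMIC_FLAG_INIT;

    static int locked_printf(const char *fmt, ...)
    {
            va_list args;
            int ret;

            /* Spin until the other CPU has finished its message. */
            while (atomic_flag_test_and_set_explicit(&printf_lock,
                memory_order_acquire))
                    ;

            va_start(args, fmt);
            ret = vprintf(fmt, args);
            va_end(args);

            atomic_flag_clear_explicit(&printf_lock,
                memory_order_release);
            return ret;
    }

In the kernel the lock would of course have to be interrupt-safe and be
bypassed once a panic is already in progress, but the idea is the same.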

Of course I fully understand if you have better plans -- especially
on a Friday.

In any case, thank you.

Adam

