Also, I should add that the first bug, the assert on IPC_PHONE_CONNECTING, is
not affected by a call to thread_create and seems to be reproducible every
time, even when fibrils are run on one thread only.
On 3 Jun 2015 at 12:13, "Jan Mareš" <[email protected]> wrote:

> Here you are. Steps to reproduce: run rebuild.sh - it should check out
> and compile mainline and glib from the coastline for ia32. When it's done,
> it will also copy an application called glib-test-rec-mutex to
> overlay/app. When you run glib-test-rec-mutex --verbose, you should get a
> panic immediately. For the "3 threads" problem, run "repeat 10
> glib-test-rec-mutex" in several terminals and then keep checking top. When
> the race occurs, the test gets stuck and the thread count in top jumps
> from 2 to 3 (I had a breakpoint on the thread_create function in
> userspace and it was called only once). If you try to kill the instance
> of glib-test-rec-mutex with 3 threads, you should get another kernel panic.
>
> 2015-06-03 11:51 GMT+02:00 Jan Mareš <[email protected]>:
>
>> I will create a script for you that reproduces one or two kernel panics
>> on my branch - the script will check out the branch and compile the glib
>> tests from the coastline. In the process I will take screenshots as well.
>> One of those panics seemed to be 100% reproducible, so let's see if it's
>> still there.
>>
>> 2015-06-03 11:26 GMT+02:00 Jakub Jermar <[email protected]>:
>>
>>> On 3.6.2015 11:17, Jan Mareš wrote:
>>> > Thank you for the very prompt response. As I mentioned in our previous
>>> > discussion, I use fibrils for my pthread implementation and I would
>>> > like to achieve some level of preemptiveness. The idea of having a
>>> > constant number of threads and letting fibrils be distributed amongst
>>> > these threads really appeals to me (and some preemptiveness seems to
>>> > be necessary for QEMU, although the problem may be somewhere else, as
>>> > usual).
>>> >
>>> > The problem I mentioned manifests when using my implementation of
>>> > pthread and calling pthread_create many times (100000). Maybe it is
>>> > also connected with the fact that the ids of fibrils aren't unique -
>>> > each id is a pointer to memory that is likely to be reclaimed by
>>> > another fibril once the previous one is destroyed.
>>> >
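>>> > For illustration, roughly this kind of loop triggers it (worker() here
>>> > is just a stand-in I made up for this mail, not the actual code from
>>> > the glib test):
>>> >
>>> >   #include <pthread.h>
>>> >   #include <stdio.h>
>>> >
>>> >   static void *worker(void *arg)
>>> >   {
>>> >           return arg;
>>> >   }
>>> >
>>> >   int main(void)
>>> >   {
>>> >           for (int i = 0; i < 100000; i++) {
>>> >                   pthread_t t;
>>> >                   if (pthread_create(&t, NULL, worker, NULL) != 0)
>>> >                           return 1;
>>> >                   /* If pthread_t is backed by the fibril's address
>>> >                    * (fid_t), a later iteration can receive the same
>>> >                    * id once the old fibril's memory is freed and
>>> >                    * reused. */
>>> >                   pthread_join(t, NULL);
>>> >           }
>>> >           printf("done\n");
>>> >           return 0;
>>> >   }
>>> >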
>>> > To destroy a joinable pthread (fibril) in pthread_exit that hasn't
>>> > been joined yet, I use fibril_switch(FIBRIL_TO_MANAGER) and later,
>>> > when the thread is joined, I call fibril_destroy on the id of this
>>> > fibril, which, to my understanding, should be dead by then (not
>>> > present in any list except fibril_list). I know that fibril_destroy
>>> > should only be called on a fibril that has never run, but it seems to
>>> > do the same thing that the clean_after_me field would do if I used
>>> > fibril_switch(FIBRIL_FROM_DEAD). I also thought of a way to avoid
>>> > these unclean calls by merely using a condition variable (a rough
>>> > sketch follows below). That way I would only call
>>> > fibril_switch(FIBRIL_FROM_DEAD), which would make the whole thing much
>>> > cleaner. I will try that and get back to you if I still run into this
>>> > problem. In that case I will also try to reproduce the same problem on
>>> > plain fibrils so we can rule out my code.
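>>> >
>>> > The sketch of the condvar idea: thread_info_t, thread_wrapper() and
>>> > join_wait() are my own bookkeeping, purely illustrative and with
>>> > initialization of the mutex/condvar omitted; the fibril_mutex and
>>> > fibril_condvar calls are the ones from <fibril_synch.h>.
>>> >
>>> >   #include <fibril.h>
>>> >   #include <fibril_synch.h>
>>> >   #include <stdbool.h>
>>> >
>>> >   /* Per-thread record kept by my pthread layer (names made up). */
>>> >   typedef struct {
>>> >           fid_t fid;
>>> >           void *(*start_routine)(void *);
>>> >           void *start_arg;
>>> >           void *retval;
>>> >           bool finished;
>>> >           fibril_mutex_t lock;
>>> >           fibril_condvar_t done_cv;
>>> >   } thread_info_t;
>>> >
>>> >   /* Top-level function handed to fibril_create(). When it returns,
>>> >    * libc's own fibril code should go through the FIBRIL_FROM_DEAD
>>> >    * path and dispose of the fibril, so no fibril_destroy() is
>>> >    * needed on my side. */
>>> >   static int thread_wrapper(void *arg)
>>> >   {
>>> >           thread_info_t *info = arg;
>>> >
>>> >           info->retval = info->start_routine(info->start_arg);
>>> >
>>> >           fibril_mutex_lock(&info->lock);
>>> >           info->finished = true;
>>> >           fibril_condvar_signal(&info->done_cv);
>>> >           fibril_mutex_unlock(&info->lock);
>>> >           return 0;
>>> >   }
>>> >
>>> >   /* pthread_join() then only blocks on the condvar. */
>>> >   static void join_wait(thread_info_t *info, void **retval)
>>> >   {
>>> >           fibril_mutex_lock(&info->lock);
>>> >           while (!info->finished)
>>> >                   fibril_condvar_wait(&info->done_cv, &info->lock);
>>> >           if (retval != NULL)
>>> >                   *retval = info->retval;
>>> >           fibril_mutex_unlock(&info->lock);
>>> >   }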
>>>
>>> You also mentioned some kernel panics and inconsistent thread counts.
>>> These are of the biggest concern to me because they almost certainly
>>> rule out a problem on your side. How do I reproduce them? Were there any
>>> stack traces reported by the panic? Do you have a screenshot?
>>>
>>> Jakub
>>>
>>>
>>
>>
>
_______________________________________________
HelenOS-devel mailing list
[email protected]
http://lists.modry.cz/listinfo/helenos-devel
