I will put together a script that reproduces one or two of the kernel panics
on my branch - the script will check out the branch and compile the glib
tests from coastline. I will take screenshots along the way. One of those
panics seemed to be 100% reproducible, so let's see if it is still there.

2015-06-03 11:26 GMT+02:00 Jakub Jermar <[email protected]>:

> On 3.6.2015 11:17, Jan Mareš wrote:
> > Thank you for the very prompt response. As I mentioned in our previous
> > discussion, I use fibrils for the pthread implementation and I would like
> > to achieve some level of preemptiveness. The idea of having a constant
> > number of threads and letting fibrils be distributed amongst them really
> > appeals to me (and some preemptiveness seems to be necessary for QEMU,
> > although the problem may be somewhere else, as usual).
> >
> > The problem I mentioned manifests when using my implementation of
> > pthread and calling pthread_create many times (100000). It may also be
> > connected with the fact that fibril ids aren't unique - the id is a
> > pointer to memory that is likely to be reclaimed by another fibril once
> > the previous one is destroyed.
> >
> > To destroy a joinable pthread (fibril) in pthread_exit that hasn't been
> > joined yet, I use fibril_switch(FIBRIL_TO_MANAGER) and later, when the
> > thread is joined, I call fibril_destroy on the id of this fibril, which,
> > to my understanding, should be dead by then (not present in any list
> > except fibril_list). I know that fibril_destroy should only be called on
> > a fibril that has never run, but it seems to do the same thing that the
> > clean_after_me field arranges when fibril_switch(FIBRIL_FROM_DEAD) is
> > used. I also thought of a way to avoid these unclean calls by using a
> > condition variable instead. That way I could call only
> > fibril_switch(FIBRIL_FROM_DEAD), which would make the whole thing much
> > cleaner. I will try that and get back to you if I still run into the
> > problem. In that case I will also try to reproduce the same problem with
> > plain fibrils so we can rule out my code.
>
> You also mentioned some kernel panics and inconsistent thread counts.
> These are of the biggest concern to me because they almost certainly
> rule out a problem on your side. How do I reproduce them? Were there
> any stacks reported by the panic? Do you have a screenshot?
>
> Jakub
>
>
> _______________________________________________
> HelenOS-devel mailing list
> [email protected]
> http://lists.modry.cz/listinfo/helenos-devel
>