On Tuesday, 9 February 2021 13:26:19 CET Philipp Eppelt wrote:
> 
> On 2/9/21 1:20 AM, Paul Boddie wrote:
> > 
> > This is very useful to know. I imagine that thread safety is regarded as
> > unnecessary for many use-cases, but it does surprise me that it has to be
> > introduced to make the allocator usable in a multithreaded program. Then
> > again, I might be doing things that are not particularly normal for L4Re
> > programs.
> 
> I agree that an out-of-the-box mechanism is helpful here and will
> prevent headaches in many situations, plus you wouldn't need to know
> about this. However, the (internal) discussion did not yet reach consensus.
> 
> Once you know about this shortcoming, a thread-safety wrapper around
> cap_alloc is fortunately straightforward.

Indeed. I find myself writing my own wrapper functions, anyway. In this case, 
I just set up a semaphore to protect the allocation functions, having another 
library function for its initialisation.

> > What kind of resource limits apply to L4Re tasks? In my work, I am
> > allocating capabilities for objects acting as dataspaces. Things seem to
> > work fairly well up to a certain point, and then capability allocation
> > seems to fail, and program failure is then not particularly graceful.
> > 
> > (I currently observe that creating 160 threads, each with its own IPC gate
> > capability seems to be unproblematic, but more than this will produce
> > errors that manifest themselves either in pthread library code or in my
> > own code where a critical section is supposed to be enforced.)
> 
> It looks like there is a per-task capability limit of 4096 for
> L4Re::Util::cap_alloc.
> 160 threads (2 caps each) + 160 IPC gates = 3*160 -> 480 caps
> 
> If you are handing out more than 3600 Dataspaces you might be running
> into this limit.

I don't think I should be remotely close to that. Tracking the number of 
capabilities I'm allocating, however, there really do seem to be a few 
thousand allocated by the end of the program.

I do wonder about how capabilities might be allocated to support the 
concurrency management features of the standard C++ library. In principle, a 
mutex might be implemented by a semaphore whose capability would only be 
allocated once in a task's lifetime (or an object's lifetime for object-level 
locking), with the guard operating on the semaphore via the mutex abstraction.

There shouldn't be many mutexes allocated, and therefore not many 
capabilities involved, but this is one thing I could imagine being 
troublesome. Anyway, it's quite possible that I am not freeing explicitly 
allocated capabilities in my own code, so I should pay some attention to 
that and see what might be happening.
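One way to rule out that kind of leak is to tie each explicit allocation to
an object's lifetime. A minimal sketch, again using a made-up allocator type
rather than the real L4Re one (L4Re does ship scope-managed capability
helpers, but the sketch below only shows the general RAII idea):

```cpp
// Hypothetical allocator; stands in for a capability allocator.
struct Alloc {
    int next = 0;
    int live = 0;                       // number of slots currently held
    int alloc() { ++live; return next++; }
    void free(int) { --live; }
};

// RAII holder: releases the slot when it goes out of scope, so explicit
// allocations cannot leak on early returns or error paths.
class ScopedCap {
    Alloc *a;
    int cap;
public:
    explicit ScopedCap(Alloc &al) : a(&al), cap(al.alloc()) {}
    ScopedCap(ScopedCap &&o) noexcept : a(o.a), cap(o.cap) { o.a = nullptr; }
    ScopedCap &operator=(ScopedCap &&) = delete;
    ScopedCap(const ScopedCap &) = delete;
    ~ScopedCap() { if (a) a->free(cap); }
    int get() const { return cap; }
};
```

With that in place, a steadily growing capability count would point at
genuinely live objects rather than forgotten free calls.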

> Now I'm fuzzy on the details, so I'll just provide a general direction.
> I don't know if you can replenish the Util::cap_alloc by providing
> additional memory to it or if you need to set up your own allocator (see
> l4re-core/l4re/util/libs/cap_alloc.cc). For the latter, you can set it
> up with a bigger piece of memory, to increase the number of managed
> capability slots.

Maybe I don't understand everything, but I imagine that any increased memory 
area for capability slots would need to be accompanied by increased storage 
for kernel use (the "object space" mentioned in the online documentation), so 
that the association between capabilities and "kernel objects" can be 
maintained. A brief search suggests that l4_task_add_ku_mem might be related 
to this, but this is just a quick suggestion.

> On 2/7/21 1:54 AM, Paul Boddie wrote:
> > I might also ask about support in L4Re for C++ threading constructs. When
> > developing my simulation, I decided to use the standard C++ support for
> > threading, mutexes, condition variables, and so on, as opposed to using
> > pthread functionality. I imagine that C++ library support for these things
> > just wraps the pthread functionality, but I wonder if there are not other
> > considerations that I might need to be aware of.
> 
> You are right, std::thread wraps libpthread. I guess you know about
> l4re-core/libstdc++-headers/include/thread-l4?

Yes, I've used the pthread library elsewhere, since it is described in the 
documentation, and it is also appropriate for use from C.

> To my knowledge the behaviour of the C++ constructs doesn't differ from
> the pthread ones. Be careful with std::thread.join(): the thread needs
> to cooperate and terminate/exit by itself. If it doesn't, your caller
> waits for a long time.

Indeed. I tend to expect my threads to terminate, so any problems with joining 
should indicate other problems. One motivation for using the C++ constructs is 
that they permit more concise code than the pthread library does, and since I 
don't care about things like setting priorities, their use facilitated 
consideration of the issues involved rather than distracting from those 
issues.
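The cooperative-termination point above is easy to honour with the standard
constructs; a minimal sketch (names of my own choosing, not from L4Re):

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// Shutdown flag the worker polls; join() then returns promptly because the
// thread exits by itself instead of being waited on indefinitely.
std::atomic<bool> stop{false};

void worker(std::atomic<int> &ticks) {
    do {
        ++ticks;                        // stand-in for real work
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    } while (!stop.load());             // cooperate: check the flag each pass
}
```

The caller sets the flag and only then calls join(), so the wait is bounded
by one loop iteration.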

(In fact, when I decided to review my previous work, I actually prototyped the 
mechanisms in Python, whose concurrency support is acceptable in terms of the 
constructs provided, if not the actual provision of the performance benefits 
expected with concurrency. I then implemented the same mechanisms in C++ to 
reassure me about the approaches I had taken.)

> If you directly interact with the UTCB or use low-level functions that
> set up the UTCB, make sure not to make any other function calls, as they
> might change the UTCB. (That's also a high scorer in the headache list.)

I certainly know from experience that I have to be very careful around UTCB-
modifying operations. Indeed, one thing that I do which may be performance-
unfriendly but which helped my productivity greatly was to wrap the IPC 
mechanisms with abstractions and functions that stay well away from the UTCB.
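The shape of those abstractions, very roughly, is to stage everything in
ordinary local storage and only touch the shared per-thread area at the last
moment. A toy sketch with a simulated message-register buffer (nothing here
is real UTCB or IPC API):

```cpp
#include <array>

// Simulated per-thread message registers; the real UTCB layout differs.
thread_local std::array<unsigned long, 4> msg_regs{};

// All argument preparation happens in plain stack memory, so arbitrary
// helper calls in between cannot clobber in-flight message state; the
// shared buffer is written only immediately before the (simulated) call.
unsigned long call_add(unsigned long a, unsigned long b) {
    std::array<unsigned long, 2> staged{a, b};  // ordinary local storage
    // ... arbitrary helper calls are safe here ...
    msg_regs[0] = staged[0];                    // last-moment buffer writes
    msg_regs[1] = staged[1];
    return msg_regs[0] + msg_regs[1];           // simulated IPC reply
}
```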

Thanks for the advice and technical details!

Paul



_______________________________________________
l4-hackers mailing list
l4-hackers@os.inf.tu-dresden.de
http://os.inf.tu-dresden.de/mailman/listinfo/l4-hackers