Re: [Mono-dev] High threadpool CPU usage
Thanks, Ludovic. I'm using mono-4.2.3. The "massive amounts of GC" idea makes sense based on what I'm seeing. When I run pstack, I see a number of threadpool threads with stacks like:

#0 0x7fdff1c6a952 in do_sigsuspend (set=0x954220) at ../sysdeps/unix/sysv/linux/sigsuspend.c:32
#1 __GI___sigsuspend (set=set@entry=0x954220) at ../sysdeps/unix/sysv/linux/sigsuspend.c:46
#2 0x005c7534 in suspend_thread (info=0x7fdf480008c0, context=context@entry=0x7fde997ea1c0) at sgen-os-posix.c:126
#3 0x005c771f in suspend_handler (_dummy=, _info=, context=0x7fde997ea1c0) at sgen-os-posix.c:153
#4 

I thought this was related to GDB / pstack attaching, but it's actually the suspend handling for the sgen collector, right? Is a thread suspending itself CPU-intensive? I see lots of threads with high CPU at any given point, which seems to indicate that this isn't just the CPU usage of the collector thread itself.

Do you have any suggestions for how to track down the large numbers of allocations? This isn't very easy to reproduce, but now that I know what to look for, I might be able to get it to happen in a test environment.

chris

From: Ludovic Henry [mailto:ludo...@xamarin.com]
Sent: Thursday, May 26, 2016 8:43 AM
To: Chris Swiedler; mono-devel-list
Subject: Re: [Mono-dev] High threadpool CPU usage

Hi Chris,

The first stack trace you are observing is threadpool thread parking. We use this technique for threads that are currently not doing anything: we keep them around for a little while (5-60 seconds) so that if we have a burst of usage, we do not end up destroying and creating threads all the time.

The second stack trace you are observing is, as you noted, taken while performing a garbage collection, when the GC thread is suspending all the running threads. So if you are witnessing this trace very often, it means a thread is allocating memory very quickly, triggering GC collections very often. I would need more information to tell you exactly why it would use 100% CPU.
Also, which version of Mono are you running?

Thank you very much,
Ludovic

On Wed, May 25, 2016 at 8:30 PM Chris Swiedler wrote:

We have a server app which periodically goes into a mode where all threadpool threads start running at very high CPU. I've run pstack while it's in this mode, and every time I do, nearly all the threadpool threads have this stack:

#0 pthread_cond_timedwait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238
#1 0x00618817 in mono_cond_timedwait_ms (cond=cond@entry=0x7fe7ee1fddc0, mutex=0x241eb78, timeout_ms=) at mono-mutex.c:181
#2 0x00586f28 in worker_park () at threadpool-ms.c:509
#3 worker_thread (data=) at threadpool-ms.c:607
#4 0x005841e9 in start_wrapper_internal (data=) at threads.c:725
#5 start_wrapper (data=) at threads.c:772
#6 0x00621026 in inner_start_thread (arg=0x7fe831df4650) at mono-threads-posix.c:97
#7 0x7fe88a55edf5 in start_thread (arg=0x7fe7ee1fe700) at pthread_create.c:308
#8 0x7fe88a28c1ad in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Usually one thread will have a stack like this:

#0 sem_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/sem_wait.S:85
#1 0x0061aa37 in mono_sem_wait (sem=0x9542c0, alertable=alertable@entry=0) at mono-semaphore.c:107
#2 0x005c77bd in sgen_wait_for_suspend_ack (count=count@entry=18) at sgen-os-posix.c:188
#3 0x005c78f9 in sgen_thread_handshake (suspend=suspend@entry=1) at sgen-os-posix.c:224
#4 0x005c7e92 in sgen_client_stop_world (generation=generation@entry=0) at sgen-stw.c:234
#5 0x005e6aca in sgen_stop_world (generation=0) at sgen-gc.c:3389
#6 0x005e6c29 in sgen_perform_collection (requested_size=4096, generation_to_collect=0, reason=0x6d9595 "Nursery full", wait_to_finish=0) at sgen-gc.c:2322
#7 0x005da62a in sgen_alloc_obj_nolock (vtable=vtable@entry=0x7fe85c0343c0, size=size@entry=128) at sgen-alloc.c:291
#8 0x005da913 in sgen_alloc_obj (vtable=vtable@entry=0x7fe85c0343c0, size=128) at sgen-alloc.c:457
#9 0x005c86e9 in mono_gc_alloc_obj (vtable=vtable@entry=0x7fe85c0343c0, size=) at sgen-mono.c:936
#10 0x005a8b54 in mono_object_new_alloc_specific (vtable=vtable@entry=0x7fe85c0343c0) at object.c:4385
#11 0x005a8bf0 in mono_object_new_specific (vtable=0x7fe85c0343c0) at object.c:4379
#12 0x005a8c8c in mono_object_new (domain=domain@entry=0x1ded1c0, klass=) at object.c:4318
#13 0x005ac1c9 in mono_async_result_new (domain=domain@entry=0x1ded1c0, handle=handle@entry=0x0, state=0x0, data=data@entry=0x0, object_data=object_data@entry=0x7fe8838af020) at object.c:5768
#14 0x005887ff in mono_threadpool_ms_begin_invoke (domain=0x1ded1c0,
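[Editor's note: for the "how to track down the large numbers of allocations" question above, one option is Mono's log profiler, which can record every allocation with its call site. A minimal sketch, assuming a Mono version with the log profiler available; the application and output file names are placeholders:]

```shell
# Run the app with allocation tracking enabled (writes a binary log).
mono --profile=log:alloc,output=app.mlpd server.exe

# Afterwards, summarize allocations by type and allocating call site.
mprof-report --reports=alloc app.mlpd | head -40
```

The report's allocation section lists the types and managed stack frames responsible for the most bytes allocated, which should point at the code triggering the frequent nursery collections.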
Re: [Mono-dev] High threadpool CPU usage
Hi Chris,

The first stack trace you are observing is threadpool thread parking. We use this technique for threads that are currently not doing anything: we keep them around for a little while (5-60 seconds) so that if we have a burst of usage, we do not end up destroying and creating threads all the time.

The second stack trace you are observing is, as you noted, taken while performing a garbage collection, when the GC thread is suspending all the running threads. So if you are witnessing this trace very often, it means a thread is allocating memory very quickly, triggering GC collections very often. I would need more information to tell you exactly why it would use 100% CPU.

Also, which version of Mono are you running?

Thank you very much,
Ludovic

On Wed, May 25, 2016 at 8:30 PM Chris Swiedler wrote:
> We have a server app which periodically goes into a mode where all
> threadpool threads start running at very high CPU. I've run pstack while
> it's in this mode, and every time I do, nearly all the threadpool
> threads have this stack:
>
> #0 pthread_cond_timedwait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238
> #1 0x00618817 in mono_cond_timedwait_ms (cond=cond@entry=0x7fe7ee1fddc0, mutex=0x241eb78, timeout_ms=) at mono-mutex.c:181
> #2 0x00586f28 in worker_park () at threadpool-ms.c:509
> #3 worker_thread (data=) at threadpool-ms.c:607
> #4 0x005841e9 in start_wrapper_internal (data=) at threads.c:725
> #5 start_wrapper (data=) at threads.c:772
> #6 0x00621026 in inner_start_thread (arg=0x7fe831df4650) at mono-threads-posix.c:97
> #7 0x7fe88a55edf5 in start_thread (arg=0x7fe7ee1fe700) at pthread_create.c:308
> #8 0x7fe88a28c1ad in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
>
> Usually one thread will have a stack like this:
>
> #0 sem_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/sem_wait.S:85
> #1 0x0061aa37 in mono_sem_wait (sem=0x9542c0, alertable=alertable@entry=0) at mono-semaphore.c:107
> #2 0x005c77bd in sgen_wait_for_suspend_ack (count=count@entry=18) at sgen-os-posix.c:188
> #3 0x005c78f9 in sgen_thread_handshake (suspend=suspend@entry=1) at sgen-os-posix.c:224
> #4 0x005c7e92 in sgen_client_stop_world (generation=generation@entry=0) at sgen-stw.c:234
> #5 0x005e6aca in sgen_stop_world (generation=0) at sgen-gc.c:3389
> #6 0x005e6c29 in sgen_perform_collection (requested_size=4096, generation_to_collect=0, reason=0x6d9595 "Nursery full", wait_to_finish=0) at sgen-gc.c:2322
> #7 0x005da62a in sgen_alloc_obj_nolock (vtable=vtable@entry=0x7fe85c0343c0, size=size@entry=128) at sgen-alloc.c:291
> #8 0x005da913 in sgen_alloc_obj (vtable=vtable@entry=0x7fe85c0343c0, size=128) at sgen-alloc.c:457
> #9 0x005c86e9 in mono_gc_alloc_obj (vtable=vtable@entry=0x7fe85c0343c0, size=) at sgen-mono.c:936
> #10 0x005a8b54 in mono_object_new_alloc_specific (vtable=vtable@entry=0x7fe85c0343c0) at object.c:4385
> #11 0x005a8bf0 in mono_object_new_specific (vtable=0x7fe85c0343c0) at object.c:4379
> #12 0x005a8c8c in mono_object_new (domain=domain@entry=0x1ded1c0, klass=) at object.c:4318
> #13 0x005ac1c9 in mono_async_result_new (domain=domain@entry=0x1ded1c0, handle=handle@entry=0x0, state=0x0, data=data@entry=0x0, object_data=object_data@entry=0x7fe8838af020) at object.c:5768
> #14 0x005887ff in mono_threadpool_ms_begin_invoke (domain=0x1ded1c0, target=target@entry=0x7fe8838aee38, method=method@entry=0x2963d28, params=params@entry=0x7fe7ed9f8f10) at threadpool-ms.c:1300
> #15 0x0054b547 in mono_delegate_begin_invoke (delegate=0x7fe8838aee38, params=0x7fe7ed9f8f10) at marshal.c:2111
> #16 0x416d29d8 in ?? ()
> #17 0x in ?? ()
>
> Just from reading the first stack, it doesn't look like mono_cond_timedwait_ms
> would spin, at least as long as the timeout_ms wasn't 0. For the second
> stack, I don't know whether that's a normal garbage collection pass or
> (since we see it frequently) a sign that garbage collection is happening
> too often.
>
> Can anyone give me some pointers for where to dig more deeply?
>
> Thanks,
> chris
>
> _______________________________________________
> Mono-devel-list mailing list
> Mono-devel-list@lists.ximian.com
> http://lists.ximian.com/mailman/listinfo/mono-devel-list
[Mono-dev] TCP connects
In looking through a recent issue, when a node disappears we end up with:

"Threadpool worker" at <0x>
  at (wrapper managed-to-native) System.Net.Sockets.Socket.Connect_internal (intptr,System.Net.SocketAddress,int&) <0x>
  at System.Net.Sockets.Socket.Connect (System.Net.EndPoint) <0x00135>
  at System.Net.WebConnection.Connect (System.Net.HttpWebRequest) <0x00615>
  at System.Net.WebConnection.InitConnection (object) <0x0031a>
  at System.Net.WebConnection.m__0 (object) <0x00024>
  at (wrapper runtime-invoke) .runtime_invoke_void__this___object (object,intptr,intptr,intptr) <0x>

on every thread running our async code (the same stack repeats on each threadpool worker). As the connections time out, the entire ThreadPool very quickly blocks.

https://github.com/mono/mono/blob/master/mcs/class/System/System.Net/WebConnection.cs#L195

I am guessing the right fix here would be to use an async connect?

Cheers,
Greg

--
Studying for the Turing test