Public bug reported:

I noticed that systemd on my idle Wily desktop is creating very short-
lived threads at 1Hz.  While these threads aren't doing much work, the
periodic wakeups needed to create them still consume power.

Showing thread creation with forkstat:

$ sudo forkstat 
Time     Event  PID  Info  Duration Process
13:43:07 clone     1 parent          /sbin/init splash
13:43:07 clone  7483 thread          /sbin/init splash
13:43:07 exit   7483      0    0.001 /sbin/init splash
13:43:08 clone     1 parent          /sbin/init splash
13:43:08 clone  7484 thread          /sbin/init splash
13:43:08 exit   7484      0    0.000 /sbin/init splash
13:43:10 clone     1 parent          /sbin/init splash
13:43:10 clone  7485 thread          /sbin/init splash
13:43:10 exit   7485      0    0.000 /sbin/init splash
13:43:11 clone     1 parent          /sbin/init splash
13:43:11 clone  7486 thread          /sbin/init splash
13:43:11 exit   7486      0    0.000 /sbin/init splash
13:43:12 clone     1 parent          /sbin/init splash
13:43:12 clone  7487 thread          /sbin/init splash
13:43:12 exit   7487      0    0.000 /sbin/init splash
13:43:13 clone     1 parent          /sbin/init splash
13:43:13 clone  7488 thread          /sbin/init splash
13:43:13 exit   7488      0    0.000 /sbin/init splash
13:43:15 clone     1 parent          /sbin/init splash
13:43:15 clone  7489 thread          /sbin/init splash
13:43:15 exit   7489      0    0.000 /sbin/init splash
13:43:16 clone     1 parent          /sbin/init splash
13:43:16 clone  7490 thread          /sbin/init splash
13:43:16 exit   7490      0    0.000 /sbin/init splash
13:43:17 clone     1 parent          /sbin/init splash
13:43:17 clone  7491 thread          /sbin/init splash
13:43:17 exit   7491      0    0.000 /sbin/init splash

And it's consuming some cycles over time (roughly 12M cycles over the
46-second sample below, about 0.26M cycles/sec):

$ sudo perf stat -p 1
^C
 Performance counter stats for process id '1':

          7.519868      task-clock (msec)         #    0.000 CPUs utilized
                41      context-switches          #    0.005 M/sec
                39      cpu-migrations            #    0.005 M/sec
                 3      page-faults               #    0.399 K/sec
        12,107,977      cycles                    #    1.610 GHz
        10,597,101      stalled-cycles-frontend   #   87.52% frontend cycles idle
                 0      stalled-cycles-backend    #    0.00% backend cycles idle
         2,285,818      instructions              #    0.19  insns per cycle
                                                  #    4.64  stalled cycles per insn
           457,133      branches                  #   60.790 M/sec
            69,444      branch-misses             #   15.19% of all branches

      46.099593011 seconds time elapsed

Tracing PID 1 shows the per-second cycle that spawns each short-lived
thread:

clock_gettime(0x7 /* CLOCK_??? */, {52592, 947682919}) = 0
read(14, "\1\0\0\0\0\0\0\0", 8)         = 8
fcntl(30, F_DUPFD_CLOEXEC, 3)           = 15
ioctl(30, 0xc0189374, 0x7ffeaf311470)   = 0
fcntl(16, F_GETFD)                      = 0x1 (flags FD_CLOEXEC)
clone(Process 7466 attached
child_stack=0x7f97c3580e30, 
flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID,
 parent_tidptr=0x7f97c35819d0, tls=0x7f97c3581700, child_tidptr=0x7f97c35819d0) 
= 7466
[pid  7466] set_robust_list(0x7f97c35819e0, 24) = 0
[pid     1] timerfd_settime(14, TFD_TIMER_ABSTIME, {it_interval={0, 0}, 
it_value={52594, 197493000}}, NULL <unfinished ...>
[pid  7466] ioctl(15, 0xc018937c <unfinished ...>
[pid     1] <... timerfd_settime resumed> ) = 0
[pid  7466] <... ioctl resumed> , 0x7f97c3580d60) = -1 EAGAIN (Resource 
temporarily unavailable)
[pid     1] epoll_wait(4,  <unfinished ...>
[pid  7466] close(15)                   = 0
[pid  7466] close(16)                   = 0
[pid  7466] madvise(0x7f97c2d81000, 8368128, MADV_DONTNEED) = 0
[pid  7466] _exit(0)                    = ?
[pid  7466] +++ exited with 0 +++
<... epoll_wait resumed> {{EPOLLIN, {u32=3, u64=3}}}, 34, -1) = 1

** Affects: systemd (Ubuntu)
     Importance: Undecided
         Status: New

** Summary changed:

- systemd on wily desktop generating short lived threads every second in a 
quite system
+ systemd on wily desktop generating short lived threads every second in a 
quiet system

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1470845

Title:
  systemd on wily desktop generating short lived threads every second in
  a quiet system

Status in systemd package in Ubuntu:
  New

