On Mon, 10 Jul 2017, Jan Henning Thorsen wrote:

> Charlie,
> 
> I think I was tricked by the "ps" output you had. What is 
> "/etc/e-smith/web/functions/index.daemon.cgi" ? Is that the mojo app or the 
> cgi script?

That's the mojo app.

> Can you show some code?

Not easily.

> The reason why you're seeing the polling is because of the implementation 
> in the CGI plugin. You can see it 
> here: 
> https://github.com/jhthorsen/mojolicious-plugin-cgi/blob/master/lib/Mojolicious/Plugin/CGI.pm#L35

That's 404. But I think you are referring to:

...
use constant CHECK_CHILD_INTERVAL => $ENV{CHECK_CHILD_INTERVAL} || 0.01;
...
  $app->{'mojolicious_plugin_cgi.tid'}
    ||= Mojo::IOLoop->recurring(CHECK_CHILD_INTERVAL, sub { local ($?, $!); _waitpids($pids); });
...

> Btw: I'm the author of the CGI plugin. I have not had any issues with it 
> using a lot of CPU, even with that polling interval, but you can try to 

You probably aren't running code on a 32-bit PPC CPU running at 400MHz :-)

> tweak it if you like, by 
> setting 
> https://github.com/jhthorsen/mojolicious-plugin-cgi/blob/master/lib/Mojolicious/Plugin/CGI.pm#L13
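
If I'm reading that constant right, that just means exporting the environment 
variable before the app starts, e.g. something like this (the value is 
arbitrary, and I'm assuming the app is launched directly with the daemon 
command):

  CHECK_CHILD_INTERVAL=1 /etc/e-smith/web/functions/index.daemon.cgi daemon

which would stretch the child-reaping poll from the default 10ms up to 1s.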

I can try that, but I'm trying to avoid the CGI code by porting each 
sub-app to Mojolicious, and using the Mount plugin.
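
To give an idea of the shape of that (the mount points and paths below are 
invented, not my real ones):

  use Mojolicious::Lite;

  # Parent app that mounts each ported sub-app under its own prefix,
  # instead of spawning CGI children per request. Paths are illustrative.
  plugin Mount => {'/useraccounts' => '/etc/e-smith/web/panels/useraccounts.pl'};
  plugin Mount => {'/backup'       => '/etc/e-smith/web/panels/backup.pl'};

  app->start;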

> It could probably be fixed by the same code I added to ReadWriteFork. I 
> will take a PR if anyone implements the change, where it uses EV::child(), 
> if 
> available: 
> https://github.com/jhthorsen/mojo-ioloop-readwritefork/commit/42a579d5b78eedb0d01b7db25036ca1726819f18
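
If I understand that suggestion, the EV::child version would look roughly 
like the sketch below. This isn't the plugin's actual code, and it assumes 
EV is the loaded event loop backend:

  #!/usr/bin/perl
  # Sketch: reap a forked child with an EV::child watcher (driven by SIGCHLD)
  # instead of calling waitpid() from a recurring 10ms timer.
  use strict;
  use warnings;
  use EV;

  my $pid = fork // die "fork failed: $!";
  if ($pid == 0) {
      exec 'sleep', '2' or exit 1;   # stand-in for the CGI child process
  }

  my $w;
  $w = EV::child $pid, 0, sub {
      my ($watcher) = @_;
      printf "child %d exited with status %d\n",
          $watcher->rpid, $watcher->rstatus >> 8;
      undef $w;      # drop the watcher once the child has been reaped
      EV::break;     # end the demo loop
  };

  EV::run;           # blocks, with no polling, until the watcher fires

The point being that the loop then sleeps until SIGCHLD arrives instead of 
waking every 10ms.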
> 
> 
> On Friday, June 30, 2017 at 7:15:17 AM UTC+2, Henry Foolman wrote:
> >
> > Hi Charly,
> > missing some details.
> > 1.) OS Version
> > 2.) Kernel Version
> > 3.) mojo version.
> > 4.) Do you use cpan(m) to install the latest version or are you using OS 
> > packages?
> >
> > I had similar problems but switched to a new mojo version and the latest EV 
> > (cpanm EV).
> > For EV the kernel should be > 4.5.
> > It seems that you started your app via daemon.
> > Try to use hypnotoad and check if there are changes (go to the base dir 
> > and enter hypnotoad -f script/yourscript).
> > Some hints about reading and writing to a process:
> > I use Mojo::IOLoop::ReadWriteFork together with the delay helper 
> > ($c->delay), which works perfectly.
> > Perhaps this is what you're looking for.
> >
> > Rgds.
> > Hans
> >
> >
> >> On Thursday, June 29, 2017 at 4:10:46 PM UTC+2, Charlie Brady wrote:
> >>
> >>
> >> On Thu, 29 Jun 2017, Jan Henning Thorsen wrote: 
> >>
> >> > Hey, 
> >> > 
> >> > It doesn't look like "daemon" mode. It looks like you used "morbo" to 
> >> > start the application. 
> >>
> >> I don't know what makes you say that. I definitely was using "daemon" mode 
> >> and was not using "morbo". 
> >>
> >> I didn't track down exactly what code was making the difference in the 
> >> poll timeout, but worked out that it was somewhere in the CGI plugin. I've 
> >> eliminated the CGI plugin and now find that the poll timeout is back to 
> >> 1000 ms, with a corresponding reduction in idle CPU. 
> >>
> >> I don't know exactly where the fork() was which led to the pair of 
> >> processes. I did have some code with an open() reading from a pipe, but I 
> >> wouldn't expect to see poll() in the child process in that case. 
> >>
> >> Mystery not quite solved, but now that I am not using the CGI plugin I at 
> >> least don't have an ongoing problem. 
> >>
> >> Thanks for commenting. If you want to try to identify the problem with the 
> >> CGI plugin I can re-insert that code. 
> >>
> >> > 
> >> > 
> >> > On Wednesday, June 14, 2017 at 11:11:31 PM UTC+2, Charlie Brady wrote: 
> >> > > 
> >> > > 
> >> > > I'm running a Mojolicious UI in daemon mode on a PowerPC embedded system 
> >> > > (running Wind River Linux, perl 5.22.0, kernel 4.1.21) and notice 
> >> > > surprisingly high CPU usage on a totally idle UI. 
> >> > > 
> >> > > I can see that there are two processes. Each process is in a polling loop 
> >> > > with a 10ms timeout. I don't expect to see two processes here, so I wonder 
> >> > > whether this is normal or abnormal behaviour. If it is normal behaviour, 
> >> > > can I tune the 10ms timeout to something longer? I would like to see 
> >> > > the process block until there is actual work to do. 
> >> > > 
> >> > > root@10:/service# ps fax | grep index.daemon-5 
> >> > >  1418 pts/1    S+     0:00      \_ grep index.daemon 
> >> > > 19303 ?        Ssl   24:24 /etc/e-smith/web/functions/index.daemon.cgi 
> >> > > 24397 ?        Ssl   20:54  \_ /etc/e-smith/web/functions/index.daemon.cgi 
> >> > > root@10:/service# strace -p19303 -tt 2>&1 | head -5 
> >> > > Process 19303 attached 
> >> > > 17:05:08.095154 restart_syscall(<... resuming interrupted call ...>) = 0 
> >> > > 17:05:08.104465 poll([{fd=21, events=POLLIN|POLLPRI|POLLOUT}], 1, 9) = 0 (Timeout) 
> >> > > 17:05:08.116478 poll([{fd=21, events=POLLIN|POLLPRI|POLLOUT}], 1, 10) = 0 (Timeout) 
> >> > > 17:05:08.130120 poll([{fd=21, events=POLLIN|POLLPRI|POLLOUT}], 1, 9) = 0 (Timeout) 
> >> > > root@10:/service# strace -p24397 -tt 2>&1 | head -5 
> >> > > Process 24397 attached 
> >> > > 17:05:12.888643 restart_syscall(<... resuming interrupted call ...>) = 0 
> >> > > 17:05:12.894814 poll([{fd=21, events=POLLIN|POLLPRI|POLLOUT}], 1, 10) = 0 (Timeout) 
> >> > > 17:05:12.908082 poll([{fd=21, events=POLLIN|POLLPRI|POLLOUT}], 1, 10) = 0 (Timeout) 
> >> > > 17:05:12.920771 poll([{fd=21, events=POLLIN|POLLPRI|POLLOUT}], 1, 10) = 0 (Timeout) 
> >> > > root@10:/service# top | head -12 
> >> > > top - 17:09:31 up 9 days,  7:59,  4 users,  load average: 0.73, 0.59, 0.60 
> >> > > Tasks:  98 total,   1 running,  95 sleeping,   0 stopped,   2 zombie 
> >> > > %Cpu(s): 12.9 us,  5.7 sy,  3.8 ni, 75.7 id,  1.8 wa,  0.0 hi,  0.1 si,  0.0 st 
> >> > > KiB Mem :   995536 total,     5428 free,   317100 used,   673008 buff/cache 
> >> > > KiB Swap:    65532 total,    65448 free,       84 used.   594360 avail Mem 
> >> > > 
> >> > >   PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND 
> >> > >  1440 root      20   0    3852   2280   1924 R 24.0  0.2   0:00.13 top 
> >> > >   508 root      20   0  134908  69156  13896 S 12.0  6.9 465:50.05 call_control 
> >> > > 19303 root      20   0   87812  37720   4148 S  8.0  3.8  24:45.37 /etc/e-smith/we 
> >> > > 24397 root      20   0   87788  37184   3552 S  8.0  3.7  21:13.65 /etc/e-smith/we 
> >> > >   425 root      20   0   16068  10284   3640 S  4.0  1.0  33:32.22 snmpd 
> >> > > 
> >> > 
> >>
> >
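
P.S. For the archives, the Mojo::IOLoop::ReadWriteFork plus delay-helper 
pattern Hans describes above looks roughly like this (a sketch only; the 
route and command are made up):

  use Mojolicious::Lite;
  use Mojo::IOLoop::ReadWriteFork;

  # Run an external command without blocking the event loop and render its
  # output once the child exits.
  get '/uptime' => sub {
    my $c    = shift;
    my $fork = Mojo::IOLoop::ReadWriteFork->new;
    my $buf  = '';
    $c->delay(
      sub {
        my ($delay) = @_;
        $fork->on(read  => sub { $buf .= $_[1] });   # collect child output
        $fork->on(close => $delay->begin);           # continue when it exits
        $fork->run('uptime');
      },
      sub {
        my ($delay, $exit_value, $signal) = @_;
        $c->render(text => $buf);
      },
    );
  };

  app->start;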
