On Tue, Mar 12, 2013 at 7:28 AM, Mark Seger <[email protected]> wrote:
>>>  I just tried monitoring the start of collectl with another
>>> copy running at a monitoring interval of .1 and watching all CPUs with
>>> -sC.  I did see one hit 100% but only for 1 cycle.
>>
>> When I use the .1 s interval, the precision is much lower, everything
>> is a multiple of 10 and the instance of collectl that is doing the
>> monitoring takes about 30% of the CPU.  Most intervals have 100% user,
>> but the few sys spikes are much higher than before, up to 50%.
>>
>
> I'd think the precision would be much higher since you're not taking
> 10 samples/sec

From the precision, it looks like I'm taking 10 samples every .1 s, or
100 samples/second.  This is more than sufficient for my needs.
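For what it's worth, the multiples of 10 would be consistent with the
kernel's accounting granularity rather than collectl itself: Linux reports
CPU time in /proc/stat in units of clock ticks (USER_HZ, typically 100 per
second), so a 0.1 s window contains only about 10 ticks and each tick is
worth 10% of CPU.  A quick generic check of the tick rate (nothing here is
collectl-specific):

```shell
# USER_HZ: the unit /proc/stat counts CPU time in (ticks per second).
getconf CLK_TCK

# With a 0.1 s sample window, the smallest observable CPU increment is
# one tick, i.e. 100 / (CLK_TCK * 0.1) percent -- 10% when CLK_TCK is 100.
awk -v hz="$(getconf CLK_TCK)" \
    'BEGIN { printf "granularity: %.0f%%\n", 100 / (hz * 0.1) }'
```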

>>>  I don't think I
>>> could easily make some of the code conditionally load because it's too
>>> intertwined.
>>
>> OK, thanks for considering it and for all the suggestions above.
>>
>> By scattering a few print statements throughout collectl, I see that
>> most of the time is spent loading and parsing the files collectl
>> (before first line of code) and formatit.ph (require
>> "$ReqDir/formatit.ph").  This is why I was hoping that separate files
>> could be loaded depending on the usage; reading system information,
>> writing daemon raw files, reading daemon raw files, outputting to
>> console.
>>
>
> have you timed collectl running as a daemon?  when I first wrote
> collectl over 10 years ago I was running on much smaller boxes and
> performance issues were a bigger issue, but even then things really
> went fast.  my initial thought was to put everything collectl needed
> for formatting output into formatit and everything else in the main
> section so we are on the same page.  as it turned out the main init
> routine gets called in both places so formatit has to get loaded
> anyways AND I've strayed from my original plan and have stuck other
> routines in formatit that may not be related to printing, again
> because perl has always shown to be so fast.

I'm always amazed at how fast perl runs its scripts.  However, the
problem here is the time required to load the source and internally
compile it, or whatever perl does at startup.

Those two files are quite large and on start-up perl requires a lot of
CPU to process them.

# wc collectl.pl formatit.ph
  6611  31696 245152 collectl.pl
  9867  32637 363110 formatit.ph
 16478  64333 608262 total

Looking back at the 2.2-7 release, which is the oldest I could find on
sourceforge, the current files are larger than they were then, but not
dramatically so; only about 50% larger.

Further confirmation that the issue is just the time required for perl
to compile these scripts:

# time perl -c collectl.pl
collectl.pl syntax OK

real    0m11.752s
user    0m11.440s
sys     0m0.260s

# time perl -c formatit.ph
formatit.ph syntax OK

real    0m10.955s
user    0m10.540s
sys     0m0.170s


> all that said, have you see the --utime switch (I've got switches for
> everything).  you can use it to tell collectl to write special
> timestamps into the raw file to allow you to literally see how much
> time to the usec is spent in the data collection section.  might prove
> illuminating for you.  or not...

Thanks, but for now the runtime behavior of the program is more than
sufficient at my fairly long recording interval.  I'll investigate
further with the --utime switch if the CPU used by the daemon ever
becomes an issue.  For now I'll just accept the long start-up time;
it's not a blocking issue.  I've further minimized its impact by having
start-stop-daemon start collectl in the background rather than waiting
for collectl to background itself, so that the rest of the start
sequence can continue.
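For anyone else working around the same startup delay, that arrangement
might look roughly like this in an init script.  The paths and the
collectl arguments are illustrative, not a tested configuration:

```shell
# Let start-stop-daemon do the backgrounding itself (--background) instead
# of waiting for collectl to fork, so the rest of the boot sequence
# continues while perl compiles collectl.pl and formatit.ph.
start-stop-daemon --start --background \
    --make-pidfile --pidfile /var/run/collectl.pid \
    --exec /usr/bin/collectl -- -f /var/log/collectl
```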

Thanks again for the assistance.

--
Chris

_______________________________________________
Collectl-interest mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/collectl-interest
