On Thu, Feb 04, 2021 at 10:37:20PM +0800, Weiping Zhang wrote:
> On Thu, Feb 4, 2021 at 6:20 PM Balbir Singh <bsinghar...@gmail.com> wrote:
> >
> > On Sun, Jan 31, 2021 at 05:16:47PM +0800, Weiping Zhang wrote:
> > > On Wed, Jan 27, 2021 at 7:13 PM Balbir Singh <bsinghar...@gmail.com> 
> > > wrote:
> > > >
> > > > On Fri, Jan 22, 2021 at 10:07:50PM +0800, Weiping Zhang wrote:
> > > > > Hello Balbir Singh,
> > > > >
> > > > > Could you help review this patch, thanks
> > > > >
> > > > > On Mon, Dec 28, 2020 at 10:10 PM Weiping Zhang <zwp10...@gmail.com> 
> > > > > wrote:
> > > > > >
> > > > > > Hi David,
> > > > > >
> > > > > > Could you help review this patch ?
> > > > > >
> > > > > > thanks
> > > > > >
> > > > > > On Fri, Dec 18, 2020 at 1:24 AM Weiping Zhang
> > > > > > <zhangweip...@didiglobal.com> wrote:
> > > > > > >
> > > > > > > If a program needs to monitor the status of many
> > > > > > > processes, it needs two syscalls for every process: the
> > > > > > > first tells the kernel which pid/tgid to monitor by
> > > > > > > sending a command (a socket write) to the kernel; the
> > > > > > > second reads the statistics back from the socket. This
> > > > > > > patch adds a new interface, /proc/taskstats, to reduce
> > > > > > > those two syscalls to one ioctl. The user sets the target
> > > > > > > pid/tgid in struct taskstats.ac_pid, and the kernel then
> > > > > > > collects statistics for that pid/tgid.
> > > > > > >
> > > > > > > Signed-off-by: Weiping Zhang <zhangweip...@didiglobal.com>
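
A minimal userspace sketch of the proposed interface, for readers
following the thread. The request name TASKSTATS_IOC_GET and its
_IOWR encoding are assumptions (the patch body is not quoted here);
only the /proc/taskstats path and the use of taskstats.ac_pid as the
input field come from the description above.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/taskstats.h>

    /* Hypothetical request number; the real one is defined by the patch. */
    #define TASKSTATS_IOC_GET _IOWR('T', 0x01, struct taskstats)

    int main(void)
    {
            struct taskstats ts = { 0 };
            int fd = open("/proc/taskstats", O_RDWR);

            if (fd < 0)
                    return 1;
            /* Name the target task in ac_pid; the same ioctl call
             * returns the filled-in statistics for that pid/tgid. */
            ts.ac_pid = 1;
            if (ioctl(fd, TASKSTATS_IOC_GET, &ts) < 0)
                    return 1;
            printf("comm %s, nvcsw %llu\n", ts.ac_comm,
                   (unsigned long long)ts.nvcsw);
            return 0;
    }
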
> > > >
> > > > Could you elaborate on the overhead you're seeing for the
> > > > syscalls? I am not in favour of adding new IOCTLs.
> > > >
> > > > Balbir Singh.
> > >
> > > Hello Balbir Singh,
> > >
> > > Sorry for late reply,
> > >
> > > I ran a performance test comparing netlink mode and ioctl mode,
> > > monitoring 1000 and 10000 sleeping processes. Netlink mode costs
> > > more time than ioctl mode; that is, ioctl mode saves some CPU and
> > > responds more quickly, especially when monitoring many processes.
> > >
> > > process-count     netlink (s)     ioctl (s)
> > > ---------------------------------------------
> > > 1000              0.004446851     0.001553733
> > > 10000             0.047024986     0.023290664
> > >
> > > You can get the test demo code from the following link:
> > > https://github.com/dublio/tools/tree/master/c/taskstat
> > >
> >
> > Let me try it out. I am opposed to adding the new IOCTL interface
> > you propose. How frequently do you monitor this data, and how much
> > time is spent making decisions on the data? I presume the data
> > mentioned is the cost per call in seconds?
> >
> This program just reads every process's taskstats from the kernel and
> does no extra data calculation; that is, it only measures the time
> spent on these syscalls. It reads the data every 1 second, and the
> output is the delta time spent reading all 1000 or 10000 processes'
> taskstats.
> 
> t1 = clock_gettime();
> for_each_pid /* 1000 or 10000 */
>         read_pid_taskstat
> t2 = clock_gettime();
> 
> delta = t2 - t1.
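
A runnable version of that timing loop, assuming a read_pid_taskstat()
helper like the demo's (the helper name here is illustrative):

    #include <stdio.h>
    #include <time.h>

    /* Illustrative stand-in for the demo's per-pid read
     * (one netlink round trip or one ioctl, depending on mode). */
    extern int read_pid_taskstat(int pid);

    double time_all_pids(const int *pids, int count)
    {
            struct timespec t1, t2;
            int i;

            clock_gettime(CLOCK_MONOTONIC, &t1);
            for (i = 0; i < count; i++)
                    read_pid_taskstat(pids[i]);
            clock_gettime(CLOCK_MONOTONIC, &t2);

            /* Delta in seconds, matching the tables in this thread. */
            return (t2.tv_sec - t1.tv_sec) +
                   (t2.tv_nsec - t1.tv_nsec) / 1e9;
    }
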
> 
> > > process-count     netlink (s)     ioctl (s)
> > > ---------------------------------------------
> > > 1000              0.004446851     0.001553733
> > > 10000             0.047024986     0.023290664
> 
> Since netlink mode needs two syscalls and ioctl mode needs one, the
> test result shows netlink costing roughly double the time of ioctl.
> So I want to add this interface to reduce the time spent in syscalls.
> 
> You can get the test script from:
> https://github.com/dublio/tools/tree/master/c/taskstat#test-the-performance-between-netlink-and-ioctl-mode
> 
> Thanks
>

Have you looked at the listener interface in taskstats, where one
can register to listen on a cpumask against all exiting processes?

That provides a register-once, listen-and-filter interface (based on
the pids/tgids returned) and lets the work happen at task exit, as
opposed to polling for data.
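
A condensed sketch of that registration path, modeled on the kernel's
Documentation/accounting/getdelays.c (socket setup, the genl family-id
lookup via CTRL_CMD_GETFAMILY, the receive loop, and error handling
are omitted here):

    #include <string.h>
    #include <sys/socket.h>
    #include <linux/genetlink.h>
    #include <linux/netlink.h>
    #include <linux/taskstats.h>

    struct msgtemplate {
            struct nlmsghdr n;
            struct genlmsghdr g;
            char buf[256];
    };

    /* Register once for a cpumask string such as "0-3"; the kernel
     * then pushes a taskstats sample to this socket at every task
     * exit on those CPUs, instead of being polled per pid. */
    static int register_cpumask(int sd, __u16 family_id, const char *mask)
    {
            struct sockaddr_nl sa = { .nl_family = AF_NETLINK };
            struct msgtemplate msg;
            struct nlattr *na;
            int len = strlen(mask) + 1;

            memset(&msg, 0, sizeof(msg));
            msg.n.nlmsg_type = family_id;
            msg.n.nlmsg_flags = NLM_F_REQUEST;
            msg.n.nlmsg_len = NLMSG_LENGTH(GENL_HDRLEN);
            msg.g.cmd = TASKSTATS_CMD_GET;
            msg.g.version = TASKSTATS_GENL_VERSION;

            na = (struct nlattr *)((char *)NLMSG_DATA(&msg) + GENL_HDRLEN);
            na->nla_type = TASKSTATS_CMD_ATTR_REGISTER_CPUMASK;
            na->nla_len = NLA_HDRLEN + len;
            memcpy((char *)na + NLA_HDRLEN, mask, len);
            msg.n.nlmsg_len += NLMSG_ALIGN(na->nla_len);

            return sendto(sd, &msg, msg.n.nlmsg_len, 0,
                          (struct sockaddr *)&sa, sizeof(sa));
    }
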

Balbir Singh. 
