On 25.08.2016 17:44, Stephen Hemminger wrote:
> On Wed, 24 Aug 2016 20:08:25 +0200
> Hannes Frederic Sowa <han...@stressinduktion.org> wrote:
> 
>> Show which processes are using which tun/tap devices, e.g.:
>>
>> $ ip -d tuntap
>> tun0: tun
>>      Attached to processes: vpnc(9531)
>> vnet0: tap vnet_hdr
>>      Attached to processes: qemu-system-x86(10442)
>> virbr0-nic: tap UNKNOWN_FLAGS:800
>>      Attached to processes:
>>
>> Signed-off-by: Hannes Frederic Sowa <han...@stressinduktion.org>
> 
> I think reading all of /proc like this will scale really badly on large 
> systems.

Yes, it is quite heavyweight, but so far I haven't seen any major latency
even on fairly busy systems (tens of thousands of fds). Also, this code is
only entered if details are requested (ip -d). I don't see a cheaper way
to determine this information currently without further patches (which
would probably all have the same n:m problem, as fds can be shared
between multiple processes).
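
For illustration, roughly the kind of walk I mean, as an untested,
stripped-down sketch (not the code from the patch; it assumes the tun
driver exposes an "iff:" line in fdinfo, and the name match is
simplified):

/*
 * Untested sketch: walk /proc/<pid>/fdinfo and look for an "iff:" line,
 * which (as far as I can see) the tun driver exposes for attached fds.
 * Error handling and name matching are simplified.
 */
#include <ctype.h>
#include <dirent.h>
#include <stdio.h>
#include <string.h>

static void scan_pid(const char *pid, const char *ifname)
{
	char path[256], line[256];
	struct dirent *fd;
	DIR *fds;

	snprintf(path, sizeof(path), "/proc/%s/fdinfo", pid);
	fds = opendir(path);
	if (!fds)
		return;

	while ((fd = readdir(fds)) != NULL) {
		FILE *f;

		if (fd->d_name[0] == '.')
			continue;
		snprintf(path, sizeof(path), "/proc/%s/fdinfo/%s",
			 pid, fd->d_name);
		f = fopen(path, "r");
		if (!f)
			continue;
		while (fgets(line, sizeof(line), f)) {
			if (!strncmp(line, "iff:", 4) &&
			    strstr(line + 4, ifname)) {
				printf("%s is attached to pid %s\n",
				       ifname, pid);
				break;
			}
		}
		fclose(f);
	}
	closedir(fds);
}

int main(void)
{
	struct dirent *d;
	DIR *proc = opendir("/proc");

	if (!proc)
		return 1;
	while ((d = readdir(proc)) != NULL)
		if (isdigit((unsigned char)d->d_name[0]))
			scan_pid(d->d_name, "tun0");
	closedir(proc);
	return 0;
}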

> Why reinvent lsof?

lsof currently doesn't provide this information and would also have to
brute-force it in the same way (it scans all fds anyway, though). In my
opinion it is just very helpful on virtualization boxes to quickly
identify the VM associated with an interface.

If people do report latency issues, I would propose speeding it up in
one or both of the following two ways:

a) don't glob over the whole pid/fd space at once, but take one leading
digit at a time, e.g. /proc/1*/fd/*, then /proc/2*/fd/*, etc.

b) and/or push everything into an internal hash table, so that we don't
need to walk /proc again for each additional interface (rough sketch
below).

I would just prefer not to add that complexity unless it turns out to be
necessary.

Thoughts?

Bye,
Hannes
