On 12/03/2015 16:19, Natanael Copa wrote:
netlink listener code that needs to be in memory all the time:
http://git.alpinelinux.org/cgit/ncopa/nldev/tree/nldev.c

 A few comments:

 - disableoom(): it's a generic operation for any daemon you don't
want to have killed, so you can factor it out: remove it from
the always-in-memory code, and run nldev under a program such as
perp's runchoom. http://b0llix.net/perp/site.cgi?page=runchoom.8
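
If you do keep it in nldev, the operation itself is tiny anyway.
A minimal sketch, assuming the usual Linux /proc/self/oom_score_adj
interface (error handling omitted):

    #include <fcntl.h>
    #include <unistd.h>

    /* sketch only: exempt this process from the OOM killer */
    static void disableoom(void)
    {
      int fd = open("/proc/self/oom_score_adj", O_WRONLY);
      if (fd >= 0)
      {
        write(fd, "-1000", 5);  /* -1000 means "never kill" */
        close(fd);
      }
    }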

 - spawn_handler(): would benefit from being implemented via
posix_spawn(), to accommodate noMMU. On the other hand, you'd
need two elements in the posix_spawn_file_actions_t, so depending
on the implementation, that could pull in malloc(). It's a
trade-off.
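
Something like this (a sketch only; I'm assuming the handler gets
the read end of a pipe as its stdin, which may not match your
actual layout):

    #include <spawn.h>
    #include <sys/types.h>

    extern char **environ;

    /* sketch: spawn prog with p[0] as its stdin, without fork() */
    static pid_t spawn_handler(const char *prog, char *const argv[], int p[2])
    {
      posix_spawn_file_actions_t fa;
      pid_t pid;
      if (posix_spawn_file_actions_init(&fa)) return -1;
      /* the two elements: dup the read end to 0, close the write end */
      if (posix_spawn_file_actions_adddup2(&fa, p[0], 0)
       || posix_spawn_file_actions_addclose(&fa, p[1])
       || posix_spawn(&pid, prog, &fa, 0, argv, environ))
        pid = -1;
      posix_spawn_file_actions_destroy(&fa);
      return pid;
    }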

 - sighandler()/initsignals(): trapping SIGKILL is a waste of code:
SIGKILL cannot be caught or ignored, so that call can only fail.

 - init_netlink_socket(): listfd needs to be close-on-exec.
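
For instance, assuming socket creation along the usual uevent
netlink lines (SOCK_CLOEXEC needs Linux >= 2.6.27; on older kernels
use fcntl() instead):

    #include <fcntl.h>
    #include <sys/socket.h>
    #include <linux/netlink.h>

    /* sketch: the type/protocol may differ from what nldev uses;
       the point is setting close-on-exec at creation time */
    static int make_uevent_socket(void)
    {
      int fd = socket(PF_NETLINK, SOCK_DGRAM | SOCK_CLOEXEC,
                      NETLINK_KOBJECT_UEVENT);
      /* older kernels: fcntl(fd, F_SETFD, FD_CLOEXEC); */
      return fd;
    }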

 - in sighandler() and at the end of main(): shutting down a socket
before closing it is a waste of code when you know there are no
copies of the socket. Shutting it down or closing it right before
exit()ing is also a waste of code. :)

 - It may be worth it to write functions similar to die(), edie()
and dbg() that don't use stdio, and call _exit() instead of exit().
If you manage to scrap stdio, you gain 2 pages of text (with the
musl implementation).
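
A sketch of what I mean (the name and prefix are illustrative; note
_exit(), so no stdio/atexit machinery gets pulled in):

    #include <string.h>
    #include <unistd.h>

    /* sketch: stdio-free equivalent of die() */
    static void die(const char *msg)
    {
      write(2, "nldev: ", 7);
      write(2, msg, strlen(msg));
      write(2, "\n", 1);
      _exit(1);
    }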

 - I understand that Alpine is runit-based. Save even more space
by scrapping all the dofork nonsense and saying that nldev should
be supervised.

 - line 213: ew. Please don't do defensive programming.

 - a pipe does not separate messages like a datagram socket does.
If you get a burst of events and nldev writes them all to the pipe,
nldev-handler won't be able to separate them. You need a message
terminator. In s6-uevent-listener, I simply used an extra \0 as
a message terminator. If you want to do the same:
   * line 225: iov.iov_len = sizeof(buf)-1;
   * insert after line 273: buf[len++] = 0;
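
On the reading side, nldev-handler can then split on NUL bytes and
treat an empty string as the end of one event. A sketch, assuming
each event on the pipe is a sequence of NUL-terminated "key=value"
strings followed by that extra \0 (handle_field() is purely
illustrative):

    #include <string.h>
    #include <unistd.h>

    /* illustrative: whatever the handler does with one field */
    static void handle_field(const char *s)
    {
      write(1, s, strlen(s));
      write(1, "\n", 1);
    }

    static void read_events(int fd)
    {
      static char buf[16384];  /* one uevent is far smaller than this */
      size_t have = 0;

      for (;;)
      {
        ssize_t r = read(fd, buf + have, sizeof(buf) - have);
        if (r <= 0) break;
        have += (size_t)r;

        size_t start = 0;
        for (size_t i = 0; i < have; i++)
        {
          if (buf[i]) continue;
          if (i == start)
          {
            /* empty string: the extra \0, i.e. end of one event */
          }
          else handle_field(buf + start);
          start = i + 1;
        }
        /* keep a possibly incomplete field for the next read() */
        memmove(buf, buf + start, have - start);
        have -= start;
      }
    }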

 - do you really mean to set the kernel buffer to 128 MB?


handler helper code:
http://git.alpinelinux.org/cgit/ncopa/nldev/tree/nldev-handler.c

 More comments later ;)

--
 Laurent
