On (03/11/16 20:07), Tom Herbert wrote:
> You are describing your deployment of RDS, not the kernel implementation.
> What I see is a very rigid implementation that would make it hard for
> many of us to ever even consider deploying. The ability of applications
> to tune TCP connections is well understood, very prevalent, and really
> fundamental to running TCP-based datacenters at large scale.
Sorry, but historically, OS DDI/DKIs for kernel modules have always had
some way to set up startup parameters at module load time, and clusters
have been around for a while. So let's just focus on the technical
question around module configuration here, which involves more than TCP
sockets, btw.

> Any way, it was just an idea... ;-)

Thank you. Moving on,

> Maybe add one module parameter that indicates the module should just
> load but not start, configure whatever is needed via netlink, and then
> send one more netlink command to start operations.

Even that needs an extra daemon, without getting into the vast number of
questions it raises: does every module with startup params now need a
userspace counterpart? What about modprobe -r behavior? Namespace
behavior? Why netlink for every kernel module? And so on.

One module parameter is as much a "distribution management" problem as
ten of them, yes? I hope you see that I don't need that module parameter
and daemon baggage: I can just use sysctl to set up all my parameters,
including one bit for module_can_start_now, to achieve the same thing.
But that is still more than the handful of lines of code in my patch, so
it would be nice to understand what the "distribution" issue actually is.

Stepping back: how do we make sysctl fully namespace-friendly?

Btw, setting kernel socket keepalive parameters via sysctl is not a
problem to implement at all, if it ever shows up as a requirement for
customers.

--Sowmini