Nicolai Johannes writes:
> After I introduced two new privileges as suggested on the list (see
> attachment with the new priv_defs file for the semantics), I started
> to dig into the network kernel code to discover where to test these
> privileges (new secpolicy functions are in progress).
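
Going just by the names -- this mapping is my own assumption, not
something taken from the attachment -- I'm reading PRIV_NET_INITIATE
as gating outbound traffic and PRIV_NET_ACCEPT as gating inbound
traffic. In ordinary BSD-socket terms, something roughly like this:

/*
 * Illustrative only; the mapping of calls to privileges is guessed
 * from the privilege names, not taken from the priv_defs attachment.
 */
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int
outbound(const struct sockaddr_in *dst)		/* PRIV_NET_INITIATE? */
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	if (connect(fd, (const struct sockaddr *)dst, sizeof (*dst)) == -1) {
		(void) close(fd);
		return (-1);
	}
	return (fd);
}

int
inbound(const struct sockaddr_in *local)	/* PRIV_NET_ACCEPT? */
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	(void) bind(fd, (const struct sockaddr *)local, sizeof (*local));
	(void) listen(fd, 5);			/* or is it this call? */
	return (accept(fd, NULL, NULL));	/* ...or this one? */
}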
Though I know you're implementing someone else's suggestion, I do have
a couple of questions about that.

I'm not sure that the privileges are practical. For instance,
disabling PRIV_NET_INITIATE would likely also damage syslog() and any
SNMP instrumentation from that process. It sounds to me like this is a
rather large hammer to be taking to this problem, and that something
finer might be needed. (Something like limiting the set of ports and
protocols available to an application, though I don't know the best
way to do that.)

The other question is about how these new privileges function. In
particular, what does it mean to restrict recv*() for UDP? The
analogous function for TCP is just read(). UDP doesn't have anything
that really looks like TCP's accept() interface. Why should revoking
PRIV_NET_ACCEPT cause a UDP application to stop receiving data, but
allow it to transmit? (And what happens to the inbound data? Does it
just queue up?)

For TCP, is the privilege associated with bind(), listen(), or
accept()? I'd have guessed listen(), despite the name, but perhaps
there's more to this. (Breaking accept() or recv*() means that a
reduced-privilege service could never be a wait-type inetd service. Is
that intentional?)

How do these privileges map into SCTP? That has a slightly different
set of functions associated with it.

> The privileges have to be probed before a connect, a listen, a bind,
> an accept or a send*/recv* for unconnected UDP is made.

The description of the functions in priv_defs makes no mention of
bind(). What's the exact list of checks needed?

> Basically, I figured out three potential places where to place new
> privilege tests:
>
> 1. At system call entry level (socksyscalls.c): I am not sure whether
> a user may circumvent the check if he e.g. uses putmsg and a bind
> request

Yes, a user could do that.

> 2. At vnode-socketops level (socktpi.c and socksctp.c): By the way:
> Is the sctp implementation compatible with the TPI model (e.g. its
> connect function is directly called and not realized in a STREAM
> module that receives T_CONN_REQ, isn't it?)

No, SCTP in Solaris has only limited TLI functionality. You can't use
TLI applications with SCTP.

> May checks at socketops level be circumvented by a user if SS_DIRECT
> is set?

For TCP and UDP, the user can just open the /dev/tcp or /dev/udp node
and connect away. Socketops isn't involved.

> 3. At TPI module/driver level (tcp.c, udp.c, tl.c -- it seems as if
> Unix Domain socket functionality is implemented here) and in
> sctp_bind.c, sctp_output.c and various other files

That's where I believe it needs to be done.

> I would prefer choosing possibility 1 or 2, but looking for other
> privilege checks like PRIV_NET_PRIVADDR, I realized that these tests
> use possibility 3. Perhaps even a mixed approach should be applied.

That sounds like a design question. Yes, a mixed approach could be
used, perhaps to make the error handling cases in sockfs simpler, but
I think there needs to be exactly one layer where all of the checks
are made. If that's not done, there'll be holes. I'm not really sure
if sockfs needs any checks. (There's a rough sketch of the sort of
check I have in mind a little further down.)

> While looking through the code, I found some pieces about NetaTalk
> and NCA. Are these protocols relevant for my job?

AppleTalk is an add-on product. It'd probably be ok if that (and
others of its kind, like SNA) were to fend for themselves. I don't see
how much could be done for them here. (And it's unclear to me whether
checking in sockfs would be effective for them, either. I expect they
have the same TLI issues.)

NCA is probably relevant here. NCA behaves like a special address
family that caches web pages in the kernel. It's able to receive
connections.
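
To make the "one layer" point a bit more concrete, here's roughly the
shape I'd expect the checks to take. This is only a sketch: the
wrapper names and the PRIV_NET_* constants are placeholders for
whatever your priv_defs changes actually define, and I'm assuming the
new routines would sit in os/policy.c next to the existing ones and
follow the same PRIV_POLICY() pattern that secpolicy_net_privaddr()
uses today.

/*
 * Hypothetical wrappers, sketched in the style of the existing
 * routines in os/policy.c (which already has the needed headers and
 * the PRIV_POLICY() macro in scope).  Names and semantics are
 * placeholders only.
 */
int
secpolicy_net_initiate(const cred_t *cr)
{
	/* May this process originate (connect) network traffic? */
	return (PRIV_POLICY(cr, PRIV_NET_INITIATE, B_FALSE, EACCES, NULL));
}

int
secpolicy_net_accept(const cred_t *cr)
{
	/* May this process accept or receive unsolicited traffic? */
	return (PRIV_POLICY(cr, PRIV_NET_ACCEPT, B_FALSE, EACCES, NULL));
}

The wrappers themselves aren't the interesting part; the point is that
each transport (tcp.c, udp.c, the sctp_* files) would call them in
exactly one place per operation, so that sockets, TLI/XTI, and raw
putmsg() all end up at the same check.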
> Is there a figure or something that describes what PTI
> modules/drivers/stream head are used when a communication via
> TCP/UDP/SCTP takes place?

Besides the code, I think we'd have to look into what documents we
could get you. (Darren?) The old Mentat documentation is somewhat
helpful, but I don't know if we have rights to redistribute it.

> At the moment, I am a little bit confused about the terms PTI, TLI,
> XTI, stream head, sockfs, vnodes and devpaths. I do not exactly know
> how many possibilities there are to do the same thing but using
> another system call or way through the various layers.

I've never seen the term "PTI" used in the code. (Did you mean TPI?)

TLI and XTI are programming interfaces that applications can use.
They're System V-ish things and are the subject of several standards
to which Solaris conforms. They're roughly equivalent to BSD sockets,
but use a message passing model instead, and are around 20 times more
difficult to use. (Plus or minus a small amount.)

The stream head is part of STREAMS -- strdata and stwdata. It's where
the top end of a stream is normally terminated and converted into the
familiar syscall interface. STREAMS allows other entities inside the
kernel to behave as stream heads, but when you're talking about "the
stream head" in general and without reference to a particular stream
configuration, you're referring to os/streamio.c.

Sockfs is a BSD sockets system call interface using a file system
implementation (inheriting the read/write semantics of files). It can
use TPI messages (the in-kernel STREAMS messaging version of TLI) or
direct function calls to reach the internal transport functions. It
emulates an old STREAMS module implementation -- "sockmod" -- that can
be popped off by applications that want to manipulate the stream
underneath. When (or really "if") that happens, many of the
direct-call optimizations have to go away.

Vnodes are part of the file system implementation. They roughly
represent open files (as opposed to open file _descriptors_).

Devpaths have to do with how a device is named within the system. I'm
not sure what the context of that question might be.

> First I thought that everything goes over traditional Unix system
> calls (like connect, bind, ...),

Not everything. Standards-conforming applications can use XTI, which
doesn't have those calls. In particular, RPC (and thus NFS) doesn't
use BSD sockets.

> then over routines in socksubr.c to a vnode (defined either in
> socktpi.c or socksctp.c) and then over TPI messages to the concrete
> transport protocol.

Sort of. There may be two vnodes here, but that's the rough idea.

> sctp seems not to use PTI when it calls its connect function.

SCTP is different. There are no standards-conforming applications that
can speak to SCTP, there are no TLI/XTI extensions defined to handle
the new SCTP semantics, and our efforts have been devoted to making
BSD sockets work better (not TLI/XTI), so the project team chose to
make it available via sockets only.

> Furthermore, I learned from the code that one could use putmsg
> instead of connect to connect a socket (I have no idea how this
> should work with sctp).

It doesn't.

> Then I discovered the possibility to use t_open and /dev/tcp and a
> file named /etc/netconfig and things like SS_DIRECT. For what purpose
> are /dev/ip, /dev/tcp, ... actually needed?

It's for support of those standards-conforming applications.
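
To give you a feel for what that looks like (a minimal sketch,
untested, error handling omitted, and the function name is made up for
illustration), an XTI client goes straight at the device node:

/*
 * Minimal XTI client sketch -- illustrative only.  The caller is
 * assumed to have filled in the destination sockaddr_in.
 */
#include <xti.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <string.h>

int
xti_connect(struct sockaddr_in *dst)
{
	struct t_call call;
	int fd;

	/* Open the transport driver directly; sockfs never sees this. */
	fd = t_open("/dev/tcp", O_RDWR, NULL);

	/* Let the transport pick the local address and port. */
	(void) t_bind(fd, NULL, NULL);

	(void) memset(&call, 0, sizeof (call));
	call.addr.len = sizeof (*dst);
	call.addr.buf = (char *)dst;

	/* This turns into a T_CONN_REQ message down the stream. */
	return (t_connect(fd, &call, NULL));
}

Link that with -lnsl or -lxnet and it connects without ever calling
socket(), which is exactly why a privilege check that lives only in
sockfs or in the socket system calls can be sidestepped.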
> How does this relate to sockfs?

It doesn't.

> Things get harder for me when examining how data is received or sent.
> I understand how reads and writes are transferred to socket ops
> (sockvnops.c and socksctpvnops.c), but they use different approaches
> and different paths through the layers.

Indeed. The code forks here due to TLI/XTI.

> First I thought this would be done with T_DATA_REQ messages, but this
> seems to be only one possibility (e.g. strwrite and direct calls seem
> to be another).

Yes.

> Perhaps these details are not relevant for my job; at the moment, I
> am not sure in what detail I have to understand the networking code.

I don't think they're relevant, because I would expect the relevant
controls to be placed in each transport layer, where enough context is
available.

> To sum it up: I am looking for the earliest places where I can
> intercept requests in order to probe the required privileges, so that
> there is no possibility to circumvent the place where the tests are
> located. The deeper these places are, the harder it is to catch all
> valid paths and to react to future changes.

There are multiple paths in and multiple paths out, so focusing on the
earliest place may not be the right answer. If you implement it in
sockfs, then you'll need to solve the problem differently for TLI/XTI,
which just goes through the regular stream head directly into the
transport layers.

It seems to me that the place in the system that has the right sort of
context to make a decision like this is the transport layer itself.
Only the transport layer knows whether a given series of actions
represents an inbound or outbound connection, which is what I _think_
is being addressed here, and it's common among all of the access
methods. The unfortunate thing is that there are multiple transport
layer protocols, so you'll need to hit all of them. (And there may be
more in the future, so this project sets future requirements.)

> I have no further experience in the Solaris kernel, so please forgive
> me if answers to my questions are too obvious. If there is any freely
> available documentation about the network part in the kernel that I
> should read, please tell me (at the moment, I have only had a look
> into the STREAMS Programming Guide).

That's a good place to start. You can also get documents related to
the standards (if you're interested) at www.opengroup.org.

-- 
James Carlson, KISS Network                    <[EMAIL PROTECTED]>
Sun Microsystems / 1 Network Drive        71.232W Vox +1 781 442 2084
MS UBUR02-212 / Burlington MA 01803-2757  42.496N Fax +1 781 442 1677
