Thank you for your detailed answer.

First, I will try to answer your questions:

Concerning the new privileges:
If you have a better suggestion concerning naming and responsibility, please do 
not hesitate to tell me; I am open to any idea.

As I understand the PRIV_NET_* proposal, the two privileges should do the 
following:

PRIV_NET_ACCEPT: Should restrict accept() and listen(), as well as recv* for 
unconnected sockets (which are possible with UDP as well). I caused confusion 
by mentioning bind(); it is not part of this privilege.

PRIV_NET_INITIATE: Should cover connect() and send* for unconnected sockets.

Please note that neither privilege should cover Unix domain sockets (according 
to the proposal, these should be restricted by future capabilities 
(PRIV_IPC_*), together with doors, message queues, semaphores, files, ...).

Regarding the problems you mentioned with this approach: something like 
limiting the set of ports and protocols available to an application is at 
least very hard if you choose to do it with new privileges (I know of RSBAC, a 
set of Linux kernel patches, where it is possible to associate a kind of 
network protocol/operation and port template with a process, role, or user).
The privileges should only be dropped by processes that really want to disable 
new network activity entirely; the various problems that occur when one drops 
them have already been discussed on the list.

Personally, I am not really sure whether one should differentiate between 
PRIV_NET_INITIATE and PRIV_NET_ACCEPT or merge them into one.
I interpreted the proposal to mean that incoming traffic should be queued 
(perhaps until the privilege reappears in the effective set).

Concerning the syslog/SNMP and inetd issue: I am not involved deeply enough in 
either daemon to understand how the privileges would break conformity. Is 
syslog TCP/IP-related? (I thought it used Unix domain sockets.)

Concerning the SCTP operations: to me it looks as if SCTP supports all the 
operations the new privileges should restrict. Please correct me if I'm wrong.

Concerning further documents: it would be really helpful if I could get some 
other material in addition to the pure code.

In the meantime, I have continued my investigation and some details are now 
clear to me.
The biggest problem I have is that I am only familiar with the traditional 
socket operations but not (yet) with things like XTI/TLI.

I think I have already found the correct places for the connect() checks (it 
seems I have to do the checks at module level in order not to miss any way a 
connection can be established in Solaris):

udp_connect in udp.c (should I return TACCES with udp_err_ack in case of 
insufficient privileges?)
tcp_connect in tcp.c (should I return TACCES with tcp_err_ack?)
sctp_connect in sctp_conn.c (this check would be at layer 2 (sockfs) and not 
at the STREAM module layer; is there an SCTP STREAM module? How should I deal 
with SOCK_SEQPACKET-type SCTP sockets?)
icmp_connect in icmp.c (should I handle this protocol at all?)
Up to this point, I have not focused my investigation on NCA, so I will add 
suggestions concerning this protocol later.

According to the proposals on the list, I should not restrict bind() at all. 
This first sounded strange to me, but after I realized that every connect 
requires a bind, it makes sense.

For the send* methods on unconnected UDP sockets, I think the correct place is
udp_output in udp.c.
To my mind, I have to check for udp->udp_state != TS_DATA_XFER in order to 
find out whether I am dealing with connected or unconnected UDP. The checks 
have to be done for M_DATA and for T_UNITDATA_REQ, although I am not totally 
sure why one has to differentiate between the two message types.

Currently, I have no idea how to restrict unconnected UDP sockets from 
receiving datagrams.
Of course I could block sotpi_recvmsg for unconnected sockets or even 
introduce checks in socksyscalls.c, but the STREAMS API still allows messages 
to be received via getmsg. Both code paths only read from a message queue that 
is filled by UDP. I am not sure whether I should additionally change 
sock_getmsg or strgetmsg.

If I understood the code correctly, listen is only defined as an operation 
that changes the backlog of a bound socket. There is only sotpi_listen for TCP 
and UDP and no equivalent at the STREAM module layer (there one can find only 
bind). Should I modify sotpi_listen so that it checks the privilege when the 
_SOBIND_LISTEN flag is set? For SCTP, I would change sosctp_listen in 
socksctp.c.
I am not sure whether other mechanisms may bypass the layer where I intend to 
introduce the checks.

To restrict accepting new connections, I have only found sotpi_accept in 
socktpi.c for UDP and TCP, and sosctp_accept for SCTP. I am not sure whether 
one could bypass these functions with another API (XTI/TLI); perhaps I should 
intercept accept calls one layer deeper, in sowaitconnind in sockstr.c?

Lots of questions, hopefully some answers 8-)

Thanks in advance

Johannes



-----Original Message-----
From: James Carlson [mailto:[EMAIL PROTECTED]
Sent: Mon 12.06.2006 14:05
To: Nicolai Johannes
Cc: [email protected]; [EMAIL PROTECTED]
Subject: Re: [networking-discuss] Entry points for checks related with the 
new capabilities PRIV_NET_ACCEPT and PRIV_NET_INITIATE
 
Nicolai Johannes writes:
> After I introduced two new privileges as suggested on the list (see
> attachment with new priv_defs file for semantic), I started to dig
> into the network kernel code to discover where to test these
> privileges (new secpolicy functions are in progress).

Though I know you're implementing someone else's suggestion, I do have
a couple of questions about that.

I'm not sure that the privileges are practical.  For instance,
disabling PRIV_NET_INITIATE would likely also damage syslog() and any
SNMP instrumentation from that process.  It sounds to me like this is
a rather large hammer to be taking to this problem, and that something
finer might be needed.  (Something like limiting the set of ports and
protocols available to an application, though I don't know the best
way to do that.)

The other question is about how these new privileges function.  In
particular, what does it mean to restrict recv*() for UDP?  The
analogous function for TCP is just read().  UDP doesn't have anything
that really looks like TCP's accept() interface.  Why should revoking
PRIV_NET_ACCEPT cause a UDP application to stop receiving data, but
allow it to transmit?  (And what happens to the inbound data?  Does it
just queue up?)  For TCP, is the privilege associated with bind(),
listen(), or accept()?  I'd have guessed listen(), despite the name,
but perhaps there's more to this.  (Breaking accept() or recv*() means
that a reduced privilege service could never be a wait-type inetd
service.  Is that intentional?)

How do these privileges map into SCTP?  That has a slightly different
set of functions associated with it.

> The privileges have to be probed before a connect, a listen, a bind, an 
> accept or a send*/recv* for unconnected UDP is made.

The description of the functions in priv_defs makes no mention of
bind().  What's the exact list of checks needed?

> Basically, I figured out three potential places where to place new privilege 
> tests:
> 
> 1. At system call entry level (socksyscalls.c): I am not sure whether a user 
> may circumvent the check if he e. g. uses putmsg and a bind request

Yes, a user could do that.

> 2. At vnode-socketops level (socktpi.c and socksctp.c): By the way: Is the 
> sctp-implementation compatible with the TPI-model (e. g. its connect function 
> is directly called and not realized in a STREAM module that receives 
> T_CONN_REQ, isn't it?)

No, SCTP in Solaris has only limited TLI functionality.  You can't
use TLI applications with SCTP.

> May checks at socketops level be circumvented by a user if SS_DIRECT is set?

For TCP and UDP, the user can just open the /dev/tcp or /dev/udp node
and connect away.  Socketops isn't involved.

> 3. At TPI module/driver level (tcp.c, udp.c, tl.c (seems as if Unix Domain 
> socket functionality is implemented here) and in sctp_bind.c, sctp_output.c 
> and various other files

That's where I believe it needs to be done.

> I would prefer chosing possibility 1 or 2 but looking for other privilege 
> checks like PRIV_NET_PRIVADDR, I realized that these tests use possibility 3.
> Perhaps even a mixed approach should be applied.

That sounds like a design question.  Yes, a mixed approach could be
used, perhaps to make the error handling cases in sockfs simpler, but
I think there needs to be exactly one layer where all of the checks
are made.  If that's not done, there'll be holes.

I'm not really sure if sockfs needs any checks.

> While looking through the code, I found some pieces about NetaTalk
> and NCA. Are these protocols relevant for my job?

AppleTalk is an add-on product.  It'd probably be ok if that (and
others of its kind, like SNA) were to fend for themselves.  I don't
see how much could be done for them here.  (And it's unclear to me
whether checking in sockfs would be effective for them, either.  I
expect they have the same TLI issues.)

NCA is probably relevant here.  NCA behaves like a special address
family that caches web pages in the kernel.  It's able to receive
connections.

> Is there a figure or something that describes what PTI
> modules/drivers/stream head are used when a communication via
> TCP/UDP/SCTP takes place?

Besides the code, I think we'd have to look into what documents we
could get you.  (Darren?)  The old Mentat documentation is somewhat
helpful, but I don't know if we have rights to redistribute it.

> At the moment, I am a little bit confused about the terms PTI, TLI,
> XTI, stream head, sockfs, vnodes and devpaths, I do not exactly know
> how many possibilities there are to do the same thing but using
> another system call or way through the various layers.

I've never seen the term "PTI" used in the code.  (Did you mean TPI?)

TLI and XTI are programming interfaces that applications can use.
They're System V-ish things and are the subject of several standards
to which Solaris conforms.  They're roughly equivalent to BSD sockets,
but use a message passing model instead, and are around 20 times more
difficult to use.  (Plus or minus a small amount.)

The stream head is part of STREAMS -- strdata and stwdata.  It's where the
top end of a stream is normally terminated and converted into the
familiar syscall interface.  STREAMS allows other entities inside the
kernel to behave as stream heads, but when you're talking about "the
stream head" in general and without reference to a particular stream
configuration, you're referring to os/streamio.c.

Sockfs is a BSD sockets system call interface using a file system
implementation (inheriting the read/write semantics of files).  It can
use TPI messages (the in-kernel STREAMS messaging version of TLI) or
direct function calls to reach the internal transport functions.  It
emulates an old STREAMS module implementation -- "sockmod" -- that can
be popped off by applications that want to manipulate the stream
underneath.  When (or really "if") that happens, many of the
direct-call optimizations have to go away.

Vnodes are part of the file system implementation.  They roughly
represent open files (as opposed to open file _descriptors_).

Devpaths have to do with how a device is named within the system.  I'm
not sure what the context of that question might be.

> First I thought that everything goes over traditional Unix system
> calls (like connect, bind, ...),

Not everything.  Standards-conforming applications can use XTI, which
doesn't have those calls.

In particular, RPC (and thus NFS) doesn't use BSD sockets.

> then over routines in socksubr.c to a vnode (defined either in
> socktpi.c or socksctp.c) and then over TPI messages to the concrete
> transport protocol.

Sort of.  There may be two vnodes here, but that's the rough idea.

> sctp seems not to use PTI when it calls its connect function.

SCTP is different.  There are no standards-conforming applications
that can speak to SCTP, there are no TLI/XTI extensions defined to
handle the new SCTP semantics, and our efforts have been devoted to
making BSD sockets work better (not TLI/XTI), so the project team
chose to make it available via sockets only.

> Furthermore, I learned from the code that one could use putmsg
> instead of connect to connect a socket (I have no idea how this
> should work with sctp).

It doesn't.

> Then I discovered the possibility to use
> t_open and /dev/tcp and a file named /etc/netconfig and things like
> SS_DIRECT. For what purpose /dev/ip, /dev/tcp, ... are actually
> needed?

It's for support of those standards-conforming applications.

> How this relates to sockfs?

It doesn't.

> Things get harder for me when examining how data is received or
> sent. I understand how read and writes are transferred to socket ops
> (sockvnops.c and socksctpvnops.c) but they use different approaches
> and different pathes through the layers.

Indeed.  The code forks here due to TLI/XTI.

> First I thought, this would be done with T_DATA_REQ messages but
> this seems to be only one possibility (e. g. strwrite and direct
> calls seems to be another).

Yes.

> Perhaps these details are not relevant for my job, at the moment, I
> am not sure in what detail I have to understand the networking code.

I don't think they're relevant, because I would expect the relevant
controls to be placed in each transport layer, where enough context is
available.

> To sum it up: I am looking for the earliest places, I can intercept
> requests in order to probe the required privileges where there is no
> possibility to circumvent the place where the tests are located. The
> deeper these places are, the harder it is to catch all valid pathes
> and to react to future changes.

There are multiple paths in and multiple paths out, so focusing on the
earliest place may not be the right answer.  If you implement it in
sockfs, then you'll need to solve the problem differently for TLI/XTI,
which just goes through the regular stream head directly into the
transport layers.

It seems to me that the place in the system that has the right sort of
context to make a decision like this is the transport layer itself.
Only the transport layer knows whether a given series of actions
represents an inbound or outbound connection, which is what I _think_
is being addressed here, and it's common among all of the access
methods.

The unfortunate thing is that there are multiple transport layer
protocols, so you'll need to hit all of them.  (And there may be more
in the future, so this project sets future requirements.)

> I have no further experience in the Solaris kernel, so please
> forgive me if answers to my questions are too obvious. If there is
> any freely available documentation about the network part in the
> kernel, I should read, please tell me (at the moment, I only had a
> look into the STREAMS Programming Guide).

That's a good place to start.  You can also get documents related to
the standards (if you're interested) at www.opengroup.org.

-- 
James Carlson, KISS Network                    <[EMAIL PROTECTED]>
Sun Microsystems / 1 Network Drive         71.232W   Vox +1 781 442 2084
MS UBUR02-212 / Burlington MA 01803-2757   42.496N   Fax +1 781 442 1677

_______________________________________________
networking-discuss mailing list
[email protected]