On 3/17/2014 10:13 AM, Gergely Risko wrote:
> On Sat, 15 Mar 2014 23:01:15 -0400, Jeffrey Altman
> <[email protected]> writes:
>
>> Gergely,
>>
>> I'm going to prune the majority of the content because I would like
>> to focus on the threats you wish to protect against.
>
> Thank you for the very detailed response; I'll try to address the
> issues you raised.
>
>> You have proposed a mechanism for locking down some of the RPCs on
>> the VOL and VL services based upon:
>>
>>   system:anyuser (the current behavior)
>>   system:authuser
>>   system:administrator
>>
>> I believe that such broad controls on the RPCs that are not used by
>> the cache managers are reasonable. Doing so will not violate the
>> agreement with IBM on the use of the AFS protocol. However, I'm not
>> sure that doing so will address your specific threats.
>>
>> I also believe there needs to be an additional level to permit
>> system:authuser + authenticated foreign users.
>
> Forgive my unfamiliarity with foreign users in AFS, but is there
> already some mechanism to have "friendly zones"? Just allowing
> anyone with an AFS ticket to any zone doesn't seem to be fruitful
> (it's easy to install a fake zone for yourself).

I'm not sure what you mean by "zone", although I believe you mean the
same thing as "cell", which is the AFS terminology. In AFS a cell is
an administrative domain containing a collection of:

 * servers
 * volumes
 * protection users / ids
 * protection groups / ids

Each cell can have one or more Kerberos realms as a local
authentication source. When more than one realm is *local*, principal
names that share the same non-realm component across those realms
must all refer to the same entity.

When Kerberos realms are configured for cross-realm authentication,
it is possible for AFS cells to leverage the cross-realm
authentication to permit foreign (or remote, or cross-realm) users to
be assigned an AFS ID in the cell. This mapping of foreign principal
names to AFS IDs is automated by aklog unless -noprdb is used.
Foreign user registration is only permitted if a
system:authuser@realm group is defined in the protection database.

Arbitrary users cannot obtain an AFS token for a cell, and even those
who can are treated as anonymous unless their principal names have
been assigned AFS IDs in the protection database.
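To make the local/foreign distinction concrete, here is a minimal,
self-contained sketch in C. This is illustrative only, not OpenAFS
code: the helper name and the hard-coded realm list are invented, and
a real server would take the local realms from the cell configuration.

    #include <string.h>

    /* Invented example realms; a real server reads these from the
     * cell configuration rather than a hard-coded list. */
    static const char *local_realms[] = { "EXAMPLE.EDU",
                                          "CS.EXAMPLE.EDU" };

    /* Hypothetical helper: classify "user@REALM" as local or foreign
     * by comparing the realm against the cell's local realms. */
    static int
    is_foreign_principal(const char *princ)
    {
        size_t i;
        const char *realm = strrchr(princ, '@');

        if (realm == NULL)
            return 0;           /* no realm component: treat as local */
        realm++;                /* skip the '@' */
        for (i = 0;
             i < sizeof(local_realms) / sizeof(local_realms[0]); i++) {
            if (strcmp(realm, local_realms[i]) == 0)
                return 0;       /* a local realm of the cell */
        }
        return 1;               /* cross-realm (foreign) principal */
    }

A foreign principal classified this way only becomes a non-anonymous
AFS ID if the cell's administrators have created the corresponding
system:authuser@realm group, which is what makes trusting a
"friendly" realm an explicit administrative act rather than something
any ticket holder can claim.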
> Also, I agree with the comments in this thread to reuse the already
> existing terminology of AFS, so I will call my options:
>
>  - anyuser (default)
>  - authuser
>  - administrator
>  - ??? (what should we call your authuser + foreign user class?)

authuser+foreign

> What should be the sysadmin interface for this feature? A vlserver
> config that can be added in /etc/openafs/BosConfig? Or a new file in
> /etc/openafs/server? Or a dynamic setting that can be changed and
> queried through a vos RPC?

Configuration options that are set as part of the BosConfig.

>> There are a variety of methods by which spammers do this today:
>>
>> 1. They scan the contents of the "home", "usr", "user", etc. tree
>>    in the cell's file system name space. The list of mount points
>>    is more often than not system:anyuser "l", or at best
>>    system:authuser "l", in order to permit users to see each
>>    other's home directories and because machines they log into
>>    must be able to access the home directories before the user's
>>    authentication tokens have been obtained.
>
> In my setting I don't plan to give system:anyuser access to the user
> store. If users want to publish data in AFS, we will have separate
> volumes for that which will not contain their username (neither in
> the volume name nor in the path the volume is mounted on).
>
> But yes, in already existing installations, where public space is
> e.g. provided at locations like /afs/elte.hu/user/e/errge/public, my
> fix is kind of pointless, because it's obvious that there is an
> errge user.

Requiring system:authuser for home directory access is fine provided
that machines do not require anonymous access to the home directories
to read .k5login files, the .ssh directory, or other data that is
used to decide whether a user is permitted to log in. Windows
systems, for example, require the ability to read the profile
directory before authentication has been performed.

>> 2. "vos listvldb" can be used to obtain the list of all volumes.
>>    The user names can often be extracted from the volume names.
>
> Yes, I want to fix this one.
>
>> 3. "vos listaddr" to obtain the list of all file servers, combined
>>    with "vos listvol", can be used to obtain a list of all volume
>>    names.
>
> And this one too.
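To sketch how such a lockdown could hang together in the vlserver,
here is a rough illustration under stated assumptions, not a design:
the enum, the helper name, and the vldb_confdir variable are invented
for this example, and the authuser+foreign case is shown collapsed
onto authuser because the extra protection database lookup is elided.
Only rx_ConnectionOf(), rx_SecurityClassOf(), and afsconf_SuperUser()
are existing OpenAFS interfaces.

    #include <rx/rx.h>
    #include <afs/cellconfig.h>
    #include <afs/auth.h>

    /* Hypothetical restriction levels matching the options above. */
    typedef enum {
        VL_RESTRICT_ANYUSER = 0,      /* current behavior: no check */
        VL_RESTRICT_AUTHUSER,         /* caller with a valid token */
        VL_RESTRICT_AUTHUSER_FOREIGN, /* local or registered foreign */
        VL_RESTRICT_ADMIN             /* super-users only */
    } vl_restrict_level;

    extern struct afsconf_dir *vldb_confdir;  /* server's config dir */

    /* Decide whether the caller of an RPC meets the required level.
     * rxkad connections use security index 2, so an rxkad caller is
     * one that presented a token.  Distinguishing registered foreign
     * users would additionally need a protection database lookup,
     * which is elided here. */
    static int
    caller_meets_level(struct rx_call *call, vl_restrict_level required)
    {
        switch (required) {
        case VL_RESTRICT_ANYUSER:
            return 1;
        case VL_RESTRICT_AUTHUSER:
        case VL_RESTRICT_AUTHUSER_FOREIGN:
            return rx_SecurityClassOf(rx_ConnectionOf(call)) == 2;
        case VL_RESTRICT_ADMIN:
            return afsconf_SuperUser(vldb_confdir, call, NULL);
        }
        return 0;
    }

An enumeration RPC such as the one backing "vos listvldb" would then
return VL_PERM early when the configured level is not met, while the
VL_GetEntryByName*() family would stay at the anyuser level so that
cache managers keep working.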
>> There is little benefit to locking down the vlserver and the
>> volserver if the file system can be searched.
>
> But it can't be in a lot of cases. Also, I don't really want to
> defend against system:authuser, because it's very hard to do that.
> They can use the Unix "last" or "w" commands on shell servers to
> mine e-mail addresses in a standard university setting. On the other
> hand, doing so they risk being punished for actions like this. At
> the same time you can't punish random Chinese IP addresses sending
> vos listvol RPCs to your servers.

The chances of being caught in a university environment are very
slim. There have been a number of articles published in the last two
years regarding insiders selling e-mail lists to spammers.

I agree that reducing the access granted to anonymous users, and
blocking end users from modifying ACLs to restore anyuser access, is
critical.

>>> - spammers can confirm based on the stats the list of users that
>>>   are actually active on a computer system,
>>
>> The cache manager debug interface (cmdebug) is implemented by all
>> existing AFS cache managers. This interface can be used to obtain
>> the list of FIDs in the cache, including the active set of
>> callbacks. The FIDs indicate the cell and the volume by ID. The ID
>> can be converted to a volume name using VL_GetEntryByName*() RPCs,
>> which must be open to permit cache managers to look up the file
>> server/partitions on which a volume is located.
>>
>> The "vos examine" reported statistics are not necessary. There is
>> no authentication on the cache manager debugging interface because
>> there is no mechanism for keying the service. The "volume stats"
>> also are not collected for a specific "computer or device" but for
>> the cell as a whole.
>
> This cache manager information leak is interesting, thanks for
> pointing it out. Is this true only for local users, or also for
> someone talking to the cache manager remotely? That is, is the debug
> interface open to remote connections?
>
> I plan to use AFS with client laptops, where every laptop has one
> user, and I don't plan to give shell access to big shared shell
> servers. This is why my question is relevant.

All AFS and RX debugging and statistics interfaces are wide open to
the world unless the local machine restricts access via firewall
rules. The cache manager and RX debugging / statistics interfaces do
not have any security class, so there is no possibility of using
authentication and access controls based upon group memberships.
These interfaces also do not know about the protection database, and
adding such a dependency would be undesirable.

>>> - from the vol stats people can monitor and figure out if someone
>>>   is at the computer using AFS, which can be part of a bigger
>>>   social attack or harassment scenario.
>>
>> The volume statistics can indicate which volumes are more actively
>> used.
>
> Yes, exactly, that was my point too; that's why I'd like to get rid
> of the public availability of those RPCs if they are not needed by
> cache managers.

They are not required by cache managers.
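As a concrete illustration of what "no security class" means, here is
a minimal sketch assuming the OpenAFS development headers. The target
address is an example, 7001 is the cache manager's UDP port, 409 is
the rx statistics service id, and error handling is elided; nothing
in this exchange ever consults a token or a group membership.

    #include <rx/rx.h>
    #include <rx/rx_null.h>
    #include <arpa/inet.h>

    int
    main(void)
    {
        struct rx_securityClass *nullsec;
        struct rx_connection *conn;

        if (rx_Init(0) != 0)          /* start rx on any local port */
            return 1;

        /* A null security object is all that is needed to talk to
         * the statistics service on a cache manager or server. */
        nullsec = rxnull_NewClientSecurityObject();
        conn = rx_NewConnection(inet_addr("192.0.2.10"), /* example */
                                htons(7001), /* cache manager port */
                                409,         /* rx stats service id */
                                nullsec, 0); /* index 0 = null class */

        /* ... statistics RPCs would be issued over 'conn' here ... */

        rx_DestroyConnection(conn);
        return 0;
    }

rxdebug and cmdebug work the same way, speaking the debug packet
format directly, which is why host firewall rules on the AFS UDP
ports are currently the only way to restrict these interfaces.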
> I'd like to elaborate a bit more on this sentence from your e-mail:
>
>> There is little benefit to locking down the vlserver and the
>> volserver if the file system can be searched.
>
> This is true when we're designing a new security system, and I of
> course hold myself to this principle when I design new systems in my
> everyday work. This case, on the other hand, is a bit different. If
> we don't start to take care of these issues at least when it's easy,
> then we will always be adding new (or leaving open old) holes with
> the reasoning seen here:
>
> http://lists.openafs.org/pipermail/openafs-info/2012-July/038333.html
>
> "I think it is fine to skip access control checks on this call
> entirely. As you point out, the information available via this RPC
> is also available to unauthenticated clients via the volserver."

You are misinterpreting the issue. That e-mail is discussing data
that is required to be available to the cache manager under the same
criteria as the ability to list the root of a volume, but was not.

The Unix AFS cache manager does not expose AFS volumes as individual
devices. The Windows cache manager does. As such, it needs to be able
to determine the volume properties at the time the device is
constructed. We relaxed the authentication check because if the root
of the volume can be accessed anonymously (and all volume root
directories can be queried for status info anonymously), then the
volume properties must be obtainable anonymously as well.
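The rule behind that change can be stated as a small invariant. The
following toy sketch uses invented helper names standing in for the
real ACL and status checks; it is not the actual fileserver code.

    struct caller;
    struct volume;

    /* Hypothetical stand-in for the real root-vnode status check. */
    extern int can_fetch_root_status(struct caller *who,
                                     struct volume *vol);

    /* An RPC revealing a volume's properties should be gated no more
     * strictly than fetching status on the volume's root directory:
     * if 'who' may already stat the root, refusing the volume
     * properties adds no protection. */
    static int
    may_return_volume_properties(struct caller *who, struct volume *vol)
    {
        return can_fetch_root_status(who, vol);
    }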
> Security is not black and white; if we fix one leak then we're a
> little bit better off already, I think. Of course, it's not optimal,
> but we should start somewhere.
>
> If you don't think that I'm really going in a bad direction with
> this proposal, then I'd appreciate your help in designing and
> implementing what is reasonable now and maybe fixing more later.

I believe that what you are implementing is a reasonable step. I have
already provided advice and will continue to do so. The point of my
e-mail was to ensure that you understood the specific threats that
you were attempting to protect against.

Jeffrey Altman