> . and .. *are* hard links, modulo the magic that happens at device
> boundaries on unices. A direct effect of this is that you can compute
> the number of

However, in HTTP URLs, they do not have this meaning. The only guaranteed semantics are that, scanning left to right the concatenation of the base URL with everything after the last / (home pages strictly have a trailing one), each .. causes the deletion of the rightmost remaining /-delimited component. Excess ..'s do not cause the deletion of the site name, but nor are they ignored; they are simply illegal. However, they are so common that standard browser error recovery is to delete them without cancelling any component of the URL. I can't remember how . is handled, but, if it has a special meaning, it is rarely used.

The rules for FTP URLs may be different, as they assume a file system, not an arbitrary hierarchical name space. I'd need to check, but I think excess ones are passed through, and each one should cause the server to change one directory up. I'm not sure whether internal ones are collapsed by the browser; even on Unix, the results of these two options can differ.

In particular, FTP URLs are not directly filenames. The server must behave as though each /-delimited component before the last caused a single step down the directory tree. In some cases this may be the same as just using the full name, but not always.

There are RFCs on URLs and relative URLs which give the details accurately enough for implementors. Authors should avoid the complex cases.

; To UNSUBSCRIBE: Send "unsubscribe lynx-dev" to [EMAIL PROTECTED]
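The .. resolution described above can be illustrated with Python's urllib.parse.urljoin, which implements the relative-resolution algorithm later standardized in RFC 3986; the host and paths below are hypothetical, and this is a sketch of resolver behaviour, not of what any particular browser does.

```python
# Sketch of the ".." handling described above, using urljoin from the
# standard library (RFC 3986 resolution). example.com is hypothetical.
from urllib.parse import urljoin

base = "http://example.com/a/b/c"

# Everything after the last "/" of the base is dropped, then each ".."
# deletes the rightmost remaining /-delimited component.
print(urljoin(base, "../d"))  # http://example.com/a/d

# Excess ".." segments never delete the site name; an RFC 3986
# resolver discards the surplus segments, much like the browser
# error recovery described above.
print(urljoin(base, "../../../../e"))
```

Note that this reflects the standardized recovery rule, not the strict reading in which excess ..'s are simply illegal.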
