Linux-Advocacy Digest #312, Volume #27           Sat, 24 Jun 00 14:13:05 EDT

Contents:
  Re: Would a M$ Voluntary Split Save It? ("Daniel Johnson")
  Re: Would a M$ Voluntary Split Save It? ("Daniel Johnson")

----------------------------------------------------------------------------

From: "Daniel Johnson" <[EMAIL PROTECTED]>
Crossposted-To: 
comp.os.os2.advocacy,comp.os.ms-windows.nt.advocacy,comp.sys.mac.advocacy
Subject: Re: Would a M$ Voluntary Split Save It?
Date: Sat, 24 Jun 2000 18:07:32 GMT

"Leslie Mikesell" <[EMAIL PROTECTED]> wrote in message
news:8ik6pu$brd$[EMAIL PROTECTED]...
> In article <Y1S25.10265$[EMAIL PROTECTED]>,
> Daniel Johnson <[EMAIL PROTECTED]> wrote:
[snip]
> >> You've demonstrated how much you know about this.
> >
> >Haven't I, though? :D  <-- shit eating grin, there
>
> Yes, it is extremely hard to support such a claim when you
> think the RFC's stopped in the 700's.

Oh, now that's cheap! Just cuz I was able to find the telnet
'standard', and it didn't mandate the feature you wanted to
be mandated...

> >> Civilization is built on standards and agreements.  Ignoring
> >> them leads to war and destruction.
> >
> >I disagree. Competition is *good*. Progress is *good*; nailing
> >everything down and disallowing change would be bad.
>
> Of course - it is establishing standard wire protocols that
> allows multiple innovative implementations to coexist and
> progress independently.

As you point out, it restricts innovation to *implementation*; no
new features, just faster/stabler implementations of the old one.

>  Relying on single-vendor proprietary
> protocols is what keeps you from ever changing and eliminates
> competition.

Not at all; there are ways to cope with multiple protocols, and
using them gives you flexibility.

> >By the way, where did "agreements" get into it? Did MS "agree"
> >to these standards you are trying to force down their throat- and
> >indeed, did *I* agree to them? You are telling me that *I*
> >should be forced to use these things, too.
>
> If they want to claim internet interoperability they should be
> required to meet the standards.

What is "internet interoperability" and who claimed it? And why
should they be required to meet *all* the standards? Does
*anyone* meet all of them?

>   And note that no one except
> MS is forcing anyone to use anything.

Sure, but you *would* force MS to use Unix protocols
if you *could*. No?

I mean, if you'd force MS to sell Internet Explorer separately
and for a higher price, why *not*? Why's this different?

> They do, by making
> it an integral part of the OS, claiming it can't be removed.

That doesn't force you to use it. Hell, you don't even have to
buy their product, if you so dislike it.

Unless of course it does things you need, and your favorite
tools are inadequate.

[snip]
> >You can argue that most Unix software is 'incorrect'
> >because it does not adhere to POSIX.1 and never
> >did. But I bet you won't. :D
>
> That argument would be the same as arguing that
> current windows software is incorrect because it
> does not adhere to the windows 1 spec.  Neither
> is correct, nor is the claim that the specs haven't
> changed.

Well, that's true. But much Unix software does not
strictly adhere to the POSIX standard at all- it uses
X to produce a GUI, and this goes beyond POSIX.

[snip]
> >> Then why is some non-portable flavor of basic getting embedded
> >> in everything from MS?
> >
> >Because some do. Still, they don't make it so you *need*
> >visual basic for applications for everyday stuff.
>
> Nor do you need to do shell scripting for everday stuff
> on current Linux distributions.

That's true, there are again alternatives. Still, this is
devolving into a boring GUI flamewar. :( Permit me
to opt out.

[snip]
> >It's not likely to do that very often, though, given that you have
> >to get the sysadmin to write you a litle script for
> >everything.
>
> Like?

I dunno; having the sysadm write scripts was your idea,
wasn't it?

[snip]
> >This isn't actually true in principle- Apple has a scripting
> >engine that can record your action and make a script.
>
> How does it separate the context-sensitive parts of the action
> (like chosing a filename that may be different next time) from
> the fixed parts like starting a program whose name will always
> be the same?

It doesn't; it records what you do and then you edit the resulting
script.
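To make the record-then-edit idea concrete, here is a rough sketch
(in Python, purely illustrative - not Apple's actual recording
engine): the recorder captures every action with literal arguments,
and the user afterwards edits the context-sensitive literals into
parameters while the fixed parts stay as recorded.

```python
# Hypothetical sketch of record-then-edit scripting. All names here
# are invented for illustration.

def record_session():
    """What a recorder might emit: every argument is a literal."""
    return [
        ("open_app", "TextEditor"),
        ("open_file", "report-june.txt"),   # context-sensitive literal
        ("run_command", "word_count"),
    ]

def edited_script(filename):
    """The user's edited version: the filename is now a parameter,
    while the fixed parts (app name, command) stay literal."""
    return [
        ("open_app", "TextEditor"),
        ("open_file", filename),
        ("run_command", "word_count"),
    ]

recorded = record_session()
generalized = edited_script("report-july.txt")
```

The separation the questioner asked about is thus done by a human,
not by the recorder.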

[snip]
> >I'm not at all sure what you mean by this.
>
> Timed operations,

What's this?

> file transfers, generating charts from data,
> and assorted other things that take no or minimal human
> interaction.  Anything that can be completely scripted gains
> nothing from GUI operation.

I don't think most of the file transfers, charts and "assorted
other things" really can be so easily scripted; they vary
over time too much.

[snip]
> >What bad experiences in particular?
>
> I've already covered some with Microsoft.  A similar situation
> with AT&T back in the days before TCP was that their uucp
> (dial-up file transfer program) only understood their own
> modems and really only worked correctly with the type that
> had a separate serial port controlling a dialer.

I see.

Well, what I think you should appreciate is that there is
more than one solution to this (real!) problem. Standardizing
the protocols is one way, but I have been arguing that it
is not the best way.

For instance, using a standard modem protocol (i.e., Hayes)
solves your UUCP problem there. But it does so at the
cost of hindering the development of things like cheap-o
win modems.

The alternative I favor- using drivers and like plug ins- also
solves the problem. But because you can substitute a new
driver in, you can still have cheap-o win modems without
modifying software.

The difference is not great in modems- Hayes-to-winmodem
translation is in principle possible anyway. It's more
pronounced with printers.
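The driver argument above can be sketched in a few lines (a minimal
illustration, with invented names - not any real driver API): the
application codes to one interface, and each device supplies its own
driver, so a non-Hayes device is supported by substituting a driver
rather than by changing the application or the wire protocol.

```python
# Sketch of the driver/plug-in approach to device diversity.

class ModemDriver:
    def dial(self, number: str) -> str:
        raise NotImplementedError

class HayesDriver(ModemDriver):
    def dial(self, number):
        return f"ATDT{number}"          # standard Hayes dial command

class WinmodemDriver(ModemDriver):
    def dial(self, number):
        # a cheap software modem might take a host-side call instead
        return f"softmodem:dial({number})"

def place_call(driver: ModemDriver, number: str) -> str:
    # the application never knows which kind of modem it has
    return driver.dial(number)
```

Swapping `WinmodemDriver` for `HayesDriver` changes nothing in
`place_call` - that is the flexibility being claimed.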

[snip- various specific modem issues with UUCP]
> >I doubt the bugs were a design choice. It's not
> >like telnet doesnt' look bad next to Windows,
> >even when it's working properly.
>
> Refusing to update the telnet to one that actually works is
> a choice.

:D Well, refusing to include a better telnet is a choice,
certainly.

>  If you ever try to do anything with
> windows over a low bandwidth connection, you would find
> that telnet looks very good by comparison.

How low bandwidth is low? I use RAS over 28.8 modems
regularly, and greatly prefer it to telnet.

[snip]
> >Since this is done by making your product better
> >than theirs, this would seem like a good thing to me.
>
> No, it is done by doing things differently, not better.

It doesn't work that way. MS sometimes does it 'different
but worse' and then they make themselves look bad;
but usually it's 'different and better', and their competitors
look bad.

> >But bugs in telnet make MS, not Unix, look bad.
>
> People only know that after they have a working
> version.  Most people don't have a working version,
> they have the one MS bundled.

People who use telnet do know better, or they have
their system set up by someone who does: you can't
just have naive users telnetting into a Unix system and
expect them to be able to *do* anything without instruction.

*Someone* has to be the Unix guru at some point, and
he'll know the difference. He'll know enough to provide
a better telnet, if one is needed.

[snip]
> >Bundling is perfectly ordinary, and arguing that *no-one*
> >should be able to get a package just because you don't
> >want it is, well, not very neighborly.
>
> It is not perfectly ordinary to bundle something that
> breaks your competitors products.

Indeed it is not. But MS doesn't do that. "Having features
your competitor has not" isn't the same thing as "breaking
your competitor's products".

And having bugs your competitor has not is breaking
your *own* product, not his.

[snip]
> >Well, no, define important as "relevant to the real problems
> >of real users" in this context.
> >
> >Interoperability may be, but often isn't, relevant. Standards
> >never are; they are at most a means to an end.
>
> It may not be important to you today.  However in any non-trivial
> setup, choosing anything that uses a non-standard protocol will
> make it almost impossible to ever change since you will have
> to replace all components at once to replace it.

That's not so. It may be that putting *Unix* into such a
system would be infeasible- but that's just because Unix's
interoperability is so weak. Operating systems that are
really trying to be interoperable- like Windows- can adapt
to the protocol that is actually being used- even if it isn't
the one protocol Unix likes.

Interoperability is an *advantage* for Microsoft- it means you can
upgrade from Unix to Windows piecemeal.

>  If you are
> close to retiring, I suppose you can leave that as someone else's
> problem.

I see no reason to make special provision for Unix unless we
*expect* to switch to it.

>  The 'end' that standards provide is interoperability
> among components so you are always free to replace any one
> with the current best without disrupting operation with the existing
> base.

You can only do this if the standards already support the feature
you want.

With a more flexible approach, like Windows, you are not limited
to this.

> >[snip]
> >> The government is a very large customer.  And now very much
> >> vendor-locked.
> >
> >Well, I quesiton how 'vendor locked' something as big and diverse
> >as the govermnet *can* be.
>
> The bigger you are, the harder it is to change something that does
> not follow standards.

I think this assumes that the government is fully interconnected;
and I doubt that's possible. They are too big to be consistent.

> >But it's hardly the same as selling "on the promise of something
> >that wasn't usable".
>
> Was posix a requirement?  Was the implementation usable?

POSIX wasn't needed to do the job; it was one of those 'requirements'
that isn't really required.

The implementation was usable for one thing only- getting it past
the bureaucracy.

> >You've got to establish that the government agencies though
> >they could run Unix software on NT, or something like that.
>
> OK, how do you explain the requirement for posix conformance
> if you don't think it was to be able to run existing software
> or to be able to move software from one platform to another?

I think it was probably won by lobbying for it.

> Did they get it?

Sure. They can move software from Windows 98 to Windows NT. :P

[snip]
> >>  It is supposed to give you the
> >> ability to compile and link to the X library.
> >
> >Only C, not POSIX, is required for this.
>
> Except that you have to be able to connect your I/O stream
> to the right place.

No, the *X Library* has to do that. You don't. You don't need
to know how the X library does it.

[snip]
> >The problem is that on Unix, X is what you get. On
> >Windows, the GDI is what you get.
>
> No, X does not have much to do with unix.  It is an
> application program facility on the client (program)
> side, and a device specific facility on the server
> (display) side which may not even be running an OS.
> Unix just provides the socket and shared memory
> mechanisms that are normally used to connect them.

One can say the same thing of the GDI with as much truth:
it's just a shared library that connects a user program to
a device driver; Windows just provides library-linking
and kernel-access services.

But of course, on Windows you get GDI and on Unix you get
X. And that's what matters.

[snip]
> >> Nor do you need to.
> >
> >You certainly do need such a thing, for a great many
> >apps. You don't need X *per se*, but you need some
> >equavent.
>
> I mean you don't need a new one, and you don't need a bundled
> one.  The X protocol is done, several widget sets are done
> and they certainly don't need to be done in OS-flavored ways.

Sure thing. But this is what I mean: you have to go beyond
POSIX; you have to program for *Unix*.

On Unix you can expect to get X. POSIX promises no such thing.

[snip]
> >> you just copy in the source
> >> you already have and recompile.
> >
> >That's pretty much never realistic.
>
> Yes it has, at least since SysVr4 merged in BSD networking
> and it wasn't all that bad even before.  If you don't use
> the sysv-specific STREAMS code you can compile most things
> across most current flavors of unix and a lot would probably
> work on OS/2.

Copy the source and recompile works only between very
closely related OSes, like the different Unixes and the
different Windows.

> The differences that exist are pretty well
> known, so tools like autoconf can be used to deal even with
> the older more diverse variations.

It's possible to write source that will work on many
machines, but it's not as simple as just ftp'ing over
the code. Generally, you can isolate the code that
isn't portable and then produce variants of that.

That done, you can ftp over the code and compile. But
you really can't leave out that step- even on Unix,
never mind on *different* operating systems.

POSIX helps minimize the need for this on Unix
by standardizing more of the API, and a certain
amount of tradition also helps.

But it really is a Unix thing.
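The isolate-then-port step described above might look something
like this (an illustrative sketch only; paths and names are
invented): the non-portable piece is confined to per-platform
variants behind one small interface, and the rest of the code
stays common.

```python
# Sketch of isolating platform-specific code behind one interface.

import sys

def _temp_dir_unix():
    return "/tmp"

def _temp_dir_windows():
    return "C:\\TEMP"

# the only platform-dependent choice is made in one place
if sys.platform.startswith("win"):
    temp_dir = _temp_dir_windows
else:
    temp_dir = _temp_dir_unix

def scratch_file(name: str) -> str:
    # portable code builds on the isolated variant
    sep = "\\" if sys.platform.startswith("win") else "/"
    return temp_dir() + sep + name
```

Tools like autoconf automate the "choice made in one place" part at
build time rather than at run time, but the isolation discipline is
the same.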

[snip]
> >Standards often leave things unspecified so that an
> >implementor can do it 'the easy way'.
>
> Yes, but the assumption is that the implementation will
> work.

If the standard doesn't tell you what it means to "work",
there's little use to it.

[snip]
> >> It is not difficult at all to integrate anything that follows
> >> protocol standards correctly regardless of the OS.
> >
> >IE, Unix stuff.
>
> No, cross-platform, among the platforms that follow standards.

"The platforms that follow standards" is Unix and a few
oddities like Linux. That's because "standards" means
"Unix technology" when you say it.

Let me put it this way. Is COM a standard?

No, of course not. We all know it isn't.

Why not? It has a standards body.

Sure it does- but it's not an accepted Unix tool,
and *that* is the critical thing.

> >The point I keep on hammering on here is that
> >Unix stuff is stil Unix stuff, even when standardized.
>
> And you are wrong every time.

Hey, at least I'm consistent! :P

[snip]
> >Think of it this way. C++ is standardized. So should
> >(compliant) C++ program be accepted by a FORTRAN
> >compiler? How about a Java compiler?
>
> I don't see any relationship here.  The issue is equivalent to
> whether you should be able to transparently replace one vendor's
> C++ compiler with one that you think is better without having
> anything break as a result, or whether you should be able to compile
> java to bytecode on one machine and run it on a different JVM.

Well, the C++ standard only standardizes C++. It does not standardize
FORTRAN.

In the same manner, the POSIX standard only standardizes POSIX-
it does not standardize Win32.

One does not complain that a FORTRAN compiler is bad because
it does not accept conforming C++ programs; one says it is not
a C++ compiler.

One should not complain that Windows is bad because it does
not accept conforming Unix programs; one should say
it is not a Unix.

Is that sufficiently clear?

> MS doesn't seem to think you should.

Nor anyone else. You *do* realise that g++ is simply
chock full of incompatible extensions, don't you?

This "follow the standards- and no extensions" thing is
really just you. Even Unix developers do not do that.




------------------------------

From: "Daniel Johnson" <[EMAIL PROTECTED]>
Crossposted-To: 
comp.os.os2.advocacy,comp.os.ms-windows.nt.advocacy,comp.sys.mac.advocacy
Subject: Re: Would a M$ Voluntary Split Save It?
Date: Sat, 24 Jun 2000 18:07:36 GMT

"Grant Fischer" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> On Sat, 17 Jun 2000 21:26:07 GMT, Daniel Johnson
> <[EMAIL PROTECTED]> wrote:
> >"Grant Fischer" <[EMAIL PROTECTED]> wrote in message
[snip]
> >Why is that the "essense" of interoperability? Just because you
> >prefer it?
>
> No, because it allows computers of different designs
> to interoperate.

It isn't the only way to do it. So why are *wire protocols*
the 'essence'?

It almost sounds like you are *defining* interoperability
to be "common wire protocols"; is that really what you
have in mind?

> Stable API's are great for keeping
> application compatibility, but a stable wire protocol
> is useful for interoperability. Interoperability between
> different computers is inherently useful in any largish
> computer installation; chances are very good they'll have
> different types of installations that need to cooperate.
>
> It is also useful for people connecting to large networks,
> such as the Internet.

It's an approach that works to a point; it works if there
actually *is* a common wire protocol. But merely insisting
that everyone else use your protocol is often futile.

[snip]
> >If I understand correctly, DCE does the wrong thing- it provides
> >a secure RPC mechanism. Fine as far as it goes, but not
> >general purpose.
>
> DCE has RPC's, but I'm talking about the security service.
> The security service was designed to do both authentication
> and authorization.

I've done a web search and found some details about this.
Mostly in German, though. :(

> It was your argument that MS has to extend UNIX protocols
> because they are insufficent; for your Kerberos example
> I'm pointing out that Kerberos is only one part of a
> complete security framework like DCE. What was wrong
> with DCE? Couldn't MS work with the Open Software Foundation
> to fit their flexible security API's to use the DCE
> security service?

Yes, now that I look at it, I must admit they could.

They'd have had to make extensions to DCE as well. DCE
supports a security model but it's not the NT security
model; their list of allowed access modes is fixed, like
with Unix files. NT's access modes aren't.
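The contrast drawn here can be sketched roughly (simplified, with
invented names - these are not the actual DCE or NT data
structures): a Unix-style check draws from a closed set of access
modes, while an NT-style access mask is a bit-field that a resource
type can extend with its own specific rights.

```python
# Sketch: fixed access modes vs. an extensible access mask.

UNIX_MODES = {"r", "w", "x"}            # fixed: nothing else is legal

def unix_check(granted: set, wanted: set) -> bool:
    assert wanted <= UNIX_MODES, "mode not in the fixed set"
    return wanted <= granted

# NT-style: generic low bits, plus object-specific bits that a new
# resource type may define for itself
READ, WRITE, EXECUTE = 0x1, 0x2, 0x4
APPEND_MSG = 0x100        # hypothetical type-specific right

def nt_check(granted_mask: int, wanted_mask: int) -> bool:
    return (granted_mask & wanted_mask) == wanted_mask
```

A client that only understands the fixed set has no way to name a
right like `APPEND_MSG`, which is the compatibility worry raised in
the next paragraph.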

I'm not sure that DCE could be extended in a way that
would still be compatible with Unix clients, however. I
suspect adding additional access modes would make
the resources using them inaccessible to Unix clients.

That may be the reason why they did Kerberos rather
than DCE; their Kerberos extensions do not prevent
a Unix client from authenticating.

> By the time they are finished dealing
> with DCE's security service, they'll have the identity
> and the authorization info, just as you need to go
> on and use NT's services.

I'm afraid it's not quite a match; it may be possible
to extend it the way they did with Kerberos, so
NT clients get what they need, and Unix clients
still work. It's just not immediately obvious to me
how.

My main objection is to the "MS *must* use Unix
technology because we do, darn it, and we don't
want to have to bother to support MS technologies!"
attitude; I can't respect that. If you want to interoperate,
why is it that MS has to do the work?

[snip]
> >And that's the unfortunate thing about many of these
> >"open standards"; they are really *Unix* standards,
> >and encode Unix-isms directly into their 'standards';
> >those standards cannot then be used by anyone else
> >very effectively.
> >
> >A good example of such a Unix-ism is the assumption that
> >a security identity is just a user name.
>
> That's a common security distinction, and is not
> limited to Kerberos. Traditional UNIX does what you're
> talking about -- a user logs in and has group privileges
> assigned to his login session before he even attempts
> to access any files or run any programs.

Okay. I'm not saying that either NT security or Unix are
unique.

> Kerberos didn't embed the UNIX style groups into the
> identity to avoid being UNIX specific. The network standards
> I'm familiar with take pains to avoid being UNIX-specific.

They do try, but it is a very hard problem, and it is very easy
to make assumptions about how things are done that
aren't universal.

I don't think that's so awful. What I think is so unreasonable
is insisting that since you've made these assumptions and
put them in a standard, *everyone* else is obliged to match
them.

NT's security system makes assumptions too and they are
at least as grievous as the ones Kerberos is making. But
they are different.

> You're really stretching.

I don't think I am; I think it's stretching to say that everyone
else should adopt technologies developed on, by, and for
Unix simply so Unix developers won't have to do any
work to be interoperable.

And that's what I'm hearing. I don't hear any arguments
that the Unix approach to these problems is *better*,
only that if MS won't follow them it's a pain for Unix
vendors. And so it is, but if MS does follow them it's
a pain for MS.

> >> The rest is authorization.
> >
> >Authorization is the process of establishing your
> >identity. The question is, what *is* your identity?
>
> I think this is the real basis of why we're
> talking at cross-purposes. We don't agree on a very
> basic security term. This terminology isn't specific
> to UNIX.

That's true. But I think we understand each other.

> Identity is who a person is. That is established with
> authentication. It doesn't change no matter what
> the person intends to do on the computer network.
>
> A person is allowed to use certain services; that
> is a function of both the person's identity and the
> service they intend to use. That interaction is
> called authorization.

Very clean. NT bundles some authorization information-
chiefly a group list- into the user's identifying SID list;
one can argue that it *shouldn't* do this. But it does
do it. And that won't go away just because there's
a contrary standard out there.
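The bundling described here can be sketched like so (field names
invented for illustration; this is not the real token layout): an
NT-style access token carries the user's SID together with group
SIDs, so identity and a chunk of authorization data travel as one
unit, whereas a bare Kerberos principal carries identity alone.

```python
# Sketch: identity-only principal vs. identity+authorization token.

from dataclasses import dataclass, field
from typing import List

@dataclass
class KerberosPrincipal:
    name: str                    # identity only, e.g. "alice@REALM"

@dataclass
class NTToken:
    user_sid: str                               # identity
    group_sids: List[str] = field(default_factory=list)  # authorization

    def member_of(self, sid: str) -> bool:
        return sid in self.group_sids

principal = KerberosPrincipal("alice@REALM")
token = NTToken("S-1-5-21-1", ["S-1-5-32-544", "S-1-5-21-9"])
```

A pure authentication service has nowhere obvious to put
`group_sids`, which is exactly the mismatch under discussion.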

Interoperability has to be concerned with how
systems actually work, not how they ought to work.

That, in my opinion, is why a standardized wire
protocol is not a good solution to the problem of
interoperability; it's incomplete. It works only if
everyone plays the same game. But often they don't.

> Kerberos is a secure authentication service. It left
> some empty fields for carrying some authorization
> info. Other schemes, such as DCE, used this field.

Okay. NT conforms to this system, but it has to reassemble
the authorization and authentication info when the
client gets it. The software plug-in layer can do this
transparently.

It's kinda kludgy, but it does work.

[snip]
> >Which is swell. It's not the architecture MS went with. It
> >makes no sense to say MS should be restricted to the
> >design choice made by Unix designers like this one;
> >that would put the kibosh on all progress. You have
> >to be allowed to do something different.
>
> However, it is not the design of Kerberos that is causing
> problems here. MS chose their own design, and has had
> to modify Kerberos to make it work. Fortunately, that
> open-style design that you so denigrate was flexible
> enough to handle MS's choice.

Now hang on there. The "open style design" I so denigrate is
the one that says you must adhere only to standards, and
not do your own thing. That it is wrong for MS to put their
own proprietary data in a Kerberos field because nobody
else knows how to read it.

That's not a restriction Kerberos incorporates; but it is
the restriction I object to.

> It doesn't seem to me that the layered approach favoured
> by "UNIX designers" is the one that stifles progress.

It isn't. I didn't mean to suggest it was. Progress is not,
in fact, being stifled (except by the DoJ, but that's a whole
'nother flame war! :D ).

If it *were* the case that you had to stick strictly to the standards,
*that* would stifle progress.

But of course it is not really the case.

[snip]
> >I don't agree. I think that to be worthwhile you must not only
> >document the wire protocol but also make it stable- so it wont
> >change between versions. And you have to commit to it- commit
> >to not replacing it with something else.
>
> An extensible wire protocol is essential to interoperability
> between different computers, and gives the best shot of having
> different application versions talk to each other.

I don't agree. It's an approach, yes, but I don't think it's the
best approach. Extensible wire protocols can help, but
they do leave older clients just ignoring the information
they don't understand. It would be better to be able to upgrade
the older clients in place; a system of plug-ins permits this.

And if you have such a thing, you get other benefits as well;
you can interoperate with systems that don't conform to
the wire protocol at all by using a plug in for them. This isn't
always important, but interoperability often means interoperating
with old, legacy systems- perhaps while replacing them. You
can't just assume these guys speak The Chosen Protocol.
The plug-in approach gives you a possible solution in this
case.

Further, it frees you from the design restrictions a fixed
protocol imposes. Sure, you can make it extensible, but
that demands a fair amount of foresight; it's easy to miss
something you'll later need to extend.

It's not that a fixed wire protocol is a completely unworkable
solution; but I feel it's not the best one, if interoperability
is the problem.

> Before
> MS deigned to implement SMTP in their mail products,
> how were other vendors expected to design products
> to echange mail with an MS Mail system?

Now I don't know, but I've looked up enough stuff that I feel
that I can ask someone else to do the research on this one.

But I would caution you not to assume that it was simply
impossible to do that, because it could not be done the
way Unix did it.

[snip]
> >But if you do that you've lost the benefits of having an API
> >and plug in modules in the first place. You can't use
> >a different protocol even if its better, or even if you need it
> >for interoperability with someone else. If you do, you
> >are violating your own rules.
>
> I didn't say it was easy to develop an extensible wire protocol.
> Imagine how small the internet would be if it was based
> on non-extensible wire protocols. Everyone would have had
> to upgrade simultaneously from Internet V1 to V2.

Well, no. That would be true if *no* interoperability solution
existed; but wire protocols are not the only way to go.

You need *some* solution to this problem, clearly. But
that does not automatically mean it has to be Unix's.

[snip]
> >Why not? MS's appoach seems to work; you support plug
> >in modules that let it support whatever wire protocols you want.
>
> Creating a closed system without the wire protocol. You can
> talk, but only to yourself. That's easy. That's not
> interoperability.

I say you *can* talk to others. You can talk to anyone whose plug
in you can install. Windows actually does ship with several,
to boot.
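The plug-in claim above amounts to something like this (an
illustrative sketch with invented names - not Microsoft's actual
provider API): protocol handlers register under a name, and the
system dispatches to whichever handler is installed for the peer
you need to reach.

```python
# Sketch of a plug-in protocol-provider registry.

providers = {}

def register(name, handler):
    """Install a plug-in for a named protocol."""
    providers[name] = handler

def send(protocol, payload):
    """Dispatch to the installed plug-in, if any."""
    if protocol not in providers:
        raise LookupError(f"no plug-in installed for {protocol}")
    return providers[protocol](payload)

# two hypothetical plug-ins, one standard and one legacy
register("smtp", lambda msg: f"SMTP<{msg}>")
register("legacy-mail", lambda msg: f"LEGACY<{msg}>")
```

Whether this counts as interoperability or as a closed system is
precisely the disagreement: you can talk to anyone, but only once
someone has written and installed the plug-in.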

[snip]
> >That's what you are doing if you have your NT server doing
> >Active Directory *now*. Having a separate Kerberos server
> >doing only the 'username' part of it is useless if you are
> >going to have Active Directory anyway.
> >
> >Mind you, it is possible in theory with the plug in architecture
> >to do what you want. I don't think it'll happen- no point.
>
> No point for you maybe; I'm trying to convey that others have different
> needs than your own.

I appreciate that. I'm not the one who is saying everyone has
to use my favorite technology.

> A more flexible design helps meet more
> people's needs.

I agree.

> MS pumps up their Kerberos use as laying
> the foundation for interoperating with other Kerberos
> installations, but it is a one-way street.

If it is, it's because the Unix systems out there aren't
as flexible about interoperability as Windows is.

[snip]
> >Hmmm. It seems to me that it adheres to that notion; Kerberos
> >still identifies a user. The extension just provides enough information
> >to identify the user to NT, as well as Unix.
> >
> >Unix still works as before, no?
>
> Yes, that's what they're doing now. You can slave a UNIX Kerberos
> client to an NT kerberos server. The UNIX client will ignore
> the NT PAC (hey! an extensible protocol! what a concept!),

I did not ever object to extensible protocols; I object to prohibiting
extensions that aren't Standards Body Approved.
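The extensible-field behavior being discussed can be sketched as a
generic tag-length-value walk (illustrative only; this is not the
real Kerberos encoding, and the tag numbers are invented): a client
uses the tags it knows and skips the rest, so an extension like the
NT PAC is invisible to clients that predate it.

```python
# Sketch: clients skip unknown fields in an extensible protocol.

TAG_PRINCIPAL = 1
TAG_PAC       = 200        # hypothetical vendor-extension tag

def parse(fields, known_tags):
    """fields: list of (tag, value) pairs. Unknown tags are skipped,
    not treated as errors."""
    result = {}
    for tag, value in fields:
        if tag in known_tags:
            result[tag] = value
        # else: skip silently -- the ticket still authenticates
    return result

ticket = [(TAG_PRINCIPAL, "alice@REALM"), (TAG_PAC, b"\x01\x02")]
old_client = parse(ticket, {TAG_PRINCIPAL})
nt_client = parse(ticket, {TAG_PRINCIPAL, TAG_PAC})
```

The old client and the extended client read the same ticket; only
the extended one sees the extra field.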

> and go do authorization using other means.
>
> However, the NT systems cannot handle slaving off of a non-NT
> server (unless the vendor has licensed the code from MS to make
> this work.)

NT clients can, in fact, slave off of Unix Kerberos servers. They do not
get proper domain security, so you have to map Kerberos users
to NT users manually. But they can do it.

If you want to make them slave to a DCE security server, you'll
have to write the plug in for that. MS hasn't done that one.

> The PAC hasn't really been published, and the
> active directory service needs to run on the same server anyway,
> from what I understand.

Yeah, Active Directory is one of those big integrated things with
lots of connected services. They are trying to make it easy to use,
but I personally don't think it's easy enough yet.

> >> A Kerberized application running on a mainframe would have no
> >> use for the NT authorization info; it is a waste of time
> >> to even try to look it up.
> >
> >That is so. But I doubt it is a very large sort of waste of time, and
> >the mainframe can ignore it and just use the user id.
>
> Suppose that you've got a variety of different computing domains?
> You should try to look up all the possible authorization information
> and package it up to send it to each service to let it pick and
> choose?

Yeah. Can Kerberos do that? That's probably the best option.

> Suppose you don't want Service A to know what permissions
> the user has in Service B?

Service A cannot determine what permissions a user has by looking
at a user's SID-list. If DCE security exposes this information, well,
that's not MS's fault. :(





------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and comp.os.linux.advocacy) via:

    Internet: [EMAIL PROTECTED]

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Advocacy Digest
******************************
