Re: mail sandbox wall authority, inward and outbound

2000-05-12 Thread Leonid Yegoshin

From: Jon Crowcroft [EMAIL PROTECTED]

the problem with sandboxes is that they are monolithic as is this
discussion of mail - if i have a notion of a compartmentalized system
with users, and access rights (like almost all operating systems from the
late 60s onwards, but not like
simple desk top single user executives as found on many personal
computers today unfortunately),
then i can have mail agents run scripts, but with the authorities of
the user, perhaps restricted further by some context, and i can then
configure arbitrary rights w.r.t each possible tool that the script
might invoke - some of these can be gathered together under the
headings of "file input, output, execution, creation etc", and others
under the rights of "audio/video/mouse/interaction with user",

  You are right ... if we don't take into account the purpose of e-mail
exchange/web browsing. E-mail/web communication serves a presentation
purpose. Strictly speaking, there is nothing to store/change in my permanent
files beyond the e-mail archive itself. Of course, it may _use_ public
resources on my host, but not change them. If an e-mail/HTTP hit wants to
store something, it may have a separate box archived with the e-mail.
Three questions:

   - Sending files or anything else to the outside (another user): it should
 be controlled by user approval for this particular mail/web site.
 This approval may be saved for the future.

   - Changing/upgrading files on the system (mail/HTTP upgrade):
 it should be approved by signature first and by the user second.
 Cookie/auth data may be considered e-mail/HTTP private, and
 may be contained in the sandbox itself.

   - Data extraction from an e-mail/web page: this is a difficult problem
 for security purposes, because the user may copy a virus encapsulated
 in a data object. It depends highly on object design. But at least
 it is not automatic, and the user has to be seduced into doing it.
 The "Love" viruses are not the last example of nature or dark invention.
 However, it is possible to separate objects into scripts/executables
 and simple screen/voice presentations, and warn the user about the
 difference during copy/extraction. The screen and the loudspeakers are
 also a kind of sandbox :-) (We need not consider tools for pirate
 recording of played music or the latest movie.)
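The separation proposed in that last point could be sketched roughly as below. The MIME-type lists and function names are illustrative assumptions, not taken from any real mail client:

```python
# Sketch: when the user extracts an object from a message, classify it
# as "presentation" (view-only, safe) or "executable" (may carry a
# virus) and warn before copying. The type lists are illustrative.

PRESENTATION_TYPES = {"text/plain", "image/jpeg", "image/png", "audio/wav"}
EXECUTABLE_TYPES = {"application/x-msdownload", "application/x-sh",
                    "text/javascript", "application/x-vbscript"}

def classify(mime_type: str) -> str:
    if mime_type in EXECUTABLE_TYPES:
        return "executable"
    if mime_type in PRESENTATION_TYPES:
        return "presentation"
    return "unknown"        # unknown types deserve a warning too

def extract(mime_type: str, ask_user) -> bool:
    """Return True if extraction proceeds; warn on anything runnable."""
    kind = classify(mime_type)
    if kind != "presentation":
        return ask_user(f"Object is {kind}; extract anyway?")
    return True
```

The point is only that the distinction is mechanical enough to check at copy time, not that any particular type list is right.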

"network i/o to such and such an address (list)", etc
for convenience and expressiveness in the ACL system (other
management tools like user, other, groups etc help scale the task)
and then i can design a set of sensible security policies for a site,
and employ an expert to configure things for everyone - typically,
with good systems, defaults and default operating system notions of
user, file permissions, sudo type access etc, will suffice...

   Centralized rights configuration can't solve the problem.
The protection problem can be formulated in simple terms which are
clear and understandable to the user. And the end user decides about risk.
Nobody knows better whom the user trusts than the user himself.
But to do so, the security boundary needs to be drawn in a way that is
convenient for the end user.

iff you start with a decent system;
otherwise, forget it - someone will always find a way to set things up
disastrously wrong, because it will be the only way to get work done
this is a standard problem with systems that impose all or nothing
security - either they leak like a sieve or users find them
unusable...

  It depends on the design. If high security is not a huge inconvenience
for the user, then virus replication slows down dramatically and we lower
the number of people who want to write viruses.

   - Leonid Yegoshin.

so the solution is to ditch indecent systems.

In message [EMAIL PROTECTED], Leonid Yegoshin typed:

 From: "James P. Salsman" [EMAIL PROTECTED]
 
 A MUA might ask the console operator for permission to proceed when:
 
 1. A mail message wants to run a program.  (e.g., ECMAscripts.)
 
 2. An attachment is executable. (Nearly universal practice.)
 
 3. A program wants to write to a file.  (Usually not trapped more
 than once per execution if at all.)
 
 4. A program wants to read your address book.  (Does any mail system
 that offers this functionality limit it at all?)
 
 5. A program wants to send mail.  (e.g., having MAPI's Send notify
 the user and queue the proposed message as a draft instead of sending.)
 
  6. A program wants to send a file somewhere -- or any permanently stored
 information (such as a cookie, but not limited to that).




Re: mail sandbox wall authority, inward and outbound

2000-05-12 Thread Leonid Yegoshin

From: Markku Savela [EMAIL PROTECTED]

I think we should "turn around the view" (maybe you were saying this
in another way).

That is, instead of ACL type protection, where a resource is
associated with a list of allowed users and uses, we should have a
list of allowed resources and uses attached to each program
(executable or active object).

And by default, a program could not access any resources at all.

In case of a mail attachment containing an executable, we could quite
safely try to run it, and the system would just inform the user that it
is trying to open this or that file (do you want to allow it?), trying to
open a TCP connection to port 25 (do you want to allow it?), or trying to
execute another program (do you want to allow it?).

   I hope you are joking. How many users know what
"TCP connection to port 25" means?
And how many Windows users know whether "attached program wants to open
file C:\windows\cpl32.xxx" is legitimate?

  The predicted reaction after a month or two is: press "OK".
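For what it's worth, Savela's default-deny, per-program model might be sketched as below. All names are hypothetical; remembering each answer per (program, resource) pair is one way to blunt, though certainly not cure, the "press OK" fatigue:

```python
# Sketch of the "turn the view around" model: permissions attach to
# the program, not the resource, and default to deny. The first use
# of a resource asks the user; the answer is remembered per
# (program, resource) pair so the same question is not repeated.

class ProgramSandbox:
    def __init__(self, program, ask_user):
        self.program = program
        self.ask_user = ask_user        # callback: question str -> bool
        self.grants = {}                # resource -> bool (remembered)

    def request(self, resource):
        if resource not in self.grants:
            self.grants[resource] = self.ask_user(
                f"{self.program} wants to {resource}. Allow?")
        return self.grants[resource]

# A deny-everything user policy: the attachment runs, but touches nothing.
sandbox = ProgramSandbox("attachment.exe", ask_user=lambda q: False)
sandbox.request("open file C:\\windows\\cpl32.xxx")   # asked once, denied
sandbox.request("open TCP connection to port 25")     # asked once, denied
```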

   - Leonid Yegoshin.




Re: mail sandbox wall authority, inward and outbound

2000-05-11 Thread Leonid Yegoshin

From: "James P. Salsman" [EMAIL PROTECTED]

A MUA might ask the console operator for permission to proceed when:

1. A mail message wants to run a program.  (e.g., ECMAscripts.)

2. An attachment is executable. (Nearly universal practice.)

3. A program wants to write to a file.  (Usually not trapped more
than once per execution if at all.)

4. A program wants to read your address book.  (Does any mail system
that offers this functionality limit it at all?)

5. A program wants to send mail.  (e.g., having MAPI's Send notify
the user and queue the proposed message as a draft instead of sending.)

 6. A program wants to send a file somewhere -- or any permanently stored
information (such as a cookie, but not limited to that).

       - Leonid Yegoshin.




Re: NAT-IPv6

2000-04-26 Thread Leonid Yegoshin

From: "Steven M. Bellovin" [EMAIL PROTECTED]

In message 001501bfaf43$127e4d00$[EMAIL PROTECTED], "Eliot Lear" writes:
It is a complete fallacy that NAT provides any sort of security.  It does
no such thing.  Security is provided by a firewall, and (more importantly)
by strong security policies that are policed and enforced.

Eliot is absolutely right.  A NAT box *might* be part of a firewall, but by
itself it isn't one.  It's no more secure, and often less so, than an
application-level firewall.

   You are both right ... from a strict point of view. But if an intruder
can't reach a target host simply because he does not know how to open a
TCP connection to it, then that is also a part of security.

The myth that NATs per se provide strong security is one of the greatest
barriers to their elimination.

   It is not a myth. It is a level of thinking. If you set up only a
firewall and you are not a very good network engineer, you can't understand
where the next threat could come from. Your TCP stack/firewall/etc. may have
a bug; some new protocol may have a design flaw. But anybody clearly
understands that if your internal hosts do not have a public address, then
all attacks can only be static - wait until an internal host opens a TCP
connection to somewhere. And this kind of attack can at least be
investigated, and a compromised external host may be found.

  I am not a NAT defender, but I recognize how IS departments think.
I prefer a mixed solution, like a unique host system ID plus some
controllable route address.

       - Leonid Yegoshin, LY22




Re: NAT-IPv6

2000-04-26 Thread Leonid Yegoshin

From: Greg Hudson [EMAIL PROTECTED]

 But anybody clearly understands that if your internal hosts do not have
 a public address, then all attacks can only be static - wait until
 an internal host opens a TCP connection to somewhere.

This is a naive understanding.  Source-routing would let me get
packets through to an internal address unless your NAT also acts as a
firewall.

   Let's try it. Today most hosts have "IP forwarding" switched off,
for security reasons.

(Granted, I think it turns out that pretty much all NATs do this kind
of firewalling in all cases.  But there's no reason why a firewall
allowing only outgoing connections should be any more error-prone than
a NAT gateway.)

   Greg, how do you identify an outgoing RTP connection, like VoIP, for
example? UDP often has no clear "open" packet and is difficult to control
in a classic firewall. Fortunately VoIP may have H.323 or SIP negotiation
first, but are you sure about other protocols?
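A rough sketch of what an outgoing-only firewall has to do for UDP, given that there is no SYN to mark an "open": remember outbound (local, remote) datagram pairs and later admit only replies that match a recent entry. The tuple shape and the 30-second timeout are arbitrary illustrative choices:

```python
# Sketch of stateful UDP filtering. A TCP firewall can key on the SYN;
# for UDP the best it can do is treat the first outbound datagram as
# the "open", remember the flow, and age the entry out after a timeout.

import time

class UdpFlowTable:
    TIMEOUT = 30.0        # seconds; arbitrary illustrative value

    def __init__(self):
        self.flows = {}   # (local, remote) -> last-seen timestamp

    def outbound(self, local, remote, now=None):
        """Record an outbound datagram as an implicit 'open'."""
        self.flows[(local, remote)] = now if now is not None else time.time()

    def inbound_allowed(self, local, remote, now=None):
        """Admit an inbound datagram only if it matches a fresh flow."""
        now = now if now is not None else time.time()
        seen = self.flows.get((local, remote))
        return seen is not None and now - seen < self.TIMEOUT
```

The weakness Yegoshin points at is visible here: if the media stream arrives on ports that were only negotiated inside H.323/SIP signalling, no matching outbound entry exists and the filter must either parse the signalling or reject the stream.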

       - Leonid Yegoshin, LY22




Re: draft-ietf-nat-protocol-complications-02.txt

2000-04-25 Thread Leonid Yegoshin

From: Keith Moore [EMAIL PROTECTED]

 even if you do this the end system identifier needs to be globally
 scoped, and you need to be able to use the end system identifier
 from anywhere in the net, as a means to reach that end system.

   DNS is a bright and successful example of such a deal.

actually, DNS is slow, unreliable, and often out of sync with reality.

DNS reverse lookup tables (PTR) are not as well maintained as forward
lookup tables (A) so they're even less reliable.

  (Bill Manning has a different opinion here - read it, please)

hosts often don't know their own DNS names, so they wouldn't know
their connection endpoint names either.

DNS names are often ambiguous - because a single DNS name corresponds
to multiple hosts (all implementing the same service) or because a
single host supports multiple DNS domains (a different name for each
service) or both.

   Yes, that is a reason why we can't use domain names as system IDs today.
But it means that we should shift consideration to the appropriate level -
- IP addresses. A second level of indirection arises, but it is not
bad as long as we do not decrease setup speed.

the binding between a DNS name and an address is not the same thing
as the binding between an address and a connection endpoint.  the
two have different purposes and different lifetimes.

  I agree here.

when people say "DNS can do the job" they may be saying different
things, e.g.
(a) they are thinking in terms of using existing DNS servers,
(b) they are thinking in terms of using the DNS protocol, having
translation occur at the boundaries between routing goop realms
(similar to NAT's DNS ALG), or
(c) they are thinking in terms of a DNS-like system, but not DNS

   I am speaking about (c). Routing addresses may be handled by providers
but not by the end user. It should be

   (1) more accurate,
   (2) faster,
   (3) independent in terms of the "rules of the game" (they may be changed
   later without rewriting everything).

with separate servers (whether they are existing DNS servers or not)
there is the problem of keeping the servers updated as to the
current condition of the network.

with DNS at translation boundaraies there is the problem of
"call setup overhead" (having queries propagate through
multiple layers of translation until they reach their destination
network) also in this case DNS becomes a routing protocol of
sorts, since the thing you advertise from one realm to another
becomes a DNS suffix rather than than an address prefix.
it's not at all clear that this scales.

  Keith, it depends on the design. I can propose:
 (1) caching;
 (2) implicit route resolution on each DNS A? query (each TCP setup is
 preceded by a DNS A? query, or uses cached values) - a "router DNS" may
 track each DNS query/response and append a routing prefix to the response.
 In this case there isn't any time lag ... at least up to TTL expiration;
 (3) separating network datagram service addresses from connection-oriented
 ones. We have two real network datagram services now - DNS and maybe NTP
 (voice RTP traffic, although UDP, is really a connection-oriented service).

But I am not afraid of "call setup overhead" (see above), because I fear
only a call setup speed decrease and support issues. As long as call setup
speed is unchanged and support is simple, I like it.
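Point (2) might be sketched as follows. The relay, the prefix format, and the callback names are all hypothetical; the only claim is that a resolver sitting on the router can piggyback routing information onto answers it already sees, with TTL-bounded caching and no extra round trip:

```python
# Sketch of a "router DNS" relay: it watches A? responses and appends
# the current routing prefix, so a host learns the endpoint address
# and its route together. Answers are cached until the TTL expires.

import time

class RouterDnsRelay:
    def __init__(self, resolve, current_prefix):
        self.resolve = resolve                 # name -> (address, ttl_seconds)
        self.current_prefix = current_prefix   # () -> routing prefix string
        self.cache = {}                        # name -> (answer, expires_at)

    def query(self, name, now=None):
        now = now if now is not None else time.time()
        hit = self.cache.get(name)
        if hit and hit[1] > now:               # fresh cache entry: no lag
            return hit[0]
        address, ttl = self.resolve(name)      # upstream DNS lookup
        answer = (address, self.current_prefix())
        self.cache[name] = (answer, now + ttl)
        return answer
```

If the routing prefix changes, hosts pick it up lazily as cached entries expire, which matches the "no time lag ... up to TTL expiration" caveat above.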

in either case having to do a DNS-like query before you can transmit
is slow compared to just sending a packet to an IP address.

   We can implement techniques which do both at the same time.
We do not need to replicate DNS again. We should admit that DNS is a
network service, in contrast with end-customer services, and handle it
differently. That means different service rules, and maybe a different set
of addresses, may be used for it. The same holds in BGP: there is a
practice of hiding the inter-router addresses of a provider backbone, for
example to increase security and give additional flexibility in backbone
configuration.

Keith

p.s. if there's ever going to be a split between endpoint names and
routing goop, I'm convinced that endpoint names have to be usable
by themselves

  Yes.

  (perhaps with some speed penalty), that the
mapping between endpoint names and routing goop needs to be maintained
by the routing infrastructure rather than in some separate database,

  Yes ! Yes !

and that the lookup needs to be able to be done implicitly (as a side
effect of sending packets without routing goop to their destination)
rather than explicitly.  I think such a separation might be a good idea,
because our current means of propagating reachability information
and computing routes does have limitations.  but I don't see any need
to change the packet format seen by hosts (from IPv6), or any need to
change end hosts at all, in order to do this.

   Keith, sorry, I didn't read this p.s. before I wrote my answer to the
main mail body. You understand me absolutely; thank you.

   - Leonid Yegoshin, LY22




Re: NAT-IPv6

2000-04-25 Thread Leonid Yegoshin

From: "Charles E. Perkins" [EMAIL PROTECTED]

If we get to a model where large new domains use IPv6 addressing
with NAT to global IPv4 address space, that would be quite useful.
Before too long, services will appear on the IPv6 network that
can't get the IPv4 global addresses they need.

   I asked a friend of mine who manages a corporate network: "how long"?
He answered: "Why? I have 3 big outside servers and 1000 desktops.
I need only 5 non-NATted Internet addresses and 128 NATted ones...
And NAT is a very powerful security firewall for me - I don't need to
keep an eye on the desktops!"

   - Leonid Yegoshin, LY22




Re: draft-ietf-nat-protocol-complications-02.txt

2000-04-25 Thread Leonid Yegoshin

From: Keith Moore [EMAIL PROTECTED]

 If people's livelihood depends on something, they're more likely to insure
 it actually works.

that's a good point.  but it's one thing to make sure that DNS mappings
for "major" services are correct, and quite another to make sure that
the DNS mappings are correct in both directions for every single host.

even the DNS names for major services may not be well maintained.
at one time I did a survey of the reasons for mail bounces
for one of my larger mailing lists.  about half of the mail bounces
seemed to be due to configuration errors.  about half of those
seemed to be due to DNS configuration errors - e.g. MX records pointing
to the wrong host, zone replicas not being kept in sync, zone
replicas which were different but with the same serial number.

 In your view, what is it in the DNS protocol(s) that results in a lack of
 reliability?

the reliability problems are mostly not the protocol...though the protocol
does have limitations if you want to use it (as some have proposed) to
support host or process mobility.  and in the face of even moderate
packet losses DNS queries can take a very long time.

mainly it's the fact that DNS is maintained as a separate entity.
if you really want it to be in sync with reality, you need some
mechanisms to ensure that updates happen automagically, and/or that
configuration errors are automatically and quickly detected and
the information about the error gets to the person who can fix it.

  Problems with DNS maintenance arise from the fact that DNS has to be
maintained manually. If we return to routing address resolution, it can
be deployed on a maintenance-free basis, because all the needed
information is already concentrated in the routers. There are no
human-friendly names here, and allocation of route addresses may be done
automatically (with a tree-based strategy from some root point in the
network).

       - Leonid Yegoshin, LY22




Re: IPv6: Past mistakes repeated?

2000-04-24 Thread Leonid Yegoshin

From: "Steven M. Bellovin" [EMAIL PROTECTED]

In message BB2831D3689AD211B14C00104B14623B1E7569@HAZEN04, "David A Higginbotham" writes:
I agree! Why create a finite anything when an infinite possibility exists?
On another note, I have heard the argument that a unique identifier already
exists in the form of a MAC address why not make further use of it?

Would it surprise anyone to hear that all of that was considered and
discussed, ad nauseum, in the IPng directorate?  That's right -- we weren't
stupid or ignorant of technological history.  There were proponents for
several different schemes, including fixed-length addresses of 64 and later
128 bits, addresses where the two high-order bits denoted the multiple of 64
to be used (that was my preference), or CLNP, where addresses could be quite
variable in length (I forget the maximum).

But the first thing to remember is that there are tradeoffs.  Yes, infinitely
long addresses are nice, but they're much harder to store in programs (you can
no longer use a simple fixed-size structure for any tuple that includes an
address) and (more importantly) route, since the router has to use the entire
address in making its decision.  Furthermore, if it's a variable-length
address, the router has to know where the end is, in order to look at the next
field.  (Even if the destination address comes first, routers have to look at
the source address because of ACLs -- though you don't want address-based
security (and you shouldn't want it), you still need anti-spoofing filters.)
I should add, btw, that there's a considerable advantage to having addresses
be a multiple of the bus width in size, since that simplifies fetching the
next field.

   Routers may use different addresses for routing. The outbound router
may assign a "route address" to keep intermediate route tables small.

   It is not the same as NAT, because the original and real destination
address is never replaced.
   - Leonid Yegoshin.




Re: Internet SYN Flooding, spoofing attacks

2000-02-15 Thread Leonid Yegoshin

From: Vernon Schryver [EMAIL PROTECTED]

 ...
 The basic idea then would be to trace back bad packets that
 conform to some typically innocent, but occasionally troublesome,
 profiles.  The profiles will become self-evident with experience,
 and once people know they will be caught by this traceback
 system they will think twice before spreading their crap around.

If I were building a DDoS engine today, I'd write a conventional
(Microsoft) DOS virus that does nothing except once every 3 minutes do
the equivalent of:

(echo "GET /index.html HTTP/1.0"; echo) | telnet -r $1 80

(maybe with a random request instead of /index.html)

After a few 1,000,000 desktops have been infected by familiar virus
vectors, the victim might notice the traffic.
How would you filter for them?  Even if you could give routers
enough processing power, what would you learn from the filtering
that you'd care to apply?

  It is possible to get a bigger bang in this case: the virus may ask
DNS servers about generated domain names. An algorithm to defeat the
negative-caching effect is simple. With millions of names in .com,
it can keep asking for a long time.
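Such generated names are what would later be called a domain generation algorithm. A minimal sketch, with an illustrative derivation scheme not taken from any real virus:

```python
# Sketch: each infected host derives the same fresh pseudo-random .com
# name per time interval. Because every name is new, negative caching
# in resolvers never helps, and each query becomes a miss that walks
# up to the root/TLD servers.

import hashlib

def generated_name(seed: str, interval: int) -> str:
    """Deterministic per-interval domain; identical on every host."""
    digest = hashlib.md5(f"{seed}:{interval}".encode()).hexdigest()
    return digest[:12] + ".com"

# Each host would ask its resolver for generated_name(seed, t // 180)
# once every 3 minutes; a few million hosts produce a few million
# distinct cache misses per interval against the .com servers.
```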

       - Leonid Yegoshin, LY22