On Thu, Aug 11, 2011 at 10:22 PM, BGB <cr88...@gmail.com> wrote:

> if the alteration would make the language unfamiliar to people;
>

It is true that some people would rather work around a familiar, flawed
language than accept an improvement. But here's the gotcha, BGB: *those
people are not part of your audience*. For example, they will never try BGB
script, no matter what improvements it offers.



> fundamental design changes could impact any non-trivial amount of said
> code. for example, for a single developer, a fundamental redesign in a 750
> kloc project is not a small task, and much easier is to find more quick and
> dirty ways to patch up problems as they arise
>

One should also account for the technical debt accumulated from all those
'quick and dirty ways to patch up problems as they arise', and the
resulting need to globally refactor anyway.


>
> find a few good (strategic and/or centralized) locations to add security
> checks, rather than a strategy which would essentially require lots of
> changes all over the place.
>

When getting started with capabilities, there are often a few strategic
places one can introduce caps - e.g. at module boundaries, and to replace
direct access to the toplevel. I've converted a lot of C++ projects to a
more capability-oriented style for a simple reason: it's easier to test a
program when you can mock up the environment with a set of objects.

No security checks are required.
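
To make that concrete, here's a minimal C++ sketch (all names are
illustrative, not from any real project) of the style I mean: the component
receives its authority as an object instead of reaching for ambient
globals, so a test can hand it a fake:

    #include <fstream>
    #include <iterator>
    #include <string>

    // The capability: holding a reference to Storage *is* the permission
    // to use it. There is no ambient filesystem access below this point.
    struct Storage {
        virtual ~Storage() = default;
        virtual std::string read(const std::string& key) = 0;
    };

    // Production implementation, wired up once at the module boundary.
    struct FileStorage : Storage {
        std::string read(const std::string& key) override {
            std::ifstream in(key);
            return {std::istreambuf_iterator<char>(in),
                    std::istreambuf_iterator<char>()};
        }
    };

    // Test implementation: the mocked-up environment, no disk involved.
    struct FakeStorage : Storage {
        std::string read(const std::string&) override { return "test data"; }
    };

    // Authority is explicit in the signature; the function can only
    // touch what it was handed.
    std::string load_config(Storage& store) {
        return store.read("config.txt");
    }

    int main() {
        FakeStorage fake;
        return load_config(fake) == "test data" ? 0 : 1;
    }

Note there's no run-time permission check anywhere: the program simply
cannot name what it wasn't given.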


>
> a simple example would be a login style system:
> malware author has, say, GoodApp they want to get the login key from;
> they make a dummy or hacked version of the VM (Bad VM), and run the good
> app with this;
> GoodApp does its thing, and authorizes itself;
> malware author takes this key, and puts it into "BadApp";
> BadApp, when run on GoodVM, gives out GoodApp's key, and so can do whatever
> GoodApp can do.
>

You've developed some bad security for GoodApp.

If GoodApp does "authorize itself", it would first authenticate the VM,
i.e. BadVM would be unable to run the app. This is a DRM-like model you've
envisioned. More traditionally, GoodApp just has a signed hash, i.e. a
'certificate', and the VM can authenticate GoodApp. Either way, BadVM
cannot extract the key.
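
To illustrate the certificate variant, here's a toy C++ sketch - the hash
is a stand-in, not real crypto, and the names are mine, but the shape of
the check is the point: the VM verifies the loaded code against the signed
hash before granting any authority, so substituting BadApp's code fails up
front:

    #include <cstddef>
    #include <functional>
    #include <string>

    using Hash = std::size_t;

    // Stand-in for a real cryptographic hash such as SHA-256.
    Hash toy_hash(const std::string& code) {
        return std::hash<std::string>{}(code);
    }

    // The vendor's signed hash; real signing involves a private key,
    // which this toy obviously doesn't model.
    struct Certificate { Hash signed_hash; };

    // The VM hashes the code it actually loaded and compares it to the
    // certificate, before any key or authority is handed out.
    bool vm_authenticate(const std::string& loaded_code,
                         const Certificate& cert) {
        return toy_hash(loaded_code) == cert.signed_hash;
    }

    int main() {
        const std::string good_code = "GoodApp";
        const Certificate cert{toy_hash(good_code)};
        bool good_ok = vm_authenticate(good_code, cert);  // passes
        bool bad_ok  = vm_authenticate("BadApp", cert);   // fails
        return (good_ok && !bad_ok) ? 0 : 1;
    }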

In my own vision, code distributed to your machine would have capabilities.
But they still couldn't be stolen, because the distributed code (and
capabilities) would be specialized to each client - though all that extra
code generation does tend to undermine the use of certificates. The
supposed malware developer would only be able to 'steal' keys to his own
device, which is somewhat pointless since he doesn't need to steal them.
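
A toy illustration of that specialization (again, my own made-up names,
and a plain hash standing in for a keyed MAC such as HMAC): each
capability is minted for one device, so the bits are worthless anywhere
else.

    #include <cstddef>
    #include <functional>
    #include <string>

    using Token = std::size_t;

    // A capability minted for one specific device; copying the bits to
    // another device produces nothing the issuer will honor.
    Token mint(const std::string& issuer_secret,
               const std::string& device_id,
               const std::string& resource) {
        return std::hash<std::string>{}(
            issuer_secret + "|" + device_id + "|" + resource);
    }

    bool valid(Token t, const std::string& issuer_secret,
               const std::string& device_id,
               const std::string& resource) {
        return t == mint(issuer_secret, device_id, resource);
    }

    int main() {
        // The malware author extracts a capability from his own machine...
        Token cap = mint("issuer-secret", "his-device", "act-as:GoodApp");
        bool his = valid(cap, "issuer-secret", "his-device",
                         "act-as:GoodApp");
        bool yours = valid(cap, "issuer-secret", "your-device",
                           "act-as:GoodApp");
        // ...and it only works where it was already valid anyway.
        return (his && !yours) ? 0 : 1;
    }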


>> just 'passing the blame' to the user is a poor justification for computer
>> security.
>
>
> this is, however, how it is commonly handled with things like Windows.
>

It's a poor justification there, too.


>
> the only "real" alternative is to assume that the user is "too stupid for
> their own good", and essentially disallow them from using the software
> outright
>

There are *many* other "real" alternatives, ranging from running the app in
a hardware-virtualized sandbox to using a powerbox to grant authority.
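
For a feel of the powerbox pattern, a minimal C++17 sketch (hypothetical
names; a real powerbox would be a trusted file picker, not a prompt):

    #include <fstream>
    #include <iostream>
    #include <iterator>
    #include <optional>
    #include <string>

    // A capability to one file and nothing else.
    struct FileCap {
        std::string path;
        std::string read() const {
            std::ifstream in(path);
            return {std::istreambuf_iterator<char>(in),
                    std::istreambuf_iterator<char>()};
        }
    };

    // Only the powerbox sees the whole filesystem namespace; the app
    // gets back exactly the capability the user chose to grant.
    std::optional<FileCap> powerbox_choose_file() {
        std::cout << "Grant the app which file? ";
        std::string path;
        if (!std::getline(std::cin, path) || path.empty())
            return std::nullopt;
        return FileCap{path};
    }

    int main() {
        // The app can't open files on its own. Denial isn't an error
        // check; it's simply the absence of an object.
        if (auto cap = powerbox_choose_file())
            std::cout << cap->read();
    }

The user never answers a permission dialog about the app's authority; what
they pick is what the app gets.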



>
> optional features are very common though in most systems
>

They're still bad for languages.


>
> code which doesn't need to use pointers, shouldn't use pointers, and if
> people choose not to use them
>

Electing not to use pointers is not the same as turning off support for
pointers in the compiler (which, for example, would break all the libraries
that use pointers).


>
> in an "ideal" world, it would be able to be usable in a similar abstraction
> domain roughly between C++ and JavaScript.
>

The elegant simplicity of C++ and the blazing speed of JavaScript?


>
> the permissions are currently intended as the underlying model by which a
> lot of the above can be achieved.
>

They aren't necessary.