On 8/12/2011 12:26 AM, David Barbour wrote:
On Thu, Aug 11, 2011 at 10:22 PM, BGB <cr88...@gmail.com> wrote:

    if the alteration would make the language unfamiliar to people;


It is true that some people would rather work around a familiar, flawed language than accept an improvement. But here's the gotcha, BGB: *those people are not part of your audience*. For example, they will never try BGB script, no matter what improvements it purports.


but, whether or not they use it, or care that it exists, is irrelevant...

but, anyways, FWIW, I am myself a part of the audience of people who would use my language, so I am free to design it how I want it designed, which happens to largely coincide with common practices.


    fundamental design changes could impact any non-trivial amount of
    said code. for example, for a single developer, a fundamental
    redesign in a 750 kloc project is not a small task, and much
    easier is to find more quick and dirty ways to patch up problems
    as they arise


One should also account for accumulated technical debt, from all those 'quick and dirty ways to patch up problems as they arise', and the resulting need to globally refactor anyway.

well, but for now it works...
fixing these sorts of issues is a task for another day...


it would be like:
I "could" also go and patch up all the internal cruft currently residing within my object system (more cleanly merging the implementations of Prototype and Class/Instance objects, developing a more orthogonal/unified object model, ...).

however, this itself doesn't really matter for now, and so I can put it off until later and work on things of more immediate relevance (and rely more on the fact that most of the details are glossed over by the API).



    find a few good (strategic and/or centralized) locations to add
    security checks, rather than a strategy which would essentially
    require lots of changes all over the place.


When getting started with capabilities, there are often a few strategic places one can introduce caps - i.e. at module boundaries, and to replace direct access to the toplevel. I've converted a lot of C++ projects to a more capability-oriented style for simple reasons: it's easier to test a program when you can mock up the environment with a set of objects.

No security checks are required.

but, security checks seem like less up-front effort to bolt onto the VM...

in this case, the HLL looks and behaves almost exactly as it did before, but can now reject code which violates the security policy.

for the most part, one doesn't have to care whether or not the security model exists, much as is the case when using an OS like Linux or Windows: yes, theoretically the security model has hair (changing file ownership/permissions/... would be a pain), but for the most part the OS makes it all work fairly transparently, so that the user can largely ignore that it exists.

it is not clear that users can so easily ignore the existence of a capability-oriented system.


also, the toplevel is very convenient when entering code interactively from the console, or for use with "target_eval" entities in my 3D engine (when triggered by something, they evaluate an expression), ...



    a simple example would be a login style system:
    malware author has, say, GoodApp they want to get the login key from;
    they make a dummy or hacked version of the VM (Bad VM), and run
    the good app with this;
    GoodApp does its thing, and authorizes itself;
    malware author takes this key, and puts it into "BadApp";
    BadApp, when run on GoodVM, gives out GoodApp's key, and so can do
    whatever GoodApp can do.


You've developed some bad security for GoodApp.

If GoodApp does "authorize itself", it would first authenticate the VM, i.e. BadVM would be unable to run the app. This is a DRM-like model you've envisioned. More traditionally, GoodApp just has a signed hash, i.e. a 'certificate', and the VM can authenticate GoodApp. Either way, BadVM cannot extract the key.


I have seen stuff like this before (along with apps which stick magic numbers into the registry, ...).

but, anyway, DRM and security are sort of interrelated; just the intended purpose of the validation, and who counts as friend or enemy, differs some.


In my own vision, code distributed to your machine would have capabilities. But they still couldn't be stolen because the distributed code (and capabilities) would be specialized to each client - i.e. a lot more code generation tends to undermine use of certificates. The supposed malware developer would only be able to 'steal' keys to his own device, which is somewhat pointless since he doesn't need to steal them.


there are existing systems based on the above scheme, and I was basically pointing out a weakness of doing app authentication that way, which is why I probably wouldn't do it that way.
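The weakness in the quoted scenario can be sketched concretely. This is a hypothetical toy model (all class and key names are invented for illustration): because the "login key" is a static secret that GoodApp hands to whatever VM it runs on, a hacked VM can capture it and a different app can replay it.

```python
# Toy model of the replay weakness: a static key handed to any VM can be
# captured by a hacked VM and replayed by a different (malicious) app.

GOOD_APP_KEY = "s3cret-good-app-key"  # baked into GoodApp's image

class GoodVM:
    """Grants rights to whoever presents a known key."""
    known_keys = {GOOD_APP_KEY: ["read_files", "open_sockets"]}

    def authorize(self, key):
        return self.known_keys.get(key, [])

class BadVM:
    """A hacked VM: runs apps normally, but logs every key it sees."""
    def __init__(self):
        self.captured = []

    def authorize(self, key):
        self.captured.append(key)               # the key is now stolen
        return ["read_files", "open_sockets"]   # pretend all is well

def good_app(vm):
    return vm.authorize(GOOD_APP_KEY)  # GoodApp "authorizes itself"

# The malware author runs GoodApp on BadVM and captures the key...
bad_vm = BadVM()
good_app(bad_vm)
stolen_key = bad_vm.captured[0]

# ...then BadApp presents the stolen key to the real VM.
good_vm = GoodVM()
print(good_vm.authorize(stolen_key))  # BadApp now has GoodApp's rights
```

The fix, as noted above, is to never hand out a bare secret: either the app authenticates the VM first, or the VM verifies a signature over the app's image instead of accepting a presented key.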

my own imagined methodology was likely to make use of program signatures (file hashes for the programs' image file), with user confirmation as to whether or not they want to give certain rights to an app. this would be because it is a little harder to forge a file hash than it is to capture things like "magic keys" or "passphrases".

very likely, programs/modules would be distributed as a ZIP variant (sort of like JAR or APK), and probably validated using a file hash (for the whole archive). something vaguely similar is already used in my case for caching compiled binary modules (for my C compiler), although IIRC I was using Adler-32 or similar for this.

maybe if security were the concern, something like MD6 or SHA-2 could be used instead (stuff I have read elsewhere indicates that MD5 is in fact fairly solidly broken now: apparently collisions can be generated within a single 64-byte block, and with under a minute of execution time).
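The whole-archive hash check described above can be sketched as follows. This is a minimal illustration, assuming SHA-256 for the hash and an in-memory trust store; the module format and file names are invented:

```python
# Sketch of whole-archive hash validation for a JAR/APK-like module format.
import hashlib
import io
import zipfile

def archive_sha256(data: bytes) -> str:
    """Hash the module archive as a single byte stream."""
    return hashlib.sha256(data).hexdigest()

# Build a toy module archive in memory (stands in for the real ZIP variant).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("main.bs", 'function main() { print("hello"); }')
module_bytes = buf.getvalue()

# The user (or a trust store) records this hash when granting rights...
trusted_hashes = {archive_sha256(module_bytes)}

# ...and the VM refuses any module whose hash doesn't match.
def load_module(data: bytes) -> zipfile.ZipFile:
    if archive_sha256(data) not in trusted_hashes:
        raise PermissionError("module hash not trusted")
    return zipfile.ZipFile(io.BytesIO(data))

load_module(module_bytes)                 # accepted
try:
    load_module(module_bytes + b"x")      # tampered archive
except PermissionError as e:
    print("rejected:", e)
```

Unlike Adler-32, a cryptographic hash like SHA-256 makes it computationally infeasible to craft a different archive with the same digest.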


that or just "pull a rabbit out of a hat" and have a security model based on the assumption that hopefully people would not give a crap enough to bother trying to break the hash.

although, I guess with this as well there is always the risk that someone would figure it out and write a program to break it in around 3 lines of Perl code or similar...



        just 'passing the blame' to the user is a poor justification for computer security.

    this is, however, how it is commonly handled with things like Windows.


It's a poor justification there, too.

well, it works though, and a lot of people still use Windows...



    the only "real" alternative is to assume that the user is "too
    stupid for their own good", and essentially disallow them from
    using the software outright


There are /many/ other "real" alternatives, ranging from running the app in a hardware-virtualized sandbox to using a powerbox for authority.

I guess this is possible.

however, virtualization is a fairly heavy-weight solution to the problem.


but, yeah, seeing how many people end up rooting/jailbreaking their Android phones or iPads, trying to enforce security is not ideal, as it will annoy users and push them toward potentially drastic measures.

much as the GD-ROM format didn't manage to prevent people from figuring out how to make burnt CD-Rs work in the Dreamcast, ...

and people often end up jailbreaking their Wii/PS3/XBox360 as well, ...



    optional features are very common though in most systems


They're still bad for languages.

maybe...

however, C1X has optional features (VLAs and complex numbers from C99 were downgraded to optional, and several new features were introduced, such as type-generic expressions via _Generic, ...).



    code which doesn't need to use pointers, shouldn't use pointers,
    and if people choose not to use them


Electing to not use pointers is not the same as turning off support for pointers in the compiler (which, for example, would break all the libraries that use pointers).

well, note that the enabled/disabled status for features would likely exist on a per-module basis.

for example, the modules containing code running as "root" would have access to pointers, and the code running in "untrusted" modules would not (they either have to request status elevation, or trying to use the feature results in an exception or thread termination).

likewise for the FFI:
it could work, or fail, depending on who tries to call it...
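The per-module gating described above can be sketched as a table of feature rights consulted at the call site. This is a hypothetical illustration (module names, feature names, and the exception type are all invented), not the actual VM mechanism:

```python
# Sketch of per-module feature flags: "root" modules may use pointers and
# the FFI; "untrusted" modules get an exception when they try.

MODULE_FEATURES = {
    "root":      {"pointers", "ffi"},
    "untrusted": set(),
}

class SecurityError(Exception):
    pass

def require_feature(module: str, feature: str) -> None:
    """Check the calling module's rights; raise on violation."""
    if feature not in MODULE_FEATURES.get(module, set()):
        raise SecurityError(f"module {module!r} may not use {feature!r}")

def ffi_call(module: str, name: str) -> str:
    require_feature(module, "ffi")   # checked by the VM at the call site
    return f"called C function {name}"

print(ffi_call("root", "puts"))      # works: root has the 'ffi' right
try:
    ffi_call("untrusted", "puts")    # fails: no FFI right
except SecurityError as e:
    print(e)
```

A real implementation would presumably resolve the rights once at link/compile time rather than on every call, but the enforcement point is the same.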



    in an "ideal" world, it would be able to be usable in a similar
    abstraction domain roughly between C++ and JavaScript.


The elegant simplicity of C++ and the blazing speed of JavaScript?

maybe...

it has a lot of features in common with C# (Java and C# were also design influences), along with a C FFI and a fair number of language features taken from C as well.

so, sort of like if Java, C#, JavaScript, ActionScript, C, ... all sort of got into bed together... with some lingering parts of Scheme and Self and similar in the mix as well...

like, bits and pieces from all of them...

however, its applicability domain doesn't really directly overlap with any of them.


sadly, several major sets of C++ features are lacking:
    templates;
    multiple inheritance (in a usable form; I once tried to hack it on, but this didn't get very far);
    ability to directly use C++ classes or object instances (the C++ ABI/... gives some fear);
    user-defined operator overloading (partially implemented, not finished);
    ability to overload toplevel or package-level functions (implementation issue);
    ability to be used standalone (the VM is necessary);
    comparable performance (performance is not currently its strong area);
    ...

however, like C, one can break out the structs and the pointers...
also it has a basic ifdef/ifndef mechanism, reference arguments (sort of, more C-like), ...

structs could (conceptually) be used in a manner similar to the C++ RAII practice (not really tested, though it should probably work).

it has the "delete" keyword (serves to free objects immediately).


and there is "load()" and "eval()".
and it runs in a VM with a Garbage Collector...
and there are closures, ...



    the permissions are currently intended as the underlying model by
    which a lot of the above can be achieved.


They aren't necessary.

well, otherwise it would likely require altering the semantics or similar. using permissions as a semantics- and scope-altering feature seems to make some sense...

object is not writable: why?... because we don't have write access...
otherwise, one would need a "readonly" modifier or similar, but this would apply to everyone, rather than being like "well, I can use this read/write, but everyone else gets it read-only".

it is probably also computationally cheaper than using properties, ...
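The "read/write for me, read-only for everyone else" idea can be sketched as a check against the requester's identity rather than a global flag on the object. This is a hypothetical illustration; the class and principal names are invented:

```python
# Sketch of per-principal write access: reads are open to all, writes are
# checked against the object's owner rather than a global "readonly" flag.

class SecurityError(Exception):
    pass

class GuardedObject:
    def __init__(self, owner: str):
        self._owner = owner
        self._fields = {}

    def read(self, who: str, key: str):
        return self._fields[key]          # everyone may read

    def write(self, who: str, key: str, value) -> None:
        if who != self._owner:            # only the owner may write
            raise SecurityError(f"{who!r} has no write access")
        self._fields[key] = value

obj = GuardedObject(owner="root")
obj.write("root", "x", 42)       # owner: read/write
print(obj.read("guest", "x"))    # others: reads succeed
try:
    obj.write("guest", "x", 0)   # others: writes are rejected
except SecurityError as e:
    print(e)
```

Here "no write access" falls out of who is asking, with no per-field modifier needed, which matches the permission-based framing above.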


_______________________________________________
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc
