Dear readers: this is a mailing list, and as such it is meant for exchanging half-baked messages, ideas, etc.
Hence I dare to send this message out.

Contemplating how to answer Simon's question, I realized that I may need a better understanding of which questions you see as open before I can formulate a comprehensive reply.

On 21.11.2013 14:22, "Jörg F. Wittenberger" wrote:
On 20.11.2013 01:25, Simon Hirscher wrote:


So let me suggest: no matter what you don't like about this "BALL" and what
you'd like to do better, PLEASE try to follow the principle of inalienable
privileges. Don't run into that trap again, as so many did before.
This might be a stupid question

Granted: it's definitely NOT a stupid question.

Still, I'm scratching my head over how to answer it. Now that my beard has turned grey, I'm in the same position my teacher was in when he taught me the importance of questions.

but I've been following this whole
conversation and still don't get what's so magic about inalienable
human rights from a software usage case perspective.

We'll surely come back to use-case perspectives over time.

However to get started we should probably focus on "doing things right".

[OFFTOPIC]
To put it another way: there is little point in beating FB or any other "social network" (quotes for a reason) feature-wise. Fixing their shortcomings is the only point of importance. Though even that will take time, since most users (let's say 95%, a number picked without discussing the reasons, which do exist) will never understand the difference.

I posit that doing yet another social network software - typically some info sharing - will just inspire the established companies to copy the feature. Furthermore: anybody who hopes that by working on "social" software they'll eventually be able to sell their work to one of the big players should stop reading here; the rest is going to waste your time. The only reason to "buy those ideas" would be to kill them after denouncing them as anti-<fashionable-favourite-friendly-term>.
[/OFFTOPIC]

The problem I perceive these days is, roughly, the desire to build _social_ software from a merely technical background.

The associated risk is kinda subtle: the better you are feature-wise [in the use case], the more influential the software is going to be. However, that is in turn the best trick for sneaking wrong assumptions into the minds of the audience.

What's often missing (and every so often skipped or "delayed until we have something to show") is a proper analysis of the *actual* system structure and requirements. Add to that the idea of "adding security later", which generally does not work.

((( I had to learn the hard way that this is not taught in the comp sci department of your university; you need to attend philosophy [Enlightenment], sociology [N. Luhmann etc.], ethics and similarly arcane sciences - PLUS keep your math and comp sci attitude to boil that body of knowledge down into simple constructive (as opposed to statistical/analytic) formulae. )))

FB is a great example here, as I tried to point out with my sarcastic remark asking which problem it actually solves. I know how easy it makes sharing utterances; it's just that the security guy in me cries foul: "That's Mallory manifest; how come you people talk to each other over that channel?" ((( Can you read "p0wnd"? )))

The subtle risk is in accustoming people to wrong assumptions - especially the perceived need to trust - just because their daily life is filled up with them. Let me give just a few examples here:

1) legal protection for DRM (prohibition of circumvention of "*effective* access control measures")

Pardon me: how is this different from asking people to first read up on what they are supposed to understand before they claim they are too stupid to understand it? After all, ask a child "Is ROT13 effective?" before and after telling them what ROT13 is.
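To make the ROT13 point concrete, a few lines of Python (my own illustration) show why nobody who knows what ROT13 is can call it an "effective" access control measure: the "encryption" is its own inverse and involves no secret at all.

```python
import codecs

# ROT13 merely rotates each letter by 13 places; applying it twice
# restores the original text, and no key or secret is involved.
secret = codecs.encode("attack at dawn", "rot13")
print(secret)                          # nggnpx ng qnja
print(codecs.encode(secret, "rot13"))  # attack at dawn
```

Any child who has been told the rule can undo it by hand; only before being told might they call it effective.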

See also these comments http://askemos.org/index.html?_v=footnote&_id=656 on the (important) difference between two components of "DRM" systems, "logical attribution of warrant" and "actual restriction management": "Related considerations lead to partially conflicting goals. /LAW/ is by definition independent of factual power, while /ARM/ seeks to be incircumventable (sometimes falsely breaking the LAW)."

2) (A German problem) DeMail: this is a mail system which - by law and only by law - is defined to be legally binding. ((( Sorry if I'm not using the correct legal scholar's term. ))) It's more or less encrypted+signed between MUA and MTA. But at the MTA level it is plaintext-equivalent. Some "checks and balances" ;-) are "in place" ;-) to assure that no bad guy will be able to tamper with the message you sent out. The downside: insider attacks are simple. ((In technical terms: DeMail is a trusted system. According to the DoD definition, "a trusted system is a system which CAN violate security assumptions".))

Pardon me: such a system should never be assumed to be legally binding. Period.
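As a sketch of the structural problem (a toy model with a deliberately trivial stand-in cipher, NOT the actual DeMail protocol), hop-by-hop protection means the relay holds the plaintext, so an insider there can read or alter messages without breaking any of the link crypto:

```python
# Toy model of hop-by-hop ("DeMail-style") transport. The XOR "cipher"
# is a stand-in; the point is structural, not cryptographic.

def encrypt(key: int, text: str) -> str:
    return "".join(chr(ord(c) ^ key) for c in text)

decrypt = encrypt  # XOR with the same key is its own inverse

class MTA:
    """The relay decrypts on arrival, so an insider sees plaintext."""
    def __init__(self, link_key: int):
        self.link_key = link_key
        self.insider_log = []

    def relay(self, wire_msg: str) -> str:
        plain = decrypt(self.link_key, wire_msg)
        self.insider_log.append(plain)                # insider reads...
        plain = plain.replace("EUR 10", "EUR 1000")   # ...and tampers
        return encrypt(self.link_key, plain)          # re-encrypted, looks fine

mta = MTA(link_key=42)
wire = encrypt(42, "pay EUR 10 to Bob")
received = decrypt(42, mta.relay(wire))
print(received)            # the recipient gets the tampered message
print(mta.insider_log)     # and the insider saw the original
```

End-to-end encryption and signing between the communicating parties would close exactly this hole; hop-by-hop protection by design cannot.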

3) Another German problem which applies elsewhere in analogous ways: the law here basically treats "digital signature" as if there were only public-key-based signature schemes. (Those rely on some secret TRUSTED to be kept secret. A rather weak assumption.)

We have this table comparing signature schemes here http://askemos.org/index.html?_v=footnote&_id=2158 (which was discussed at quite some length with Ian Grigg, whom you will find if you follow the link "Ricardian Hash" here http://askemos.org/index.html?_v=search&_id=1275)

You see: the signatures in the last row are better: they don't require a trusted system, and they are applicable not only to messages but also to ongoing processes.
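For illustration only (this is my own minimal sketch, not the Askemos or Ricardian scheme itself), a hash chain shows the flavour of the stronger type: the whole history of an ongoing process can be verified by anyone, with no secret that must be trusted to stay secret.

```python
import hashlib

# Each step commits to the hash of the previous step, so the entire
# history of an ongoing process is pinned down by one fingerprint.
# Verification needs no key material -- nothing has to stay secret.

def link(prev_hash: str, event: str) -> str:
    return hashlib.sha256((prev_hash + event).encode()).hexdigest()

def chain(events):
    h = "genesis"
    for e in events:
        h = link(h, e)
    return h

events = ["open account", "deposit 10", "withdraw 3"]
fingerprint = chain(events)

# Replaying the same events yields the same fingerprint;
# tampering with any past event changes it.
assert chain(events) == fingerprint
assert chain(["open account", "deposit 99", "withdraw 3"]) != fingerprint
```

Contrast this with a public-key signature: there, forging history only requires stealing one secret; here, there is no secret to steal.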

Just as the lawyer's opinion found: "The law unfairly prefers the weaker type." Basically because there is no lobby and no big business behind the stronger one? That's what I'd guess.

4) Look at Bitcoin. What a strange hype right now. The "value" is absolutely speculative, not connected to any economic reality. How is this any better than Dollars? But it's already about to creep into "official" acceptance. What we would really need is a "pre-money" - basically an electronic bill of exchange (German: Wechsel). Why? Because that creates a link between the money's value and actual real-world assets backing that value.




In general: shortcomings of the implementation feed back into the legal system AND the minds of the general public. (Up to the point that some ordinary people outright believe it's just a matter of personal smartness to break any existing cryptographic encoding by tomorrow. Really, you can meet many such people on the street here, and I just can't believe this town is an exception!)

We need social software which fixes these structural issues. No single company CAN do that. (At least not the publicly traded ones; their accountants and boards will prevent them from solving the issue for fear of damaging their monopoly position.) That's the chance for free software, be it supported by communities or by governments - which at least in theory could/should want the issue fixed.



The problem: I'm often reading "at one point you just MUST TRUST the system". Even in serious texts.

*No.*

We need a system, which works *without trusted parts*.

Period.

You know: if you hear from your spouse "you MUST trust me", it's time to split up.

This holds from trusting compilers (see http://www.dwheeler.com/trusting-trust/ for how to guard against the famous problem formulated by Ken Thompson) and stretches to runtimes, operating systems, etc.

It's possible; you need two components: a logical proof that you can't lose (which is useless without some guarantee that the logic is upheld "in court", so to speak) and some way to define what is "seen in court". (Are there more required components?)

Not convinced? Ask questions! Just keep your brain open: the promise of **social network** software - here without quotes, bold instead - is that (iff done right) it does NOT require you to trust anything. If you find something you have to trust, you have found something worth improving.


The rest is _almost_ untouched and kept here just for reference. After all, I need some sleep, and I want your replies sooner rather than later. (To help me write a real reply.) But a few comments are interspersed.

  What am I going
to do with my freedom from torture on Facebook? What about my right to
bear arms? (At least if I were to live in the US.)

Let me prepare another reply just to this question. (May take some time.)
For the time being, and since your mail address tells me you can read German, may I recommend lectures 2-3 from this course: http://wwwm.htwk-leipzig.de/~m6bast/RIVL06/RIVL06.html It would save me a lot of work. After all, he's better at that topic than I am.

Short answer: if your software watched over your rights, then phishing would be impossible.

Median answer: note that human rights are not (yet) recognized everywhere. While most of the principles were eventually established about 300 years ago, only the past 60 years have seen their formal adoption - except for some important parts. Those parts make the difference between feudal structures (as modelled by Unix, Facebook, etc.) and societies of free individuals (without omnipotent rulers; their legal system is based recursively on contracts, ending in the special unilateral contract called "constitution").

By today we are still faced with a lot of idiosyncratic exceptions.

Do we really have to go into these exceptions? 1) Weapons are no problem; shooting may be (but it still depends on intention: killing by accident or in self-defence is no murder...). 2) The death penalty is widely seen as justified in the US and elsewhere. However, that's a real problem wrt. human rights, because any judge and even a jury can make mistakes, and while one could try to set a reasonable amount to pay an innocent person as reparation for years lost in jail, it's really hard to bring them back to life as a prerequisite to the payment. I'd say: wrong list for these topics.

But let me promise this much: the contract-based approach is much harder to break. Centralized structures are susceptible to intrusion, abuse, intrigues, etc.

Or, more technically speaking: if one were to break systems into cells small enough, they become much easier to defend, since their value to an attacker depends on the possible gain, which is limited by the size of the cells.

The sad thing we're observing these days is that the abundance of feudal structures implemented in our computer systems begins to feed back into the legal system. (Think of the prohibition of DRM circumvention and stupid stuff like that.)


On a slightly different note: What's that trap you are talking about?


Building an ad hoc system for provisioning access control, trust, etc., and then hoping it's going to be safe.

The whole Askemos system had only two roots. One will be talked about in the promised posting; the original one was even simpler: I wanted a formal criterion to help prove systems resistant to covert attacks (at the logical level). Finding the latter was the "scientific contribution". Using the criterion, we can distinguish between systems known to be non-corruptible and those for which we cannot prove that they are non-corruptible. (Unfortunately, the latter is not a proof that they are corruptible.)

Informally, the criterion boils down to "there MUST be NEITHER transitive NOR wholesale transfer of <rights>", whereby <rights> is whatever means your system uses to symbolically _express_ the key values used in the permission-control decision.

Applying it in practice happened to rule out basically everything (from Unix to database systems, etc.) as potentially corruptible. That led to the desire to prototype something better.
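To make the criterion concrete, here is a toy Python model (my own illustration, not Askemos code): rights are granted only by the original issuer, directly and one at a time, so neither transitive re-granting nor wholesale transfer is even expressible.

```python
# Toy rights model (illustrative only, NOT Askemos code): the issuer
# grants each <right> directly to each holder. There is deliberately
# NO operation by which a holder passes a right on (no transitive
# transfer) and NO operation that moves all rights at once (no
# wholesale transfer) -- the criterion from the text.

class Registry:
    def __init__(self, issuer: str):
        self.issuer = issuer
        self.grants = set()           # (holder, right) pairs

    def grant(self, granter: str, holder: str, right: str):
        if granter != self.issuer:
            raise PermissionError("only the issuer may grant")
        self.grants.add((holder, right))

    def allowed(self, holder: str, right: str) -> bool:
        return (holder, right) in self.grants

reg = Registry(issuer="alice")
reg.grant("alice", "bob", "read")
print(reg.allowed("bob", "read"))      # True
try:
    reg.grant("bob", "carol", "read")  # bob cannot re-grant:
except PermissionError as e:           # no transitive transfer
    print("refused:", e)
```

Compare Unix, where a process holding a file descriptor can pass it on, or a database, where GRANT ... WITH GRANT OPTION makes transfer transitive; both fail the criterion and thus cannot be proven non-corruptible this way.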

Best Regards

/Jörg





-- [email protected]
    https://lists.tgbit.net/mailman/listinfo.cgi/secu-share

-- [email protected]
   https://lists.tgbit.net/mailman/listinfo.cgi/secu-share
