On Sunday, 2 September 2018 at 21:07:20 UTC, Nick Sabalausky (Abscissa) wrote:
GUI programming has been attempted a lot. (See Scratch for one of the latest, and possibly most successful, attempts.) But there are real, practical reasons it's never made significant inroads (yet).

There are really two main, but largely independent, aspects to what you're describing: visual representation and physical interface:

A. Visual representation:
-------------------------

By visual representation, I mean "some kind of text, or UML-ish diagrams, or 3D environment, etc".

What's important to keep in mind here is: The *fundamental concepts* involved in programming are inherently abstract, and thus equally applicable to whatever visual representation is used.

If you're going to make a diagram-based or VR-based programming tool, it will still be using the same fundamental concepts that are already established in text-based programming: Imperative loops, conditionals, and variables. Functional/declarative immutability, purity, and higher-order funcs. Encapsulation. Pipelines (like ranges). Etc. And indeed, all GUI-based programming tools have worked this way. Because how *else* are they going to work?
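To make that concrete, here's a minimal D sketch (mine, not from the thread) of the same computation written twice: once with an imperative loop and once as a range pipeline. Any diagram- or VR-based tool would still have to represent exactly these concepts, just with boxes and arrows instead of text:

import std.stdio;
import std.algorithm : filter, map, sum;
import std.range : iota;

void main()
{
    // Imperative style: explicit loop, conditional, and mutable state.
    int total = 0;
    foreach (i; 0 .. 10)
    {
        if (i % 2 == 0)
            total += i * i;
    }
    writeln(total); // 120

    // Pipeline style: the same logic as a chain of ranges,
    // with no mutable loop variable in sight.
    iota(0, 10)
        .filter!(i => i % 2 == 0)
        .map!(i => i * i)
        .sum
        .writeln; // 120
}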

They say the main difficulty for non-programmers is control flow, not the type system. One system was reported usable where control flow was represented visually, but sequential statements were left as plain C. E.g., we have a system administrator here who has no problem with PowerShell, but has absolutely no idea how to start with C#.
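I can only guess at the details of that system, but here's a rough D sketch (the state machine and its names are invented for illustration) of the split such a hybrid makes: the loop-and-switch skeleton is the part a tool could draw as boxes and arrows, while the statements inside each case stay ordinary sequential code, like the plain C in the reported system:

import std.stdio;

enum State { idle, running, done }

void main()
{
    // The transition structure (states and arrows) is what a visual
    // tool would render as a diagram; only the statements inside each
    // case would remain as plain, textual sequential code.
    auto state = State.idle;
    int ticks = 0;
    while (state != State.done)
    {
        final switch (state)
        {
            case State.idle:
                writeln("starting");
                state = State.running;
                break;
            case State.running:
                ++ticks;
                if (ticks >= 3)
                    state = State.done;
                break;
            case State.done:
                break;
        }
    }
    writeln("ran for ", ticks, " ticks");
}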

B. Physical interface:
----------------------

By this I mean both actual input devices (keyboards, controllers, pointing devices) and the mappings from their affordances (i.e., what you can do with them: push button X, tilt stick's axis Y, point, move, rotate...) to specific actions taken on the visual representation (navigate, modify, etc.).

Hardware engineers are like the primary target audience for visual programming :)
https://en.wikipedia.org/wiki/Labview
