On 28.02.2016 07:34, Steven D'Aprano wrote:
> I think that's out-and-out wrong, and harmful to the developer community. I
> think that we're stuck in the equivalent of the pre-WYSIWYG days of word
> processing: you can format documents as nicely as you like, but you have to
> use a separate mode to see it.

Good point.

> Drag-and-drop GUI builders have the same advantages over code as Python has
> over languages with distinct compile/execute steps: rapid development,
> prototyping, exploration and discovery. Of course, any decent modern
> builder won't limit you to literally drag-and-drop, but will offer
> functionality like duplicating elements, aligning them, magnetic guides,
> etc.

Another good point. I will get to this later.

> GUI elements are by definition graphical in nature, and like other graphical
> elements, manipulation by hand is superior to command-based manipulation.
> Graphical interfaces for manipulating graphics have won the UI war so
> effectively that some people have forgotten there ever was a war. Can you
> imagine using Photoshop without drag and drop?
(You can measure this by counting the number of replies to a thread.)

That's a whole different topic. What is Photoshop manipulating? Layers of pixels. That's an extremely simplified model. There is no dynamic behavior as there is with GUIs.
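Just to make that concrete, here is a minimal sketch (plain tkinter, a toy example of my own) of what a GUI element carries beyond pixels: state plus behavior that someone has to wire up somewhere:

import tkinter as tk

# A Photoshop layer is essentially static data: pixels you paint on.
# A GUI widget bundles state *and* dynamic behavior.

root = tk.Tk()
count = [0]  # state that outlives any single event

def on_click():
    # behavior: runs every time the user interacts with the button
    count[0] += 1
    label.config(text="Clicked %d times" % count[0])

label = tk.Label(root, text="Clicked 0 times")
button = tk.Button(root, text="Click me", command=on_click)
label.pack()
button.pack()
root.mainloop()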

> And yet programming those graphical interfaces is an exception. There, with
> very few exceptions, we still *require* a command interface. Not just a
> command interface, but an *off-line* command interface, where you batch up
> all your commands then run them at once, as if we were Neanderthals living
> in a cave.

Not sure if I agree with you here.

Let's ask ourselves: what is so different about, say, a complex mathematical function and a complex GUI? In other words: why can you live with a text representation of that function, whereas you cannot live with a text representation of a GUI?

One difference is the number of possible interactions: a function takes some numbers, whereas a GUI takes complex text/mouse/finger/voice interactions. I've never heard any complaints about mathematical functions being represented as source code. But I've heard a lot of complaints regarding GUI design and interaction tests (even when they are done graphically) -- also in WPF.
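A toy illustration of what I mean (my own sketch, made-up names): testing the function means feeding in numbers and comparing numbers, while even the most trivial GUI interaction test already needs a widget tree, a wired-up callback, and a simulated event:

import tkinter as tk

def poly(x):
    # stand-in for a "complex" mathematical function
    return 3 * x ** 2 + 2 * x + 1

# Testing the function: feed in numbers, compare numbers.
assert poly(2) == 17

# Testing even a trivial GUI interaction: build a widget tree, wire up a
# callback, then simulate the user's click (needs a display to run).
root = tk.Tk()
clicks = []
button = tk.Button(root, text="OK", command=lambda: clicks.append("pressed"))
button.invoke()  # programmatically "press" the button
assert clicks == ["pressed"]
root.destroy()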

Both text representations are abstract descriptions of the real thing (the function and the GUI). You need some imagination to get them right, to debug them, to maintain them, to change them. We could blame Python here, but it really comes down to the problem domain and the people working in it:

Functions -> mathematicians/computer scientists, who work regularly with highly abstract objects
GUIs -> designers, who never really got the same education in programming/abstraction as the former group

So (and I know this from the areas I am involved in), GUI research (development, evaluation, etc.) is not considered a closed topic. No serious computer scientist really knows the "right" way. But hey, at least people are working on it.

Usually, you start out simple. As time goes by, you add more and more features and things become more and more complex (we all know every non-toy project does). And so does a GUI. At a certain point, there is no way around going into the code and doing something nasty by exploiting the Turing-completeness of the underlying language. Generated code always looks creepy and bloated, with a lot of boilerplate. If you really need to dig deeper, you will have a hard time figuring out which parts of the boilerplate are actually needed and which were added by the code generator. In the end, you might even break the "drag-n-drop"-ability. :-(
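A purely hypothetical illustration (not the output of any real builder) of the kind of code a generator tends to emit, next to the few lines you would write by hand for the same little form:

import tkinter as tk

# --- The kind of code a builder might generate (hypothetical): auto-named
#     widgets, every property spelled out, hard to see what actually matters.
class GeneratedForm:
    def __init__(self, master):
        self.frame_1 = tk.Frame(master, borderwidth=0, relief="flat",
                                padx=0, pady=0)
        self.frame_1.pack(side="top", fill="both", expand=True)
        self.label_1 = tk.Label(self.frame_1, text="Name:", anchor="w",
                                justify="left", padx=2, pady=2)
        self.label_1.grid(row=0, column=0, sticky="w")
        self.entry_1 = tk.Entry(self.frame_1, width=20, relief="sunken",
                                borderwidth=2)
        self.entry_1.grid(row=0, column=1, sticky="ew")

# --- Roughly the same form written by hand:
def hand_written_form(master):
    tk.Label(master, text="Name:").grid(row=0, column=0, sticky="w")
    entry = tk.Entry(master)
    entry.grid(row=0, column=1)
    return entry

The second version does the same job, but it is obvious which lines actually matter.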

That is the reason why traditional CASE tools never really took off, why we still need programmers, and why we still have text. From my (highly subjective) point of view, starting with general building blocks (text, functions, classes, ...) is better in the long term than starting with a cage (the GUI) and subsequently punching more and more holes into it that don't fit the original concept. History so far has agreed with this: professional software development always starts with text tools for which somebody LATER builds a GUI. I cannot remember it ever being the other way round.
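That order -- general building blocks first, GUI later -- is easy to sketch (another toy example of my own): the core logic is a plain, testable function, and the GUI is just a thin layer added afterwards:

import tkinter as tk

# General building block first: a plain function, no GUI knowledge at all.
def word_count(text):
    return len(text.split())

# The GUI comes LATER, as a thin wrapper around the existing building block.
def run_gui():
    root = tk.Tk()
    root.title("Word counter")
    box = tk.Text(root, height=10, width=40)
    result = tk.Label(root, text="0 words")

    def update():
        result.config(text="%d words" % word_count(box.get("1.0", "end")))

    box.pack()
    tk.Button(root, text="Count", command=update).pack()
    result.pack()
    root.mainloop()

if __name__ == "__main__":
    run_gui()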

Furthermore, I agree with Chris about the version control problem.

Last but not least, GUIs are a natural place for bike-shedding, because almost everybody can see them and immediately form an opinion about them:
Who loves the new Windows modern UI? Either you like it or you hate it.
What about the Riemann zeta function? Anybody?


Best,
Sven

PS: another thought.

I recently introduced LaTeX to my girlfriend. LaTeX is quite ugly and it has this "distinct compile/execute step", so initially I hesitated to show it to her. But her MS Word experience got worse and worse the larger and more complex her documents became. Word became less responsive, and the results became less and less reproducible (footnote numbering, styling, bibliography, etc.).

She had to invest some time to learn LaTeX and to tweak the initial template to fit her needs. In the end, though, she's much happier and gets reproducible results. She still uses a GUI editor for writing LaTeX; it helps her avoid mistakes.

So I don't think it's GUI vs. text, but rather a question of how they can complement each other.