Re: [Semi OT] The programming language wars

2015-03-30 Thread Joakim via Digitalmars-d

On Sunday, 29 March 2015 at 21:17:26 UTC, Abdulhaq wrote:

On Sunday, 29 March 2015 at 18:27:51 UTC, Joakim wrote:

would suffice.


When you said "I think rodent-based UIs will go the way of the 
dinosaur", you seemed to be talking about more than just 
programmers.





I'm still waiting for The Last One (from Feb 1981) to reach 
fruition:


http://www.tebbo.com/presshere/html/wf8104.htm

http://teblog.typepad.com/david_tebbutt/2007/07/the-last-one-pe.html

Once finished, there will be no more need to write any programs.


Heh, that article is pretty funny. :) In the comments for the 
second link, the lead programmer supposedly said, "For me TLO 
remains the 1st ever programming wizard. Wrongly advertised and 
promoted, but inherently a 'good idea'."  Considering how 
widespread wizards are in Windows these days, the idea has 
certainly done well.


I do think that the concept of non-technical users providing 
constraints and answering questions is the future of building 
software; it just can't be built by one isolated guy.  The 
configuration and glue code can be auto-generated, but there will 
likely always need to be core libraries written in a programming 
language by programmers.  But the same automation that has put 
most travel agents out of work will one day be applied to most 
programmers too.


Re: [Semi OT] The programming language wars

2015-03-30 Thread Abdulhaq via Digitalmars-d

On Monday, 30 March 2015 at 18:49:01 UTC, Joakim wrote:

On Sunday, 29 March 2015 at 21:17:26 UTC, Abdulhaq wrote:

On Sunday, 29 March 2015 at 18:27:51 UTC, Joakim wrote:

would suffice.


When you said "I think rodent-based UIs will go the way of 
the dinosaur", you seemed to be talking about more than just 
programmers.





I'm still waiting for The Last One (from Feb 1981) to reach 
fruition:


http://www.tebbo.com/presshere/html/wf8104.htm

http://teblog.typepad.com/david_tebbutt/2007/07/the-last-one-pe.html

Once finished, there will be no more need to write any 
programs.


Heh, that article is pretty funny. :) In the comments for the 
second link, the lead programmer supposedly said, "For me TLO 
remains the 1st ever programming wizard. Wrongly advertised and 
promoted, but inherently a 'good idea'."  Considering how 
widespread wizards are in Windows these days, the idea has 
certainly done well.


I do think that the concept of non-technical users providing 
constraints and answering questions is the future of building 
software; it just can't be built by one isolated guy.  The 
configuration and glue code can be auto-generated, but there 
will likely always need to be core libraries written in a 
programming language by programmers.  But the same automation 
that has put most travel agents out of work will one day be 
applied to most programmers too.


It was such an exciting time back then, but most of us who had a 
clue knew that it certainly couldn't be done (at that time, 
anyway). Around about the same time there was another article in 
PCW (a great magazine by the way) about a data compression tool 
that you could rerun over and over again to make files smaller 
and smaller ;-). I wish we could read the back issues like we can 
with Byte (on archive.org). Even the adverts are great to read 
for us old hands.


As to whether we'll ever do it, I agree with previous comments 
that it's related to understanding language - context is 
everything, and that takes an understanding of life and its 
paraphernalia.


Re: [Semi OT] The programming language wars

2015-03-30 Thread ketmar via Digitalmars-d
On Mon, 30 Mar 2015 18:49:00 +, Joakim wrote:

 On Sunday, 29 March 2015 at 21:17:26 UTC, Abdulhaq wrote:
 On Sunday, 29 March 2015 at 18:27:51 UTC, Joakim wrote:
 would suffice.

 When you said "I think rodent-based UIs will go the way of the
 dinosaur", you seemed to be talking about more than just programmers.



 I'm still waiting for The Last One (from Feb 1981) to reach fruition:

 http://www.tebbo.com/presshere/html/wf8104.htm

 http://teblog.typepad.com/david_tebbutt/2007/07/the-last-one-pe.html

 Once finished, there will be no more need to write any programs.
 
 Heh, that article is pretty funny. :) In the comments for the second
 link, the lead programmer supposedly said, "For me TLO remains the 1st
 ever programming wizard. Wrongly advertised and promoted, but
 inherently a 'good idea'."  Considering how widespread wizards are in
 Windows these days, the idea has certainly done well.
 
 I do think that the concept of non-technical users providing
 constraints and answering questions is the future of building software;
 it just can't be built by one isolated guy.  The configuration and glue
 code can be auto-generated, but there will likely always need to be core
 libraries written in a programming language by programmers.  But the
 same automation that has put most travel agents out of work will one day
 be applied to most programmers too.

that would be a big day!



Re: [Semi OT] The programming language wars

2015-03-29 Thread Joakim via Digitalmars-d

On Saturday, 21 March 2015 at 20:51:52 UTC, FG wrote:
But if you look at my function definition, it doesn't have 
that, nor does it use parentheses, semicolons, etc., so it's 
voice-ready. My question is: at which point would that be 
considered an efficient method to define a program's component 
that we would choose to use instead of the current succinct 
symbolic notation?


We will probably keep some variation on the current textual 
symbolic notation but develop verbal shorthand and IDEs that 
enable faster programming than even keyboards allow today.


Yeah, right, people will create drawings with voice commands. 
:)  Every interface has its rightful domain and voice ain't 
best for everything. Or do you mean that touch will go away but 
instead people will be waving their hands around?


Yes, hands and fingers, as I said before, and you will even be 
able to paint in 3D:


https://www.thurrott.com/windows/windows-10/573/hands-microsoft-hololens

On Saturday, 21 March 2015 at 21:46:10 UTC, H. S. Teoh wrote:
Of course. But we're talking here about interfaces for 
*programmers*, not for your average Joe, for whom a pretty GUI 
with a button or two would suffice.


When you said "I think rodent-based UIs will go the way of the 
dinosaur", you seemed to be talking about more than just 
programmers.


This is the unpopular opinion, but I'm skeptical if this day will 
ever come. The problem with voice recognition is that it's based 
on natural language, and natural language is inherently 
ambiguous. You say that heuristics can solve this, I call BS on 
that. Heuristics are bug-prone and unreliable (because otherwise 
they'd be algorithms!), precisely because they fail to capture 
the essence of the problem, but are merely crutches to get us 
mostly there in lieu of an actual solution.


You don't have to handle full natural language to handle voice 
input; you can constrain the user to a verbal shorthand for 
certain tasks.  Eventually, you can loosen that requirement as 
the recognition engines get better.  You can never have 
algorithms that handle all the complexity of human speech, 
especially since the speech recognition engine has no 
understanding of what the words actually mean.  But thousands 
upon thousands of heuristics might just do the job.


The inherent ambiguity in natural language comes not from some 
kind of inherent flaw as most people tend to believe, but it's 
actually a side-effect of the brain's ability at 
context-sensitive comprehension. The exact same utterance, spoken 
in different contexts, can mean totally different things, and the 
brain has no problem with that (provided it is given sufficient 
context, of course). The brain is also constantly optimizing 
itself -- if it can convey its intended meaning in fewer, simpler 
words, it will prefer to do that instead of going through the 
effort of uttering the full phrase. This is one of the main 
factors behind language change, which happens over time and is 
mostly unconscious.  Long, convoluted phrases, if spoken often 
enough, tend to contract into shorter, sometimes ambiguous, 
utterances, as long as there is sufficient context to 
disambiguate. This is why we have a tendency toward acronyms -- 
the brain is optimizing away the long utterance in preference to 
a short acronym, which, based on the context of a group of 
speakers who mutually share similar contexts (e.g., computer 
lingo), is unambiguous, but may very well be ambiguous in a wider 
context. If I talk to you about UFCS, you'd immediately 
understand what I was talking about, but if I said that to my 
wife, she would have no idea what I just said -- she may not even 
realize it's an acronym, because it sounds like a malformed 
sentence you  The only way to disambiguate this kind of 
context-specific utterance is to *share* in that context in the 
first place. Talk to a Java programmer about UFCS, and he 
probably wouldn't know what you just said either, unless he has 
been reading up on D.


This acronym example is actually fairly easy for the computer to 
handle, given its great memory.  But yes, there are many contexts 
where the meaning of the words is necessary to disambiguate what 
is meant, and without some sort of AI, you have to rely on 
various heuristics.


The only way speech recognition can acquire this level of context 
in order to disambiguate is to customize itself to that specific 
user -- in essence learn his personal lingo, pick up his 
(sub)culture, learn the contexts associated with his areas of 
interest, even adapt to his peculiarities of pronunciation. If 
software can get to this level, it might as well pass the Turing 
test, 'cos then it'd have enough context to carry out an 
essentially human conversation.  I'd say we're far, far from that 
point today, and it's not clear we'd ever get there.


I'd say we're fairly close, given the vast computing power in 
even our mobile devices these days, and that is nowhere near the 
Turing test, as 

Re: [Semi OT] The programming language wars

2015-03-29 Thread Abdulhaq via Digitalmars-d

On Sunday, 29 March 2015 at 18:27:51 UTC, Joakim wrote:

would suffice.


When you said "I think rodent-based UIs will go the way of the 
dinosaur", you seemed to be talking about more than just 
programmers.





I'm still waiting for The Last One (from Feb 1981) to reach 
fruition:


http://www.tebbo.com/presshere/html/wf8104.htm

http://teblog.typepad.com/david_tebbutt/2007/07/the-last-one-pe.html

Once finished, there will be no more need to write any programs.


Re: [Semi OT] The programming language wars

2015-03-23 Thread Paulo Pinto via Digitalmars-d

On Saturday, 21 March 2015 at 19:20:18 UTC, deadalnix wrote:

On Saturday, 21 March 2015 at 15:51:38 UTC, Paulo Pinto wrote:
I don't expect programming will remain so low level in the 
future. We are at the infancy of our skills, when comparing 
with engineering fields with a few centuries of progress.


For me the future lies in something like Wolfram/Mathematica 
with natural voice processing.


People have been saying this for longer than I'm alive.


You missed my remark about the age of computing versus other 
engineering arts.


We are still building bridges with wood and houses with clay.

-
Paulo


Re: [Semi OT] The programming language wars

2015-03-23 Thread Kagamin via Digitalmars-d

On Friday, 20 March 2015 at 07:37:04 UTC, Paulo  Pinto wrote:
An example that stuck with me was that languages that follow 
Algol/Pascal syntax lend themselves to fewer bugs than those 
that follow C-like syntax.


There are quite a few other examples. Also mentioned: as far as 
the researcher is aware, only Microsoft is pursuing such studies 
for language features. Don Syme is in the audience and gives an 
example of how they did it for .NET generics.


Anything wrong with VB.NET?


Re: [Semi OT] The programming language wars

2015-03-23 Thread deadalnix via Digitalmars-d

On Monday, 23 March 2015 at 10:40:12 UTC, Paulo Pinto wrote:

On Saturday, 21 March 2015 at 19:20:18 UTC, deadalnix wrote:

On Saturday, 21 March 2015 at 15:51:38 UTC, Paulo Pinto wrote:
I don't expect programming will remain so low level in the 
future. We are at the infancy of our skills, when comparing 
with engineering fields with a few centuries of progress.


For me the future lies in something like Wolfram/Mathematica 
with natural voice processing.


People have been saying this for longer than I'm alive.


You missed my remark about the age of computing versus other 
engineering arts.


We are still building bridges with wood and houses with clay.

-
Paulo


Maybe, and we will certainly see more of this in the future.

However, that future may be far away, considering it has been 
promised for decades at this point and is still not up and running.


Also, someone will have to code these systems, so the job we do 
now will still be necessary.


Re: [Semi OT] The programming language wars

2015-03-22 Thread via Digitalmars-d

On Saturday, 21 March 2015 at 23:58:18 UTC, Laeeth Isharc wrote:
HS Teoh is right about context, and the superiority of the 
written word for organizing and expressing thinking at a very 
high level.  The nature of human memory and perception means 
that is unlikely to change very soon, if ever.


Actually, the visual system is a lot more powerful than our very 
limited capability of dealing with abstract symbols.


But developing visual languages/tools for programming is very 
challenging and quite expensive given the foundation we have in 
math.


No doubt these techniques will continue to grow in usefulness 
(I certainly hope so, and am making that bet), but the ultimate 
implications depend on your conception of what creativity is.


Unfortunately progress will probably be defined by the industry's 
desire to commoditize the programming profession, which basically 
will drive it more towards configuration than construction. 
We already see this: an incredible number of websites are built 
on top of a technical pile of configurable dung, Wordpress (and 
PHP).


Add to this that the new generations of nerds grow up with a 
different knowledge frame (iPads) than the programmers of the 80s 
who grew up with peeks, pokes and machine language. There is 
bound to be some shift in what the typical programmers do.


Re: [Semi OT] The programming language wars

2015-03-22 Thread deadalnix via Digitalmars-d

On Sunday, 22 March 2015 at 09:30:38 UTC, Atila Neves wrote:

On Friday, 20 March 2015 at 22:55:24 UTC, Laeeth Isharc wrote:

On Friday, 20 March 2015 at 07:37:04 UTC, Paulo  Pinto wrote:
Language features should be tested with real users using 
scientific validation processes, instead of being blindly 
added to a language.


There is nothing intrinsically more scientific about basing a 
decision on a study rather than experience and judgement 
(including aesthetic judgement), which is not to say that more 
data cannot be useful,



Of course there is. Experience and judgement aren't measurable. 
You don't have science without numbers.


Atila


But then, how can I keep my delusions and pretend they are fact 
because I have experience, judgment and, it goes without saying, 
a great sense of aesthetics?


Re: [Semi OT] The programming language wars

2015-03-22 Thread via Digitalmars-d

On Sunday, 22 March 2015 at 09:30:38 UTC, Atila Neves wrote:
Of course there is. Experience and judgement aren't measurable. 
You don't have science without numbers.


WTF?


Re: [Semi OT] The programming language wars

2015-03-22 Thread via Digitalmars-d

On Sunday, 22 March 2015 at 09:46:40 UTC, deadalnix wrote:
But then, how can I keep my delusions and pretend they are fact 
because I have experience, judgment and, it goes without 
saying, a great sense of aesthetics?


You will keep your delusions and pretend they are fact until you 
take a course on the philosophy of science. The debate about 
qualitative and quantitative methods was settled long ago. If you 
want a retake on that you will have to travel 50 years back in 
time.





Re: [Semi OT] The programming language wars

2015-03-22 Thread FG via Digitalmars-d

On 2015-03-22 at 11:03, Ola Fosheim Grøstad 
ola.fosheim.grostad+dl...@gmail.com wrote:

On Sunday, 22 March 2015 at 09:30:38 UTC, Atila Neves wrote:

Of course there is. Experience and judgement aren't measurable. You don't have 
science without numbers.


WTF?


Heh, everything is measurable, but sometimes the chosen metrics and analysis 
are just ridiculous and not worth the paper they are printed on, even though 
all rules of scientific reasoning were followed. :)


Re: [Semi OT] The programming language wars

2015-03-22 Thread Atila Neves via Digitalmars-d

On Friday, 20 March 2015 at 22:55:24 UTC, Laeeth Isharc wrote:

On Friday, 20 March 2015 at 07:37:04 UTC, Paulo  Pinto wrote:
Language features should be tested with real users using 
scientific validation processes, instead of being blindly 
added to a language.


There is nothing intrinsically more scientific about basing a 
decision on a study rather than experience and judgement 
(including aesthetic judgement), which is not to say that more 
data cannot be useful,



Of course there is. Experience and judgement aren't measurable. 
You don't have science without numbers.


Atila




Re: [Semi OT] The programming language wars

2015-03-22 Thread via Digitalmars-d

On Sunday, 22 March 2015 at 10:24:16 UTC, FG wrote:
On 2015-03-22 at 11:03, Ola Fosheim Grøstad 
ola.fosheim.grostad+dl...@gmail.com wrote:

On Sunday, 22 March 2015 at 09:30:38 UTC, Atila Neves wrote:
Of course there is. Experience and judgement aren't 
measurable. You don't have science without numbers.


WTF?


Heh, everything is measurable, but sometimes the chosen metrics 
and analysis are just ridiculous and not worth the paper they 
are printed on, even though all rules of scientific reasoning 
were followed. :)


Almost right. Even a well conducted quantitative study might be 
misleading because it measures correlation and not causality. 
Causality is a hard nut to crack, and in the end it will hang on 
our beliefs in the methodology, the study, the tools, the objects 
being studied, the people conducting the studies, and the already 
accepted assumptions in the field (established theories which 
might be wrong), etc. So in essence, science is a belief system 
(not all that different from religion, although the contrary is 
often claimed).


This all becomes easier to reason about if people give up the 
idea that science represents the truth. It does not; it presents 
models that are hypothetical in nature. These may be useful or 
not useful, but are usually incomplete and somewhat incorrect... 
In medical science, correlation-based models can be very useful, 
or very harmful (when incomplete on critical parameters such as 
the negative effects of radiation).


In the design field the theories used are applied to a future 
unknown setting, so correlation has very low value and insight 
into causality has very high value. Meaning: a somewhat flawed 
high level model about how human beings think and react, about 
causality, might lead to better design than a more limited and 
correct low level model of how the brain works based on 
correlation.


Whether everything is measurable depends on what you mean. You 
might say that qualitative studies involve measuring, because 
everything you perceive is a measurement. In the real world, the 
data (what you have collected) will usually be inadequate for 
what is being claimed. After all, it is a society of "publish 
or perish". So you need many independent studies to get something 
solid, but how many fields can produce that? Only the big ones, 
right?


Re: [Semi OT] The programming language wars

2015-03-21 Thread John Colvin via Digitalmars-d

On Friday, 20 March 2015 at 22:55:24 UTC, Laeeth Isharc wrote:
So one must be careful to avoid being dazzled by shiny 
'scientific' approaches when their value remains yet to be 
proven.


I sense a recursive problem here...


Re: [Semi OT] The programming language wars

2015-03-21 Thread John Colvin via Digitalmars-d

On Friday, 20 March 2015 at 17:25:54 UTC, H. S. Teoh wrote:
On Fri, Mar 20, 2015 at 05:04:20PM +, ketmar via 
Digitalmars-d wrote:

On Fri, 20 Mar 2015 13:28:45 +, Paulo  Pinto wrote:

  Given that I have been an IDE fan since the Amiga days, I fully
  agree.

  Every time I am on UNIX I feel like a time travel to the days of
  yore.

being on non-nix system is a torture. there aren't even gcc, let 
alone emacs/vim.


Yeah, I've become so accustomed to the speed of keyboard-based 
controls that every time I use my wife's Windows laptop, I feel 
so frustrated at the rodent dependence and its slowness that I 
want to throw the thing out the window.

But at another level, it's not even about keyboard vs. rodent... 
it's about *scriptability*. It's about abstraction. Typing 
commands at the CLI, while on the surface looks so tedious, 
actually has a powerful advantage: you can abstract it. You can 
encapsulate it into a script. Most well-designed CLI programs are 
scriptable, which means complex operations can be encapsulated 
and then used as new primitives with greater expressiveness.

Sure you can have keyboard shortcuts in GUI programs, but you 
can't abstract a series of mouse clicks and drags or a series of 
keyboard shortcuts into a single action. They will forever remain 
in the realm of micromanagement -- click this menu, move mouse to 
item 6, open submenu, click that, etc.. I have yet to see a 
successful attempt at encapsulating a series of actions as a 
single meta-action (I've seen attempts at it, but none that were 
compelling enough to be useful.) You can't build 
meta-meta-actions from meta-actions. Everything is bound to 
what-you-see-is-all-you-get. You can't parametrize a series of 
mouse interactions the same way you can take a bash script and 
parametrize it to do something far beyond what the original 
sequence of typed commands did.

Ultimately, I think rodent-based UIs will go the way of the 
dinosaur. It's a regression from the expressiveness of an actual 
language with grammar and semantics back to caveman-style 
point-and-grunt. It may take decades, maybe even centuries, 
before the current GUI trendiness fades away, but eventually it 
will become obvious that there is no future in a non-abstractible 
UI. Either CLIs will be proven by the test of time, or something 
else altogether will come along to replace the rodent dead-end 
with something more powerful. Something abstractible with the 
expressiveness of language and semantics, not regressive 
point-and-grunt.


T


In general I'm in agreement with you, but I think there *is* a 
place for more visual structure than a terminal editing a 
text-file can give you (essentially 1-D or maybe 1.5D, whatever 
that means). Some models/data/tasks are inherently more intuitive 
and quicker to work with in 2D.


Re: [Semi OT] The programming language wars

2015-03-21 Thread Piotrek via Digitalmars-d

On Saturday, 21 March 2015 at 14:07:28 UTC, FG wrote:
Now imagine the extra trouble if you mix languages. Also, how 
do you include meta-text control sequences in a message? By 
raising your voice or tilting your head when you say the magic 
words? Cf.:


There was this famous quote QUOTE to be or not to be END QUOTE 
on page six END PARAGRAPH...


Very awkward, if talking to oneself wasn't awkward already. 
Therefore I just cannot imagine voice being used anywhere where 
exact representation is required, especially in programming:


Define M1 as a function that takes in two arguments. The state 
of the machine labelled ES and an integer number in range 
between two and six inclusive labelled X. The result of M1 is a 
boolean. M1 shall return true if and only if the ES member 
labelled squat THATS SQUAT WITH A T AT THE END is equal to zero 
modulo B. OH SHIT IT WAS NOT B BUT X. SCRATCH EVERYTHING.


Just for fun. A visualization of the problem from 2007 (I doubt 
there has been a breakthrough in the meantime):


https://www.youtube.com/watch?v=MzJ0CytAsec

Piotrek


Re: [Semi OT] The programming language wars

2015-03-21 Thread Paulo Pinto via Digitalmars-d

On Saturday, 21 March 2015 at 14:07:28 UTC, FG wrote:

On 2015-03-21 at 06:30, H. S. Teoh via Digitalmars-d wrote:
On Sat, Mar 21, 2015 at 04:17:00AM +0000, Joakim via Digitalmars-d wrote:

[...]
What I was going to say too, neither CLI nor GUI will win; speech 
recognition will replace them both, by providing the best of 
both. Rather than writing a script to scrape several shopping 
websites for the price of a Galaxy S6, I'll simply tell the 
intelligent agent on my computer "Find me the best deal on a S6" 
and it will go find it.


I dunno, I find that I can express myself far more precisely and 
concisely on the keyboard than I can verbally. Maybe for everyday 
tasks like shopping for the best deals voice recognition is Good 
Enough(tm), but for more complex tasks, I have yet to find 
something more expressive than the keyboard.


"Find me the best deal on a S6" is only a little more complex 
than "make me a cup of coffee". Fine for doing predefined tasks 
but questionable as a ubiquitous input method. It's hard enough 
for mathematicians to dictate a theorem without using any 
symbolic notation. There is too much ambiguity and room for 
interpretation in speech to make it a reliable and easy input 
method for all tasks. Even in your example:

You say: "Find me the best deal on a S6."
I hear: "Fine me the best teal on A.S. six."
Computer: "Are you looking for steel?"

Now imagine the extra trouble if you mix languages. Also, how 
do you include meta-text control sequences in a message? By 
raising your voice or tilting your head when you say the magic 
words? Cf.:


There was this famous quote QUOTE to be or not to be END QUOTE 
on page six END PARAGRAPH...


Very awkward, if talking to oneself wasn't awkward already. 
Therefore I just cannot imagine voice being used anywhere where 
exact representation is required, especially in programming:


Define M1 as a function that takes in two arguments. The state 
of the machine labelled ES and an integer number in range 
between two and six inclusive labelled X. The result of M1 is a 
boolean. M1 shall return true if and only if the ES member 
labelled squat THATS SQUAT WITH A T AT THE END is equal to zero 
modulo B. OH SHIT IT WAS NOT B BUT X. SCRATCH EVERYTHING.



I don't expect programming will remain so low level in the 
future. We are at the infancy of our skills, when comparing with 
engineering fields with a few centuries of progress.


For me the future lies in something like Wolfram/Mathematica with 
natural voice processing.





Re: [Semi OT] The programming language wars

2015-03-21 Thread H. S. Teoh via Digitalmars-d
On Sat, Mar 21, 2015 at 03:10:37PM +, John Colvin via Digitalmars-d wrote:
 On Friday, 20 March 2015 at 17:25:54 UTC, H. S. Teoh wrote:
[...]
 But at another level, it's not even about keyboard vs. rodent... it's
 about *scriptability*. It's about abstraction. Typing commands at the
 CLI, while on the surface looks so tedious, actually has a powerful
 advantage: you can abstract it. You can encapsulate it into a script.
 Most well-designed CLI programs are scriptable, which means complex
 operations can be encapsulated and then used as new primitives with
 greater expressiveness.
 
 Sure you can have keyboard shortcuts in GUI programs, but you can't
 abstract a series of mouse clicks and drags or a series of keyboard
 shortcuts into a single action. They will forever remain in the realm
 of micromanagement -- click this menu, move mouse to item 6, open
 submenu, click that, etc.. I have yet to see a successful attempt at
 encapsulating a series of actions as a single meta-action (I've seen
 attempts at it, but none that were compelling enough to be useful.)
 You can't build meta-meta-actions from meta-actions. Everything is
 bound to what-you-see-is-all-you-get. You can't parametrize a series
 of mouse interactions the same way you can take a bash script and
 parametrize it to do something far beyond what the original sequence
 of typed commands did.
 
 Ultimately, I think rodent-based UIs will go the way of the dinosaur.
 It's a regression from the expressiveness of an actual language with
 grammar and semantics back to caveman-style point-and-grunt. It may
 take decades, maybe even centuries, before the current GUI trendiness
 fades away, but eventually it will become obvious that there is no
 future in a non-abstractible UI. Either CLIs will be proven by the
 test of time, or something else altogether will come along to replace
 the rodent dead-end with something more powerful. Something
 abstractible with the expressiveness of language and semantics, not
 regressive point-and-grunt.
 
 
 T
 
 In general I'm in agreement with you, but I think there *is* a place
 for more visual structure than a terminal editing a text-file can give
 you (essentially 1-D or maybe 1.5D, whatever that means). Some
 models/data/tasks are inherently more intuitive and quicker to work
 with in 2D.

Certainly, some tasks are more suited for 2D, or even 3D, manipulation
than editing a text file, say. But just because task X is more
profitably manipulated with a 2D interface, does not imply that *every*
task is better manipulated the same way.

But at a more fundamental level, it's not really about text vs. graphics
or 1D (1.5D) vs. 2D. It's about the ability to abstract, that's
currently missing from today's ubiquitous GUIs. I would willingly leave
my text-based interfaces behind if you could show me a GUI that gives me
the same (or better) abstraction power as the expressiveness of a CLI
script, for example. Contemporary GUIs fail me on the following counts:

1) Expressiveness: there is no simple way of conveying complex ideas
like "from here until the first line that contains the word 'END',
replace all occurrences of 'x' with 'y'". A single sed command could
accomplish this (see the sketch after this list), whereas using
contemporary GUI idioms you'd need to invent a morass of
hard-to-navigate nested submenus.

2) Speed: I can type the sed command in far less time than it takes to
move my hand to the mouse, move the cursor across the screen, and click
through said morass of nested submenus to select the requisite
checkboxes to express what I want to do.

3) Abstraction power: I can parametrize said sed command, and put a
whole collection of such commands into a script, that I can thereafter
refer to by name to execute the same commands again, *without having to
remember* the individual details of said commands.

4) Annotative power: As somebody else pointed out, I can add comments to
a script explaining what is needed to perform task X, and why the given
steps were chosen for that purpose. This alleviates the need to memorize
obscure details about the system that you don't really care about to get
your job done, as well as serve to jog your memory when something went
wrong and you need to recall why things were done this way and how you
might be able to fix it. I simply cannot see how these kinds of
meta-annotations can even remotely be shoehorned into contemporary GUI
idioms.

5) Precision: Even when working with graphical data, I prefer text-based
interfaces where practical, not because text is the best way to work
with them -- it's quite inefficient, in fact -- but because I can
specify the exact coordinates of object X and the exact displacement(s)
I desire, rather than fight with the inherently imprecise mouse movement
and getting myself a wrist aneurysm trying to position object X
precisely in a GUI. I have yet to see a GUI that allows you to specify
things in a precise way without essentially dropping back to a
text-based interface (e.g., an input 
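
A sketch of the sed command point 1 alludes to, assuming GNU sed 
and taking "from here" to mean the top of the input:

sed '0,/END/ s/x/y/g' file

And the parametrization point 3 describes, as a minimal shell 
script; the script name and argument order are illustrative only:

#!/bin/sh
# replace-until-end.sh PATTERN REPLACEMENT FILE
# Replace PATTERN with REPLACEMENT from the top of FILE until the
# first line that contains the word END (GNU sed range 0,/END/).
sed "0,/END/ s/$1/$2/g" "$3"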

Re: [Semi OT] The programming language wars

2015-03-21 Thread FG via Digitalmars-d

On 2015-03-21 at 06:30, H. S. Teoh via Digitalmars-d wrote:

On Sat, Mar 21, 2015 at 04:17:00AM +, Joakim via Digitalmars-d wrote:
[...]

What I was going to say too, neither CLI nor GUI will win; speech
recognition will replace them both, by providing the best of both.
Rather than writing a script to scrape several shopping websites for
the price of a Galaxy S6, I'll simply tell the intelligent agent on my
computer "Find me the best deal on a S6" and it will go find it.


I dunno, I find that I can express myself far more precisely and
concisely on the keyboard than I can verbally. Maybe for everyday tasks
like shopping for the best deals voice recognition is Good Enough(tm),
but for more complex tasks, I have yet to find something more expressive
than the keyboard.


"Find me the best deal on a S6" is only a little more complex than "make me a cup of 
coffee". Fine for doing predefined tasks but questionable as a ubiquitous input method. It's 
hard enough for mathematicians to dictate a theorem without using any symbolic notation. There is 
too much ambiguity and room for interpretation in speech to make it a reliable and easy input 
method for all tasks. Even in your example:

You say: "Find me the best deal on a S6."
I hear: "Fine me the best teal on A.S. six."
Computer: "Are you looking for steel?"

Now imagine the extra trouble if you mix languages. Also, how do you include 
meta-text control sequences in a message? By raising your voice or tilting your 
head when you say the magic words? Cf.:

There was this famous quote QUOTE to be or not to be END QUOTE on page six END 
PARAGRAPH...

Very awkward, if talking to oneself wasn't awkward already. Therefore I just 
cannot imagine voice being used anywhere where exact representation is 
required, especially in programming:

Define M1 as a function that takes in two arguments. The state of the machine 
labelled ES and an integer number in range between two and six inclusive labelled X. The 
result of M1 is a boolean. M1 shall return true if and only if the ES member labelled 
squat THATS SQUAT WITH A T AT THE END is equal to zero modulo B. OH SHIT IT WAS NOT B BUT 
X. SCRATCH EVERYTHING.
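
For comparison, a minimal D sketch of that dictated definition 
(MachineState and its integral member squat are hypothetical 
stand-ins for whatever ES actually is):

struct MachineState { int squat; }

// True if and only if the ES member squat is zero modulo X,
// with X restricted to the range two to six inclusive.
bool M1(MachineState es, int x)
{
    assert(x >= 2 && x <= 6, "X must be between two and six inclusive");
    return es.squat % x == 0;
}

unittest
{
    assert(M1(MachineState(4), 2));   // 4 % 2 == 0
    assert(!M1(MachineState(5), 3));  // 5 % 3 != 0
}

A handful of symbolic lines against a paragraph of error-prone 
dictation, which is rather the point of the comparison.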



Re: [Semi OT] The programming language wars

2015-03-21 Thread FG via Digitalmars-d

On 2015-03-21 at 20:13, Joakim wrote:

Find me the best deal on a S6

[...]
Just tried it on google's voice search, it thought I said "Find me the best deal on 
a last sex" the first time I tried.


Obviously Google tries to converge the query with what is usually searched for. 
OTOH that's one of the areas you probably wouldn't want to browse through using 
a voice interface. :)



There was this famous quote QUOTE to be or not to be END QUOTE on page six END 
PARAGRAPH...

Just read that out normally and it'll be smart enough to know that the 
upper-case terms you highlighted are punctuation marks and not part of the 
sentence, by using various grammar and word frequency heuristics.  In the rare 
occurrence of real ambiguity, you'll be able to step down to a lower-level 
editing mode and correct it.


Yeah, I've exaggerated the problem. The deciding factor will be the required amount of stepping down to do 
low-level editing, even with a system supported by good machine learning, considering all the homonyms and 
words having many meanings. But let's assume that this was solved. I think the remaining END 
PARAGRAPH, OPEN BRACE or COMMA problem will go away with a compromise: people 
will just tell stories like they normally do, and let the punctuation be added automatically using AST, 
interval and intonation analyses. And the dying breed of writers who care about punctuation very much will 
continue using keyboards.



Therefore I just cannot imagine voice being used anywhere where exact 
representation is required, especially in programming:

Define M1 as a function that takes in two arguments. The state of the machine 
labelled ES and an integer number in range between two and six inclusive labelled X. The 
result of M1 is a boolean. M1 shall return true if and only if the ES member labelled 
squat THATS SQUAT WITH A T AT THE END is equal to zero modulo B. OH SHIT IT WAS NOT B BUT 
X. SCRATCH EVERYTHING.


As Paulo alludes to, the current textual representation of programming 
languages is optimized for keyboard entry. Programming languages themselves 
will change to allow fluid speech input.


That's true, programming languages will have to change. For example the distinction 
between lower and upper case is artificial and it was the biggest stumbling block in that 
video as well. That will have to go away along with other stuff. But if you look at my 
function definition, it doesn't have that, nor does it use parentheses, semicolons, etc., 
so it's voice-ready. My question is: at which point would that be considered 
an efficient method to define a program's component that we would choose to use instead 
of the current succinct symbolic notation?



We still have some work to do to get these speech recognition engines there, 
but once we do, the entire visual interface to your computer will have to be 
redone to best suit voice input and *nobody* will use touch, mice, _or_ 
keyboards after that.


Yeah, right, people will create drawings with voice commands. :)  Every 
interface has its rightful domain and voice ain't best for everything. Or do 
you mean that touch will go away but instead people will be waving their hands 
around?


Re: [Semi OT] The programming language wars

2015-03-21 Thread Joakim via Digitalmars-d

On Saturday, 21 March 2015 at 19:20:18 UTC, deadalnix wrote:

On Saturday, 21 March 2015 at 15:51:38 UTC, Paulo Pinto wrote:
I don't expect programming will remain so low level in the 
future. We are at the infancy of our skills, when comparing 
with engineering fields with a few centuries of progress.


For me the future lies in something like Wolfram/Mathematica 
with natural voice processing.


People have been saying this for longer than I'm alive.


Unless you've been alive for a few centuries, they still could be 
right. ;)


Re: [Semi OT] The programming language wars

2015-03-21 Thread Joakim via Digitalmars-d

On Saturday, 21 March 2015 at 14:07:28 UTC, FG wrote:

On 2015-03-21 at 06:30, H. S. Teoh via Digitalmars-d wrote:
On Sat, Mar 21, 2015 at 04:17:00AM +, Joakim via 
Digitalmars-d wrote:

[...]
What I was going to say too, neither CLI nor GUI will win; speech 
recognition will replace them both, by providing the best of 
both. Rather than writing a script to scrape several shopping 
websites for the price of a Galaxy S6, I'll simply tell the 
intelligent agent on my computer "Find me the best deal on a S6" 
and it will go find it.


I dunno, I find that I can express myself far more precisely and 
concisely on the keyboard than I can verbally. Maybe for everyday 
tasks like shopping for the best deals voice recognition is Good 
Enough(tm), but for more complex tasks, I have yet to find 
something more expressive than the keyboard.


"Find me the best deal on a S6" is only a little more complex 
than "make me a cup of coffee". Fine for doing predefined tasks 
but questionable as a ubiquitous input method. It's hard enough 
for mathematicians to dictate a theorem without using any 
symbolic notation. There is too much ambiguity and room for 
interpretation in speech to make it a reliable and easy input 
method for all tasks. Even in your example:


You say: "Find me the best deal on a S6."
I hear: "Fine me the best teal on A.S. six."
Computer: "Are you looking for steel?"


Just tried it on google's voice search, it thought I said "Find 
me the best deal on a last sex" the first time I tried.  After 
3-4 more tries- "a sex", "nsx", etc.- it finally got it right.  
But it never messed up anything before "on", only the 
intentionally difficult "S6", which requires context to 
understand.  Ask that question to the wrong person and they'd 
have no idea what you meant by "S6" either.


My point is that the currently deployed, state-of-the-art systems 
are already much better than what you'd hear or what you think 
the computer would guess, and soon they will get that last bit 
right too.


Now imagine the extra trouble if you mix languages. Also, how 
do you include meta-text control sequences in a message? By 
raising your voice or tilting your head when you say the magic 
words? Cf.:


There was this famous quote QUOTE to be or not to be END QUOTE 
on page six END PARAGRAPH...


Just read that out normally and it'll be smart enough to know 
that the upper-case terms you highlighted are punctuation marks 
and not part of the sentence, by using various grammar and word 
frequency heuristics.  In the rare occurrence of real ambiguity, 
you'll be able to step down to a lower-level editing mode and 
correct it.


Mixing languages is already hellish with keyboards and will be a 
lot easier with speech recognition.



Very awkward, if talking to oneself wasn't awkward already.


Put a headset on and speak a bit lower and nobody watching will 
know what you're saying or who you're saying it to.


Therefore I just cannot imagine voice being used anywhere where 
exact representation is required, especially in programming:


Define M1 as a function that takes in two arguments. The state 
of the machine labelled ES and an integer number in range 
between two and six inclusive labelled X. The result of M1 is a 
boolean. M1 shall return true if and only if the ES member 
labelled squat THATS SQUAT WITH A T AT THE END is equal to zero 
modulo B. OH SHIT IT WAS NOT B BUT X. SCRATCH EVERYTHING.


As Paulo alludes to, the current textual representation of 
programming languages is optimized for keyboard entry.  
Programming languages themselves will change to allow fluid 
speech input.


On Saturday, 21 March 2015 at 15:13:13 UTC, Piotrek wrote:
Just for fun. A visualization of the problem from 2007 (I doubt 
there has been a breakthrough in the meantime):


https://www.youtube.com/watch?v=MzJ0CytAsec


Got a couple minutes into that before I knew current speech 
recognition is much better, as it has progressed by leaps and 
bounds over the intervening eight years.  Doesn't mean it's good 
enough to throw away your keyboard yet, but it's nowhere near 
that bad anymore.


On Saturday, 21 March 2015 at 15:47:14 UTC, H. S. Teoh wrote:

It's about the ability to abstract, that's currently missing from 
today's ubiquitous GUIs. I would willingly leave my text-based 
interfaces behind if you could show me a GUI that gives me the 
same (or better) abstraction power as the expressiveness of a CLI 
script, for example. Contemporary GUIs fail me on the following 
counts:


1) Expressiveness: there is no simple way of conveying complex

--snip--
5) Precision: Even when working with graphical data, I prefer 
text-based interfaces where practical, not because text is the 
best way to work with them -- it's quite inefficient, in fact -- 
but because I can specify the exact coordinates of object X and 
the exact displacement(s) I desire, rather than fight with the 
inherently imprecise mouse movement and getting myself a wrist 
aneurysm trying to position object X precisely in 

Re: [Semi OT] The programming language wars

2015-03-21 Thread deadalnix via Digitalmars-d

On Saturday, 21 March 2015 at 15:51:38 UTC, Paulo Pinto wrote:
I don't expect programming will remain so low level in the 
future. We are at the infancy of our skills, when comparing 
with engineering fields with a few centuries of progress.


For me the future lies in something like Wolfram/Mathematica 
with natural voice processing.


People have been saying this for longer than I'm alive.


Re: [Semi OT] The programming language wars

2015-03-21 Thread H. S. Teoh via Digitalmars-d
On Sat, Mar 21, 2015 at 07:13:10PM +, Joakim via Digitalmars-d wrote:
[...]
 On Saturday, 21 March 2015 at 15:47:14 UTC, H. S. Teoh wrote:
 It's about the ability to abstract, that's currently missing from
 today's ubiquitous GUIs. I would willingly leave my text-based
 interfaces behind if you could show me a GUI that gives me the same
 (or better) abstraction power as the expressiveness of a CLI script,
 for example. Contemporary GUIs fail me on the following counts:
 
 1) Expressiveness: there is no simple way of conveying complex
 --snip--
 5) Precision: Even when working with graphical data, I prefer
 text-based interfaces where practical, not because text is the best
 way to work with them -- it's quite inefficient, in fact -- but
 because I can specify the exact coordinates of object X and the exact
 displacement(s) I desire, rather than fight with the inherently
 imprecise mouse movement and getting myself a wrist aneurysm trying
 to position object X precisely in a GUI. I have yet to see a GUI that
 allows you to specify things in a precise way without essentially
 dropping back to a text-based interface (e.g., an input field that
 requires you to type in numbers... which is actually not a bad
 solution; many GUIs don't even provide that, but instead give you the
 dreaded slider control which is inherently imprecise and extremely
 cumbersome to use. Or worse, the text box with the
 inconveniently-small 5-pixel up/down arrows that changes the value by
 0.1 per mouse click, thereby requiring an impractical number of
 clicks to get you to the right value -- if you're really unlucky, you
 can't even type in an explicit number but can only use those
 microscopic arrows to change it).
 
 A lot of this is simply that you are a different kind of computer user
 than the vast majority of computer users.  You want to drive a Mustang
 with a manual transmission and a beast of an engine, whereas most
 computer users are perfectly happy with their Taurus with automatic
 transmission.  A touch screen or WIMP GUI suits their mundane tasks
 best, while you need more expressiveness and control so you use the
 CLI.

Of course. But we're talking here about interfaces for *programmers*,
not for your average Joe, for whom a pretty GUI with a button or two
would suffice.


 The great promise of voice interfaces is that they will _both_ be
 simple enough for casual users and expressive enough for power users,
 while being very efficient and powerful for both.

Call me a skeptic, but I'll believe this promise when I see it.


 We still have some work to do to get these speech recognition engines
 there, but once we do, the entire visual interface to your computer
 will have to be redone to best suit voice input and nobody will use
 touch, mice, _or_ keyboards after that.

This is the unpopular opinion, but I'm skeptical if this day will ever
come. The problem with voice recognition is that it's based on natural
language, and natural language is inherently ambiguous. You say that
heuristics can solve this, I call BS on that. Heuristics are bug-prone
and unreliable (because otherwise they'd be algorithms!), precisely
because they fail to capture the essence of the problem, but are merely
crutches to get us mostly there in lieu of an actual solution.

The inherent ambiguity in natural language comes not from some kind of
inherent flaw as most people tend to believe, but it's actually a
side-effect of the brain's ability at context-sensitive comprehension.
The exact same utterance, spoken in different contexts, can mean totally
different things, and the brain has no problem with that (provided it is
given sufficient context, of course). The brain is also constantly
optimizing itself -- if it can convey its intended meaning in fewer,
simpler words, it will prefer to do that instead of going through the
effort of uttering the full phrase. This is one of the main factors
behind language change, which happens over time and is mostly
unconscious.  Long, convoluted phrases, if spoken often enough, tend to
contract into shorter, sometimes ambiguous, utterances, as long as there
is sufficient context to disambiguate. This is why we have a tendency
toward acronyms -- the brain is optimizing away the long utterance in
preference to a short acronym, which, based on the context of a group of
speakers who mutually share similar contexts (e.g., computer lingo), is
unambiguous, but may very well be ambiguous in a wider context. If I
talk to you about UFCS, you'd immediately understand what I was talking
about, but if I said that to my wife, she would have no idea what I just
said -- she may not even realize it's an acronym, because it sounds like
a malformed sentence you  The only way to disambiguate this kind
of context-specific utterance is to *share* in that context in the first
place. Talk to a Java programmer about UFCS, and he probably wouldn't
know what you just said either, unless he has been reading up on D.

The only way 

Re: [Semi OT] The programming language wars

2015-03-21 Thread via Digitalmars-d

On Saturday, 21 March 2015 at 21:46:10 UTC, H. S. Teoh wrote:
This is the unpopular opinion, but I'm skeptical if this day will 
ever come. The problem with voice recognition is that it's based 
on natural language, and natural language is inherently 
ambiguous. You say that heuristics can solve this, I call BS on 
that. Heuristics are bug-prone and unreliable (because otherwise 
they'd be algorithms!), precisely because they fail to capture 
the essence of the problem, but are merely crutches to get us 
mostly there in lieu of an actual solution.


Right, but it is likely that the nature of programming will 
change. In the beginning of the web the search engines had 
trouble matching anything but exact phrases; now they are capable 
of figuring out what you probably wanted.


Take music composition: people still write notes explicitly as 
discrete symbols, yet others compose music by recording a song 
and then manipulating it (e.g. auto-tune). So, even though you 
can do pitch recognition, many probably use discrete interfaces 
like a keyboard or a mouse for writing music, yet new forms of 
music and composition have come with the ability to process 
audio in a more intuitive, evolutionary fashion.


The same thing is likely to happen with programming, e.g. 
different models for computation, or at least new ways to modify 
existing components, like neural simulations, adaptive systems, 
fuzzy logic etc...


You also have areas like program synthesis, genetic programming 
etc, where the computer itself generates the program to fit a 
specified result. When the computer is capable of that you might 
have a more top down programming model where you just keep 
adding constraints until you are happy with the result.


Re: [Semi OT] The programming language wars

2015-03-21 Thread Laeeth Isharc via Digitalmars-d

Right, but it is likely that the nature of programming will 
change. In the beginning of the web the search engines had 
trouble matching anything but exact phrases; now they are 
capable of figuring out what you probably wanted.


As you implicitly recognize later, it's not either/or, in the 
same way that spreadsheets (ugh) constituted a new way of 
programming and people continued to program conventionally 
similar kinds of tasks even as spreadsheets exploded in usage 
(and now we are back to finding it often more convenient to write 
code again, plus the robustness that never went away).


HS Teoh is right about context, and the superiority of the 
written word for organizing and expressing thinking at a very 
high level.  The nature of human memory and perception means that 
is unlikely to change very soon, if ever.


Dr Iain Mcgilchrist (The Master and His Emissary) is very good on 
context, BTW.


You also have areas like program synthesis, genetic programming 
etc, where the computer itself generates the program to fit a 
specified result. When the computer is capable of that you 
might have a more top down programming model where you just 
keep adding constraints until you are happy with the result.


No doubt these techniques will continue to grow in usefulness (I 
certainly hope so, and am making that bet), but the ultimate 
implications depend on your conception of what creativity is.


Re: [Semi OT] The programming language wars

2015-03-20 Thread Walter Bright via Digitalmars-d

On 3/19/2015 10:46 PM, deadalnix wrote:

I'll leave this here. Very interesting and relevant to anyone here.

https://www.youtube.com/watch?v=mDZ-QSLQIB8


It's nearly an hour long. Got a TL;DW?


Re: [Semi OT] The programming language wars

2015-03-20 Thread Paulo Pinto via Digitalmars-d

On Friday, 20 March 2015 at 07:14:58 UTC, Walter Bright wrote:

On 3/19/2015 10:46 PM, deadalnix wrote:

I'll leave this here. Very interesting and relevant to anyone here.

https://www.youtube.com/watch?v=mDZ-QSLQIB8


It's nearly an hour long. Got a TL;DW?


Language features should be tested with real users using 
scientific validation processes, instead of being blindly added 
to a language.


This means using test groups with different types of 
backgrounds, applying the features in several situations, 
devising a specific set of measurable targets and applying the 
whole statistical analysis to the results.


For example, instead of asking on an online forum which syntax 
would be desired for the new feature X, have the whole set of 
options implemented and see which of them achieves better results 
with the respective study groups.


An example that stuck with me was that languages that follow 
Algol/Pascal syntax lend themselves to fewer bugs than those that 
follow C-like syntax.


There are quite a few other examples. Also mentioned: as far as 
the researcher is aware, only Microsoft is pursuing such studies 
for language features. Don Syme is in the audience and gives an 
example of how they did it for .NET generics.


--
Paulo



Re: [Semi OT] The programming language wars

2015-03-20 Thread Walter Bright via Digitalmars-d

Thanks!


Re: [Semi OT] The programming language wars

2015-03-20 Thread Walter Bright via Digitalmars-d

On 3/20/2015 3:55 PM, Laeeth Isharc wrote:

So one must be careful to avoid being dazzled by shiny 'scientific' approaches
when their value remains yet to be proven.


True. Scientific studies of human behavior are notoriously difficult to remove 
hidden biases from.




Re: [Semi OT] The programming language wars

2015-03-20 Thread Rikki Cattermole via Digitalmars-d

On 21/03/2015 8:40 a.m., weaselcat wrote:

On Friday, 20 March 2015 at 17:04:20 UTC, ketmar wrote:

On Fri, 20 Mar 2015 13:28:45 +, Paulo  Pinto wrote:


Given that I have been an IDE fan since the Amiga days, I fully agree.

Every time I am on UNIX I feel like a time travel to the days of yore.


being on non-nix system is a torture. there aren't even gcc, let alone
emacs/vim.


I wish there was a blend of modern IDEs with the editing/customizability
power of emacs/vim. mono-d and DDT's utilities are far ahead of anything
available for D in emacs/vim, but the actual IDEs themselves are
difficult to work with.


You guys are giving me ideas...


Re: [Semi OT] The programming language wars

2015-03-20 Thread Laeeth Isharc via Digitalmars-d

On Friday, 20 March 2015 at 07:37:04 UTC, Paulo  Pinto wrote:
Language features should be tested with real users using 
scientific validation processes, instead of being blindly added 
to a language.


There is nothing intrinsically more scientific about basing a 
decision on a study rather than experience and judgement 
(including aesthetic judgement), which is not to say that more 
data cannot be useful, if thoughtfully considered.  The problem 
is that people tend to emphasize tangible hard data over 
sometimes more important but less easy to measure factors - the 
drunk looking for his keys under the lamppost 'because that is 
where the light is'.


So one must be careful to avoid being dazzled by shiny 
'scientific' approaches when their value remains yet to be proven.



Laeeth.


Re: [Semi OT] The programming language wars

2015-03-20 Thread Paulo Pinto via Digitalmars-d
On Friday, 20 March 2015 at 12:07:25 UTC, Ola Fosheim Grøstad 
wrote:

On Friday, 20 March 2015 at 12:00:35 UTC, Paulo  Pinto wrote:
At end of the study, the groups using the Algol based syntax 
will deliver less bugs.


I think language syntax design these days should include the 
IDE. E.g. Dart's syntax is nothing special and not good at 
preventing bugs in itself, but the Dart syntax works very well 
when used with the Dart editor (based on Eclipse).


These days it probably is a mistake to design a language syntax 
without considering what kind of IDE design the language is 
supposed to be used with.


Given that I have been an IDE fan since the Amiga days, I fully 
agree.


Every time I am on UNIX I feel like a time travel to the days of 
yore.


Thankfully the situation has improved a lot in the last decade.

--
Paulo




Re: [Semi OT] The programming language wars

2015-03-20 Thread Joakim via Digitalmars-d

On Friday, 20 March 2015 at 18:31:07 UTC, Paulo Pinto wrote:

On Friday, 20 March 2015 at 17:25:54 UTC, H. S. Teoh wrote:
Ultimately, I think rodent-based UIs will go the way of the 
dinosaur. It's a regression from the expressiveness of an actual 
language with grammar and semantics back to caveman-style 
point-and-grunt. It may take decades, maybe even centuries, 
before the current GUI trendiness fades away, but eventually it 
will become obvious that there is no future in a 
non-abstractible UI. Either CLIs will be proven by the test of 
time, or something else altogether will come along to replace 
the rodent dead-end with something more powerful. Something 
abstractible with the expressiveness of language and semantics, 
not regressive point-and-grunt.


As for CLIs regaining their central place in the world of 
computing, in a world going towards speech recognition and 
touch interfaces, I very much doubt CLI use will increase.


What I was going to say too: neither CLI nor GUI will win; speech 
recognition will replace them both, by providing the best of 
both.  Rather than writing a script to scrape several shopping 
websites for the price of a Galaxy S6, I'll simply tell the 
intelligent agent on my computer "Find me the best deal on an S6" 
and it will go find it.
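
The kind of throwaway script being replaced would look something 
like this minimal D sketch (the shop URLs and the price regex are 
invented purely for illustration):

#!/usr/bin/env rdmd
// price-check.d -- hypothetical sketch of the scraper a voice
// agent would make unnecessary.
import std.conv : to;
import std.net.curl : get;
import std.regex : matchFirst, regex;
import std.stdio : writeln;

void main()
{
    double best = double.max;
    foreach (url; ["https://shop-a.example/galaxy-s6",
                   "https://shop-b.example/galaxy-s6"])
    {
        string page = get(url).idup;     // fetch the listing page
        auto m = page.matchFirst(regex(`\$(\d+\.\d\d)`)); // first price
        if (!m.empty && m[1].to!double < best)
            best = m[1].to!double;
    }
    writeln(best == double.max ? "no price found"
                               : "best deal: $" ~ best.to!string);
}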


As for touch, it seems like a dead-end to me, far less expressive 
than anything else and really only geared for rudimentary 
interaction.  It may always be there, but you likely won't use it 
much.  I do think some sort of hand gesture-based interface will 
stick around for when voice isn't expressive enough, i.e. you'll 
still want to use your hands when painting:


http://www.engadget.com/2015/03/19/jaw-dropping-magic-leap-demo/

That video is not the way it will be done, as waving your arms 
around Minority Report-style is way too much effort; something 
akin to the small finger movements I make on my touch-based 
trackpad, but in 3D, will likely be it.


Re: [Semi OT] The programming language wars

2015-03-20 Thread H. S. Teoh via Digitalmars-d
On Sat, Mar 21, 2015 at 04:17:00AM +, Joakim via Digitalmars-d wrote:
[...]
 What I was going to say too: neither CLI nor GUI will win; speech
 recognition will replace them both, by providing the best of both.
 Rather than writing a script to scrape several shopping websites for
 the price of a Galaxy S6, I'll simply tell the intelligent agent on my
 computer "Find me the best deal on an S6" and it will go find it.

I dunno, I find that I can express myself far more precisely and
concisely on the keyboard than I can verbally. Maybe for everyday tasks
like shopping for the best deals, voice recognition is Good Enough(tm),
but for more complex tasks, I have yet to find something more expressive
than the keyboard.


 As for touch, it seems like a dead-end to me, far less expressive than
 anything else and really only geared for rudimentary interaction.  It
 may always be there, but you likely won't use it much.

Yeah, it's just another variation of point-and-grunt. Except the grunt
part is replaced with tap. :-P


 I do think some sort of hand gesture-based interface will stick around
 for when voice isn't expressive enough, i.e. you'll still want to use
 your hands when painting:
 
 http://www.engadget.com/2015/03/19/jaw-dropping-magic-leap-demo/
 
 That video is not the way it will be done, as waving your arms around
 Minority Report-style is way too much effort; something akin to
 the small finger movements I make on my touch-based trackpad, but in
 3D, will likely be it.

You might be on to something. Manipulation of 3D holograms via hand
motion detection perhaps might be what will eventually work best.


T

-- 
Maybe is a strange word.  When mom or dad says it it means yes, but when my 
big brothers say it it means no! -- PJ jr.


Re: [Semi OT] The programming language wars

2015-03-20 Thread w0rp via Digitalmars-d

On Friday, 20 March 2015 at 07:37:04 UTC, Paulo  Pinto wrote:
An example that stuck with me was that languages that follow 
Algol/Pascal syntax lend themselves to fewer bugs than those 
that follow C-like syntax.


That's probably skewed by C being the most popular language, 
which is optimised for bug creation.


Re: [Semi OT] The programming language wars

2015-03-20 Thread Paulo Pinto via Digitalmars-d

On Friday, 20 March 2015 at 11:48:20 UTC, w0rp wrote:

On Friday, 20 March 2015 at 07:37:04 UTC, Paulo  Pinto wrote:
An example that stuck with me was that languages that follow 
Algol/Pascal syntax lend themselves to fewer bugs than those 
that follow C-like syntax.


That's probably skewed by C being the most popular language, 
which is optimised for bug creation.


I said scientific studies, not asking joe/jane developer.

Pick a random set of people without programming skills, using 
imaginary language X, with two types of syntax, to code a set of 
tasks in separate groups, one for each syntax.


Measure each group's behavior using proven analysis methods.

At the end of the study, the groups using the Algol-based syntax 
will deliver fewer bugs.



--
Paulo


Re: [Semi OT] The programming language wars

2015-03-20 Thread via Digitalmars-d

On Friday, 20 March 2015 at 12:00:35 UTC, Paulo  Pinto wrote:
At the end of the study, the groups using the Algol-based syntax 
will deliver fewer bugs.


I think language syntax design these days should include the IDE. 
E.g. Dart's syntax is nothing special and not good at preventing 
bugs in itself, but the Dart syntax works very well when used 
with the Dart editor (based on Eclipse).


These days it probably is a mistake to design a language syntax 
without considering what kind of IDE design the language is 
supposed to be used with.


Re: [Semi OT] The programming language wars

2015-03-20 Thread deadalnix via Digitalmars-d

On Friday, 20 March 2015 at 11:48:20 UTC, w0rp wrote:

On Friday, 20 March 2015 at 07:37:04 UTC, Paulo  Pinto wrote:
An example that stuck with me was that languages that follow 
Algol/Pascal syntax lend themselves to fewer bugs than those 
that follow C-like syntax.


That's probably skewed by C being the most popular language, 
which is optimised for bug creation.


The guy explains that for beginners it is worse, but after 3+ 
years of experience, C syntax wins. Most likely because C style 
has become a de facto standard, and the benefits of 
standardization outweigh the initial confusion.


He explains that the same goes for strong typing.


Re: [Semi OT] The programming language wars

2015-03-20 Thread Paulo Pinto via Digitalmars-d

On Friday, 20 March 2015 at 17:25:54 UTC, H. S. Teoh wrote:
On Fri, Mar 20, 2015 at 05:04:20PM +, ketmar via 
Digitalmars-d wrote:

On Fri, 20 Mar 2015 13:28:45 +, Paulo  Pinto wrote:

 Given that I have been an IDE fan since the Amiga days, I 
 fully agree.

 Every time I am on UNIX it feels like time travel to the days 
 of yore.


being on a non-nix system is torture. there isn't even gcc, 
let alone emacs/vim.


Yeah, I've become so accustomed to the speed of keyboard-based 
controls that every time I use my wife's Windows laptop, I feel 
so frustrated at the rodent dependence and its slowness that I 
want to throw the thing out the window.

But at another level, it's not even about keyboard vs. 
rodent... it's about *scriptability*. It's about abstraction. 
Typing commands at the CLI, while it looks so tedious on the 
surface, actually has a powerful advantage: you can abstract 
it. You can encapsulate it into a script. Most well-designed 
CLI programs are scriptable, which means complex operations can 
be encapsulated and then used as new primitives with greater 
expressiveness.

Sure, you can have keyboard shortcuts in GUI programs, but you 
can't abstract a series of mouse clicks and drags or a series 
of keyboard shortcuts into a single action. They will forever 
remain in the realm of micromanagement -- click this menu, move 
mouse to item 6, open submenu, click that, etc. I have yet to 
see a successful attempt at encapsulating a series of actions 
as a single meta-action (I've seen attempts at it, but none 
that were compelling enough to be useful). You can't build 
meta-meta-actions from meta-actions. Everything is bound to 
what-you-see-is-all-you-get. You can't parametrize a series of 
mouse interactions the same way you can take a bash script and 
parametrize it to do something far beyond what the original 
sequence of typed commands did.

Ultimately, I think rodent-based UIs will go the way of the 
dinosaur. It's a regression from the expressiveness of an 
actual language with grammar and semantics back to 
caveman-style point-and-grunt. It may take decades, maybe even 
centuries, before the current GUI trendiness fades away, but 
eventually it will become obvious that there is no future in a 
non-abstractible UI. Either CLIs will be proven by the test of 
time, or something else altogether will come along to replace 
the rodent dead-end with something more powerful. Something 
abstractible with the expressiveness of language and semantics, 
not regressive point-and-grunt.


T


When I use GUIs, my knowledge of keyboard shortcuts coupled with 
mouse actions is no different from a top gamer playing an 
FPS/RTS game.


Sure, GUIs might not beat CLIs in terms of scriptability, but 
GUIs coupled with REPLs sure do the job pretty well.


When I am on Mac OS X and Windows, reaching out for the CLI means 
using developer tools only available via the console or 
automating administration tasks.


On the other hand, on any other UNIX flavour besides Mac OS X, 
the CLI seems unavoidable.



As for CLIs regaining their central place in the world of 
computing, in a world going towards speech recognition and touch 
interfaces, I very much doubt CLI use will increase.


--
Paulo



Re: [Semi OT] The programming language wars

2015-03-20 Thread ketmar via Digitalmars-d
On Fri, 20 Mar 2015 10:23:27 -0700, H. S. Teoh via Digitalmars-d wrote:

 Ultimately, I think rodent-based UIs will go the way of the dinosaur.
 It's a regression from the expressiveness of an actual language with
 grammar and semantics back to caveman-style point-and-grunt. It may take
 decades, maybe even centuries, before the current GUI trendiness fades
 away, but eventually it will become obvious that there is no future in a
 non-abstractible UI. Either CLIs will be proven by the test of time, or
 something else altogether will come along to replace the rodent dead-end
 with something more powerful. Something abstractible with the
 expressiveness of language and semantics, not regressive
 point-and-grunt.

Oberon TUI was really nice, as it combined point-and-click with CLI 
expressiveness. not only could you type a command almost anywhere and 
immediately execute it, but you could combine commands in tools, 
parameterize commands with a marked window, selected text (those were 
completely independent things), simple text and so on. even window 
actions like close, save and such were ordinary commands, simply 
printed at the window header.

Oberon Gadgets was an extension of that system, where buttons and so on 
were simply nicely drawn areas that still executed commands on click.



Re: [Semi OT] The programming language wars

2015-03-20 Thread CraigDillabaugh via Digitalmars-d

On Friday, 20 March 2015 at 17:25:54 UTC, H. S. Teoh wrote:
On Fri, Mar 20, 2015 at 05:04:20PM +, ketmar via 
Digitalmars-d wrote:

On Fri, 20 Mar 2015 13:28:45 +, Paulo  Pinto wrote:

 Given that I have been an IDE fan since the Amiga days, I 
 fully agree.

 Every time I am on UNIX it feels like time travel to the days 
 of yore.


being on a non-nix system is torture. there isn't even gcc, 
let alone emacs/vim.


Yeah, I've become so accustomed to the speed of keyboard-based 
controls that every time I use my wife's Windows laptop, I feel 
so frustrated at the rodent dependence and its slowness that I 
want to throw the thing out the window.

But at another level, it's not even about keyboard vs. 
rodent... it's about *scriptability*. It's about abstraction. 
Typing commands at the CLI, while it looks so tedious on the 
surface, actually has a powerful advantage:


clip


Ultimately, I think rodent-based UIs will go the way of the 
dinosaur.


While I may not share your optimism for the future, I do agree 
the CLI is almost always better :o)


One big advantage to CLI stuff is that when you come up against 
some tricky configuration, or rarely used command, you can write 
a little script (with comments) describing how to do it (and WHY 
you did it that way).  Very handy for those tasks that you end up 
doing once every X months, and always forget the details of in 
between.  How do you do that with a GUI? Make a video or open up 
OpenOffice/MS Word and start taking screen shots. Painful stuff.
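
For example, a minimal sketch in D, runnable via rdmd (the paths 
and the find invocation here are made up for illustration), where 
the script itself records both the command and the WHY:

#!/usr/bin/env rdmd
// compress-old-logs.d -- hypothetical self-documenting script for
// a task done once every few months.
//
// WHY: the log partition on the build box is small, so we compress
// old logs instead of deleting them -- they are needed for the
// quarterly audit.
import std.process : executeShell;
import std.stdio : writeln;

void main()
{
    // -mtime +30: only touch logs older than 30 days; newer ones
    // may still be held open by the daemon.
    auto r = executeShell(
        `find /var/log/build -name "*.log" -mtime +30 -exec gzip {} \;`);
    writeln(r.status == 0 ? "done" : "failed: " ~ r.output);
}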


The same goes for configuration files, which beat GUI-based 
configuration hands down.


Having said all that, IDE-like, language-aware code completion, 
background compilation, and a good debugger are a big plus for 
productivity in many cases.




Re: [Semi OT] The programming language wars

2015-03-20 Thread H. S. Teoh via Digitalmars-d
On Fri, Mar 20, 2015 at 05:04:20PM +, ketmar via Digitalmars-d wrote:
 On Fri, 20 Mar 2015 13:28:45 +, Paulo  Pinto wrote:
 
  Given that I have been an IDE fan since the Amiga days, I fully
  agree.
  
  Every time I am on UNIX it feels like time travel to the days of
  yore.
 
 being on a non-nix system is torture. there isn't even gcc, let alone
 emacs/vim.

Yeah, I've become so accustomed to the speed of keyboard-based controls
that every time I use my wife's Windows laptop, I feel so frustrated at
the rodent dependence and its slowness that I want to throw the thing
out the window.

But at another level, it's not even about keyboard vs. rodent... it's
about *scriptability*. It's about abstraction. Typing commands at the
CLI, while it looks so tedious on the surface, actually has a powerful
advantage: you can abstract it. You can encapsulate it into a script.
Most well-designed CLI programs are scriptable, which means complex
operations can be encapsulated and then used as new primitives with
greater expressiveness.

Sure, you can have keyboard shortcuts in GUI programs, but you can't
abstract a series of mouse clicks and drags or a series of keyboard
shortcuts into a single action. They will forever remain in the realm of
micromanagement -- click this menu, move mouse to item 6, open submenu,
click that, etc. I have yet to see a successful attempt at
encapsulating a series of actions as a single meta-action (I've seen
attempts at it, but none that were compelling enough to be useful). You
can't build meta-meta-actions from meta-actions. Everything is bound to
what-you-see-is-all-you-get. You can't parametrize a series of mouse
interactions the same way you can take a bash script and parametrize it
to do something far beyond what the original sequence of typed commands
did.
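
For instance, a whole deploy sequence can collapse into one
parametrized primitive. A minimal sketch in D, run as a script via
rdmd (the commands, host, and names are hypothetical, purely for
illustration):

#!/usr/bin/env rdmd
// deploy.d -- hypothetical sketch: a sequence of CLI commands
// becomes a single parametrized "meta-action". host and tag are
// parameters, which no recorded series of mouse clicks can offer.
import std.process : executeShell;
import std.stdio : stderr;

int deploy(string host, string tag)
{
    foreach (cmd; [
        "git archive -o /tmp/" ~ tag ~ ".tar.gz " ~ tag,
        "scp /tmp/" ~ tag ~ ".tar.gz " ~ host ~ ":/srv/app/",
        "ssh " ~ host ~ " tar xzf /srv/app/" ~ tag ~ ".tar.gz -C /srv/app",
    ])
    {
        auto r = executeShell(cmd);   // run each step in a shell
        if (r.status != 0)            // stop at the first failure
        {
            stderr.writeln("failed: ", cmd, "\n", r.output);
            return r.status;
        }
    }
    return 0;
}

int main(string[] args)
{
    if (args.length != 3)
    {
        stderr.writeln("usage: deploy <host> <tag>");
        return 1;
    }
    return deploy(args[1], args[2]);  // e.g. ./deploy.d box1 v1.2
}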

Ultimately, I think rodent-based UIs will go the way of the dinosaur.
It's a regression from the expressiveness of an actual language with
grammar and semantics back to caveman-style point-and-grunt. It may take
decades, maybe even centuries, before the current GUI trendiness fades
away, but eventually it will become obvious that there is no future in a
non-abstractible UI. Either CLIs will be proven by the test of time, or
something else altogether will come along to replace the rodent dead-end
with something more powerful. Something abstractible with the
expressiveness of language and semantics, not regressive
point-and-grunt.


T

-- 
Indifference will certainly be the downfall of mankind, but who cares? -- 
Miquel van Smoorenburg


Re: [Semi OT] The programming language wars

2015-03-20 Thread ketmar via Digitalmars-d
On Fri, 20 Mar 2015 13:28:45 +, Paulo  Pinto wrote:

 Given that I have been an IDE fan since the Amiga days, I fully agree.
 
 Every time I am on UNIX it feels like time travel to the days of yore.

being on a non-nix system is torture. there isn't even gcc, let alone 
emacs/vim.



Re: [Semi OT] The programming language wars

2015-03-20 Thread weaselcat via Digitalmars-d

On Friday, 20 March 2015 at 17:04:20 UTC, ketmar wrote:

On Fri, 20 Mar 2015 13:28:45 +, Paulo  Pinto wrote:

Given that I have been an IDE fan since the Amiga days, I 
fully agree.

Every time I am on UNIX it feels like time travel to the days 
of yore.


being on a non-nix system is torture. there isn't even gcc, 
let alone emacs/vim.


I wish there was a blend of modern IDEs with the 
editing/customizability power of emacs/vim. mono-d and DDT's 
utilities are far ahead of anything available for D in emacs/vim, 
but the actual IDEs themselves are difficult to work with.


Re: [Semi OT] The programming language wars

2015-03-20 Thread ketmar via Digitalmars-d
On Fri, 20 Mar 2015 18:31:06 +, Paulo Pinto wrote:

 As for CLIs regaining their central place in the world of computing, in
 a world going towards speech recognition and touch interfaces, I very
 much doubt CLI use will increase.

this is another overhyped trendy blah-blah, like cloud computing. such 
interfaces are for casual users. sure, when i want to simply listen to 
music or watch a movie, it's easier to say so or point at it with my 
finger. but if i want to use my computer for something more complex, 
typing on a keyboard is simply faster and less error-prone than 
speaking. especially if you cut off that stupid mouse. ;-)
