Re: [fonc] OT: Hypertext and the e-book

2012-03-07 Thread Mack
I am a self-admitted Kindle and iPad addict; however, most of the people I know 
are "real book" aficionados for relatively straightforward reasons that can be 
summed up as:

-   Aesthetics:  digital readers don't even come close to approximating the 
experience of reading a printed and bound paper text.  To some folks, this 
matters a lot.

-   A feeling of connectedness with history: it's not a difficult leap from 
turning the pages of a modern edition of 'Cyrano de Bergerac' to perusing a 
volume that was current in Edmond Rostand's time.  Imagining that the iPad you 
hold in your hands was once upon a shelf in Dumas père's study is a much bigger 
suspension of disbelief.  For some people, this contributes to a psychological 
distancing from the material being read.

-   Simplicity of sharing:  for those not of the technical elite, sharing a 
favored book more closely resembles the kind of matching of intrinsics that 
happens during midair refueling of military jets than the simple act of 
dropping a dog-eared paperback on a friend's coffee table.

-   Simplicity.  Period.  (Manual transmissions and paring knives are still 
with us and going strong in this era of ubiquitous automatic transmissions and 
food processors.  Facility and convenience don't always trump simplicity and 
reliability.  Especially when the power goes out.)

Remember Marshall McLuhan's observation: "The medium is the message"?  Until we 
pass a generational shift where the bulk of readers have little experience of 
analog books, these considerations will be with us.

-- Mack

m...@mackenzieresearch.com



On Mar 7, 2012, at 3:13 PM, BGB wrote:

> On 3/7/2012 3:24 AM, Ryan Mitchley wrote:
>> May be of interest to some readers of the list:
>> 
>> http://nplusonemag.com/bones-of-the-book
>> 
> 
> thoughts:
> admittedly, I am not really much of a person for reading fiction (I tend 
> mostly to read technical information, and most fictional material is more 
> often experienced in the form of movies/TV/games/...).
> 
> I did find the article interesting though.
> 
> I wonder: why really do some people have such a thing for traditional books?
> 
> they are generally inconvenient and can't be readily accessed:
> they have to be physically present;
> one may have to go physically retrieve them;
> it is not possible to readily access their information (searching is a pain);
> ...
> 
> by contrast, a wiki is often a much better experience, and similarly allows 
> the option of being presented sequentially (say, by daisy chaining articles 
> together, and/or writing huge articles). granted, it could be made maybe a 
> little better with a good WYSIWYG style editing system.
> 
> potentially,  maybe, something like MediaWiki or similar could be used for 
> fiction and similar.
> granted, this is much less graphically elaborate than some stuff the article 
> describes, but I don't think text is dead yet (and generally doubt that fancy 
> graphical effects are going to kill it off any time soon...). even in digital 
> forms (where graphics are moderately cheap), likely text is still far from 
> dead.
> 
> it is much like how magazines filled with images have not killed books filled 
> solely with text, despite both being printed media (granted, there are 
> college textbooks, which are sometimes in some ways almost closer to being 
> very large and expensive magazines in these regards: filled with lots of 
> graphics, a new edition for each year, ...).
> 
> 
> but, it may be a lot more about the information being presented, and who it 
> is being presented to, than about how the information is presented. graphics 
> work great for some things, and poor for others. text works great for some 
> things, and kind of falls flat for others.
> 
> expecting all one thing or the other, or expecting them to work well in cases 
> for which they are poorly suited, is not likely to turn out well.
> 
> 
> I also suspect maybe some people don't like the finite resolution or usage of 
> back-lighting or similar (like in a device based on an LCD screen). there are 
> "electronic paper" technologies, but these generally have poor refresh times.
> 
> a mystery is why, say, LCD panels can't be made to better utilize ambient 
> light (as opposed to needing all the light to come from the backlight). idle 
> thoughts include using either a reflective layer, or a layer which responds 
> strongly to light (such as a phosphorescent layer), placed between the LCD 
> and the backlight.
> 
> 
> but, either way, things like digital media and hypertext displacing the use 
> of printed books may be only a matter of time.
> 
> the one area I think printed books curr

Re: [fonc] OT: Hypertext and the e-book

2012-03-08 Thread Mack
Just a reminder that paper-making is one of the more toxic industries in this 
country:

http://en.wikipedia.org/wiki/Paper_pollution

Paper itself may be simple and eco-friendly, but the commercial process to 
produce it is rife with chlorine, dioxin, etc., not to mention heavy thermal 
pollution of water sources.

So there are definitely arguments on both sides of the ledger wrt eBooks.

-- Mack


On Mar 8, 2012, at 1:54 PM, BGB wrote:

> On 3/8/2012 12:34 PM, Max Orhai wrote:
>> 
>> 
>> 
>> On Thu, Mar 8, 2012 at 7:07 AM, Martin Baldan  wrote:
>> >
>> > - Print technology is orders of magnitude more environmentally benign
>> > and affordable.
>> >
>> 
>> That seems a pretty strong claim. How do you back it up? Low cost and
>> environmental impact are supposed to be some of the strong points of
>> ebooks.
>> 
>> 
>> Glad you asked! That was a pretty drastic simplification, and I'm conflating 
>> 'software' with 'hardware' too. Without wasting too much time, hopefully, 
>> here's what I had in mind.
>> 
>> I live in a city with some amount of printing industry, still. In the past, 
much more. Anyway, small presses have been part of civic life for 
>> centuries now, and the old-fashioned presses didn't require much in the way 
>> of imports, paper mills aside. I used to live in a smaller town with a 
>> mid-sized paper mill, too. No idea if they're still in business, but I've 
>> made my own paper, and it's not that hard to do well in small batches. My 
>> point is just that print technology (specifically the letterpress) that is 
>> local, nontoxic, and "sustainable" (in the sense of only needing routine 
>> maintenance to last indefinitely) can easily be found in the real world, in 
>> a way that I find hard to imagine of modern electronics, at least at this 
>> point in time. Have you looked into the environmental cost of manufacturing 
>> and disposing of all our fragile, toxic gadgets which last two years or 
>> less? It's horrifying.
>> 
> 
> I would guess, apart from macro-scale parts/materials reuse (from electronics 
> and similar), one could maybe:
> grind them into dust and extract reusable materials via means of mechanical 
> separation (magnetism, density, ..., which could likely separate out most 
> bulk glass/plastic/metals/silicon/... which could then be refined and reused);
> maybe feed whatever is left over into a plasma arc, and maybe use either 
> magnetic fields or a centrifuge to separate various raw elements (dunno if 
> this could be made practical), or maybe dissolve it with strong acids and use 
> chemical means to extract elements (could also be expensive), or lacking a 
> better (cost effective) option, simply discard it.
> 
> 
> the idea for a magnetic-field separation could be:
> feed material through a plasma arc, which will basically convert it into 
> mostly free atoms;
> a large magnetic coil accelerates the resultant plasma;
> a secondary horizontal magnetic field is applied (similar to the one in a 
> CRT), causing elements to deflect based on relative charge (valence 
> electrons);
> depending on speed and distance, there is likely to be a gravity based 
> separation as well (mostly for elements which have similar charge but differ 
> in atomic weight, such as silicon vs carbon, ...);
> eventually, all of them ram into a wall (probably chilled), with a more or 
> less 2D distribution of the various elements (say, one spot on the wall has a 
> big glob of silicon, and another a big glob of gold, ...). (apart from mass 
> separation, one will get mixes of "similarly charged" elements, such as globs 
> of silicon carbide and titanium-zirconium and similar)
> 
> an advantage of a plasma arc is that it will likely result in some amount of 
> carbon-monoxide and methane and similar as well, which can be burned as fuel 
> (providing electricity needed for the process). this would be similar to a 
> traditional gasifier.
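[Editorial aside: the deflection scheme described above is essentially the principle of a mass spectrometer. For singly ionized atoms in a uniform magnetic field, the radius of circular motion is r = mv/(qB), so separation actually scales with mass-to-charge ratio rather than with valence alone. A rough back-of-envelope sketch, where the field strength and ion speed are made-up illustrative numbers, not a design:]

```python
# Back-of-envelope: magnetic deflection of singly ionized atoms,
# the same principle a mass spectrometer uses: r = m*v / (q*B).
# Field strength and ion speed below are illustrative assumptions.

E_CHARGE = 1.602e-19   # C, charge of a singly ionized atom
AMU = 1.661e-27        # kg per atomic mass unit

def gyroradius(mass_amu, speed_m_s, field_tesla, charge=E_CHARGE):
    """Radius of circular motion for an ion in a uniform magnetic field."""
    return (mass_amu * AMU * speed_m_s) / (charge * field_tesla)

# Assumed conditions: ions at 10 km/s in a 0.5 T field.
for element, mass in [("carbon", 12.0), ("silicon", 28.1), ("gold", 197.0)]:
    r = gyroradius(mass, 1e4, 0.5)
    print(f"{element:8s} {r * 100:.2f} cm")
```

Heavier atoms sweep wider arcs (carbon lands at roughly 0.25 cm versus about 4 cm for gold under these assumed conditions), which is what would spread the elements across the collection wall.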
> 
> 
> but, it is possible that in the future, maybe some more advanced forms of 
> manufacturing may become more readily available at the small scale.
> 
> a particular example is that it is now at least conceivably possible that 
> lower-density lower-speed semiconductor electronics (such as polymer 
> semiconductors) could be made at much smaller scales and cheaper than with 
> traditional manufacturing (silicon wafers and optical lithography), but at 
> this point there is little economic incentive for this (companies don't care, 
> as they have big expensive fabs to make chips, and individuals and 
> communities don'

Re: [fonc] OT: Hypertext and the e-book

2012-03-09 Thread Mack
One thing I think is being overlooked in this discussion is that, by virtue of 
belonging to this mailing list, we are ALL of us demographic outliers who don't 
really represent the larger, "normal" population; thus our personal impressions 
of concepts like "ease of use" are completely skewed with regard to the larger 
population.

A couple of the comments I read on this thread really drive this home to me:

One person said something like "..I routinely send eBooks by email…" and 
another said something like "…an EPUB book can be constructed with a simple 
text editor…"

Both are very true statements, when taken in the context of the eLiterati that 
populate this list.  By contrast, several weeks ago I watched my father-in-law 
struggle for a couple of hours trying to figure out how to buy a book on Amazon 
and read it on the Kindle Fire we gave him.  …and he is a person who spends 
several hours a day web browsing and emailing with his various ePenPals.  My 
conclusion is that ease of use MATTERS and even things we eLiterati consider 
"simple" aren't yet simple in an absolute sense.  Heck, this entire topic 
assumes that a person can READ.  If we are to look deeply at something that is 
"better than a book" the assumption of literacy ought to be open to challenge 
as well.

...

Someone else emphasized the ease of keeping very large research libraries of 
reference material easily accessible in electronic form.

Again, a true and powerful point, relative to the kinds of folks on this list.  
For many of us, research is a casual and normal part of either our vocational 
or avocational existence.  For much of the REST of society, however, research 
is something that is confined to well-defined periods of their lives, not 
casually integrated into day-to-day life -- so having the Library of Congress 
in their back pocket is not interesting or useful to them beyond the "gosh gee 
wow" factor.

As I look around at the people I know who are "not in the CSCI/IT biz", what I 
see is that there are only a few ways they encounter reading material:

-   Public signage

-   Time-sensitive periodical information (news/blogs/etc.) that gives them 
current topical information directly related to their active interests and 
current public affairs.

-   Research material related to some special project they have undertaken 
(and which they encounter seldom enough that they don't mind going to a 
library or to a school to do their work, because the nature of the work sets 
them outside their normal routines anyway.)

-   Recreational reading.

Of those four categories, only the second and fourth are meaningful to them in 
the context of a tablet or eReader, hence questions of aesthetics and cognitive 
dissonance relating to media ARE pertinent.

All of this makes me feel that we have not yet begun to understand the right 
answer to "replacing the book".

…

This discussion reminds me of the eternal debate between emacs/simple-IDE 
programmers and rich-IDE/visual programmers, because it comes down to the same 
fundamental point: admitting to ourselves that tool design should be driven by 
the points of view of the people who are trying to accomplish the work rather 
than by the laws of the mise-en-scène of the work.  Further, that there is 
usually more than one point of view and that those points of view are often 
mutually inconsistent and incomplete.

(Okay, I'll stop here before I fall off into a discussion of 
non-turing-complete languages and partial functions.)




On Mar 9, 2012, at 2:50 AM, Eugen Leitl wrote:

> On Thu, Mar 08, 2012 at 11:34:21AM -0800, Max Orhai wrote:
>> On Thu, Mar 8, 2012 at 7:07 AM, Martin Baldan  wrote:
>> 
 
>>>> - Print technology is orders of magnitude more environmentally benign
>>>> and affordable.
 
>>> 
>>> That seems a pretty strong claim. How do you back it up? Low cost and
>>> environmental impact are supposed to be some of the strong points of
>>> ebooks.
> 
> I would like to point out that there are research libraries with some ~million
> electronic volumes available which can be owned by single individuals or groups
> and yet occupy only one modest (~10 TByte) NAS box costing less than 2 kUSD.
> ___
> fonc mailing list
> fonc@vpri.org
> http://vpri.org/mailman/listinfo/fonc



Re: [fonc] Block-Strings / Heredocs (Re: Magic Ink and Killing Math)

2012-03-13 Thread Mack
I couldn't agree more.

"text" and "files" are just encoding and packaging.   We routinely represent 
the same information in different ways during different stages of a program or 
system's lifecycle in order to obtain advantages relevant to the processing 
problems at hand.  In the past, it has been convenient to encourage ubiquitous 
use of standard encoding (ASCII) and packaging (files) in exchange for the 
obvious benefits of simplicity, access to common tooling that understands those 
standards, and interchange between systems.

However, if we set simplicity aside for the moment, the goals of access and 
interchange can be accomplished by means of mapping.  It is not essential to 
maintain ubiquitous lowest-common-denominator standards if suitable mapping 
functions exist.

My personal feeling is that the design of practical next-generation languages 
and tools has been retarded for a very long time by an unexamined emotional 
need to cling to common historical standards that are insufficient to support 
the needs of forward-looking language concepts.

For example, if we look beyond system interchange, the most significant value 
of core ASCII is its relatively good impedance match to keys found on most 
computer keyboards.  When "standard typewriter" keyboards were the ubiquitous 
form of data entry, this was an overwhelmingly important consideration.  
However, we long ago broke the chains of this relationship:  Data entry 
routinely encompasses entry from pointer devices such as mice and trackballs, 
tablets of various descriptions, incomplete keyboards such as numeric keypads, 
game controllers, etc.  These axes of expression are not represented in the 
graphology of ASCII.

In this world, the impedance mismatch to ASCII (and Unicode, which could be 
seen as ASCII++, since it offers more glyphs but makes little attempt to 
increase the core semantics of graphology offered) invites examination.  In 
this world, it seems to me that core expressiveness of a graphology trumps 
ubiquity.  I'd like to see more languages being bold and looking beyond 
ASCII-derived symbology to find graphologies that allow for more powerful 
representation and manipulation of modern ontologies.

A concrete example:  ASCII only allows "to the right of" as a first class 
relationship in its representation ontology.  (The word "at" is formed as the 
character "t" to the right of the character "a".)  Even concepts such as "next 
line" or "backspace" are second-order concepts encoded by reserved symbols 
borrowed from the representable namespace.  Advanced but still fundamental 
concepts such as "subordinate to" (i.e., subscript) are only available in 
so-called RichText systems.  Even more powerful concepts like "contains" (for 
example, a "word" which is composed of the symbol "O" containing inside it the 
symbol "c") are not representable at all in the commonly available 
graphologies.  The people who attempt to express mathematical formulae 
routinely grapple with these limitations.  Even where a character set includes 
a root symbol, the underlying graphology does not implement rules by which 
characters can be arranged around it to represent the third root of x.
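[Editorial aside: for illustration only, here is a toy sketch of what a graphology with first-class spatial relations might look like as a data structure. All names are hypothetical, and the lossy `render_linear` flattening shows exactly what a purely "to the right of" character stream discards:]

```python
# Hypothetical sketch: glyph relations as first-class data rather than
# a linear "to the right of" character stream. All names are illustrative.

from dataclasses import dataclass, field

@dataclass
class Glyph:
    symbol: str
    right_of: list = field(default_factory=list)   # ordinary text flow
    contains: list = field(default_factory=list)   # e.g. a "c" inside an "O"
    subscript: list = field(default_factory=list)  # "subordinate to"

def render_linear(g):
    """Flatten to plain text, losing the non-linear relations --
    which is exactly the limitation described above."""
    out = g.symbol
    out += "".join(render_linear(s) for s in g.subscript)  # lossy: x_i -> "xi"
    out += "".join(render_linear(r) for r in g.right_of)
    return out

# The 2D expression "x subscript i, equals 3" as structured data:
x = Glyph("x", subscript=[Glyph("i")], right_of=[Glyph("="), Glyph("3")])
print(render_linear(x))  # -> "xi=3"
```

The structured form keeps "subordinate to" as a distinct relation that an editor or renderer could lay out properly; the flattened string cannot distinguish a subscript from ordinary adjacency.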

Many of the excruciating design exercises language designers go through these days 
are largely driven by limitations of the ASCII++ graphology we assume to be 
sacrosanct.  (For example, the parts of this discussion thread analyzing the 
use of various compound-character combinations which intrude all the way to the 
parsing layer of a language because the core ASCII graphology doesn't feature 
enough bracket symbols.)

This barrier is artificial, historic in nature and need no longer constrain us 
because we have the luxury of modern high-powered computing systems which allow 
us to impose abstraction in important ways that were historically infeasible to 
allow us to achieve new kinds of expressive power and simplicity.

-- Mack


On Mar 13, 2012, at 8:11 AM, David Barbour wrote:

> 
> 
> On Tue, Mar 13, 2012 at 5:42 AM, Josh Grams  wrote:
> On 2012-03-13 02:13PM, Julian Leviston wrote:
> >What is "text"? Do you store your "text" in ASCII, EBCDIC, SHIFT-JIS or
> >UTF-8?  If it's UTF-8, how do you use an ASCII editor to edit the UTF-8
> >files?
> >
> >Just saying' ;-) Hopefully you understand my point.
> >
> >You probably won't initially, so hopefully you'll meditate a bit on my
> >response without giving a knee-jerk reaction.
> 
> OK, I've thought about it and I still don't get it.  I understand that
> there have been a number of different text encodings, but I thought that
> the whole point of Unicode was to provide a future-proof way out of that
> mess.  And I could be totally 

Re: [fonc] Error trying to compile COLA

2012-03-13 Thread Mack
For better or worse, both Apple and Microsoft (via Windows 8) are attempting to 
rectify this via the "Terms and Conditions" route.

It's been announced that both Windows 8 and OSX Mountain Lion will require 
applications to be installed via download through their respective "App Stores" in 
order to obtain certification required for the OS to allow them access to 
features (like an installed camera, or the network) that are outside the 
default application sandbox.  

The acceptance of the App Store model for the iPhone/iPad has persuaded them 
that this will be (commercially) viable as a model for general public 
distribution of trustable software.

In that world, the Squeak plugin could be certified as safe to download in a 
way that System Admins might believe.


On Feb 29, 2012, at 3:09 PM, Alan Kay wrote:

> Windows (especially) is so porous that SysAdmins (especially in school 
> districts) will not allow teachers to download .exe files. This wipes out the 
> Squeak plugin that provides all the functionality.
> 
> But there is still the browser and Javascript. But Javascript isn't fast 
> enough to do the particle system. But why can't we just download the particle 
> system and run it in a safe address space? The browser people don't yet 
> understand that this is what they should have allowed in the first place. So 
> right now there is only one route for this (and a few years ago there were 
> none) -- and that is Native Client on Google Chrome. 
> 
>  But Google Chrome is only 13% penetrated, and the other browser fiefdoms 
> don't like NaCl. Google Chrome is an .exe file so teachers can't download 
> it (and if they could, they could download the Etoys plugin).
> 



[fonc] De Tocqueville, was Re: Error trying to compile COLA

2012-03-13 Thread Mack
The entire effort to lift software development to a level beyond today's 
institutionalized approaches reminds me of a quote from Alexis de Tocqueville…

"… Such is not the course adopted by tyranny in democratic republics; there the 
body is left free, and the soul is enslaved. The master no longer says: "You 
shall think as I do or you shall die"; but he says: "You are free to think 
differently from me and to retain your life, your property, and all that you 
possess; but you are henceforth a stranger among your people. You may retain 
your civil rights, but they will be useless to you, for you will never be 
chosen by your fellow citizens if you solicit their votes; and they will affect 
to scorn you if you ask for their esteem. You will remain among men, but you 
will be deprived of the rights of mankind. Your fellow creatures will shun you 
like an impure being; and even those who believe in your innocence will abandon 
you, lest they should be shunned in their turn. Go in peace! I have given you 
your life, but it is an existence worse than death."

It's not enough to find a better way.  To effect lasting benefit, one has to 
make it a POPULAR way.  And that, sadly, is not the province of reason, but of 
whim and fashion.

Prima facie, the current popularity of Objective C as a programming language 
owes nothing to its feature set and everything to the fact that it is required 
in order to program for the iPhone or iPad.

Cheers,

-- Mack



On Mar 13, 2012, at 11:09 AM, Alan Kay wrote:

> But we haven't wanted to program in Smalltalk for a long time.
> 
> This is a crazy non-solution (and is so on the iPad already)
> 
> No one should have to work around someone else's bad designs and 
> implementations ...
> 
> Cheers,
> 
> Alan
> 



Re: [fonc] Apple and hardware (was: Error trying to compile COLA)

2012-03-14 Thread Mack
Jay Freeman has also released his Wraith Scheme for the iPad.

On Mar 14, 2012, at 9:17 AM, Jecel Assumpcao Jr. wrote:

> Alan Kay wrote on Wed, 14 Mar 2012 05:53:21 -0700 (PDT)
>> A hardware vendor with huge volumes (like Apple) should be able to get a CPU
>> vendor to make HW that offers real protection, and at a granularity that 
>> makes
>> more systems sense.
> 
> They did just that when they founded ARM Ltd (with Acorn and VTI): the
> most significant change from the ARM3 to the ARM6 was a new MMU with a
> more fine-grained protection mechanism which was designed specially for
> the Newton OS. No other system used it and though I haven't checked, I
> wouldn't be surprised if this feature was eliminated from more recent
> versions of ARM.
> 
> Compared to a real capability system (like the Intel iAPX432/BiiN/960XA
> or the IBM AS/400) it was a rather awkward solution, but at least they
> did make an effort.
> 
> Having been created under Scully, this technology did not survive Jobs'
> return.
> 
>> But the main point here is that there are no technical reasons why a child 
>> should
>> be restricted from making an Etoys or Scratch project and sharing it with 
>> another
>> child on an iPad.
>> No matter what Apple says, the reasons clearly stem from strategies and 
>> tactics
>> of economic exclusion.
>> So I agree with Max that the iPad at present is really the anti-Dynabook
> 
> They have changed their position a little. I have a "Hand Basic" on my
> iPhone which is compatible with the Commodore 64 Basic. I can write and
> save programs, but can't send them to another device or load new
> programs from the Internet. Except I can - there are applications for
> the iPhone that give you access to the filing system and let you
> exchange files with a PC or Mac. But that is beyond most users, which
> seems to be a good enough barrier from Apple's viewpoint.
> 
> The same thing applies to this nice native development environment for
> Lua on the iPad:
> 
> http://twolivesleft.com/Codea/
> 
> You can program on the iPad/iPhone, but can't share.
> 
> -- Jecel
> 



Re: [fonc] Block-Strings / Heredocs (Re: Magic Ink and Killing Math)

2012-03-14 Thread Mack

On Mar 13, 2012, at 6:27 PM, BGB wrote:


> the issue is not that I can't imagine anything different, but rather that 
> doing anything different would be a hassle with current keyboard technology:
> pretty much anyone can type ASCII characters;
> many other people have keyboards (or key-mappings) that can handle 
> region-specific characters.
> 
> however, otherwise, typing unusual characters (those outside their current 
> keyboard mapping) tends to be a bit more painful, and/or introduces editor 
> dependencies, and possibly increases the learning curve (now people have to 
> figure out how these various unorthodox characters map to the keyboard, ...).
> 
> more graphical representations, however, have a secondary drawback:
> they can't be manipulated nearly as quickly or as easily as text.
> 
> one could be like "drag and drop", but the problem is that drag and drop is 
> still a fairly slow and painful process (vs, hitting keys on the keyboard).
> 
> 
> yes, there are scenarios where keyboards aren't ideal:
> such as on an XBox360 or an Android tablet/phone/... or similar, but people 
> probably aren't going to be using these for programming anyways, so it is 
> likely a fairly moot point. 
> 
> however, even in these cases, it is not clear there are many "clearly better" 
> options either (on-screen keyboard, or on-screen tile selector, either way it 
> is likely to be painful...).
> 
> 
> simplest answer:
> just assume that current text-editor technology is "basically sufficient" and 
> call it "good enough".

Stipulating that having the keys on the keyboard "mean what the painted symbols 
show" is the simplest path with the least impedance mismatch for the user, 
there are already alternatives in common use that bear thinking on.  For 
example:

On existing keyboards, multi-stroke operations to produce new characters 
(holding down shift key to get CAPS, CTRL-ALT-TAB-whatever to get a special 
character or function, etc…) are customary and have entered average user 
experience.

Users of IDE's like EMACS, IntelliJ or Eclipse are well-acquainted with special 
keystrokes to get access to code completions and intention templates.

So it's not inconceivable to consider a similar strategy for "typing" 
non-character graphical elements.  One could think of say… CTRL-O, UP ARROW, UP 
ARROW, ESC to "type" a circle and size it, followed by CTRL-RIGHT ARROW, C to 
"enter" the circle and type a "c" inside it.

An argument against these strategies is the same one against command-line 
interfaces in the CLI vs. GUI discussion: namely, that without visual 
prompting, the possibilities that are available to be typed are not readily 
visible to the user.  The user has to already know what combination gives him 
what symbol.

One solution for mitigating this, presuming "rich graphical typing" was 
desirable, would be to take a page from the way "touch" type cell phones and 
tablets work, showing symbol maps on the screen in response to user input, with 
the maps being progressively refined as the user types to guide the user 
through constructing their desired input.

…just a thought :)
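[Editorial aside: that progressive refinement could be sketched as filtering a chord table by the prefix typed so far; the key sequences and symbol names below are invented purely for illustration:]

```python
# Hypothetical sketch of a chord-sequence symbol map: as the user types,
# the candidate set narrows, mimicking the progressive refinement of a
# touch-keyboard symbol map. Chords and symbols are made up for illustration.

CHORDS = {
    ("ctrl-o",): "circle (open shape)",
    ("ctrl-o", "up"): "larger circle",
    ("ctrl-o", "up", "up"): "even larger circle",
    ("ctrl-r",): "enter enclosing shape",
}

def candidates(typed):
    """Return the chords still reachable from what has been typed so far."""
    return {seq: sym for seq, sym in CHORDS.items()
            if seq[:len(typed)] == tuple(typed)}

print(len(candidates([])))          # all 4 chords offered initially
print(len(candidates(["ctrl-o"])))  # narrowed to the 3 circle variants
```

An on-screen map would display `candidates(typed)` after each keystroke, so the user never has to memorize the full chord table up front, answering the CLI-style "invisible possibilities" objection raised above.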



On Mar 13, 2012, at 6:27 PM, BGB also wrote:

> 
> 
>> I'll take Dave's point that penetration matters, and at the same time, most 
>> "new ideas" have "old idea" constituents, so you can easily find some matter 
>> for people stuck in the old methodologies and thinking to relate to when 
>> building your "new stuff" ;-)
>> 
> 
> well, it is like using alternate syntax designs (say, not a C-style "curly 
> brace" syntax).
> 
> one can do so, but is it worth it?
> in such a case, the syntax is no longer what most programmers are familiar or 
> comfortable with, and it is more effort to convert code to/from the language, 
> …

The degenerate endpoint of this argument (which, sadly I encounter on a daily 
basis in the larger business-technical community) is "if it isn't Java, it is 
by definition alien and too uncomfortable (and therefore too expensive) to use".

We can protest the myopia inherent in that objection, but the sad fact is that 
perception and emotional comfort are more important to the average person's 
decision-making process than coldly rational analysis.  (I refer to this as the 
"Discount Shirt" problem.  Despite the fact that a garment bought at a discount 
store doesn't fit well and falls apart after the first washing… not actually 
fulfilling our expectations of what a shirt should do, so ISN'T really a shirt 
from a usability perspective… because it LOOKS like a shirt and the store CALLS 
it a shirt, we still buy it, telling ourselves we've bought a shirt.  Then we 
go home and complain that shirts are a failure.)

Given this hurdle of perception, I have come to the conclusion that the only 
reasonable way to make advances is to live in the world of use case-driven 
design and measure the success of a language by how well it fits the perceived 
shape of the problem to be solved, looking for "familiarity" on the