Re: [fonc] Block-Strings / Heredocs (Re: Magic Ink and Killing Math)

2012-03-13 Thread BGB

On 3/12/2012 9:01 PM, David Barbour wrote:


On Mon, Mar 12, 2012 at 8:13 PM, Julian Leviston wrote:



On 13/03/2012, at 1:21 PM, BGB wrote:


although theoretically possible, I wouldn't really trust not
having the ability to use conventional text editors whenever
need-be (or mandate use of a particular editor).

for most things I am using text-based formats, including for
things like world-maps and 3D models (both are based on arguably
mutilated versions of other formats: Quake maps and AC3D models).
the power of text is that, if by some chance someone does need to
break out a text editor and edit something, the format won't
hinder them from doing so.



What is "text"? Do you store your "text" in ASCII, EBCDIC,
SHIFT-JIS or UTF-8? If it's UTF-8, how do you use an ASCII editor
to edit the UTF-8 files?

Just saying' ;-) Hopefully you understand my point.

You probably won't initially, so hopefully you'll meditate a bit
on my response without giving a knee-jerk reaction.




I typically work with the ASCII subset of UTF-8 (where ASCII and UTF-8 
happen to be equivalent).


most of the code is written to assume UTF-8, but languages are designed 
to not depend on any characters outside the ASCII range (leaving them 
purely for comments, and for those few people who consider using them 
for identifiers).


EBCDIC and SHIFT-JIS are sufficiently obscure that one can generally 
pretend that they don't exist (FWIW, I don't generally support codepages 
either).


a lot of code also tends to assume Modified UTF-8 (basically, the same 
variant of UTF-8 used by the JVM). typically, code will ignore things 
like character normalization or alternative orderings. a lot of code 
doesn't particularly know or care what the exact character encoding is.


some amount of code internally uses UTF-16 as well, but this is less 
common as UTF-16 tends to eat a lot more memory (and, some code just 
pretends to use UTF-16, when really it is using UTF-8).
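For reference, a minimal sketch of the JVM-style "Modified UTF-8" mentioned above (this is an illustration of the format, not BGB's actual code, and `encode_modified_utf8` is a made-up name): NUL is written as the overlong two-byte form C0 80, BMP characters are byte-for-byte standard UTF-8 (so the ASCII subset is unchanged), and supplementary characters become a CESU-8-style pair of 3-byte-encoded UTF-16 surrogates.

```python
def encode_modified_utf8(s):
    """Encode a string as JVM-style Modified UTF-8 (illustrative sketch)."""
    out = bytearray()
    for ch in s:
        cp = ord(ch)
        if cp == 0:
            out += b"\xc0\x80"         # embedded NUL without a 0x00 byte
        elif cp <= 0xFFFF:
            out += ch.encode("utf-8")  # BMP: identical to standard UTF-8
        else:
            cp -= 0x10000              # split into a UTF-16 surrogate pair,
            for surr in (0xD800 + (cp >> 10), 0xDC00 + (cp & 0x3FF)):
                # ... and encode each surrogate as a 3-byte sequence
                out += bytes([0xE0 | (surr >> 12),
                              0x80 | ((surr >> 6) & 0x3F),
                              0x80 | (surr & 0x3F)])
    return bytes(out)
```

This is why code that sticks to the ASCII subset never notices the difference: for ASCII input the output is plain ASCII.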




Text is more than an arbitrary arcane linear sequence of characters. 
Its use suggests TRANSPARENCY - that a human could understand the 
grammar and content, from a relatively small sample, and effectively 
hand-modify the content to a particular end.


If much of our text consisted of GUIDs:
  {21EC2020-3AEA-1069-A2DD-08002B30309D}
This might as well be
  {BLAHBLAH-BLAH-BLAH-BLAH-BLAHBLAHBLAH}

The structure is clear, but its meaning is quite opaque.



yep.

this is also a goal, and many of my formats are designed to at least try 
to be human editable.
some number of them are still often hand-edited as well (such as texture 
information files).



That said, structured editors are not incompatible with an underlying 
text format. I think that's really the best option.


yes.

for example, several editors/IDEs have expand/collapse, but still use 
plaintext for the source-code.


Visual Studio and Notepad++ are examples of this, and a more advanced 
editor could do better (such as expand/collapse on arbitrary code blocks).


there are also things like auto-completion, ... which are also nifty and
work fine with text.



Regarding multi-line quotes... well, if you aren't fixated on ASCII 
you could always use unicode to find a whole bunch more brackets:

http://www.fileformat.info/info/unicode/block/cjk_symbols_and_punctuation/images.htm
http://www.fileformat.info/info/unicode/block/miscellaneous_technical/images.htm
http://www.fileformat.info/info/unicode/block/miscellaneous_mathematical_symbols_a/images.htm
Probably more than you know what to do with.



AFAIK, the common consensus in much of programmer-land, is that using 
Unicode characters as part of the basic syntax of a programming language 
borders on evil...



I ended up using:
<[[ ... ]]>
and:
""" ... """ (basically, same syntax as Python).

these seem like probably-good-enough choices.

currently, the <[[ and ]]> braces are not real tokens, and so will only 
be parsed specially as such in the particular contexts where they are 
expected to appear.


so, if one types:
2<[[3, 4], [5, 6]]
the '<' will be parsed as a less-than operator.

but, if one writes instead:
var str=<[[
some text...
more text...
]]>;

it will parse as a multi-line string...

both types of string are handled specially by the parser (rather than 
being handled by the tokenizer, as are normal strings).
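a rough sketch of how such context-dependent recognition might look (hypothetical Python, not the actual parser): the reader is tried only where the grammar expects a value, so `<` elsewhere still lexes as less-than.

```python
def read_block_string(src, i):
    """If src[i:] begins a <[[ ... ]]> block-string, return (text, end_index);
    otherwise return None so '<' falls through to the less-than operator.
    (A '""" + '"""' + """' heredoc could be handled the same way.)"""
    if not src.startswith("<[[", i):
        return None
    end = src.find("]]>", i + 3)
    if end < 0:
        raise SyntaxError("unterminated <[[ ... ]]> block-string")
    return src[i + 3:end], end + 3
```

in `var str=<[[ ... ]]>;` the parser is expecting a value after `=`, so it calls the reader; in `2<[[3, 4], [5, 6]]` it is expecting an operator, so the reader is never consulted and `<` stays a comparison.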



or such...

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Block-Strings / Heredocs (Re: Magic Ink and Killing Math)

2012-03-13 Thread Julian Leviston

On 13/03/2012, at 6:19 PM, BGB wrote:

> On 3/12/2012 9:01 PM, David Barbour wrote:
>> 
>> 
>> On Mon, Mar 12, 2012 at 8:13 PM, Julian Leviston  wrote:
>> 
>> On 13/03/2012, at 1:21 PM, BGB wrote:
>> 
>>> although theoretically possible, I wouldn't really trust not having the 
>>> ability to use conventional text editors whenever need-be (or mandate use 
>>> of a particular editor).
>>> 
>>> for most things I am using text-based formats, including for things like 
>>> world-maps and 3D models (both are based on arguably mutilated versions of 
>>> other formats: Quake maps and AC3D models). the power of text is that, if 
>>> by some chance someone does need to break out a text editor and edit 
>>> something, the format wont hinder them from doing so.
>> 
>> 
>> What is "text"? Do you store your "text" in ASCII, EBCDIC, SHIFT-JIS or 
>> UTF-8? If it's UTF-8, how do you use an ASCII editor to edit the UTF-8 files?
>> 
>> Just saying' ;-) Hopefully you understand my point.
>> 
>> You probably won't initially, so hopefully you'll meditate a bit on my 
>> response without giving a knee-jerk reaction.
>> 
> 
> I typically work with the ASCII subset of UTF-8 (where ASCII and UTF-8 happen 
> to be equivalent).
> 
> most of the code is written to assume UTF-8, but languages are designed to 
> not depend on any characters outside the ASCII range (leaving them purely for 
> comments, and for those few people who consider using them for identifiers).
> 
> EBCDIC and SHIFT-JIS are sufficiently obscure that one can generally pretend 
> that they don't exist (FWIW, I don't generally support codepages either).
> 
> a lot of code also tends to assume Modified UTF-8 (basically, the same 
> variant of UTF-8 used by the JVM). typically, code will ignore things like 
> character normalization or alternative orderings. a lot of code doesn't 
> particularly know or care what the exact character encoding is.
> 
> some amount of code internally uses UTF-16 as well, but this is less common 
> as UTF-16 tends to eat a lot more memory (and, some code just pretends to use 
> UTF-16, when really it is using UTF-8).



Maybe you entirely missed my point:

>>  If it's UTF-8, how do you use an ASCII editor to edit the UTF-8 files?

>> Hopefully you understand my point.

>> You probably won't initially, so hopefully you'll meditate a bit on my 
>> response without giving a knee-jerk reaction.
> 

Julian


Re: [fonc] Block-Strings / Heredocs (Re: Magic Ink and Killing Math)

2012-03-13 Thread Josh Grams
On 2012-03-13 02:13PM, Julian Leviston wrote:
>What is "text"? Do you store your "text" in ASCII, EBCDIC, SHIFT-JIS or
>UTF-8?  If it's UTF-8, how do you use an ASCII editor to edit the UTF-8
>files?
>
>Just saying' ;-) Hopefully you understand my point.
>
>You probably won't initially, so hopefully you'll meditate a bit on my
>response without giving a knee-jerk reaction.

OK, I've thought about it and I still don't get it.  I understand that
there have been a number of different text encodings, but I thought that
the whole point of Unicode was to provide a future-proof way out of that
mess.  And I could be totally wrong, but I have the impression that it
has pretty good penetration.  I gather that some people who use the
Cyrillic alphabet often use some code page and China and Japan use
SHIFT-JIS or whatever in order to have a more compact representation,
but that even there UTF-8 tools are commonly available.

So I would think that the sensible thing would be to use UTF-8 and
figure that anyone (now or in the future) will have tools which support
it, and that anyone dedicated enough to go digging into your data files
will have no trouble at all figuring out what it is.

If that's your point it seems like a pretty minor nitpick.  What am I
missing?

--Josh


Re: [fonc] Error trying to compile COLA

2012-03-13 Thread Martin Baldan
>
>
> this is possible, but it assumes, essentially, that one doesn't run into
> such a limit.
>
> if one gets to a point where every "fundamental" concept is only ever
> expressed once, and everything is built from preceding fundamental concepts,
> then this is a limit, short of dropping fundamental concepts.

Yes, but I don't think any theoretical framework can tell us a priori
how close we are to that limit. The fact that we run out of ideas
doesn't mean there are no more new ideas waiting to be discovered.
Maybe if we change our choice of fundamental concepts, we can further
simplify our systems.

For instance, it was assumed that the holy grail of Lisp would be to
get to the essence of lambda calculus, and then John Shutt did away
with lambda as a fundamental concept: he derived it from vau, doing
away with macros and special forms in the process. I don't know
whether Kernel will live up to its promise, but in any case it was an
innovative line of inquiry.
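As a toy illustration of Shutt's idea (a hypothetical mini-evaluator, not Kernel itself; all names here are made up for the sketch): an operative receives its operands *unevaluated* together with the caller's environment, `wrap` turns an operative into an ordinary argument-evaluating applicative, and lambda then falls out as a derived form rather than a primitive.

```python
def evaluate(expr, env):
    if isinstance(expr, str):        # symbol lookup
        return env[expr]
    if not isinstance(expr, list):   # literals (and combiners) self-evaluate
        return expr
    combiner = evaluate(expr[0], env)
    return combiner(expr[1:], env)   # operands passed along UNevaluated

def vau(operands, env):
    # ($vau (params) body): build an operative closing over the defining env
    params, body = operands
    def operative(rands, caller_env):
        local = dict(env)
        local.update(zip(params, rands))  # bind operands as-is (toy: no dotted params)
        local["__env__"] = caller_env     # caller's env is available to the body
        return evaluate(body, local)
    return operative

def wrap(operative):
    # turn an operative into an applicative: evaluate args first, then call
    def applicative(rands, caller_env):
        args = [evaluate(r, caller_env) for r in rands]
        return operative(args, caller_env)
    return applicative

def lam(operands, env):
    # lambda is no longer primitive: it is just wrap of vau
    return wrap(vau(operands, env))
```

With `$vau` bound to `vau` and `$lambda` to `lam`, a quote-like operative is a one-liner (`['$vau', ['e'], 'e']` returns its operand unevaluated), while `$lambda`-made functions behave as usual.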


> theoretically, about the only way to really do much better would be using a
> static schema (say, where the sender and receiver have a predefined set of
> message symbols, predefined layout templates, ...). personally though, I
> really don't like these sorts of compressors (they are very brittle,
> inflexible, and prone to version issues).
>
> this is essentially what "write a tic-tac-toe player in Scheme" implies:
> both the sender and receiver of the message need to have a common notion of
> both "tic-tac-toe player" and "Scheme". otherwise, the message can't be
> decoded.

But nothing prevents you from reaching this common notion via previous
messages. So, I don't see why this protocol would have to be any more
brittle than a more verbose one.

>
> a more general strategy is basically to build a model "from the ground up",
> where the sender and reciever have only basic knowledge of basic concepts
> (the basic compression format), and most everything else is built on the fly
> based on the data which has been seen thus far (old data is used to build
> new data, ...).

Yes, but, as I said, old data are used to build new data; there's
no need to repeat old data over and over again. When two people
communicate with each other, they don't introduce themselves and share
their personal details again and again at the beginning of each
conversation.
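This "build the model from the data seen so far" strategy is exactly what adaptive dictionary coders do. A minimal LZ78-style sketch (illustrative Python, not any particular system's code): the receiver reconstructs the sender's dictionary from the stream itself, so nothing old is retransmitted and no static schema is shared in advance.

```python
def lz78_encode(data):
    """Emit (phrase_index, next_char) pairs; the dictionary grows as we go."""
    dictionary = {"": 0}
    out, w = [], ""
    for ch in data:
        if w + ch in dictionary:
            w += ch                         # extend the current known phrase
        else:
            out.append((dictionary[w], ch)) # refer back to old data + 1 new char
            dictionary[w + ch] = len(dictionary)
            w = ""
    if w:
        out.append((dictionary[w], ""))     # flush a trailing known phrase
    return out

def lz78_decode(pairs):
    """Rebuild the same dictionary purely from what has been seen so far."""
    phrases, out = [""], []
    for index, ch in pairs:
        phrase = phrases[index] + ch
        out.append(phrase)
        phrases.append(phrase)
    return "".join(out)
```

The point of the example: both ends converge on the same "shared notions" without ever exchanging the dictionary, which is the non-brittle version of a predefined schema.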



>
> and, of course, such a system would likely be, itself, absurdly complex...
>

The system wouldn't have to be complex. Instead, it would *represent*
complexity through first-class data structures. The aim would be to
make the implicit complexity explicit, so that this simple system can
reason about it. More concretely, the implicit complexity is the
actual use of competing, redundant standards, and the explicit
complexity is an ontology describing those standards, so that a
reasoner can transform, translate and find duplicities with
dramatically less human attention. Developing such an ontology is by
no means trivial, it's hard work, but in the end I think it would be
very much worth the trouble.


>
>
> and this is also partly why making everything smaller (while keeping its
> features intact) would likely end up looking a fair amount like data
compression (it is compression of code and semantic space).
>

Maybe, but I prefer to think of it in terms of machine translation.
There are many different human languages, some of them more expressive
than others (for instance, with a larger lexicon, or a more
fine-grained tense system). If you want to develop an interlingua for
machine translation, you have to take a superset of all "features" of
the supported languages, and a convenient grammar to encode it (in GF
it would be an "abstract syntax"). Of course, it may be tricky to
support translation from any language to any other, because you may
need neologisms or long clarifications to express some ideas in the
least expressive languages, but let's leave that aside for the moment.
My point is that, once you do that, you can feed a reasoner with
literature in any language, and the reasoner doesn't have to
understand them all; it only has to understand the interlingua, which
may well be easier to parse than any of the target languages. You
didn't eliminate the complexity of human languages, but now it's
tidily packaged in an ontology, where it doesn't get in the reasoner's
way.


>
> some of this is also what makes my VM sub-project as complex as it is: it
> deals with a variety of problem cases, and each adds a little complexity,
> and all this adds up. likewise, some things, such as interfacing (directly)
> with C code and data, add more complexity than others (simpler and cleaner
> FFI makes the VM itself much more complex).

Maybe that's because you are trying to support everything "by hand",
with all this knowledge and complexity embedded in your code. On the
other hand, it seems that the VPRI team is trying to develop new,
powerful standards with all the combined "features" of the existing
ones w

Re: [fonc] Block-Strings / Heredocs (Re: Magic Ink and Killing Math)

2012-03-13 Thread David Barbour
On Tue, Mar 13, 2012 at 5:42 AM, Josh Grams  wrote:

> On 2012-03-13 02:13PM, Julian Leviston wrote:
> >What is "text"? Do you store your "text" in ASCII, EBCDIC, SHIFT-JIS or
> >UTF-8?  If it's UTF-8, how do you use an ASCII editor to edit the UTF-8
> >files?
> >
> >Just saying' ;-) Hopefully you understand my point.
> >
> >You probably won't initially, so hopefully you'll meditate a bit on my
> >response without giving a knee-jerk reaction.
>
> OK, I've thought about it and I still don't get it.  I understand that
> there have been a number of different text encodings, but I thought that
> the whole point of Unicode was to provide a future-proof way out of that
> mess.  And I could be totally wrong, but I have the impression that it
> has pretty good penetration.  I gather that some people who use the
> Cyrillic alphabet often use some code page and China and Japan use
> SHIFT-JIS or whatever in order to have a more compact representation,
> but that even there UTF-8 tools are commonly available.
>
> So I would think that the sensible thing would be to use UTF-8 and
> figure that anyone (now or in the future) will have tools which support
> it, and that anyone dedicated enough to go digging into your data files
> will have no trouble at all figuring out what it is.
>
> If that's your point it seems like a pretty minor nitpick.  What am I
> missing?
>

Julian's point, AFAICT, is that text is just a class of storage that
requires appropriate viewers and editors; it doesn't even describe a specific
standard. Thus, another class that requires appropriate viewers and editors
can work just as well - spreadsheets, tables, drawings.

You mention `data files`. What is a `file`? Is it not a service provided by
a `file system`? Can we not just as easily hide a storage format behind a
standard service more convenient for ad-hoc views and analysis (perhaps
RDBMS). Why organize into files? Other than penetration, they don't seem to
be especially convenient.

Penetration matters, which is one reason that text and filesystems matter.

But what else has penetrated? Browsers. Wikis. Web services. It wouldn't be
difficult to support editing of tables, spreadsheets, drawings, etc. atop a
web service platform. We probably have more freedom today than we've ever
had for language design, if we're willing to stretch just a little bit
beyond the traditional filesystem+text-editor framework.

Regards,

Dave


Re: [fonc] Error trying to compile COLA

2012-03-13 Thread David Barbour
This has been an interesting conversation. I don't like how it's hidden
under the innocent looking subject `Error trying to compile COLA`

On Tue, Mar 13, 2012 at 8:08 AM, Martin Baldan  wrote:

> >
> >
> > this is possible, but it assumes, essentially, that one doesn't run into
> > such a limit.
> >
> > if one gets to a point where every "fundamental" concept is only ever
> > expressed once, and everything is built from preceding fundamental
> concepts,
> > then this is a limit, short of dropping fundamental concepts.
>
> Yes, but I don't think any theoretical framework can tell us a priori
> how close we are to that limit. The fact that we run out of ideas
> doesn't mean there are no more new ideas waiting to be discovered.
> Maybe if we change our choice of fundamental concepts, we can further
> simplify our systems.
>
> For instance, it was assumed that the holy grail of Lisp would be to
> get to the essence of lambda calculus, and then John Shutt did away
> with lambda as a fundamental concept, he derived it from vau, doing
> away with macros and special forms in the process. I don't know
> whether Kernel will live up to its promise, but in any case it was an
> innovative line of inquiry.
>
>
> > theoretically, about the only way to really do much better would be
> using a
> > static schema (say, where the sender and receiver have a predefined set
> of
> > message symbols, predefined layout templates, ...). personally though, I
> > really don't like these sorts of compressors (they are very brittle,
> > inflexible, and prone to version issues).
> >
> > this is essentially what "write a tic-tac-toe player in Scheme" implies:
> > both the sender and receiver of the message need to have a common notion
> of
> > both "tic-tac-toe player" and "Scheme". otherwise, the message can't be
> > decoded.
>
> But nothing prevents you from reaching this common notion via previous
> messages. So, I don't see why this protocol would have to be any more
> brittle than a more verbous one.
>
> >
> > a more general strategy is basically to build a model "from the ground
> up",
> > where the sender and reciever have only basic knowledge of basic concepts
> > (the basic compression format), and most everything else is built on the
> fly
> > based on the data which has been seen thus far (old data is used to build
> > new data, ...).
>
> Yes, but, as I said, old that are used to build new data, but there's
> no need to repeat old data over and over again. When two people
> communicate with each other, they don't introduce themselves and share
> their personal details again and again at the beginning of each
> conversation.
>
>
>
> >
> > and, of course, such a system would likely be, itself, absurdly
> complex...
> >
>
> The system wouldn't have to be complex. Instead, it would *represent*
> complexity through first-class data structures. The aim would be to
> make the implicit complexity explicit, so that this simple system can
> reason about it. More concretely, the implicit complexity is the
> actual use of competing, redundant standards, and the explicit
> complexity is an ontology describing those standards, so that a
> reasoner can transform, translate and find duplicities with
> dramatically less human attention. Developing such an ontology is by
> no means trivial, it's hard work, but in the end I think it would be
> very much worth the trouble.
>
>
> >
> >
> > and this is also partly why making everything smaller (while keeping its
> > features intact) would likely end up looking a fair amount like data
> > compression (it is compression code and semantic space).
> >
>
> Maybe, but I prefer to think of it in terms of machine translation.
> There are many different human languages, some of them more expressive
> than others (for instance, with a larger lexicon, or a more
> fine-grained tense system). If you want to develop an interlingua for
> machine translation, you have to take a superset of all "features" of
> the supported languages, and a convenient grammar to encode it (in GF
> it would be an "abstract syntax"). Of course, it may be tricky to
> support translation from any language to any other, because you may
> need neologisms or long clarifications to express some ideas in the
> least expressive languages, but let's leave that aside for the moment.
> My point is that, once you do that, you can feed a reasoner with
> literature in any language, and the reasoner doesn't have to
> understand them all; it only has to understand the interlingua, which
> may well be easier to parse than any of the target languages. You
> didn't eliminate the complexity of human languages, but now it's
> tidily packaged in an ontology, where it doesn't get in the reasoner's
> way.
>
>
> >
> > some of this is also what makes my VM sub-project as complex as it is: it
> > deals with a variety of problem cases, and each adds a little complexity,
> > and all this adds up. likewise, some things, such as interfacing
> (directly)
> > with 

[fonc] The subject line can be changed

2012-03-13 Thread Loup Vaillant

So let's change it the next time the actual subject changes.

Loup.


David Barbour wrote:

This has been an interesting conversation. I don't like how it's hidden
under the innocent looking subject `Error trying to compile COLA`



Re: [fonc] Block-Strings / Heredocs (Re: Magic Ink and Killing Math)

2012-03-13 Thread Mack
I couldn't agree more.

"text" and "files" are just encoding and packaging.   We routinely represent 
the same information in different ways during different stages of a program or 
system's lifecycle in order to obtain advantages relevant to the processing 
problems at hand.  In the past, it has been convenient to encourage ubiquitous 
use of standard encoding (ASCII) and packaging (files) in exchange for the 
obvious benefits of simplicity, access to common tooling that understands those 
standards, and interchange between systems.

However, if we set simplicity aside for the moment, the goals of access and 
interchange can be accomplished by means of mapping.  It is not essential to 
maintain ubiquitous lowest-common-denominator standards if suitable mapping 
functions exist.

My personal feeling is that the design of practical next-generation languages 
and tools has been retarded for a very long time by an unexamined emotional 
need to cling to common historical standards that are insufficient to support 
the needs of forward-looking language concepts.

For example, if we look beyond system interchange, the most significant value 
of core ASCII is its relatively good impedance match to keys found on most 
computer keyboards.  When "standard typewriter" keyboards were the ubiquitous 
form of data entry, this was an overwhelmingly important consideration.  
However, we long ago broke the chains of this relationship:  Data entry 
routinely encompasses entry from pointer devices such as mice and trackballs, 
tablets of various descriptions, incomplete keyboards such as numeric keypads, 
game controllers, etc.  These axes of expression are not represented in the 
graphology of ASCII.

In this world, the impedance mismatch to ASCII (and UNICODE, which could be 
seen as ASCII++, since it offers more glyphs but makes little attempt to 
increase the core semantics of graphology offered) invites examination.  In 
this world, it seems to me that core expressiveness of a graphology trumps 
ubiquity.  I'd like to see more languages being bold and looking beyond 
ASCII-derived symbology to find graphologies that allow for more powerful 
representation and manipulation of modern ontologies.

A concrete example:  ASCII only allows "to the right of" as a first class 
relationship in its representation ontology.  (The word "at" is formed as the 
character "t" to the right of the character "a".)  Even concepts such as "next 
line" or "backspace" are second-order concepts encoded by reserved symbols 
borrowed from the representable namespace.  Advanced but still fundamental 
concepts such as "subordinate to" (i.e., subscript) are only available in 
so-called RichText systems.  Even more powerful concepts like "contains" (for 
example, a "word" which is composed of the symbol "O" containing inside it the 
symbol "c") are not representable at all in the commonly available 
graphologies.  The people who attempt to express mathematical formulae 
routinely grapple with these limitations.  Even where a character set includes 
a root symbol, the underlying graphology does not implement rules by which 
characters can be arranged around it to represent the third root of x.
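TeX is a familiar demonstration that the missing piece is layout *rules*, not glyphs: the markup supplies the two-dimensional arrangement relationships that the character set alone lacks.

```latex
% The same glyphs, plus rules for arranging characters around each other:
\sqrt[3]{x}        % the third root of x ("arranged around" the radical)
x_{i}^{2}          % "subordinate to": subscript and superscript
\frac{a+b}{c}      % "above/below" as a first-class relationship
```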

Many of the excruciating design exercises language designers go thru these days 
are largely driven by limitations of the ASCII++ graphology we assume to be 
sacrosanct.  (For example, the parts of this discussion thread analyzing the 
use of various compound-character combinations which intrude all the way to the 
parsing layer of a language because the core ASCII graphology doesn't feature 
enough bracket symbols.)

This barrier is artificial and historic in nature, and it need no longer constrain us: we have the luxury of modern high-powered computing systems which allow us to impose abstraction in important ways that were historically infeasible, and so to achieve new kinds of expressive power and simplicity.

-- Mack


On Mar 13, 2012, at 8:11 AM, David Barbour wrote:

> 
> 
> On Tue, Mar 13, 2012 at 5:42 AM, Josh Grams  wrote:
> On 2012-03-13 02:13PM, Julian Leviston wrote:
> >What is "text"? Do you store your "text" in ASCII, EBCDIC, SHIFT-JIS or
> >UTF-8?  If it's UTF-8, how do you use an ASCII editor to edit the UTF-8
> >files?
> >
> >Just saying' ;-) Hopefully you understand my point.
> >
> >You probably won't initially, so hopefully you'll meditate a bit on my
> >response without giving a knee-jerk reaction.
> 
> OK, I've thought about it and I still don't get it.  I understand that
> there have been a number of different text encodings, but I thought that
> the whole point of Unicode was to provide a future-proof way out of that
> mess.  And I could be totally wrong, but I have the impression that it
> has pretty good penetration.  I gather that some people who use the
> Cyrillic alphabet often use some code page and China and Japan use
> SHIFT-JIS or whatever in order to have a more compact representation,
> but that even there UTF-8 tools 

Re: [fonc] Error trying to compile COLA

2012-03-13 Thread Mack
For better or worse, both Apple and Microsoft (via Windows 8) are attempting to 
rectify this via the "Terms and Conditions" route.

It's been announced that both Windows 8 and OSX Mountain Lion will require 
applications to be installed via download thru their respective "App Stores" in 
order to obtain certification required for the OS to allow them access to 
features (like an installed camera, or the network) that are outside the 
default application sandbox.  

The acceptance of the App Store model for the iPhone/iPad has persuaded them 
that this will be (commercially) viable as a model for general public 
distribution of trustable software.

In that world, the Squeak plugin could be certified as safe to download in a 
way that System Admins might believe.


On Feb 29, 2012, at 3:09 PM, Alan Kay wrote:

> Windows (especially) is so porous that SysAdmins (especially in school 
> districts) will not allow teachers to download .exe files. This wipes out the 
> Squeak plugin that provides all the functionality.
> 
> But there is still the browser and Javascript. But Javascript isn't fast 
> enough to do the particle system. But why can't we just download the particle 
> system and run it in a safe address space? The browser people don't yet 
> understand that this is what they should have allowed in the first place. So 
> right now there is only one route for this (and a few years ago there were 
> none) -- and that is Native Client on Google Chrome. 
> 
>  But Google Chrome is only 13% penetrated, and the other browser fiefdoms 
> don't like NaCl. Google Chrome is an .exe file so teachers can't download 
> it (and if they could, they could download the Etoys plugin).
> 



Re: [fonc] Error trying to compile COLA

2012-03-13 Thread Alan Kay
But we haven't wanted to program in Smalltalk for a long time.

This is a crazy non-solution (and is so on the iPad already)

No one should have to work around someone else's bad designs and 
implementations ...


Cheers,

Alan




>
> From: Mack 
>To: Fundamentals of New Computing  
>Sent: Tuesday, March 13, 2012 9:28 AM
>Subject: Re: [fonc] Error trying to compile COLA
> 
>
>For better or worse, both Apple and Microsoft (via Windows 8) are attempting 
>to rectify this via the "Terms and Conditions" route.
>
>
>It's been announced that both Windows 8 and OSX Mountain Lion will require 
>applications to be installed via download thru their respective "App Stores" 
>in order to obtain certification required for the OS to allow them access to 
>features (like an installed camera, or the network) that are outside the 
>default application sandbox.  
>
>
>The acceptance of the App Store model for the iPhone/iPad has persuaded them 
>that this will be (commercially) viable as a model for general public 
>distribution of trustable software.
>
>
>In that world, the Squeak plugin could be certified as safe to download in a 
>way that System Admins might believe.
>
>
>
>On Feb 29, 2012, at 3:09 PM, Alan Kay wrote:
>
>Windows (especially) is so porous that SysAdmins (especially in school 
>districts) will not allow teachers to download .exe files. This wipes out the 
>Squeak plugin that provides all the functionality.
>>
>>
>>But there is still the browser and Javascript. But Javascript isn't fast 
>>enough to do the particle system. But why can't we just download the particle 
>>system and run it in a safe address space? The browser people don't yet 
>>understand that this is what they should have allowed in the first place. So 
>>right now there is only one route for this (and a few years ago there were 
>>none) -- and that is Native Client on Google Chrome. 
>>
>>
>>
>> But Google Chrome is only 13% penetrated, and the other browser fiefdoms 
>>don't like NaCl. Google Chrome is an .exe file so teachers can't download 
>>it (and if they could, they could download the Etoys plugin).
>>
>


[fonc] De Tocqueville, was Re: Error trying to compile COLA

2012-03-13 Thread Mack
The entire effort to lift software development to a level beyond today's 
institutionalized approaches reminds me of a quote from Alexis de Tocqueville…

"… Such is not the course adopted by tyranny in democratic republics; there the 
body is left free, and the soul is enslaved. The master no longer says: "You 
shall think as I do or you shall die"; but he says: "You are free to think 
differently from me and to retain your life, your property, and all that you 
possess; but you are henceforth a stranger among your people. You may retain 
your civil rights, but they will be useless to you, for you will never be 
chosen by your fellow citizens if you solicit their votes; and they will affect 
to scorn you if you ask for their esteem. You will remain among men, but you 
will be deprived of the rights of mankind. Your fellow creatures will shun you 
like an impure being; and even those who believe in your innocence will abandon 
you, lest they should be shunned in their turn. Go in peace! I have given you 
your life, but it is an existence worse than death."

It's not enough to find a better way.  To effect lasting benefit, one has to 
make it a POPULAR way.  And that, sadly, is not the province of reason, but of 
whim and fashion.

Prima facie, the current popularity of Objective C as a programming language 
owes nothing to its feature set and everything to the fact that it is required 
in order to program for the iPhone or iPad.

Cheers,

-- Mack



On Mar 13, 2012, at 11:09 AM, Alan Kay wrote:

> But we haven't wanted to program in Smalltalk for a long time.
> 
> This is a crazy non-solution (and is so on the iPad already)
> 
> No one should have to work around someone else's bad designs and 
> implementations ...
> 
> Cheers,
> 
> Alan
> 

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Block-Strings / Heredocs (Re: Magic Ink and Killing Math)

2012-03-13 Thread Julian Leviston

On 14/03/2012, at 2:11 AM, David Barbour wrote:

> 
> 
> On Tue, Mar 13, 2012 at 5:42 AM, Josh Grams  wrote:
> On 2012-03-13 02:13PM, Julian Leviston wrote:
> >What is "text"? Do you store your "text" in ASCII, EBCDIC, SHIFT-JIS or
> >UTF-8?  If it's UTF-8, how do you use an ASCII editor to edit the UTF-8
> >files?
> >
> >Just saying' ;-) Hopefully you understand my point.
> >
> >You probably won't initially, so hopefully you'll meditate a bit on my
> >response without giving a knee-jerk reaction.
> 
> OK, I've thought about it and I still don't get it.  I understand that
> there have been a number of different text encodings, but I thought that
> the whole point of Unicode was to provide a future-proof way out of that
> mess.  And I could be totally wrong, but I have the impression that it
> has pretty good penetration.  I gather that some people who use the
> Cyrillic alphabet often use some code page and China and Japan use
> SHIFT-JIS or whatever in order to have a more compact representation,
> but that even there UTF-8 tools are commonly available.
> 
> So I would think that the sensible thing would be to use UTF-8 and
> figure that anyone (now or in the future) will have tools which support
> it, and that anyone dedicated enough to go digging into your data files
> will have no trouble at all figuring out what it is.
> 
> If that's your point it seems like a pretty minor nitpick.  What am I
> missing?
> 
> Julian's point, AFAICT, is that text is just a class of storage that requires 
> appropriate viewers and editors, doesn't even describe a specific standard. 
> Thus, another class that requires appropriate viewers and editors can work 
> just as well - spreadsheets, tables, drawings. 
> 
> You mention `data files`. What is a `file`? Is it not a service provided by a 
> `file system`? Can we not just as easily hide a storage format behind a 
> standard service more convenient for ad-hoc views and analysis (perhaps 
> RDBMS). Why organize into files? Other than penetration, they don't seem to 
> be especially convenient.
> 
> Penetration matters, which is one reason that text and filesystems matter.  
> 
> But what else has penetrated? Browsers. Wikis. Web services. It wouldn't be 
> difficult to support editing of tables, spreadsheets, drawings, etc. atop a 
> web service platform. We probably have more freedom today than we've ever had 
> for language design, if we're willing to stretch just a little bit beyond the 
> traditional filesystem+text-editor framework. 
> 
> Regards,
> 
> Dave

Perfectly the point, David. A "token/character" in ASCII is equivalent to a 
byte. In SHIFT-JIS, it's two, but this doesn't mean you can't express the 
equivalent meaning in them (ie by selecting the same graphemes) - this is 
called translation ;-)
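For concreteness, a quick Python sketch of the point: the same grapheme costs a different number of bytes under different encodings, while remaining one "character":

```python
# One grapheme, three encodings, three different byte counts.
s = "\u65e5"  # the kanji for "sun/day", a single Unicode code point

print(len(s))                      # 1 character
print(len(s.encode("utf-8")))      # 3 bytes in UTF-8
print(len(s.encode("shift_jis")))  # 2 bytes in Shift-JIS
print(len("A".encode("ascii")))    # 1 byte in ASCII
```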

One of the most profound things for me has been understanding the ramifications 
of OMeta. It doesn't "just" parse streams of "characters" (whatever they are); 
in fact it doesn't care what the individual tokens of its parsing stream are. 
It's concerned merely with the syntax of its elements (or tokens) - how they 
combine to form certain rules - (here I mean "valid patterns of grammar" by 
rules). If one considers this well, it has amazing ramifications. OMeta invites 
us to see the entire computing world in terms of sets of 
problem-oriented-languages, where "language" is used liberally to mean a 
pattern of sequence of the constituent elements of a "thing". To PEG, it 
basically adds proper translation and true object-orientism of individual 
parsing elements. This takes a while to understand, I think.
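A minimal sketch in Python (nothing like OMeta's actual implementation, just the underlying idea) of a PEG-style matcher whose rules are indifferent to what its tokens are - the same grammar combinators match over a string of characters or a list of arbitrary objects:

```python
# PEG-style combinators over any sequence of tokens.
# A rule takes (seq, index) and returns (new_index, value) or None on failure.

def literal(tok):
    """Match one token equal to `tok`, whatever type it is."""
    def rule(seq, i):
        return (i + 1, tok) if i < len(seq) and seq[i] == tok else None
    return rule

def sequence(*rules):
    """Match each rule in order, collecting the results."""
    def rule(seq, i):
        out = []
        for r in rules:
            m = r(seq, i)
            if m is None:
                return None
            i, v = m
            out.append(v)
        return (i, out)
    return rule

def choice(*rules):
    """PEG ordered choice: first rule that matches wins."""
    def rule(seq, i):
        for r in rules:
            m = r(seq, i)
            if m is not None:
                return m
        return None
    return rule

# The same grammar shape works on characters...
ab = sequence(literal("a"), literal("b"))
print(ab("abc", 0))        # (2, ['a', 'b'])

# ...and on a stream of non-character tokens.
nums = sequence(literal(1), literal(2))
print(nums([1, 2, 3], 0))  # (2, [1, 2])
```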

Formats here become "languages", protocols are "languages", and so are any 
other kind of representation system you care to name (computer programming 
languages, processor instruction sets, etc.).

I'm postulating, BGB, that you're perhaps so ingrained in the current modality 
and approach to thinking about computers that you maybe can't break out of it 
to see what else might be possible. I think it was Turing, wasn't it, who 
postulated that his Turing machines could work off ANY symbols... so if that's 
the case, and your programming language grammar has a set of symbols, why not 
use arbitrary (ie not composed of English letters) ideograms for them? (I think 
these days we call these things icons ;-))

You might say "but how will people name their variables" - well, perhaps for 
those things you could use English letters; or maybe you could enforce that 
no one use more than 30 variables in any one simple chunk of code, in which 
case you could build them in with the other ideograms.

I'm not attempting to build any kind of authoritative status here, merely 
provoke some different thought in you.

I'll take Dave's point that penetration matters, and at the same time, most 
"new ideas" have "old idea" constituents, so you can easily find some matter 
for people stuck in the old methodologies and thinking to relate to when 
building your "new stuff" ;-)


[fonc] Where is the Moshi image?

2012-03-13 Thread Martin Baldan
I've been reading a few more documents, and it seems that the first
step towards having something like Frank at home would be to get hold
of a Moshi Squeak image.

For instance, in "Implementing DBJr with Worlds" we can read:

"Try It Yourself!
The following steps will recreate our demo. (Important: this only works in our
"Moshi" Squeak image. Bring in Worlds2-aw.cs, WWorld-A-tk.1.cs,
WWorld-B-tk.4.cs,
Worlds-Morph-A-tk.5.cs, Worlds-DBJr-B-tk.1.cs, then look at file 'LStack WWorld
workspace') These instructions are here so that we won't lose them.
This demo was
difficult to get working."



But the Mythical Moshi image turned out to be surprisingly elusive.

For instance, all I've found in this list is this email message:

http://www.mail-archive.com/fonc@vpri.org/msg01037.html

"The other research that are based on the Moshi image equally
interesting, but the Moshi image is nowhere to be downloaded so one
can only read the code and papers about it."

Are we getting into military secret land? :D
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


[fonc] Project Nell

2012-03-13 Thread C. Scott Ananian
Chris Ball, Michael Stone, and I have just written a short paper on Project
Nell, which should be of interest to this list:
  http://cscott.net/Publications/OLPC/idc2012.pdf

The paper mentions TurtleScript (http://cscott.net/Projects/TurtleScript/)
which I've discussed on this list before.
 --scott

-- 
  ( http://cscott.net )
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Where is the Moshi image?

2012-03-13 Thread Julian Leviston
Just out of curiosity, what's this?

http://tinlizzie.org/dbjr/frank.lbox/

Julian

On 14/03/2012, at 11:10 AM, Martin Baldan wrote:

> I've been reading a few more documents, and it seems that the first
> step towards having something like Frank at home would be to get hold
> of a Moshi Squeak image.
> 
> For instance, in "Implementing DBJr with Worlds" we can read:
> 
> "Try It Yourself!
> The following steps will recreate our demo. (Important: this only works in our
> "Moshi" Squeak image. Bring in Worlds2-aw.cs, WWorld-A-tk.1.cs,
> WWorld-B-tk.4.cs,
> Worlds-Morph-A-tk.5.cs, Worlds-DBJr-B-tk.1.cs, then look at file 'LStack 
> WWorld
> workspace') These instructions are here so that we won't lose them.
> This demo was
> difficult to get working."
> 
> 
> 
> But the Mythical Moshi image turned out to be surprisingly elusive.
> 
> For instance, all I've found in this list is this email message:
> 
> http://www.mail-archive.com/fonc@vpri.org/msg01037.html
> 
> "The other research that are based on the Moshi image equally
> interesting, but the Moshi image is nowhere to be downloaded so one
> can only read the code and papers about it."
> 
> Are we getting into military secret land? :D
> ___
> fonc mailing list
> fonc@vpri.org
> http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Block-Strings / Heredocs (Re: Magic Ink and Killing Math)

2012-03-13 Thread BGB

On 3/13/2012 4:37 PM, Julian Leviston wrote:


On 14/03/2012, at 2:11 AM, David Barbour wrote:




On Tue, Mar 13, 2012 at 5:42 AM, Josh Grams wrote:

On 2012-03-13 02:13PM, Julian Leviston wrote:
>What is "text"? Do you store your "text" in ASCII, EBCDIC, SHIFT-JIS or
>UTF-8?  If it's UTF-8, how do you use an ASCII editor to edit the UTF-8
>files?
>
>Just saying' ;-) Hopefully you understand my point.
>
>You probably won't initially, so hopefully you'll meditate a bit on my
>response without giving a knee-jerk reaction.

OK, I've thought about it and I still don't get it.  I understand that
there have been a number of different text encodings, but I thought that
the whole point of Unicode was to provide a future-proof way out of that
mess.  And I could be totally wrong, but I have the impression that it
has pretty good penetration.  I gather that some people who use the
Cyrillic alphabet often use some code page and China and Japan use
SHIFT-JIS or whatever in order to have a more compact representation,
but that even there UTF-8 tools are commonly available.

So I would think that the sensible thing would be to use UTF-8 and
figure that anyone (now or in the future) will have tools which support
it, and that anyone dedicated enough to go digging into your data files
will have no trouble at all figuring out what it is.

If that's your point it seems like a pretty minor nitpick.  What am I
missing?


Julian's point, AFAICT, is that text is just a class of storage that 
requires appropriate viewers and editors, doesn't even describe a 
specific standard. Thus, another class that requires appropriate 
viewers and editors can work just as well - spreadsheets, tables, 
drawings.


You mention `data files`. What is a `file`? Is it not a service 
provided by a `file system`? Can we not just as easily hide a storage 
format behind a standard service more convenient for ad-hoc views and 
analysis (perhaps RDBMS). Why organize into files? Other than 
penetration, they don't seem to be especially convenient.


Penetration matters, which is one reason that text and filesystems 
matter.


But what else has penetrated? Browsers. Wikis. Web services. It 
wouldn't be difficult to support editing of tables, spreadsheets, 
drawings, etc. atop a web service platform. We probably have more 
freedom today than we've ever had for language design, if we're 
willing to stretch just a little bit beyond the traditional 
filesystem+text-editor framework.


Regards,

Dave


Perfectly the point, David. A "token/character" in ASCII is equivalent 
to a byte. In SHIFT-JIS, it's two, but this doesn't mean you can't 
express the equivalent meaning in them (ie by selecting the same 
graphemes) - this is called translation ;-)


this is partly why there are "codepoints".
one can work in terms of codepoints, rather than bytes.

a text editor may internally work in UTF-16, but saves its output in 
UTF-8 or similar.

ironically, this is basically what I am planning/doing at the moment.

now, if/how the user will go about typing UTF-16 codepoints, this is not 
yet decided.
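a quick Python sketch of the codepoints-versus-bytes distinction (with the caveat that UTF-16 actually stores code *units*, so anything outside the Basic Multilingual Plane occupies a surrogate pair rather than one unit):

```python
# One code point, counted three ways.
ch = "\U0001D11E"  # MUSICAL SYMBOL G CLEF, a code point outside the BMP

print(len(ch))                           # 1 code point
print(len(ch.encode("utf-8")))           # 4 bytes in UTF-8
print(len(ch.encode("utf-16-le")) // 2)  # 2 UTF-16 code units (surrogate pair)
```

so an editor that "works in terms of codepoints" still has to be careful that its internal UTF-16 indices don't split a surrogate pair.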



One of the most profound things for me has been understanding the 
ramifications of OMeta. It doesn't "just" parse streams of 
"characters" (whatever they are); in fact it doesn't care what the 
individual tokens of its parsing stream are. It's concerned merely with 
the syntax of its elements (or tokens) - how they combine to form 
certain rules - (here I mean "valid patterns of grammar" by rules). If 
one considers this well, it has amazing ramifications. OMeta invites 
us to see the entire computing world in terms of sets of 
problem-oriented-languages, where language is a liberal word that 
simply means a pattern of sequence of the constituent elements of a 
"thing". To PEG, it basically adds proper translation and true 
object-orientism of individual parsing elements. This takes a while to 
understand, I think.


Formats here become "languages", protocols are "languages", and so are 
any other kind of representation system you care to name (computer 
programming languages, processor instruction sets, etc.).


possibly.

I was actually sort of aware of a lot of this already though, but didn't 
consider it particularly relevant.



I'm postulating, BGB, that you're perhaps so ingrained in the current 
modality and approach to thinking about computers, that you maybe 
can't break out of it to see what else might be possible. I think it 
was Turing, wasn't it, who postulated that his Turing machines could 
work off ANY symbols... so if that's the case, and your programming 
language grammar has a set of symbols, why not use arbitrary (ie not 
composed of English letters) ideograms for them? (I think these days 
we call these things icons ;-))


You might say "but how will people name their variables" - well 
perhaps fo

Re: [fonc] Error trying to compile COLA

2012-03-13 Thread Max Orhai
But, that's exactly the cause for concern! Aside from the fact of
Smalltalk's obsolescence (which isn't really the point), the Squeak plugin
could never be approved by a 'responsible' sysadmin, *because it can run
arbitrary user code*! Squeak's not in the app store for exactly that
reason. You'll notice how crippled the allowed 'programming apps' are. This
is simple strong-arm bully tactics on the part of Apple; technical problems
 "solved" by heavy-handed legal means. Make no mistake, the iPad is the
anti-Dynabook.

-- Max

On Tue, Mar 13, 2012 at 9:28 AM, Mack  wrote:

> For better or worse, both Apple and Microsoft (via Windows 8) are
> attempting to rectify this via the "Terms and Conditions" route.
>
> It's been announced that both Windows 8 and OSX Mountain Lion will require
> applications to be installed via download thru their respective "App
> Stores" in order to obtain certification required for the OS to allow them
> access to features (like an installed camera, or the network) that are
> outside the default application sandbox.
>
> The acceptance of the App Store model for the iPhone/iPad has persuaded
> them that this will be (commercially) viable as a model for general public
> distribution of trustable software.
>
> In that world, the Squeak plugin could be certified as safe to download in
> a way that System Admins might believe.
>
>
> On Feb 29, 2012, at 3:09 PM, Alan Kay wrote:
>
> Windows (especially) is so porous that SysAdmins (especially in school
> districts) will not allow teachers to download .exe files. This wipes out
> the Squeak plugin that provides all the functionality.
>
> But there is still the browser and Javascript. But Javascript isn't fast
> enough to do the particle system. But why can't we just download the
> particle system and run it in a safe address space? The browser people
> don't yet understand that this is what they should have allowed in the
> first place. So right now there is only one route for this (and a few years
> ago there were none) -- and that is Native Client on Google Chrome.
>
>  But Google Chrome is only 13% penetrated, and the other browser fiefdoms
> don't like NaCl. Google Chrome is an .exe file so teachers can't
> download it (and if they could, they could download the Etoys plugin).
>
>
>
> ___
> fonc mailing list
> fonc@vpri.org
> http://vpri.org/mailman/listinfo/fonc
>
>
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Where is the Moshi image?

2012-03-13 Thread Tom Koenig


" An LBox is a 'membrane' containing independent aspects that work together to be the box's look and behavior. Aspects communicate by publishing and subscribing to announcements of events"  See http://www.vpri.org/pdf/m2011002_lesserphic.pdf
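A toy Python sketch (hypothetical names, not VPRI's actual code) of the announcement pattern that description uses - aspects that never call each other directly, only publish and subscribe to named events:

```python
# Minimal announcement bus: aspects communicate only through events.
class Announcer:
    def __init__(self):
        self.subs = {}  # event name -> list of callbacks

    def subscribe(self, event, callback):
        self.subs.setdefault(event, []).append(callback)

    def announce(self, event, payload=None):
        # Deliver the payload to every subscriber of this event.
        for cb in self.subs.get(event, []):
            cb(payload)

bus = Announcer()
seen = []
bus.subscribe("pointerDown", seen.append)  # a "look" aspect listens
bus.announce("pointerDown", (10, 20))      # a "behavior" aspect publishes
print(seen)  # [(10, 20)]
```

the two aspects stay independent: either can be replaced without the other knowing, as long as the event vocabulary is shared.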
Mar 13, 2012 09:01:14 PM, fonc@vpri.org wrote:
Just out of curiosity, what's this?

http://tinlizzie.org/dbjr/frank.lbox/

Julian
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc