On 07/07/2016 02:11 PM, Marcus Daniels wrote:
I don't understand why you connect special-purpose devices with paper math vs.
computation.  I claim the problem with paper math is that 1) it does not carry
or enforce correctness checks, 2) it is not put in context -- things are pulled out of
thin air as "the reader should know this", and 3) there is no formal mapping
or harness to a universal computer.

Well, I disagree with all 3 of those assertions.  But it's a soft disagreement: it relies on softening the definitions
of "correctness checks", "put in context", and "harness to a universal computer".  Paper math is a
social enterprise and that sociality is the correctness check.  Similarly, it is put in the context of its application 
and/or the larger body of math.  And the "universal computer" it is harnessed to is (proximally) the human 
brain/CNS and (distally) logic/reasoning as a whole.

Paper math is a semi-semantic computation.  This is nothing more than a restatement of 
Hilbert's program.  It is a (canonical) use case of a special purpose device: the human 
brain.  It's interesting and meaningful to ask whether or not computers can do the math 
humans do.  I think the answer keeps coming up "yes" ... but people smarter 
than me are not convinced.  So, we shouldn't be stubbornly reductionist.  It hurts nobody 
to let them have the distinction ... at least for now and possibly forever.
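As an aside, here is a toy illustration of what the "yes" answer looks like in practice, assuming
sympy as the harness (sympy is just one convenient choice, picked for brevity; nothing hangs on it).
The machine derives and checks an identity a student would prove on paper:

    from sympy import symbols, summation, simplify

    n, k = symbols('n k', positive=True, integer=True)

    # Gauss's schoolbook identity: 1 + 2 + ... + n == n*(n+1)/2
    lhs = summation(k, (k, 1, n))    # sympy derives the closed form
    rhs = n * (n + 1) / 2
    assert simplify(lhs - rhs) == 0  # machine-checked equality
    print(lhs)                       # prints n**2/2 + n/2

Whether that counts as "doing the math humans do" or merely simulating it is, of course, exactly
the distinction in question.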

If all domain-specific artifacts were built up with machine-readable
ontologies, then general intelligent agents would have threads to pull to
start putting the artifacts in context.  Perhaps some kinds of agents, like
humans, would benefit from additional 'analogy modules' to assist with mapping
large semantic graphs onto similar pre-existing ones.  That would be an
accelerator for learning, not a question of having a sufficient semantic
representation.

Well, OK.  But there's still an assumption that the infrastructure will be 
complete, high quality, and credible.  Is there room for gaming and 
misinformation in such systems?  Can our ontological mesh lie to people?  ... 
create idiot savants? ... be used to rig elections?  If so, then it most 
assuredly _is_ a question of sufficient semantic grounding.
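To make the 'analogy modules' idea concrete, here is a minimal sketch, assuming a semantic
graph is nothing but a set of (subject, relation, object) triples.  The function find_analogy
and the example graphs are invented for illustration, not from any real library; it maps the
relational structure of a new graph onto a pre-existing one by backtracking search:

    def find_analogy(triples, base_graph, mapping=None):
        """Backtracking search for a node mapping that carries every
        (subject, relation, object) triple onto one in base_graph."""
        mapping = mapping or {}
        if not triples:
            return mapping
        (s, rel, o), rest = triples[0], triples[1:]
        for bs, brel, bo in base_graph:
            if brel != rel:
                continue  # the relation label must be preserved
            if mapping.get(s, bs) != bs or mapping.get(o, bo) != bo:
                continue  # clashes with bindings already made
            found = find_analogy(rest, base_graph, {**mapping, s: bs, o: bo})
            if found is not None:
                return found
        return None

    # Rutherford's classic analogy: the atom mapped onto the solar system.
    ATOM = [("electron", "orbits", "nucleus"),
            ("nucleus", "attracts", "electron")]
    SOLAR_SYSTEM = [("planet", "orbits", "sun"),
                    ("sun", "attracts", "planet")]
    print(find_analogy(ATOM, SOLAR_SYSTEM))
    # -> {'electron': 'planet', 'nucleus': 'sun'}

Note that the matcher happily accepts whatever triples it is fed: a poisoned base graph yields
confidently wrong analogies, which is exactly the gaming/misinformation worry above.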

--
glen ep ropella ⊥ 971-280-5699

