Something like what's discussed in the nytimes article *must* obtain for 
computers to ever be as embedded as the human brain.  There's an analogy 
that helps explain why RussA's reified-ideas argument is (slightly) flawed 
yet satisfices for a seemingly large number of tasks: CPU ⇔ thoughts.  As 
the nytimes article points out, centralizing the computer's "thoughts" in 
the CPU has taken us really far, as has (perhaps) centralization-friendly 
philosophy like we got from Plato.  But CPUs and the thoughts of 
philosophers have *never* really been disembodied.  RussA's idea (contra 
Hoffman, I think) that the correlation between the world and thoughts is 
strong enough to imply that we can share/communicate ideas relies on the 
hidden assumption that the communicating processes have the same embedding 
(eyeballs, fingers, ears, etc. for brains; disks, GPUs, RAM, etc. for 
CPUs).

The shared embedding is the source of the shared semantics ... It is the 
reason we (are tricked into thinking we can) share ideas.  This is also 
true for computational infrastructure like ANNs or GAs trained on 
particular data or in a particular context.  Making sense of the final 
configuration, the one that seems to handle the I/O relation the way it 
"should", consists largely of studying how that configuration is embedded.  
The meaning comes from the interaction with what's out there, not from some 
decoupled internal structure.
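To make that concrete, here's a toy sketch (numpy only; the setup and the 
names are mine, not anyone's actual model).  Two nets whose weight matrices 
look nothing alike compute the *same* I/O relation once you account for how 
the inputs are embedded.  Staring at the weights in isolation tells you 
nothing; you have to know which feature lands in which slot:

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.standard_normal((8, 4))   # layer 1: 4 inputs -> 8 hidden
    W2 = rng.standard_normal((1, 8))   # layer 2: 8 hidden -> 1 output

    def net(x, W1, W2):
        return W2 @ np.tanh(W1 @ x)

    # Re-embed the inputs: permute the feature order.
    P = np.eye(4)[rng.permutation(4)]

    x = rng.standard_normal(4)
    y1 = net(x, W1, W2)                # original embedding
    y2 = net(P @ x, W1 @ P.T, W2)      # permuted embedding, "rewired" weights

    print(np.allclose(y1, y2))        # True: different internals, same function

The decoupled internal structure (the numbers in W1) changed completely, 
but the I/O relation didn't, because the change was absorbed entirely by 
the embedding.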

I think this is at least part of why QM is appealing to philosophers and 
vice versa: (e.g.) entanglement is a (very particular) type of 
environmental coupling.  What information is closed under which 
operations?  And what information is sensitive to couplings under which 
operations?
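A minimal sketch of that last question (again just numpy, and again my own 
toy example): for a Bell pair, everything you can measure locally on one 
half is indistinguishable from pure noise, while the joint state is 
perfectly correlated.  The information lives in the coupling, not in 
either part:

    import numpy as np

    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
    rho = np.outer(bell, bell.conj())            # joint density matrix, 4x4

    # Partial trace over the second qubit: all a local observer can access.
    rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
    print(rho_A)   # 0.5 * identity: maximally mixed, no local information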


On 09/19/2017 12:00 PM, Marcus Daniels wrote:
> [mixing threads]
> 
> 
> Mermin’s “Shut up and calculate” view, which to me seems like agreeing to 
> be blind because there is Braille.
> 
> This to me has the same feel as agreeing that `real’ is whatever “a 
> community of inquiry” says.  How can one generate hypotheses in a 
> productive way without any intuition or metaphysical foundation?  Why 
> would anyone want to?  It seems to me doing theory this way is something a 
> computer might as well do.  I _believe_ something because I can manipulate 
> it, visualize it, and anticipate a certain kind of result, not because it 
> is written in a textbook or because a prediction pops out of a 
> supercomputer.  That formality is added value to the intuition, not a 
> substitute for it.
> 
> 
> Suppose (and it is not just hypothetical) that a machine learning 
> algorithm could suggest how to design a battery with maximum capacity, 
> develop recipes that extend life, find computationally efficient solutions 
> to the evolution of quantum systems, or answer any number of hard 
> scientific questions or solve any number of relevant engineering 
> problems.  Suppose it was completely mysterious to humans (at first) how 
> it worked, but it worked perfectly.  The systems never failed and the 
> predictions were always spot-on.  Has something `real’ been found?  The 
> “Shut up and calculate” approach seems to say yes.  Why should I prefer to 
> read papers or textbooks describing human experiences?  Instead, perhaps 
> find ways to unpack and rationalize the machine representations (e.g. 
> neural nets, rule-based systems, whatever).
> 
> 
> Marcus
> 
> ------------------------------------------------------------------------
> *From:* Friam <friam-boun...@redfish.com> on behalf of Alfredo Covaleda Vélez 
> <alfr...@covaleda.co>
> *Sent:* Monday, September 18, 2017 8:09:01 PM
> *To:* The Friday Morning Applied Complexity Coffee Group
> *Subject:* [FRIAM] Maybe a new hardware approach to deal with AI developments
>  
> It is probably the most interesting tech article that I have read in weeks.
> 
> https://mobile.nytimes.com/2017/09/16/technology/chips-off-the-old-block-computers-are-taking-design-cues-from-human-brains.html?emc=edit_th_20170917&nl=todaysheadlines&nlid=58593627&referer=


-- 
☣ gⅼеɳ
============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
