On Thursday, April 18, 2019 at 4:33:48 AM UTC-5, Bruno Marchal wrote:
>
>
> On 18 Apr 2019, at 09:11, Philip Thrift <cloud...@gmail.com> wrote:
>
>
>
> On Wednesday, April 17, 2019 at 8:29:25 PM UTC-5, Russell Standish wrote:
>>
>> On Wed, Apr 17, 2019 at 06:22:35PM -0700, 'Brent Meeker' via Everything 
>> List wrote: 
>> > 
>> > But how complete must the self-model be? 
>>
>> That is the 64 million dollar question. 
>>
>> > As Bruno has pointed out, it can't be complete.  Current Mars Rovers 
>> > have some "housekeeping" self-knowledge, like battery charge, 
>> > temperature, power draw, next task, location, time,... 
>>
>> I don't think that's enough. I think it must have the ability to 
>> recognise other (perhaps similar) robots/machines as being like 
>> itself. 
>>
>> > Of course current rovers don't have AI, which would entail them 
>> > learning and planning, which would require that they be able to run a 
>> > simulation which included some representation of themselves; but that 
>> > representation might be very simple.  When you plan to travel to the 
>> > next city your plan includes a representation of yourself, but 
>> > probably only as a location. 
>> > 
>>
>> Hod Lipson's starfish robot's representation of itself is no doubt 
>> rather simple and crude, but it does raise the question of whether it 
>> might have some sort of consciousness. 
>>
>>
>> -- 
>>
>> ---------------------------------------------------------------------------- 
>>
>> Dr Russell Standish                    Phone 0425 253119 (mobile) 
>> Principal, High Performance Coders 
>> Visiting Senior Research Fellow        hpc...@hpcoders.com.au 
>> Economics, Kingston University         http://www.hpcoders.com.au 
>> ---------------------------------------------------------------------------- 
>>
>>
>
> "self reference" has been long been a subject of AI, programming language 
> theory (program reflection), theorem provers (higher-order logic).
>
> I haven't yet seen what Hod Lipson has done
>
> *Columbia engineers create a robot that can imagine itself*
> January 30, 2019 / Columbia Engineering
> https://engineering.columbia.edu/press-releases/lipson-self-aware-machines
>
>
> but here is an interview with another researcher:
>
>
> *The Unavoidable Problem of Self-Improvement in AI*: An Interview with 
> Ramana Kumar, Part 1
> March 19, 2019, by Jolene Creighton
>
> https://futureoflife.org/2019/03/19/the-unavoidable-problem-of-self-improvement-in-ai-an-interview-with-ramana-kumar-part-1/
>
> *The Problem of Self-Referential Reasoning in Self-Improving AI*: An 
> Interview with Ramana Kumar, Part 2
> March 21, 2019, by Jolene Creighton
>
> https://futureoflife.org/2019/03/21/the-problem-of-self-referential-reasoning-in-self-improving-ai-an-interview-with-ramana-kumar-part-2/
>
>
> To break this down a little, in essence, theorem provers are computer 
> programs that assist with the development of mathematical correctness 
> proofs. These mathematical correctness proofs are the highest safety 
> standard in the field, showing that a computer system always produces the 
> correct output (or response) for any given input. Theorem provers create 
> such proofs by using the formal methods of mathematics to prove or disprove 
> the “correctness” of the control algorithms underlying a system. HOL 
> theorem provers, in particular, are a family of interactive theorem proving 
> systems that facilitate the construction of theories in higher-order logic. 
> Higher-order logic, which supports quantification over functions, sets, 
> sets of sets, and more, is more expressive than other logics, allowing the 
> user to write formal statements at a high level of abstraction.
>
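
(An aside from me, to make "quantification over functions" concrete: in a 
higher-order system you can state the induction principle itself, with the 
predicate P as a bound variable, which first-order logic cannot do 
directly. A minimal Lean sketch of my own, not taken from Kumar's HOL 
work:

    -- Higher-order quantification: P ranges over predicates on Nat.
    -- Any property that holds of 0 and is preserved by successor
    -- holds of every natural number.
    theorem induction' (P : Nat → Prop) (h0 : P 0)
        (hs : ∀ n, P n → P (n + 1)) : ∀ n, P n := by
      intro n
      induction n with
      | zero => exact h0
      | succ k ih => exact hs k ih

Statements like this, which quantify over P itself, are what HOL-family 
provers manipulate natively.)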
> In retrospect, Kumar states that trying to prove a theorem about multiple 
> steps of self-reflection in a HOL theorem prover was a massive undertaking. 
> Nonetheless, he asserts that the team took several strides forward when it 
> comes to grappling with the self-referential problem, noting that they 
> built “a lot of the requisite infrastructure and got a better sense of what 
> it would take to prove it and what it would take to build a prototype agent 
> based on model polymorphism.”
>
> Kumar added that MIRI’s (the Machine Intelligence Research Institute’s) 
> Logical Inductors could also offer a satisfying version of formal 
> self-referential reasoning and, consequently, provide a solution to the 
> self-referential problem.
>
>
> Proving makes sense only in a theory. How could we know that the theory is 
> correct? That is precisely what Gödel and Tarski showed to be impossible.
>
> Bruno
>
>
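For reference, the impossibility Bruno points to has precise statements. 
Gödel's second incompleteness theorem: a consistent, recursively 
axiomatized theory T containing enough arithmetic cannot prove its own 
consistency, i.e. T ⊬ Con(T). Tarski's undefinability theorem: there is 
no arithmetical formula True(x) such that T proves φ ↔ True(⌜φ⌝) for every 
sentence φ. So a theorem prover can certify correctness relative to a 
theory, but the soundness of the theory itself cannot be established from 
inside it.
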
I think Kumar is just part of the "Gödel-Löb logic hacker" gang (MIT, 
MIRI). They want working code, not "correctness".


cf. Löb's Theorem: "A functional pearl of dependently typed quining"
https://people.csail.mit.edu/jgross/personal-website/papers/2016-lob-icfp-2016-draft.pdf
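
The modal content there is compact. Löb's theorem says □(□P → P) → □P: if 
the system proves "if P is provable then P", it already proves P. Gödel's 
second incompleteness theorem is the special case P = False. A minimal 
Lean sketch of my own, just axiomatizing the provability modality (the 
paper's actual development carries out the quining construction in a 
dependently typed setting):

    -- Sketch only: Box is an abstract provability modality and
    -- loeb is Löb's theorem taken as an axiom schema.
    axiom Box : Prop → Prop
    axiom loeb {P : Prop} : Box (Box P → P) → Box P

    -- Gödel II as a corollary: Box False → False is ¬ Box False,
    -- i.e. consistency. If the system proves its own consistency,
    -- it proves False.
    theorem goedel2 : Box (Box False → False) → Box False := loeb

The "working code" angle is that loeb, read computationally, behaves like 
a fixed-point combinator on provability, which is what the quining in the 
paper's title refers to.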

- pt
