Relaxation seems like a good way to find approximate solutions to
difficult problems, and that seems like a reasonable approach to many AGI
problems. However, there is not much evidence that it has made AGI
feasible. The problem may be one of representation. When we want to make
more sweeping references to some subject matter, we figure out ways to
express it, and we refine those expressions as we become aware of how our
ideas might be misinterpreted. We also need techniques like this to think
about things. However, this implies that we have a sophisticated ability
to discern between relevant and irrelevant issues, and sophisticated ways
to integrate ideas that refer to a common subject but relate to it in
very different ways. So, to avoid the complexity of knowing things
logically, we look for methods that seem to depend on an unexplained
intuitive ability to discern and integrate.

It does not seem like we need to solve extremely difficult logical
problems, so it is a surprise that true AGI programs have not been
created. I came up with an argument supporting the theory that many
animals are able to sense that some complex logical problems are
satisfiable even though they lack the ability to quickly find solutions.
That would, in my theory, explain logical intuitiveness. I am absolutely
certain that AGI is feasible even though I have not been able to achieve
it myself: I sense that the problem is satisfiable, but I haven't been
able to figure it out, and my sense is not based on naïve imagination.
This theory would explain a lot about human and animal behavior that is
otherwise a little difficult to explain. However, I do not find the
theory completely convincing myself, so I would need stronger evidence
that such a thing was feasible before accepting it.

The only reason I think solutions to logic problems have to be findable
in polynomial time is that logical satisfiability has always seemed like
it could act as a basis for general intelligence, and the curious delay
in the development of AGI has paralleled the incremental rate of progress
one would expect if logical satisfiability is the basis for that
intelligence.



Jim Bromer


On Fri, Apr 25, 2014 at 7:40 AM, YKY (Yan King Yin, 甄景贤) 
<[email protected]> wrote:

> On Wed, Apr 23, 2014 at 5:40 PM, Anastasios Tsiolakidis 
> <[email protected]> wrote:
>
>> Well, correct me if I am wrong, but I should be looking for the simplest
>> "logics", right? As semantics etc should be handled in other parts of the
>> AGI softosphere, in my book at least.
>>
>> I was alerted earlier this month to the fact (I haven't entirely verified
>> it yet), that the GGP general game playing language GDL does not allow for
>> rules that include past states, so even the chess implementations (the ones
>> I could find at least) did not include en passant and castling (as chess
>> players know you may be prohibited from castling in move 200 just because
>> you moved your king in move 5). Now, I cannot afford any logic with such
>> blatant flaws, but prop calculus is surely enough, or ...? Still, it is
>> quite obvious that people have been building on top of and inside Prolog,
>> and I am kinda out of touch with the reasons why (!)
>>
>
>
> The logic does not impose any restriction on whether you can represent
> time or not.  But the logic does impose some "expressiveness" constraints.
>
> For example, if you want to use time as a variable, as in "castled(t_5)",
> ie, castled at time step = 5, then this is beyond the expressive power of
> propositional logic.  But you could still employ a propositional logic
> engine to reason about those problems, if encoded correctly.
>
> Propositional logic does not allow any structure *within* propositions.
>  So you have to somehow encode that information into various propositions,
> then call the inference engine.
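To make that concrete, here is a minimal sketch of the propositionalization YKY describes (the atom names and the brute-force solver are purely illustrative, not from GDL or any real SAT tool): each time-stamped ground atom like castled(t) becomes its own structureless propositional variable, and the engine only ever sees the flat variables.

```python
from itertools import product

# Hypothetical encoding: "castled(t)" for t = 0..3 becomes four
# independent propositional atoms with no internal structure.
atoms = [f"castled_{t}" for t in range(4)]

# A constraint over the flattened atoms: castling happens at exactly
# one time step.  Constraints are just predicates over an assignment
# (a dict mapping atom name -> bool).
def exactly_one(assignment):
    return sum(assignment[a] for a in atoms) == 1

# Brute-force satisfiability check: enumerates all 2**n assignments
# for n atoms, which makes the cost of the flat encoding visible.
def satisfiable(constraint):
    for values in product([False, True], repeat=len(atoms)):
        assignment = dict(zip(atoms, values))
        if constraint(assignment):
            return assignment  # a satisfying model
    return None                # unsatisfiable

model = satisfiable(exactly_one)
```

The information "these four atoms are the same predicate at different times" exists only in the encoding step; once flattened, the engine cannot exploit it.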
>
> As a general rule, the more expressive a logic, the slower the inference
> engine.  So this is a trade-off situation, unless some breakthrough happens
> in inference algorithms.
>
> For AGI, it tends to require fairly expressive logics, but then the
> encoding into propositional logic will blow up exponentially.
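A back-of-the-envelope sketch of that blow-up (the function and the numbers are purely illustrative):

```python
# Hypothetical sizing: grounding a first-order predicate into
# propositional atoms.  A predicate of arity k over a domain of n
# constants yields n**k ground atoms per time step, so n**k * T
# atoms over T time steps.
def ground_atoms(arity, domain_size, time_steps):
    return domain_size ** arity * time_steps

# Even a modest language explodes: one binary predicate, 20 constants,
# 100 time steps gives 40,000 atoms, and a worst-case SAT search space
# of 2**40000 candidate assignments.
n_atoms = ground_atoms(2, 20, 100)
```

The atom count grows polynomially, but the assignment space the solver must search grows as two to the power of that count.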
>
> There was an error in my previous post: when boolean logic is relaxed into
> interval probabilistic logic, the number of logic formulas blows up
> exponentially (at least in a naive implementation).  So apparently it does
> not resolve the P!=NP issue.  But I will think about this problem deeper...
> =)



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
