Wolfram's three-colored cellular automata allow some 7 trillion rules, each
neighborhood pattern mapping to one of three outcomes.

With 3 colors over a 3-cell neighborhood there are 3^3 = 27 neighborhood
patterns per rule, and 3 possible outcomes for each of those 27 patterns
gives 3^27, roughly 7.6 trillion rules. He has already shown math to work in
three-colored automata... well, at least that is what he wrote.
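The arithmetic above can be checked in a few lines; a minimal sketch of the rule count for a 1-D, 3-color, nearest-neighbor automaton:

```python
# Count the rules of a 1-D, 3-color, nearest-neighbor cellular automaton.
k = 3  # number of colors (cell states)
n = 3  # neighborhood size: left neighbor, cell, right neighbor

neighborhoods = k ** n      # 3^3 = 27 distinct neighborhood patterns
rules = k ** neighborhoods  # each pattern maps to one of 3 colors

print(neighborhoods)  # 27
print(rules)          # 7625597484987, i.e. about 7.6 trillion
```

This is the same counting argument Wolfram uses for the 256 elementary (2-color) rules: 2^(2^3) = 256.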

On Jun 1, 2015 10:36 PM, "Jim Bromer" <[email protected]> wrote:
>
> I believe that many of the earlier AI methods will, once they are
energized by the polynomial solution to SAT, be shown to be powerful enough
to go beyond narrow AI as long as they are not bound completely by the
traditional models of their application. So application and relevancy both
seem like they are pretty fundamental terms both for computer programming
in general and AI in particular. The question is then whether they can be
formally defined relative to each other as AGI operators without eventually
bogging the system down in some way. If an 'operator' is applicable then it
must be relevant as far as its applicability goes, and if an 'operator' is
relevant then it can be applied in some way. So it seems like they are good
terms for some basic underlying operational principles.
>
> Jim Bromer
>
> On Mon, Jun 1, 2015 at 10:27 PM, Piaget Modeler <[email protected]>
wrote:
>>
>> Applicable & Relevant.  Those are the appropriate terms in A.I. Planning.
>>
>> ~PM
>>
>> ________________________________
>> Date: Mon, 1 Jun 2015 21:29:51 -0400
>> Subject: Re: [agi] applicable : apply :: relevant : ?
>> From: [email protected]
>> To: [email protected]
>>
>>
>> That is a somewhat arbitrary definition.
>>
>> Jim Bromer
>>
>> On Mon, Jun 1, 2015 at 6:45 PM, Piaget Modeler <[email protected]>
wrote:
>>>
>>> The Backstory:
>>>
>>> The reason for the analogy is that I was coding functions to transform
a search node during state space search.
>>> An operator is applicable if the preconditions match a search node's
 state.  In which case we would apply the
>>> operator to the state to get the next state.  An operator is relevant
to a search node if the operator's effects
>>> match the goals of the search node.  Hence, depending upon whether
we're doing progressive (forward) or
>>> regressive (backward) search, we'd either call Node_apply or
Node_relate.
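The applicable/relevant distinction described above can be sketched in a few lines, assuming STRIPS-style operators with precondition, add, and delete sets; the names (`applicable`, `relevant`, `apply_op`, the `pickup` operator) are illustrative stand-ins, not the poster's actual Node_apply/Node_relate code:

```python
# Sketch of the two tests, with operators as dicts of literal sets.

def applicable(op, state):
    """Forward (progressive) search test: preconditions hold in the state."""
    return op["pre"] <= state

def apply_op(op, state):
    """Progress the state: remove the delete effects, add the add effects."""
    return (state - op["del"]) | op["add"]

def relevant(op, goals):
    """Backward (regressive) search test: some add effect achieves a goal,
    and no delete effect clobbers one."""
    return bool(op["add"] & goals) and not (op["del"] & goals)

# Hypothetical blocks-world-style pick-up operator.
pickup = {"pre": {"clear(A)", "handempty"},
          "add": {"holding(A)"},
          "del": {"clear(A)", "handempty"}}

state = {"clear(A)", "handempty", "on(A,B)"}
assert applicable(pickup, state)            # forward search would apply it
assert relevant(pickup, {"holding(A)"})     # backward search would regress it
```

In forward search you filter operators with `applicable` and advance with `apply_op`; in backward search you filter with `relevant` and regress the goal set instead.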
>>>
>>> Flash forward to today:
>>>
>>> Posed the question on Quora, Facebook and here, since I wanted a quick
response. "Relate" won.
>> Sent a complaint to Wolfram Alpha since they didn't understand similes
and I thought they should.
>>> Their staff replied that they're looking into it.
>>>
>>> That is all.
>>>
>>> ~PM
>>>
>>> ________________________________
>>> From: [email protected]
>>> Date: Tue, 2 Jun 2015 00:08:59 +0200
>>> Subject: Re: [agi] applicable : apply :: relevant : ?
>>> To: [email protected]
>>>
>>>
>>>
>>> On Mon, Jun 1, 2015 at 11:33 PM, Piaget Modeler <
[email protected]> wrote:
>>>>
>>>> Wolfram Alpha
>>>
>>>
>>>
>>> I am missing the point here; of course it could be tackled the narrow-AI
way, but we are looking for something different, right? Are you trying to
outsource your analogies? Sell an analogy API? I think "in principle" the
analogy works when we can reuse a script, for example "compressing data is
like drying food: with a bit of time and technique you can use the original
while saving space and weight during transport and storage", and it would
take a bit of general intelligence to show all the different ways in which
the analogy does not work, just like so many of the analogies that dominate
our political debates.
>>>
>>> As always, it would be easier to derive or solve analogies with some
kind of logical decomposition; it would be a pity to waste the toolkit of
"physical primitives" in TRIZ, or the tentative search for "irreducible
cognitive dimensions" at CYC or by yours truly, which is more or less the
"thought vectors" that recently appeared in some patents. I believe the
main difference between the search for primitives and the new vectors is
that the vectors are more ad hoc: there is neither the assumption nor the
intention to look for irreducible quantities, fundamental symmetries, etc.
The ambition is simply to capture as many parameters of a situation or a
concept in a vector as possible and then "reason" with familiar algebraic
tools.
>>>
>>> The discovery and application of anything that would look like
"cognitive DNA" would be the holy grail of AGI.
>>>
>>> AT
>>> AGI | Archives | Modify Your Subscription
>>
>>
>
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
