[agi] Re: Proposed AI Tests

2019-09-23 Thread James Bowery
The paper is pointless only to those without comprehension.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T76e694bfafa7b5f7-Mbfb4c73110249a78809593b5
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Proposed AI Tests

2019-09-23 Thread Stefan Reich via AGI
That paper is pointless. We should implement the test case. By the end of
tomorrow I will have an editor that solves the example.

On Tue, 24 Sep 2019 at 02:44, James Bowery  wrote:

> That's a good start, but the singular "example" needs to be extended to a
> plurality, and the "simple" needs to be made general.  Fortunately, this
> was published in the early 60s:
>
> "A Formal Theory of Inductive Inference Part I
> " and "Part II
> "


-- 
Stefan Reich
BotCompany.de // Java-based operating systems

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T76e694bfafa7b5f7-M7c9540efba361fa663ee15a1
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: Proposed AI Tests

2019-09-23 Thread James Bowery
That's a good start, but the singular "example" needs to be extended to a 
plurality, and the "simple" needs to be made general.  Fortunately, this was 
published in the early 60s:

"A Formal Theory of Inductive Inference Part I 
" and "Part II 
"
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T76e694bfafa7b5f7-Mcc055ab8a29240de155f4987
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Proposed AI Tests

2019-09-23 Thread Stefan Reich via AGI
I propose to test AI with simple examples.

For example, code editing.

User writes:

a = b
c = d
e = f

Then edits a = b into a := b.
Then edits c = d into c := d.

At this point, if you give the AI time to think, it has to propose editing
the third line into e := f.

There's your test.
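A minimal sketch of a program that would pass this test, under two assumptions of mine that go beyond the proposal (lines are whitespace-tokenized, and each example edit changes exactly one token):

```python
def infer_token_edit(pairs):
    """Infer a single-token substitution consistent with every
    (before, after) example edit; return None if no such rule exists."""
    rules = set()
    for before, after in pairs:
        b, a = before.split(), after.split()
        if len(b) != len(a):
            return None  # edits that add/remove tokens are out of scope
        changed = [(x, y) for x, y in zip(b, a) if x != y]
        if len(changed) != 1:
            return None  # exactly one token must differ per example
        rules.add(changed[0])
    return rules.pop() if len(rules) == 1 else None

# The two observed edits from the example above:
edits = [("a = b", "a := b"), ("c = d", "c := d")]
old, new = infer_token_edit(edits)
print("e = f".replace(old, new))  # -> e := f
```

A real solution would of course need to generalize far beyond one-token substitutions; this only shows the shape of the test.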

-- 
Stefan Reich
BotCompany.de // Java-based operating systems

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T76e694bfafa7b5f7-Me7154d5ea4cebee5bf0fa82b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] MindForth is the brain for an autonomous robot.

2019-09-23 Thread James Bowery
In what way is coming up with the most predictive model given a set of 
observations not the essence of "is" as opposed to "ought"?
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8d109c89dd30f9b5-M809a27d4193d355b7a36f425
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] MindForth is the brain for an autonomous robot.

2019-09-23 Thread Stefan Reich via AGI
> Compression is the "science" of AGI

Or maybe not

On Tue, 24 Sep 2019 at 01:52, James Bowery  wrote:

> Compression is the "science" of AGI and the value function parameterizing
> sequential decision theory is its "engineering".  So, yes, I did need to
> explicate the value function of this test, which is "minimize the size of
> the executable archive of *this* data set (as opposed to one of your own
> choosing based on your own value function, my dear little AGI, who I hope
> will not turn the universe into one huge quantum computer in order to
> compute the incomputable Kolmogorov Complexity of *this* data set)".


-- 
Stefan Reich
BotCompany.de // Java-based operating systems

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8d109c89dd30f9b5-M88d72c139c1f4ec4862adac3
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] MindForth is the brain for an autonomous robot.

2019-09-23 Thread James Bowery
Compression is the "science" of AGI and the value function parameterizing 
sequential decision theory is its "engineering".  So, yes, I did need to 
explicate the value function of this test, which is "minimize the size of the 
executable archive of *this* data set (as opposed to one of your own choosing 
based on your own value function, my dear little AGI, who I hope will not turn 
the universe into one huge quantum computer in order to compute the 
incomputable Kolmogorov Complexity of *this* data set)".
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8d109c89dd30f9b5-M37faf30652a6c5d69b6aca2f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] The future of AGI

2019-09-23 Thread James Bowery
I didn't read your argument prior to posting, so to that extent it was 
unintentional. I'm responding more to the generality of language in 
epistemology, thence to the future of AGI. As I stated in my Quora answer, it 
is VERY strange that Google adopted *perplexity* as the model selection 
criterion for the Billion Word Benchmark, for the reasons mentioned therein.

The future of AGI seems increasingly dependent on Google DeepMind, so Google's 
missteps here bear mention.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tbc419b4c00dd690d-M4aec2e29548470e2aa302b5a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] MindForth is the brain for an autonomous robot.

2019-09-23 Thread Stefan Reich via AGI
OK so what they're looking for is a "good pattern finder" algorithm, one
that is small enough that it carries its own weight in this benchmark.
Yeah, it's nice and all, compression is a great thing. But maybe the
algorithms grown here don't really translate to other AI problems all that
well. Image recognition really is a different task than image compression.

On Tue, 24 Sep 2019 at 00:17,  wrote:

> Compression has to do with the neural model, representations.


-- 
Stefan Reich
BotCompany.de // Java-based operating systems

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8d109c89dd30f9b5-Mfcf98a48196f1e1c43d1706d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] MindForth is the brain for an autonomous robot.

2019-09-23 Thread immortal . discoveries
Compression has to do with the neural model, representations.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8d109c89dd30f9b5-M33fc1d9d2b3ca223682d9ea3
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] MindForth is the brain for an autonomous robot.

2019-09-23 Thread immortal . discoveries
Haha Stefan, I was thinking that too, my bro.

I agree compression = intelligence. But even with maximal compression, it 
won't be AGI. For example, AGI has a reward system, and passing this feat 
won't involve rewards.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8d109c89dd30f9b5-M4890f467ba5467756bfbe05c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] MindForth is the brain for an autonomous robot.

2019-09-23 Thread Stefan Reich via AGI
I don't quite understand that benchmark. When I have a compressor for text,
how would that give me any kind of AI function? Like a machine that answers
questions, recognizes things visually or what have you? Is this related to
AI at all?

On Mon, 23 Sep 2019 at 23:54, James Bowery  wrote:

> All anyone has to do to prove they've "solved AI" is best The Large Text
> Compression Benchmark .  As a
> fan of Chuck Moore, I eagerly await A. T. Murray's submission to that
> contest (but I'm not holding my breath).


-- 
Stefan Reich
BotCompany.de // Java-based operating systems

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8d109c89dd30f9b5-Mb515eaf9482aa39423f486c9
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] MindForth is the brain for an autonomous robot.

2019-09-23 Thread James Bowery
All anyone has to do to prove they've "solved AI" is best The Large Text 
Compression Benchmark .  As a fan of 
Chuck Moore, I eagerly await A. T. Murray's submission to that contest (but I'm 
not holding my breath).
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8d109c89dd30f9b5-M1ce34568d659f2a7e96aeea8
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] The future of AGI

2019-09-23 Thread Rob Freeman
On Tue, Sep 24, 2019 at 9:34 AM James Bowery  wrote:

> The use of perplexity as model selection criterion seems misguided to me.
> See my Quora answer to the question "What is the relationship between
> perplexity and Kolmogorov complexity?
> "
> I'd say "antiquated" rather than "misguided" but AFAIK Solomonoff's papers
> on the use of Kolmogorov Complexity as universal model selection criterion
> predated perplexity's use as language model selection criterion.
>

Are you addressing my argument here James?

Are you saying Pissanetzky selects models according to perplexity?

-Rob

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tbc419b4c00dd690d-M6835333a43779b6fa9229f26
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] The future of AGI

2019-09-23 Thread James Bowery
The use of perplexity as model selection criterion seems misguided to me.  See 
my Quora answer to the question "What is the relationship between perplexity 
and Kolmogorov complexity? 
"
I'd say "antiquated" rather than "misguided", but AFAIK Solomonoff's papers on 
the use of Kolmogorov Complexity as universal model selection criterion 
predated perplexity's use as language model selection criterion.
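The relationship can be made concrete: under ideal arithmetic coding, a model with per-token perplexity P needs N * log2(P) bits for an N-token corpus, so on a fixed test set perplexity and code length rank models identically. The Kolmogorov/MDL-style criterion differs by also charging for the model's own description. A sketch with made-up numbers (the perplexities and model sizes below are illustrative, not from the benchmark):

```python
import math

def total_code_length_bits(perplexity: float, n_tokens: int) -> float:
    """Bits to encode n_tokens under a model with this per-token
    perplexity (ideal arithmetic coding)."""
    return n_tokens * math.log2(perplexity)

def mdl_score_bits(perplexity: float, n_tokens: int, model_bits: float) -> float:
    """Two-part MDL / compression-style criterion: the model must also
    pay for its own description length."""
    return model_bits + total_code_length_bits(perplexity, n_tokens)

# Two hypothetical models on a billion-token benchmark: a huge one with
# lower perplexity, and a tiny one with slightly higher perplexity.
big   = mdl_score_bits(perplexity=24.0, n_tokens=10**9, model_bits=8 * 10**10 // 8 * 8)
small = mdl_score_bits(perplexity=28.0, n_tokens=10**9, model_bits=8 * 10**7)
print(big > small)  # the lower-perplexity model loses once it pays for itself
```

This is the sense in which perplexity alone can reward arbitrarily large models that a compression criterion would penalize.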


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tbc419b4c00dd690d-M2db242ad0bb4621f933dfc3c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Simulation

2019-09-23 Thread korrelan
From the reference/perspective point of a single intelligence/brain there are 
no other brains; we are each a closed system, and a different version of you 
exists in every other brain.

We don't receive any information from other brains; we receive patterns that 
our own brain interprets based solely on our own learning and experience. 
There is no actual information encoded in any type of language or 
communication protocol; without the interpretation/intelligence of the 
receiver, the data stream is meaningless.


:)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Taa86c5612b8739b7-M376259a89e4e444d954a3076
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Simulation

2019-09-23 Thread John Rose
On Sunday, September 22, 2019, at 6:48 PM, rouncer81 wrote:
> actually no!  it is the power of time.    doing it over time steps is an 
> exponent worse.

Are you thinking along the lines of Konrad Zuse's Rechnender Raum?  I just had 
to go read some again after you mentioned this :)

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Taa86c5612b8739b7-Md0f09f577b2797183834e1cf
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Simulation

2019-09-23 Thread John Rose
On Sunday, September 22, 2019, at 8:42 AM, korrelan wrote:
> Our consciousness is like… just the surface froth, reading
between the lines, or the summation of interacting logical pattern recognition
processes.



That's a very good, clear single-brain description of it. Thanks for that.

I don't think a complete understanding of consciousness is possible from a 
single brain. Picture this: the rise of general intelligence in the human 
species, a collection of brains spread over time and space, communicating. 
Each brain is a node in a graph. The consciousness piece is a component in 
each brain's transceiver, transmitting on graph edges to other brains and to 
other non-brain, environment-related structure. In this model there is 
naturally much superfluous material that can be eliminated relative to a 
single-brain model, since a single brain has to survive independently in the 
real environment. And the graph model can, I believe, be telescoped down into 
a single structure.

To be more concise: consciousness can be viewed as a functional component of 
the brain's transceiver. That's essentially the main crux of my perspective. 
Could it be wrong? Oh ya, totally... But that functionality, whether or not it 
is consciousness itself, is still integral to general intelligence IMO. And 
there are other related reasons...

It would be interesting to analyze single-brain consciousness connectome 
structure based on the multi-brain intelligence model: why things happen as 
they do in the electrochemical patterns firing up the single-brain model, and 
to get it transceiving with other emulations.

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Taa86c5612b8739b7-M884a62d1fa8b831989377578
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Simulation

2019-09-23 Thread rouncer81
Yeah, that's what I'm talking about, thanks: all permutations of space and 
time, and I'm getting a bit confused...
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Taa86c5612b8739b7-M0de418647e9834420ee6c80f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Simulation

2019-09-23 Thread immortal . discoveries
The butterfly effect is like a tree: there are more and more paths as time 
goes on. In that sense, time is the number of steps; time powers the search 
space. Time *is* the search space/tree. So even a static tree is really time, 
frozen; unlike real life, you can go forward/backward wherever you want. Not 
sure time powers outer space though; outer space is outer space.
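One way to make the "time powers the search space" intuition concrete (my reading, not the poster's formalism): in a tree with branching factor b, the number of distinct paths after t time steps is b**t, so each extra step multiplies the space.

```python
# Distinct root-to-node paths after t steps with branching factor b.
b = 2
paths = [b ** t for t in range(6)]
print(paths)  # -> [1, 2, 4, 8, 16, 32]: each step multiplies the space by b
```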
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Taa86c5612b8739b7-M1ed245101f58b767071b6a0f
Delivery options: https://agi.topicbox.com/groups/agi/subscription