Re: [agi] scholar references in Russian on AGI & symbolic AI systems

2019-07-19 Thread Costi Dumitrescu

Predictions can't be wrong.

The part of inference (if any) that involves statistics, probability
distributions, and likelihood is intuition only - the whole is known and
split into a distributed population (the probability model). Predictions
are real, but they are cast from an ordered triple, an "I" sequence of
processes, in which intuition is only the first. The yield is 100%
certainty.

With the probabilistic approach, maybe the spaces should be the Turk's
boxes when YOLO (https://pjreddie.com/media/files/papers/yolo_1.pdf)
applies, and the punctuation should stay as noise.

The CV problem of detecting whether there is a transparent screen
between a single camera and the image cannot be solved with inferred
output. It is solved using binary inferred unsupervised learning and
matched output.
In other words, once the confusion between intuition and prediction is
removed (and with it the confusion between a binary population and a
binary distribution), intuitive (unsupervised) learning from a large
number of random samples in a binary stream should be able to predict
(tell) where a word starts and ends, or the word size:
https://reverseengineering.stackexchange.com/questions/18451/what-are-the-popular-machine-learning-unsupervised-approaches-to-learning-binary

In the text example, only the spaces should be removed, not the
punctuation. Punctuation is part of the language:
https://www.constant-content.com/content-writing-service/2016/05/4-key-differences-between-american-and-british-punctuation/

On 19.07.2019 06:55, Matt Mahoney wrote:



On Thu, Jul 18, 2019, 9:40 PM Costi Dumitrescu <costi.dumitre...@gmx.com> wrote:

Write input text - remove spaces in the input text - compress - send -
decompress - AI - output text including spaces.


In 2000 I found that you could find most of the word boundaries in
text without spaces simply by finding the high entropy boundaries
using n-gram statistics.

https://cs.fit.edu/~mmahoney/dissertation/lex1.html


So, yes you could do this and encode just the locations where the
model makes errors.

But I was more interested in testing language models that simulate
language learning in children. In particular, babies can identify word
boundaries in speech at 7-10 months, which is before they learn any
words. Children also learn semantics before grammar, which is the
reverse of rule based language models.

I wanted to show that language is structured in a way that makes it
possible to learn it completely unsupervised. Using a deep
neural network, the layers are trained one at a time in the order that
children learn. And now we have neural language models that
compress to one bit per character, within the uncertainty bounds of
Shannon's 1950 estimate of written English according to human
prediction tests.






--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc1fd5fc7fae0a6a9-M9a3583dd5ebe0420b7064b80
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] scholar references in Russian on AGI & symbolic AI systems

2019-07-18 Thread Matt Mahoney
On Thu, Jul 18, 2019, 9:40 PM Costi Dumitrescu 
wrote:

> Write input text - remove spaces in the input text - compress - send -
> decompress - AI - output text including spaces.
>

In 2000 I found that you could find most of the word boundaries in text
without spaces simply by finding the high entropy boundaries using n-gram
statistics.

https://cs.fit.edu/~mmahoney/dissertation/lex1.html

So, yes you could do this and encode just the locations where the model
makes errors.
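
A rough sketch of the idea (this is not the lex1 code; the trigram
context, the toy corpus, and the 1.0-bit display threshold are arbitrary
choices for illustration):

    # Toy illustration: guess word boundaries in space-free text by looking
    # for positions where the n-gram model is uncertain about the next
    # character (high conditional entropy).
    import math
    from collections import Counter, defaultdict

    def train(corpus, n=3):
        text = corpus.replace(" ", "")
        counts = defaultdict(Counter)
        for i in range(len(text) - n + 1):
            counts[text[i:i+n-1]][text[i+n-1]] += 1
        return counts

    def entropy(counter):
        total = sum(counter.values())
        if total == 0:
            return 0.0
        return -sum(c/total * math.log2(c/total) for c in counter.values())

    counts = train("the cat sat on the mat the cat ate the rat " * 50)
    test = "thecatsatonthemat"
    for i in range(2, len(test)):
        h = entropy(counts[test[i-2:i]])   # uncertainty after the 2-char context
        mark = "|" if h > 1.0 else " "     # arbitrary display threshold
        print(f"{mark} {test[i]}  H={h:.2f}")

On real text the entropy spikes tend to line up with word starts, and the
positions where the guess is wrong could then be encoded separately, as
described above.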

But I was more interested in testing language models that simulate language
learning in children. In particular, babies can identify word boundaries in
speech at 7-10 months, which is before they learn any words. Children also
learn semantics before grammar, which is the reverse of rule based language
models.

I wanted to show that language is structured in a way that makes it
possible to learn it completely unsupervised. Using a deep neural network,
the layers are trained one at a time in the order that children learn. And
now we have neural language models that compress to one bit per
character, within the uncertainty bounds of Shannon's 1950 estimate of
written English according to human prediction tests.
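
For reference, "bits per character" is just compressed size in bits divided
by the number of characters. A quick way to measure any compressor against
Shannon's roughly 1 bpc estimate (zlib here is only a stand-in and will land
well above 1 bpc; the neural models referred to above get much closer):

    # Measure a compressor in bits per character (bpc) on a text sample.
    import zlib

    def bits_per_character(text: str) -> float:
        compressed = zlib.compress(text.encode("utf-8"), 9)
        return 8 * len(compressed) / len(text)

    # In practice you would read a large file such as enwik8 here; a short
    # repeated sample only gives a rough number.
    sample = "The quick brown fox jumps over the lazy dog. " * 2000
    print(f"{bits_per_character(sample):.2f} bpc")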

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc1fd5fc7fae0a6a9-Mb9b990dee4c827dce3deba0e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] scholar references in Russian on AGI & symbolic AI systems

2019-07-18 Thread Costi Dumitrescu

Write input text - remove spaces in the input text - compress - send -
decompress - AI - output text including spaces.
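
A minimal sketch of that round trip (the space-restoring "AI" step is the
hard part; here it is faked with a greedy lookup against a known word list,
just to make the pipeline concrete):

    # Remove spaces -> compress -> send -> decompress -> restore spaces.
    # Real space restoration needs a language model; the greedy dictionary
    # lookup below is only a stand-in so the example runs end to end.
    import zlib

    WORDS = {"write", "input", "text", "remove", "spaces", "in", "the"}

    def restore_spaces(s, words=WORDS, maxlen=10):
        out, i = [], 0
        while i < len(s):
            for k in range(min(maxlen, len(s) - i), 0, -1):  # longest match first
                if s[i:i+k] in words:
                    out.append(s[i:i+k])
                    i += k
                    break
            else:                      # no word matched: emit one character
                out.append(s[i])
                i += 1
        return " ".join(out)

    message = "write input text remove spaces in the input text"
    packet = zlib.compress(message.replace(" ", "").encode())   # sender side
    received = zlib.decompress(packet).decode()                 # receiver side
    print(restore_spaces(received))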



On 19.07.2019 01:23, Matt Mahoney wrote:

I agree we need less philosophy and speculation on which approaches to
AGI should work, and more experiments to back up untested ideas.
Obviously I haven't solved AGI, but you can find my work, mostly in
data compression, at http://mattmahoney.net/dc/

My main result that is relevant to AGI is the large text benchmark.
Compressing text is equivalent to predicting text, which is equivalent
to passing the Turing test. In my tests of thousands of versions of
200 programs, the best results are obtained by programs that model the
lexical, semantic, and grammatical categories of language using neural
networks with fixed low level features and trainable higher level
features using algorithms I implemented in the PAQ series of compressors.

Across all algorithms, prediction accuracy (measured by compression),
and thus intelligence, increases with the log of CPU speed and log of
memory. (I suspect log of software complexity too but the trend is not
clear). This suggests the reason for the pattern of AI and AGI
failures. Initial promising results tend to lead to underestimating
the difficulty of the problem. Most people, when asked, cannot say how
much their design will cost in lines of code or operations per second
or dollars, so they simply guess with no justification something they
can afford.

You will also find links to my papers and open source software. You
might find my paper on the cost of AI interesting, if not controversial.



On Wed, Jul 17, 2019, 12:36 AM Basile Starynkevitch <bas...@starynkevitch.net> wrote:

To the AGI list

On 7/16/19 11:11 PM, WriterOfMinds wrote:

I don't have an elitist preference for formal academic work vs.
hobbyist work (mine is definitely the latter), but I still have
to agree that there is a lot of noise in the mailing list.


Mine is semi-hobbyist work, e.g. because the retirement-hobby
successor of my current at-office-work Bismon system (which
/apparently/ is not related at all to AGI, but in my mind is on
purpose designed to become /later/ a possible foundational work for a
future AGI embryonic system similar in spirit to CAIA) could, in five
years, be presented here as a tiny basis for AGI. But *hobbyist work
on AGI requires some experimental work which can be looked at by
peers* -other people with AGI interests- and IMNSHO such experiments
mean open source or free software [sub-]systems related to AGI.

I see few such academic free software sub-systems (somehow
indirectly related to AGI) even mentioned on this mailing
list.


I would appreciate more sharing and discussion of results, less
pointless speculation and arguing about whose theory is best.
None of us really know how to build AGI, so it ends up being the
blind trying to lead the blind.


I fully agree with that. I don't claim to know how to build AGI. I
do have some beliefs (mostly shared with J. Pitrat's), see
below (where I just repeat what I wrote before).


In my initial email asking about scholar references in Russian, I
wrote not only


However, I am (aged 60 and) more and more interested in
reading Russian academic papers -in particular experimental ones-
on the following topics (see Pitrat's blog
):

  * symbolic AGI
  * common sense reasoning
  * metaknowledge based AI systems
  * reflective and introspective AI systems.


but also (initially in smaller fonts):


 I am interested in experimental free software AGI systems, not
in "pseudo-theoretical bullshit" or "I need a million US$ to make
AGI" kind of messages. I would like reliable automatic free
software that filters out such useless and annoying messages. *I
strongly believe that AGI is as difficult to achieve as e.g. a
human expedition to Mars, or a controlled nuclear fusion reactor
(à la ITER).*



--
Basile STARYNKEVITCH   ==http://starynkevitch.net/Basile
opinions are mine only - les opinions sont seulement miennes
Bourg La Reine, France; 

(mobile phone: cf my web page / voir ma page web...)


Re: [agi] scholar references in Russian on AGI & symbolic AI systems

2019-07-18 Thread Matt Mahoney
I agree we need less philosophy and speculation on which approaches to AGI
should work, and more experiments to back up untested ideas. Obviously I
haven't solved AGI, but you can find my work, mostly in data compression,
at http://mattmahoney.net/dc/

My main result that is relevant to AGI is the large text benchmark.
Compressing text is equivalent to predicting text, which is equivalent to
passing the Turing test. In my tests of thousands of versions of 200
programs, the best results are obtained by programs that model the lexical,
semantic, and grammatical categories of language using neural networks with
fixed low level features and trainable higher level features using
algorithms I implemented in the PAQ series of compressors.
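
The equivalence rests on a single identity: an ideal arithmetic coder turns
a model's predictions directly into code length, so better prediction is
better compression. As a standard statement (nothing specific to PAQ):

    % Code length assigned by an ideal arithmetic coder to x_1..x_N under a
    % predictive model P (up to about 2 bits of overhead):
    L(x) = -\sum_{i=1}^{N} \log_2 P(x_i \mid x_1, \dots, x_{i-1}) \text{ bits}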

Across all algorithms, prediction accuracy (measured by compression), and
thus intelligence, increases with the log of CPU speed and log of memory.
(I suspect log of software complexity too but the trend is not clear). This
suggests the reason for the pattern of AI and AGI failures. Initial
promising results tend to lead to underestimating the difficulty of the
problem. Most people, when asked, cannot say how much their design will
cost in lines of code or operations per second or dollars, so they simply
guess with no justification something they can afford.
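
Stated as a formula, the observed trend is roughly linear in the logs; a, b
and c below are hypothetical fit coefficients, not numbers from the benchmark:

    % Compressed size (bits per character) versus resources, per the trend
    % described above, with fit constants a and b, c > 0:
    \text{bpc} \approx a - b \log(\text{CPU speed}) - c \log(\text{memory})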

You will also find links to my papers and open source software. You might
find my paper on the cost of AI interesting, if not controversial.



On Wed, Jul 17, 2019, 12:36 AM Basile Starynkevitch <
bas...@starynkevitch.net> wrote:

> To the AGI list
> On 7/16/19 11:11 PM, WriterOfMinds wrote:
>
> I don't have an elitist preference for formal academic work vs. hobbyist
> work (mine is definitely the latter), but I still have to agree that there
> is a lot of noise in the mailing list.
>
> Mine is semi-hobbyist work, e.g. because the retirement-hobby successor of
> my current at-office-work Bismon system (which *apparently* is not related
> at all to AGI, but in my mind is on purpose designed to become *later* a
> possible foundational work for a future AGI embryonic system similar in
> spirit to CAIA) could, in five years, be presented here as a tiny basis for
> AGI. But *hobbyist work on AGI requires some experimental work which can be
> looked at by peers* -other people with AGI interests- and IMNSHO such
> experiments mean open source or free software [sub-]systems related to AGI.
>
> I see few such academic free software sub-systems (somehow
> indirectly related to AGI) even mentioned on this mailing list.
>
> I would appreciate more sharing and discussion of results, less pointless
> speculation and arguing about whose theory is best. None of us really know
> how to build AGI, so it ends up being the blind trying to lead the blind.
>
> I fully agree with that. I don't claim to know how to build AGI. I do have
> some beliefs (mostly shared with J. Pitrat's), see below
> (where I just repeat what I wrote before).
>
>
> In my initial email asking about scholar references in Russian, I wrote
> not only
>
> However, I am (aged 60 and) more and more interested in reading
> Russian academic papers -in particular experimental ones- on the following
> topics (see Pitrat's blog
> ):
>
>- symbolic AGI
>- common sense reasoning
>- metaknowledge based AI systems
>- reflective and introspective AI systems.
>
> but also (initially in smaller fonts):
>
>  I am interested in experimental free software AGI systems, not in
> "pseudo-theoretical bullshit" or "I need a million US$ to make AGI" kind of
> messages. I would like reliable automatic free software that filters out
> such useless and annoying messages. *I strongly believe that AGI is as
> difficult to achieve as e.g. a human expedition to Mars, or a controlled
> nuclear fusion reactor (à la ITER).*
>
>
> --
> Basile STARYNKEVITCH   == http://starynkevitch.net/Basile
> opinions are mine only - les opinions sont seulement miennes
> Bourg La Reine, France;  
> (mobile phone: cf my web page / voir ma page web...)
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc1fd5fc7fae0a6a9-M28a62921a08f19e19d535c16
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] scholar references in Russian on AGI & symbolic AI systems

2019-07-16 Thread Matt Mahoney
I don't speak Russian, but I have been following data compression research
(which is a machine learning/AI problem) on encode.ru (in English). Most of
the leading researchers in this field in the 1990s were based in Russia, and
many still are, but I'm not aware of newer work published in Russian.

On Tue, Jul 16, 2019, 5:24 PM Mike Archbold  wrote:

> Actually, I just use a "filter" command in gmail, and the AGI list
> posts go in a folder, so I don't see a thing in the "important" stack.
> Then I browse quickly or not at all on some topics. For a while, I had
> one poster singled out for the trash bin automatically!
>
> So there are client based workarounds.
>
> Above, I should clarify that organization and discipline are indeed
> crucial.
>
> On 7/16/19, WriterOfMinds  wrote:
> > I don't have an elitist preference for formal academic work vs. hobbyist
> > work (mine is definitely the latter), but I still have to agree that
> there
> > is a lot of noise in the mailing list.  I would appreciate more sharing
> and
> > discussion of results, less pointless speculation and arguing about whose
> > theory is best. None of us really know how to build AGI, so it ends up
> being
> > the blind trying to lead the blind.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc1fd5fc7fae0a6a9-M3f6dbbecf1d46bd740d40cc4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] scholar references in Russian on AGI & symbolic AI systems

2019-07-16 Thread Mike Archbold
Actually, I just use a "filter" command in gmail, and the AGI list
posts go in a folder, so I don't see a thing in the "important" stack.
Then I browse quickly or not at all on some topics. For a while, I had
one poster singled out for the trash bin automatically!

So there are client based workarounds.

Above, I should clarify that organization and discipline are indeed crucial.

On 7/16/19, WriterOfMinds  wrote:
> I don't have an elitist preference for formal academic work vs. hobbyist
> work (mine is definitely the latter), but I still have to agree that there
> is a lot of noise in the mailing list.  I would appreciate more sharing and
> discussion of results, less pointless speculation and arguing about whose
> theory is best. None of us really know how to build AGI, so it ends up being
> the blind trying to lead the blind.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc1fd5fc7fae0a6a9-M5bff29798de4d992b83ac8c6
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] scholar references in Russian on AGI & symbolic AI systems

2019-07-16 Thread WriterOfMinds
I don't have an elitist preference for formal academic work vs. hobbyist work 
(mine is definitely the latter), but I still have to agree that there is a lot 
of noise in the mailing list.  I would appreciate more sharing and discussion 
of results, less pointless speculation and arguing about whose theory is best. 
None of us really know how to build AGI, so it ends up being the blind trying 
to lead the blind.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc1fd5fc7fae0a6a9-M38f87ade27bf6cf992ef0468
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] scholar references in Russian on AGI & symbolic AI systems

2019-07-16 Thread Mike Archbold
On 7/15/19, Basile Starynkevitch  wrote:
> Hello list,
>
>
> I am able to speak Russian, though poorly (since my parents spoke Russian
> to me when I was a kid, 50 years ago). But my native language is French, and
> my Russian writing and grammar are so bad that I never write in Russian. And
> I read (and sometimes do) AI in English (since my PhD thesis
> was written in French about what was then called AI and is today called
> AGI, and I defended it in 1990).
>
> However, I am (aged 60 and) more and more interested in reading
> Russian academic papers -in particular experimental ones- on the
> following topics (see Pitrat's blog
> ):
>
>   * symbolic AGI
>   * common sense reasoning
>   * metaknowledge based AI systems
>   * reflective and introspective AI systems.
>
> If some of you are academics (or established AGI researchers, having
> peer-reviewed publications related to one of these topics above) and
> know about interesting papers in Russian on some of the above topics,
> please give me, by private email, the references or the PDF files.
>
> Russia is widely known, since at least the 18th century, to be a
> country with a highly competitive philosophical, musical, mathematical
> and (in the past century) computer science elite.
>
> Thanks.
>
> PS. My opinion about this AGI list is that it contains a majority of
> crap, but also a few messages which are real gems. However, the overall
> signal/noise ratio on this list is astonishingly bad. I read this AGI list
> quite often, but I very rarely waste my time writing on it. I am
> interested in experimental free software AGI systems, not in
> "pseudo-theoretical bullshit" or "I need a million US$ to make AGI" kind
> of messages. I would like reliable automatic free software that
> filters out such useless and annoying messages. I strongly believe that AGI
> is as difficult to achieve as e.g. a human expedition to Mars, or a
> controlled nuclear fusion reactor (à la ITER).
>


Without a working AGI, I object to these overtures of elitism, with the
tacit assumption that more organized and degreed persons are superior.
Again, there is no working AGI that I know of... This isn't a formal
academic-type message list. There are a lot of just brain-dumped ideas.




> --
> Basile STARYNKEVITCH   == http://starynkevitch.net/Basile
> opinions are mine only - les opinions sont seulement miennes
> Bourg La Reine, France; 
> (mobile phone: cf my web page / voir ma page web...)
> 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc1fd5fc7fae0a6a9-M012298add8f1d467888545ec
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] scholar references in Russian on AGI & symbolic AI systems

2019-07-16 Thread Costi Dumitrescu

There is a paper on learned talking heads and controlling Zuckerberg's
head below that

https://news.artnet.com/art-world/mona-lisa-deepfake-video-1561600

http://sk.ru/foundation/events/november2016/ai/


On 16.07.2019 08:50, Basile Starynkevitch wrote:


Hello list,


I am able to speak Russian, though poorly (since my parents spoke
Russian to me when I was a kid, 50 years ago). But my native language is
French, and my Russian writing and grammar are so bad that I never
write in Russian. And I read (and sometimes do) AI in English
(since my PhD thesis was written in French about what was then called
AI and is today called AGI, and I defended it in 1990).

However, I am (aged 60 and) more and more interested in reading
Russian academic papers -in particular experimental ones- on the
following topics (see Pitrat's blog
):

  * symbolic AGI
  * common sense reasoning
  * metaknowledge based AI systems
  * reflective and introspective AI systems.

If some of you are academics (or established AGI researchers, having
peer-reviewed publications related to one of these topics above) and
know about interesting papers in Russian on some of the above topics,
please give me, by private email, the references or the PDF files.

Russia is widely known, since at least the 18th century, to be a
country with a highly competitive philosophical, musical, mathematical
and (in the past century) computer science elite.

Thanks.

PS. My opinion about this AGI list is that it contains a majority of
crap, but also a few messages which are real gems. However, the
overall signal/noise ratio on this list is astonishingly bad. I read
this AGI list quite often, but I very rarely waste my time writing on
it. I am interested in experimental free software AGI
systems, not in "pseudo-theoretical bullshit" or "I need a million US$
to make AGI" kind of messages. I would like reliable automatic free
software that filters out such useless and annoying messages. I strongly
believe that AGI is as difficult to achieve as e.g. a human expedition
to Mars, or a controlled nuclear fusion reactor (à la ITER).

--
Basile STARYNKEVITCH   ==http://starynkevitch.net/Basile
opinions are mine only - les opinions sont seulement miennes
Bourg La Reine, France;
(mobile phone: cf my web page / voir ma page web...)




--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc1fd5fc7fae0a6a9-M652b7c2796446658b0b9a7cb
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] scholar references in Russian on AGI & symbolic AI systems

2019-07-15 Thread Basile Starynkevitch

Hello list,


I am able to speak Russian, though poorly (since my parents spoke Russian
to me when I was a kid, 50 years ago). But my native language is French, and
my Russian writing and grammar are so bad that I never write in Russian. And
I read (and sometimes do) AI in English (since my PhD thesis
was written in French about what was then called AI and is today called
AGI, and I defended it in 1990).


However, I am (aged 60 and) more and more interested in reading 
Russian academic papers -in particular experimental ones- on the 
following topics (see Pitrat's blog 
):


 * symbolic AGI
 * common sense reasoning
 * metaknowledge based AI systems
 * reflective and introspective AI systems.

If some of you are academics (or established AGI researchers, having 
peer-reviewed publications related to one of these topics above) and 
know about interesting papers in Russian on some of the above topics, 
please give me, by private email, the references or the PDF files.


Russia is widely known, since at least the 18th century, to be a
country with a highly competitive philosophical, musical, mathematical 
and (in the past century) computer science elite.


Thanks.

PS. My opinion about this AGI list is that it contains a majority of 
crap, but also a few messages which are real gems. However, the overall 
signal/noise ratio on this list is astonishingly bad. I read this AGI list
quite often, but I very rarely waste my time writing on it. I am
interested in experimental free software AGI systems, not in
"pseudo-theoretical bullshit" or "I need a million US$ to make AGI" kind
of messages. I would like reliable automatic free software that
filters out such useless and annoying messages. I strongly believe that AGI
is as difficult to achieve as e.g. a human expedition to Mars, or a
controlled nuclear fusion reactor (à la ITER).


--
Basile STARYNKEVITCH   == http://starynkevitch.net/Basile
opinions are mine only - les opinions sont seulement miennes
Bourg La Reine, France; 
(mobile phone: cf my web page / voir ma page web...)


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc1fd5fc7fae0a6a9-M500e8626f189adbb5558eae6
Delivery options: https://agi.topicbox.com/groups/agi/subscription