Is that an anti-NN argument? I'm not exactly sure what you're saying there.

On Sun, 17 Feb 2019 at 15:42, Jim Bromer <jimbro...@gmail.com> wrote:

> These days a symbolic system is usually seen in the form of a network - as
> almost everyone in this group knows. The idea that a symbolic network will
> need deep NNs seems a little obscure, except as an immediate practical
> matter.
> Jim Bromer
>
>
> On Sun, Feb 17, 2019 at 8:27 AM Ben Goertzel <b...@goertzel.org> wrote:
>
>> One can see the next steps by analogy with deep NNs for computer
>> vision
>>
>> First they did straightforward visual analytics, then they started
>> worrying more about the internal representations, and now in the last
>> 6 months or so there is finally a little progress in getting sensible
>> internal representations within deep NNs analyzing visual scenes.
>>
>> Don't get me wrong tho, I don't think this is the golden path to AGI
>> or anything....  However, the next step is clearly to try to tweak the
>> architecture to get more transparent internal representations.   As it
>> happens this would also be useful for interfacing such deep NNs with
>> symbolic systems or other sorts of AI algorithms...
>>
>> -- Ben
>>
>> On Sun, Feb 17, 2019 at 9:05 PM Stefan Reich via AGI
>> <agi@agi.topicbox.com> wrote:
>> >
>> > I'm not sure how one would take the next step from a
>> random-speech-generating network like that.
>> >
>> > We do want the speech to mean something.
>> >
>> > My new approach is to incorporate semantics into a rule engine right
>> from the start.
>> >
>> > On Sun, 17 Feb 2019 at 02:09, Ben Goertzel <b...@goertzel.org> wrote:
>> >>
>> >> Rob,
>> >>
>> >> These deep NNs certainly are not linear models, and they do capture a
>> >> bunch of syntactic phenomena fairly subtly, see e.g.
>> >>
>> >> https://arxiv.org/abs/1901.05287
>> >>
>> >> "I assess the extent to which the recently introduced BERT model
>> >> captures English syntactic phenomena, using (1) naturally-occurring
>> >> subject-verb agreement stimuli; (2) "colorless green ideas"
>> >> subject-verb agreement stimuli, in which content words in natural
>> >> sentences are randomly replaced with words sharing the same
>> >> part-of-speech and inflection; and (3) manually crafted stimuli for
>> >> subject-verb agreement and reflexive anaphora phenomena. The BERT
>> >> model performs remarkably well on all cases."
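>> >>
>> >> (For concreteness, a minimal sketch of stimulus type (1) -- assuming
>> >> the HuggingFace "transformers" library and "bert-base-uncased", not
>> >> the paper's own code: mask the verb and check whether BERT scores
>> >> the correctly agreeing form above the distractor.)
>> >>
>> >> import torch
>> >> from transformers import BertForMaskedLM, BertTokenizer
>> >>
>> >> tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
>> >> model = BertForMaskedLM.from_pretrained("bert-base-uncased")
>> >> model.eval()
>> >>
>> >> # Plural subject "keys" with the singular attractor "cabinet"
>> >> # between subject and verb -- the classic agreement trap.
>> >> sent = "The keys to the cabinet [MASK] on the table."
>> >> inputs = tokenizer(sent, return_tensors="pt")
>> >> mask_pos = (inputs.input_ids[0] ==
>> >>             tokenizer.mask_token_id).nonzero()[0].item()
>> >>
>> >> with torch.no_grad():
>> >>     logits = model(**inputs).logits[0, mask_pos]
>> >>
>> >> # BERT "passes" if the plural verb outscores the singular one.
>> >> for verb in ("are", "is"):
>> >>     print(verb, logits[tokenizer.convert_tokens_to_ids(verb)].item())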
>> >>
>> >> This paper shows some dependency trees implicit in transformer
>> networks,
>> >>
>> >> http://aclweb.org/anthology/W18-5431
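>> >>
>> >> (Again only a hedged sketch of the general idea, not the paper's
>> >> exact method: read a dependency-like tree off one attention head by
>> >> linking each token to the token it attends to most strongly. The
>> >> layer and head chosen below are arbitrary; the papers sweep over
>> >> all of them.)
>> >>
>> >> import torch
>> >> from transformers import BertModel, BertTokenizer
>> >>
>> >> tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
>> >> model = BertModel.from_pretrained("bert-base-uncased",
>> >>                                   output_attentions=True)
>> >> model.eval()
>> >>
>> >> inputs = tokenizer("the dog chased the cat", return_tensors="pt")
>> >> with torch.no_grad():
>> >>     # One (batch, heads, seq_len, seq_len) tensor per layer.
>> >>     attentions = model(**inputs).attentions
>> >>
>> >> layer, head = 7, 0  # arbitrary; sweep these in a real analysis
>> >> att = attentions[layer][0, head]
>> >> tokens = tokenizer.convert_ids_to_tokens(inputs.input_ids[0])
>> >> for i, tok in enumerate(tokens):
>> >>     print(tok, "->", tokens[att[i].argmax().item()])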
>> >>
>> >> This stuff is not AGI, and it does not extract deep semantics or do
>> >> symbol grounding, etc.   For sure it has many limitations.   But it's
>> >> also not as trivial as you're suggesting, IMO...
>> >>
>> >> -- Ben G
>> >>
>> >> On Sun, Feb 17, 2019 at 8:42 AM Rob Freeman <
>> chaotic.langu...@gmail.com> wrote:
>> >> >
>> >> > On the substance, here's what I wrote elsewhere in response to
>> someone's comment that it is an "important step":
>> >> >
>> >> > Important step? I don't see it. Bengio's NLM? Yeah, good, we need
>> distributed representation. That was an advance, but it was always a linear
>> model without a sensible way of folding in context. Now they try to fold in
>> a bit of context by bolting on another layer to spotlight other parts of
>> the sequence ad-hoc?
>> >> >
>> >> > I don't see any theoretical cohesiveness, any actual theory, let
>> alone novelty of theory.
>> >> >
>> >> > What is the underlying model for language here? In particular, what
>> is the underlying model for how words combine to create meaning? How do
>> parts of a sequence combine to become a whole, incorporating the whole
>> context? Linear combination with a bolt-on spotlight?
>> >> >
>> >> > I think all this ad-hoc tinkering will be thrown away when we figure
>> out a principled way to combine words which incorporates context
>> inherently. But nobody is even attempting that. They are just tinkering.
>> Limited to tinkering with linear models, because nothing else can be
>> "learned".
>> >> >
>> >> > On Sun, Feb 17, 2019 at 1:05 PM Ben Goertzel <b...@goertzel.org>
>> wrote:
>> >> >>
>> >> >> Hmmm...
>> >> >>
>> >> >> About this "OpenAI keeping their language model secret" thing...
>> >> >>
>> >> >> I mean -- clearly, keeping their language model secret is a pure PR
>> >> >> stunt... Their
>> >> >> algorithm is described in an online paper... and their model was
>> >> >> trained on Reddit text ... so anyone else with a bunch of $$ (for
>> >> >> machine-time and data-preprocessing hacking) can download Reddit
>> >> >> (complete Reddit archives are available as a torrent) and train a
>> >> language model similar to or better
>> >> >> than OpenAI's ...
>> >> >>
>> >> >> That said, their language model is a moderate improvement on the
>> BERT
>> >> >> model released by Google last year.   This is good AI work.  There
>> is
>> >> >> no understanding of semantics and no grounding of symbols in
>> >> >> experience/world here, but still, it's pretty f**king cool to see
>> what
>> >> >> an awesome job of text generation can be done by these pure
>> >> >> surface-level-pattern-recognition methods....
>> >> >>
>> >> >> Honestly a lot of folks in the deep-NN/NLP space (including our own
>> >> >> SingularityNET St. Petersburg team) have been talking about applying
>> >> >> BERT-ish attention networks (with more comprehensive network
>> >> >> architectures) in similar ways... but there are always so many
>> >> >> different things to work on, and OpenAI should be congratulated for
>> >> >> making these particular architecture tweaks and demonstrating them
>> >> >> first... but not for the PR stunt of keeping their model secret...
>> >> >>
>> >> >> Although perhaps they should be congratulated for revealing so
>> clearly
>> >> >> the limitations of the "open-ness" in their name "Open AI."   I
>> mean,
>> >> >> we all know there are some cases where keeping something secret may
>> be
>> >> >> the most ethical choice ... but the fact that they're willing to
>> take
>> >> this step simply for a short-term one-news-cycle PR boost indicates
>> >> >> that open-ness may not be such an important value to them after
>> all...
>> >> >>
>> >> >> --
>> >> >> Ben Goertzel, PhD
>> >> >> http://goertzel.org
>> >> >>
>> >> >> "Listen: This world is the lunatic's sphere,  /  Don't always agree
>> >> >> it's real.  /  Even with my feet upon it / And the postman knowing
>> my
>> >> >> door / My address is somewhere else." -- Hafiz
>> >> >
>> >>
>> >>
>> >> --
>> >> Ben Goertzel, PhD
>> >> http://goertzel.org
>> >>
>> >> "Listen: This world is the lunatic's sphere,  /  Don't always agree
>> >> it's real.  /  Even with my feet upon it / And the postman knowing my
>> >> door / My address is somewhere else." -- Hafiz
>> >
>> >
>> >
>> > --
>> > Stefan Reich
>> > BotCompany.de // Java-based operating systems
>> 
>> --
>> Ben Goertzel, PhD
>> http://goertzel.org
>> 
>> "Listen: This world is the lunatic's sphere,  /  Don't always agree
>> it's real.  /  Even with my feet upon it / And the postman knowing my
>> door / My address is somewhere else." -- Hafiz


-- 
Stefan Reich
BotCompany.de // Java-based operating systems
