The arguments over discrete vs. weighted systems and GOFAI vs. neural
networks seem pretty dated to me, so I should be careful about the terms
I use. A symbolic network would have to operate on different levels. One
thing I have learned is that you do not want to translate information in
terms of the lowest level - like the sensor level for sensory devices -
unless you really have to. So when you combine sensors and symbolic
networks, I think these symbolic relations would have to work on
different levels, but you would need something like perceptual reference
points where higher-level symbolic knowledge could work with lower-level
symbolic knowledge. That seems significant. It is not just a matter of
symbolic sub-nets that need to operate with virtual relations, but also
on different levels of resolution (and other kinds of things that I
cannot think of). One other thing: what does a meaningful symbolic
sub-net look like? I do not know, but maybe it has something in common
with CAS (complex adaptive systems) types of complexity. We cannot
anticipate what these conceptual symbolic sub-nets would look like
before we create an effective program, but once we have a good program
working, we (or our programs) might be able to detect similarities and
differences in different kinds of symbolic sub-nets that cannot be
anticipated beforehand.
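A rough sketch of what I mean by levels and reference points - every
name here is a hypothetical illustration, not a worked-out design:

    # Hypothetical sketch: a symbolic network with explicit levels of
    # resolution, linked by "perceptual reference points".
    from dataclasses import dataclass, field

    @dataclass
    class Symbol:
        name: str
        level: int  # 0 = sensor level; higher = more abstract
        relations: dict = field(default_factory=dict)

    @dataclass
    class ReferencePoint:
        # Anchors a higher-level symbol to the lower-level symbols it
        # summarizes, so levels can interact without translating
        # everything down to the sensor level.
        upper: Symbol
        lower: list

    def link_levels(upper, lower):
        upper.relations["grounded_in"] = lower
        for s in lower:
            s.relations["summarized_by"] = upper
        return ReferencePoint(upper, lower)

    # Usage: a level-2 "doorway" symbol anchored to level-0 edge features.
    edges = [Symbol(f"edge_{i}", level=0) for i in range(4)]
    doorway = Symbol("doorway", level=2)
    rp = link_levels(doorway, edges)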
Jim Bromer


On Sun, Feb 17, 2019 at 12:33 PM Jim Bromer <jimbro...@gmail.com> wrote:

> The most significant advancements seem to be made by using NNs with
> categorical feature detection or by using discrete systems in a network
> of some kind. The networks may not be explicit in discrete methods, but
> even in the earliest developments they were intrinsic to case detection.
> Discrete systems should not use a single taxonomy of relations, and
> DNNs do not work without categorical feature detection. The future
> seems obvious. DNNs work because they are fast on (contemporary)
> GPU-type data systems, and that is why they have pulled away from more
> discrete recognition systems that could employ inferences (data
> projections). If a symbolic method were implemented in a network, it
> could do anything a simpler network could do, only more slowly. GPUs
> can operate on weighted networks, so with a discrete recognition system
> meaningful relations could hypothetically be detected (although it
> would be relatively slow on a contemporary GPU). I do not think that
> games detect situations based on the graphical output - which is
> different from those of us who use sensors to figure out where we are.
> If sensory GPUs were further developed, then basic situations (in the
> sensor space) could be used as rapid feature detectors. Since a GPU can
> be written to in particular spaces, perceptual projection could be used
> to simulate different kinds of situations (which could then be used in
> subsequent perceptual feature detection). But the first step, direct
> detection of features on GPUs, is missing, because games operate on the
> principle of projecting the game space onto the visual output, not the
> other way around.
> Jim Bromer
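A toy illustration of the rapid-feature-detector idea in the message
above: a small bank of convolution kernels run over a raw sensor frame,
the kind of operation GPUs make cheap. SciPy on the CPU stands in for
the GPU here, and the kernels and threshold are made-up examples, not a
proposal:

    # Toy sketch: a filter bank as "rapid feature detectors" over raw
    # sensor space. On a GPU this is the cheap, massively parallel step;
    # scipy.signal.convolve2d on the CPU stands in for it here.
    import numpy as np
    from scipy.signal import convolve2d

    sensor = np.random.rand(64, 64)  # stand-in for one raw sensor frame

    kernels = {
        "vertical_edge": np.array([[-1.0, 0.0, 1.0]] * 3),
        "horizontal_edge": np.array([[-1.0, 0.0, 1.0]] * 3).T,
    }

    # Each feature map marks where its "basic situation" occurs in
    # sensor space.
    feature_maps = {name: convolve2d(sensor, k, mode="same")
                    for name, k in kernels.items()}

    # A discrete/symbolic layer can then threshold the maps into
    # detections it can reason over.
    detections = {name: np.argwhere(fm > fm.mean() + 2 * fm.std())
                  for name, fm in feature_maps.items()}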
>
>
> On Sun, Feb 17, 2019 at 9:47 AM Stefan Reich via AGI <agi@agi.topicbox.com>
> wrote:
>
>> Is that an anti-NN argument? Not exactly sure what you're saying there.
>>
>> On Sun, 17 Feb 2019 at 15:42, Jim Bromer <jimbro...@gmail.com> wrote:
>>
>>> These days a symbolic system is usually seen in the form of a network -
>>> as almost everyone in this group knows. The idea that a symbolic network
>>> will need deep NNs seems a little obscure, except as an immediate
>>> practical matter.
>>> Jim Bromer
>>>
>>>
>>> On Sun, Feb 17, 2019 at 8:27 AM Ben Goertzel <b...@goertzel.org> wrote:
>>>
>>>> One can see the next steps from the analogy of deep NNs for computer
>>>> vision.
>>>>
>>>> First they did straightforward visual analytics, then they started
>>>> worrying more about the internal representations, and now in the last
>>>> 6 months or so there is finally a little progress in getting sensible
>>>> internal representations within deep NNs analyzing visual scenes.
>>>>
>>>> Don't get me wrong tho, I don't think this is the golden path to AGI
>>>> or anything....  However, the next step is clearly to try to tweak the
>>>> architecture to get more transparent internal representations.   As it
>>>> happens this would also be useful for interfacing such deep NNs with
>>>> symbolic systems or other sorts of AI algorithms...
>>>>
>>>> -- Ben
>>>>
>>>> On Sun, Feb 17, 2019 at 9:05 PM Stefan Reich via AGI
>>>> <agi@agi.topicbox.com> wrote:
>>>> >
>>>> > I'm not sure how one would take the next step from a
>>>> random-speech-generating network like that.
>>>> >
>>>> > We do want the speech to mean something.
>>>> >
>>>> > My new approach is to incorporate semantics into a rule engine right
>>>> from the start.
>>>> >
>>>> > On Sun, 17 Feb 2019 at 02:09, Ben Goertzel <b...@goertzel.org> wrote:
>>>> >>
>>>> >> Rob,
>>>> >>
>>>> >> These deep NNs certainly are not linear models, and they do capture a
>>>> >> bunch of syntactic phenomena fairly subtly, see e.g.
>>>> >>
>>>> >> https://arxiv.org/abs/1901.05287
>>>> >>
>>>> >> "I assess the extent to which the recently introduced BERT model
>>>> >> captures English syntactic phenomena, using (1) naturally-occurring
>>>> >> subject-verb agreement stimuli; (2) "colorless green ideas"
>>>> >> subject-verb agreement stimuli, in which content words in natural
>>>> >> sentences are randomly replaced with words sharing the same
>>>> >> part-of-speech and inflection; and (3) manually crafted stimuli for
>>>> >> subject-verb agreement and reflexive anaphora phenomena. The BERT
>>>> >> model performs remarkably well on all cases."
>>>> >>
>>>> >> This paper shows some dependency trees implicit in transformer
>>>> networks,
>>>> >>
>>>> >> http://aclweb.org/anthology/W18-5431
>>>> >>
>>>> >> This stuff is not AGI and does not extract deep semantics nor do
>>>> >> symbol grounding etc.   For sure it has many limitations.   But it's
>>>> >> also not so trivial as you're suggesting IMO...
>>>> >>
>>>> >> -- Ben G
>>>> >>
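The agreement probe the quoted abstract describes can be reproduced in
a few lines. A minimal sketch using the HuggingFace transformers API,
which is an assumption of this sketch rather than anything used in the
thread; the model name and stimulus are illustrative:

    # "Colorless green ideas" subject-verb agreement probe in the spirit
    # of arXiv:1901.05287 (library and model name are assumptions).
    from transformers import pipeline

    fill = pipeline("fill-mask", model="bert-base-uncased")

    # Grammatical but nonsensical stimulus: if BERT tracks agreement,
    # the plural "sleep" should outscore the singular "sleeps".
    stimulus = "The colorless green ideas [MASK] furiously."
    for cand in fill(stimulus, targets=["sleep", "sleeps"]):
        print(cand["token_str"], round(cand["score"], 4))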
>>>> >> On Sun, Feb 17, 2019 at 8:42 AM Rob Freeman <
>>>> chaotic.langu...@gmail.com> wrote:
>>>> >> >
>>>> >> > On the substance, here's what I wrote elsewhere in response to
>>>> someone's comment that it is an "important step":
>>>> >> >
>>>> >> > Important step? I don't see it. Bengio's NLM? Yeah, good, we need
>>>> distributed representation. That was an advance, but it was always a
>>>> linear model without a sensible way of folding in context. Now they try
>>>> to fold in a bit of context by bolting on another layer to spotlight
>>>> other parts of the sequence ad hoc?
>>>> >> >
>>>> >> > I don't see any theoretical cohesiveness, any actual theory let
>>>> alone novelty of theory.
>>>> >> >
>>>> >> > What is the underlying model for language here? In particular what
>>>> is the underlying model for how words combine to create meaning? How do
>>>> parts of a sequence combine to become a whole, incorporating the whole
>>>> context? Linear combination with a bolt-on spotlight?
>>>> >> >
>>>> >> > I think all this ad-hoc tinkering will be thrown away when we
>>>> figure out a principled way to combine words which incorporates context
>>>> inherently. But nobody is even attempting that. They are just tinkering.
>>>> Limited to tinkering with linear models, because nothing else can be
>>>> "learned".
>>>> >> >
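For concreteness, the "bolt-on spotlight" criticized above is scaled
dot-product attention: each position re-weights the rest of the
sequence by a softmax over similarity scores. A minimal NumPy sketch,
with illustrative shapes and names:

    # Scaled dot-product self-attention: a softmax "spotlight" over the
    # sequence, mixing each position with its context.
    import numpy as np

    def attention(Q, K, V):
        d_k = K.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)  # pairwise similarity scores
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)  # softmax weights per position
        return w @ V  # context-weighted mixture of value vectors

    seq_len, d_model = 5, 8
    X = np.random.randn(seq_len, d_model)  # word vectors for a sequence
    out = attention(X, X, X)  # self-attention folds in the whole context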
>>>> >> > On Sun, Feb 17, 2019 at 1:05 PM Ben Goertzel <b...@goertzel.org>
>>>> wrote:
>>>> >> >>
>>>> >> >> Hmmm...
>>>> >> >>
>>>> >> >> About this "OpenAI keeping their language model secret" thing...
>>>> >> >>
>>>> >> >> I mean -- clearly, keeping their language model secret is a pure
>>>> PR
>>>> >> >> stunt... Their
>>>> >> >> algorithm is described in an online paper... and their model was
>>>> >> >> trained on Reddit text ... so anyone else with a bunch of $$ (for
>>>> >> >> machine-time and data-preprocessing hacking) can download Reddit
>>>> >> >> (complete Reddit archives are available as a torrent) and train a
>>>> >> language model similar to or better than OpenAI's ...
>>>> >> >>
>>>> >> >> That said, their language model is a moderate improvement on the
>>>> BERT
>>>> >> >> model released by Google last year.   This is good AI work.
>>>> There is
>>>> >> >> no understanding of semantics and no grounding of symbols in
>>>> >> >> experience/world here, but still, it's pretty f**king cool to see
>>>> what
>>>> >> >> an awesome job of text generation can be done by these pure
>>>> >> >> surface-level-pattern-recognition methods....
>>>> >> >>
>>>> >> >> Honestly a lot of folks in the deep-NN/NLP space (including our
>>>> own
>>>> >> >> SingularityNET St. Petersburg team) have been talking about
>>>> applying
>>>> >> >> BERT-ish attention networks (with more comprehensive network
>>>> >> >> architectures) in similar ways... but there are always so many
>>>> >> >> different things to work on, and OpenAI should be congratulated
>>>> for
>>>> >> >> making these particular architecture tweaks and demonstrating them
>>>> >> >> first... but not for the PR stunt of keeping their model secret...
>>>> >> >>
>>>> >> >> Although perhaps they should be congratulated for revealing so
>>>> clearly
>>>> >> >> the limitations of the "open-ness" in their name "Open AI."   I
>>>> mean,
>>>> >> >> we all know there are some cases where keeping something secret
>>>> may be
>>>> >> >> the most ethical choice ... but the fact that they're willing to
>>>> take
>>>> >> >> this step simply for a short-term one-news-cycle PR boost,
>>>> indicates
>>>> >> >> that open-ness may not be such an important value to them after
>>>> all...
>>>> >> >>
>>>> >> >> --
>>>> >> >> Ben Goertzel, PhD
>>>> >> >> http://goertzel.org
>>>> >> >>
>>>> >> >> "Listen: This world is the lunatic's sphere,  /  Don't always
>>>> agree
>>>> >> >> it's real.  /  Even with my feet upon it / And the postman
>>>> knowing my
>>>> >> >> door / My address is somewhere else." -- Hafiz
>>>> >> >
>>>> >>
>>>> >>
>>>> >> --
>>>> >> Ben Goertzel, PhD
>>>> >> http://goertzel.org
>>>> >>
>>>> >> "Listen: This world is the lunatic's sphere,  /  Don't always agree
>>>> >> it's real.  /  Even with my feet upon it / And the postman knowing my
>>>> >> door / My address is somewhere else." -- Hafiz
>>>> >
>>>> >
>>>> >
>>>> > --
>>>> > Stefan Reich
>>>> > BotCompany.de // Java-based operating systems
>>>> 
>>>> --
>>>> Ben Goertzel, PhD
>>>> http://goertzel.org
>>>> 
>>>> "Listen: This world is the lunatic's sphere,  /  Don't always agree
>>>> it's real.  /  Even with my feet upon it / And the postman knowing my
>>>> door / My address is somewhere else." -- Hafiz
>>
>> --
>> Stefan Reich
>> BotCompany.de // Java-based operating systems
