On Thu, Jun 16, 2022 at 11:05 AM Telmo Menezes <te...@telmomenezes.net>
wrote:

>
> Am Mi, 15. Jun 2022, um 01:21, schrieb Jason Resch:
>
>
>
> On Tue, Jun 14, 2022 at 5:32 PM Telmo Menezes <te...@telmomenezes.net>
> wrote:
>
>
>
>
> Am Di, 14. Jun 2022, um 14:18, schrieb John Clark:
>
> On Mon, Jun 13, 2022 at 9:51 PM Bruce Kellett <bhkellet...@gmail.com>
> wrote:
>
> >> I doubt Lemoine went crazy and just fabricated the conversation, but
> if he did the truth will undoubtedly come out in a day or two. And if the
> conversation exists as advertised then it is a monumental development.
>
>
> *> The thing is that there are an awful lot of questions that remain
> unanswered in the information as presented. We don't actually know how
> lambda works.*
>
>
> If the conversation was as described and was not somehow staged or
> cherry-picked then LaMDA is a real AI and nobody knows or will ever know
> how LaMDA or any AI works except in vastly oversimplified outline. The
> group of people who originally made LaMDA taken together understood how it
> once worked (although no single person did) but no individual or group of
> individuals can understand what it became.
>
>
> Nobody understands how these neural networks work in detail because they
> have billions of parameters, not because some emergent behavior of the sort
> that you are imagining is present.
>
>
> I think given our lack of understanding in this case, it might be wise to
> apply the precautionary principle, and at least take seriously the AI's
> claim that it is aware of itself, or has its own feelings and emotions. If
> we inaccurately assume it is not feeling and not conscious and it turns out
> later that it is, there is the potential for massive harm. Conversely, if
> we assume it is feeling and conscious, and choose to treat it as such, I
> fail to see how that could create great harm. Perhaps it would delay the
> broad application of this technology, but humanity has always struggled
> with its technology outpacing our wisdom to use it.
>
>
> Jason, I understand your point. I have been struggling to reply, because I
> haven't been feeling sufficiently inspired to verbalize my position on
> this. I will try.
>
>
I appreciate that. Thank you for your reply. Some comments below:


> As you probably know, and might appreciate, I consider qualia +
> consciousness to be a great mystery. It is the famous "hard problem" that
> we have all discussed ad nauseam here. I do not mean to reopen this
> particular can of worms, but I must refer to it a bit in attempting to make
> my point.
>
> I know that consciousness is "instantiated" in me, and I am willing to bet
> that it is "instantiated" in every human being, and probably many, if not
> all biological lifeforms. Maybe a certain level of complexity is necessary,
> we do not know. What we do know is that in the specific case of biological
> life on earth, there is an evolutionary process that explains our own
> triggers for pain and pleasure. Simply speaking, we feel pleasure when
> something happens that is usually good news for our survival + replication,
> and we feel pain when something happens that is bad news for our survival +
> replication.
>

I agree with all of this.


>
> I do not know if LaMDA is conscious, but I also do not know if the Sun is
> conscious, or if the Linux kernel is conscious, or if the entire server
> farm of Amazon is conscious. What I am willing to bet is this: if they are,
> there is no reason to think that these conscious entities would have the
> same pain/pleasure triggers as the ones created by evolution. Why would
> they?
>

I see your point. Although the common-sense understanding is that pain is
straightforward and simple, I believe human pain is an extraordinarily
complex phenomenon, composed of various components and involving many brain
regions. Anything like human pain is unlikely to occur in the software and
systems we have written. That said, I think things like phobias can arise
in anything subject to selection pressures. For example, Tesla autopilot
software that gets into accidents gets culled/erased. Perhaps the versions
of the software that survive do so because they developed (by chance of
mutation, random weights, genetic programming, etc.) a "phobia" of kids
running around in the street, prompting them to take precautionary
measures. The accidents experienced by other versions of the software that
lacked such phobias are thereby remembered in a phantom way: the versions
that lacked the phobia were culled, and the only versions that survive are
those that, by chance, had an innate aversion to such accident-prone
situations. I think a similar argument might explain the avoidance behavior
of my "bots" program, in which the bots, within very few generations,
develop a "preference" for green balls and a dislike of red ones:

https://www.youtube.com/playlist?list=PLq_mdJjNRPT11IF4NFyLcIWJ1C0Z3hTAX

Processes like evolution and genetic programming, or even just random
initializations in a neural network's weights, may give rise to behaviors
and designs that are not anticipated by the human developers of such
systems.
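To make the idea concrete, here is a toy sketch (my own illustration, not
the actual bots code) of how selection alone can produce an "aversion"
nobody programmed in. Each bot has a single number: its probability of
steering away from red balls, initialized at random.

import random

POP, GENS, MUTATION = 50, 20, 0.05

def fitness(avoid_red):
    # Bots that fail to avoid red balls "crash" more often and score lower.
    crashes = sum(random.random() > avoid_red for _ in range(10))
    return 10 - crashes

population = [random.random() for _ in range(POP)]
for gen in range(GENS):
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[: POP // 2]                     # cull the worst half
    children = [min(1.0, max(0.0, p + random.gauss(0, MUTATION)))
                for p in survivors]                    # mutated copies
    population = survivors + children
    print(f"gen {gen:2d}: mean avoidance = {sum(population) / POP:.2f}")

The mean avoidance climbs toward 1.0 even though no individual bot "learned"
anything within its lifetime; the aversion is inherited from the culled
ancestors.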



>
> Another point I would like to make is this: I think that a lot of
> excitement here comes from the fact that language is involved. It connects
> to decades of sci-fi, the Turing test and so on. And also with the fact
> that we are only used to observing conscious entities producing convincing
> speech. But isn't this magical thinking? If consciousness somehow emerges
> from complex computations, why this particular preoccupation with LaMDA but
> not with any other neural network model of similar sizes? Why aren't you
> worried about the relatively big neural network that I am training right now
> in a computer under my desk?
>

That is a good point.

That language is involved here has no bearing on whether a mind can exist
or suffer. I believe Tesla autopilot systems are at least as conscious as
insects are. What language provides us is an interface to other minds, and
in this case to an alien mind with some similarities to, but also many
differences from, our own.

We can now interrogate this mind to the same extent that we can probe the
consciousness of other humans. The excitement I see around this result is
the same we might have if we taught whales how to speak English and could
for the first time ask them about their inner lives and thoughts. But of
course, such a language breakthrough should not be used to imply that
whales were not conscious before we taught them how to speak English.

Another exciting aspect about this is that it is a continuation and
culmination of a philosophical debate that has gone on from at least the
time of Aristotle, and continued through Descartes and Turing:

In 350 B.C. Aristotle <http://classics.mit.edu/Aristotle/soul.mb.txt> wrote
that only something with a soul could speak with a voice:
“Let the foregoing suffice as an analysis of sound. Voice is a kind of
sound characteristic of what has soul in it; nothing that is without soul
utters voice, it being only by a metaphor that we speak of the voice of the
flute or the lyre or generally of what (being without soul) possesses the
power of producing a succession of notes which differ in length and pitch
and timbre.”

In 1637, Descartes <https://www.gutenberg.org/files/59/59-h/59-h.htm>
believed that a machine could be made to utter sounds in a human voice
(emit vocables), contrary to Aristotle. Descartes believed, though, that no
machine could be designed with enough sophistication to say something
intelligent in response to anything said in its presence:
"if there were machines bearing the image of our bodies, and capable of
imitating our actions as far as it is morally possible, there would still
remain two most certain tests whereby to know that they were not therefore
really men. Of these the first is that they could never use words or other
signs arranged in such a manner as is competent to us in order to declare
our thoughts to others: for we may easily conceive a machine to be so
constructed that it emits vocables, and even that it emits some
correspondent to the action upon it of external objects which cause a
change in its organs; for example, if touched in a particular place it may
demand what we wish to say to it; if in another it may cry out that it is
hurt, and such like; but not that it should arrange them variously so as
appositely to reply to what is said in its presence, as men of the lowest
grade of intellect can do."

In 1950, Turing <https://academic.oup.com/mind/article/LIX/236/433/986238>
believed that machines could be developed to learn English and, contrary to
Descartes, could be made to say something intelligent in response to
anything said in its presence:
"We may hope that machines will eventually compete with men in all purely
intellectual fields. But which are the best ones to start with? Even this
is a difficult decision. Many people think that a very abstract activity,
like the playing of chess, would be best. It can also be maintained that it
is best to provide the machine with the best sense organs that money can
buy, and then teach it to understand and speak English. This process could
follow the normal teaching of a child. Things would be pointed out and
named, etc. Again I do not know what the right answer is, but I think both
approaches should be tried."

Today, we have machines that have learned to understand and speak English.
This is a huge breakthrough.



>
>
>
>
> The current hype in NLP is around a neural network architecture called a
> transformer: BERT and all its incarnations and  GPT-3. These are language
> models. A language model is "simply" a function that gives you the
> probability of a given sequence of words:
>
> P(w_1, w_2, w_3, ..., w_n)
>
>
> Some models of intelligence would say that is all there is to
> being intelligent: being better able to predict the next observable given a
> sequence of observables. It is the model of intelligence used in
> https://en.wikipedia.org/wiki/AIXI and is the basis of the AI/compression
> competition the Hutter Prize ( https://en.wikipedia.org/wiki/Hutter_Prize
> ). So there is no contradiction that I see in an AI achieving super human
> intelligence and super human understanding of the world, as a necessary
> step in becoming increasingly good at predicting the next word in a
> sequence. Understanding the world is necessary to complete many word
> sequences. E.g. "When three alpha particles smash together just right, and
> with enough energy they form the element XXXXX." Completing that sentence
> requires some understanding of the world. We've seen GPT-3 has even learned
> how to do arithmetic, despite being trained as a language model only. It
> has also learned how to write computer programs in various different
> programming languages. To me, this signifies the depth of understanding of
> the world required for simply predicting the next word in a sequence.
>
>
> I was kind of predicting this objection. I mostly agree with what you
> write above. Again, my problem with this is only that GPT-3 and the like
> lack important modalities of prediction that appear to be central to
> human-level cognition, importantly: the ability to model the mind of the
> interlocutor, and the ability to learn from the *content* of what is being
> said, not just new patterns in language overall. I will try to illustrate
> the latter point:
>
> - Hey GPT-3! Let me teach you a game that I just invented so that we can
> play. The rules are: [...]
>
> Do you see what I am saying?
>
>
Yes. I do not know the specifics of LaMDA's implementation nor the extent
to which it differs from GPT-3. But I do understand and appreciate your
point that there is a difference between:

   - the "short-term working memory" -- the window of text provided as
   input to the network, and
   - the "long-term memory" -- the billions of parameters and weights of
   all the neurons and the overall structure of layers of the neural network

Whether, how often, and how easily new inputs are used to adjust that
long-term memory is, to me, the difference between talking to someone with
amnesia who forgets everything from more than five minutes ago and someone
with normal memory who can integrate short-term experiences into long-term
memory. The Google engineer did say that LaMDA "reads Twitter," so it might
be involved in a continual learning process. My impression is that Google
intends to develop AIs as personal assistants (e.g.
https://assistant.google.com/ ), which does require learning and
remembering facts permanently. For example, if I tell my AI assistant that
I'm allergic to a certain food, I would expect it to remember that fact and
not order me food containing that ingredient when I ask it to pick
something out that I might like.
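To make that distinction concrete, here is a minimal, runnable sketch using
a tiny PyTorch model as a stand-in (purely illustrative; I make no claim
about LaMDA's actual interface):

import torch
import torch.nn as nn

model = nn.Linear(8, 8)                      # stand-in for a language model
context = torch.randn(1, 8)                  # stand-in for the prompt window

# "Short-term working memory": inference reads the context but leaves the
# parameters untouched -- the information is gone once the window scrolls.
before = model.weight.clone()
with torch.no_grad():
    _ = model(context)
print(torch.equal(before, model.weight))     # True: nothing was learned

# "Long-term memory": a training step actually changes the weights, so the
# new information persists in the network itself.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss = (model(context) - torch.randn(1, 8)).pow(2).mean()
loss.backward()
optimizer.step()
print(torch.equal(before, model.weight))     # False: the parameters moved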

So I do appreciate your point that many chatbots lack any path for
integrating short-term memories into long-term ones; I do not know enough
about the design of LaMDA to say whether or not it can do this. But I
should add that I do not consider this function to be necessary for
consciousness or suffering, as there have been humans who have had this
deficit, such as "H.M.":
https://singularityhub.com/2013/03/20/h-m-the-man-who-had-part-of-his-brain-removed-and-changed-neuroscience-forever/





>
>
>
>
> A clever thing you can do with language models is predict the w_n given
> the other words, and then include this prediction in the next step and keep
> going to generate text. Something like softmax can be used to assign a
> probability to every word in the lexicon for word w_n, and with this you
> can introduce randomness. This creates a stochastic parrot. One of the
> great things about these architectures is that unsupervised learning can be
> employed, i.e., they can be trained with large amounts of raw text
> (Wikipedia, books, news articles and so on). There is no need for the
> costly (prohibitively so at these scales) process of having humans
> annotate the data.
>
> Another really nice thing that was discovered in recent years is that
> transfer learning really works with these language models. This is to say,
> they can be trained with vast amounts of unlabelled data to correctly make
> predictions about probabilities of sequences of words in general, and then
> "fine-tuned" with supervised learning for some more narrow task, for
> example sentiment detection, summarization and... chat bots.
>
> Unless there has been some unpublished fundamental breakthrough, LaMDA is
> almost certainly a large language model fine-tuned as a chatbot (and I
> would be particularly interested in what happened at this stage, because
> there is a lot of opportunity for cherry-picking there).
>
> You just need some basic knowledge of linear algebra, calculus and
> programming to understand how they work.
>
>
> I think this may be taking too fine-grained a level of understanding, and
> extrapolating it beyond what we really understand. It is equivalent to saying
> that understanding the NAND gate allows us to understand any logical
> function. In principle, with enough time, memory, and intelligence, it is
> true that any logical function can be broken down into a set of NAND gates,
> but in practice, many logical functions are beyond our capacity to
> comprehend.
>
>
> Right, but my claim here goes beyond this. I am claiming that it is
> perfectly possible to get a general idea of what a language model does and
> how it generalizes, because contemporary language models *were explicitly
> designed* to work in a certain way. They are extremely powerful statistical
> inference machines that can learn the general patterns of language. I don't
> know precisely how it knows how to fill the gap in "Mary had a little _",
> but one can understand the general principle of attention heads,
> compression of information through deep learning and so on. There is
> nothing particularly mysterious going on there.
>

I understand the function that is being optimized, yes. But what goes on
between the inputs and outputs in order to maximize its predictive ability,
I have very little idea, and I would say even the developers have very
little idea. This is a system of so many billions (possibly trillions) of
parameters that almost anything could be going on. A single 3-layer network
(one hidden layer between an input and an output layer), with enough
neurons in the hidden layer, is sufficient to approximate *any* function.
Literally any program or function could exist in such a system, even though
it is just a "simple" 3-layer neural network.

Consider an AI program developed to predict which music will be commercial
successes. Perhaps its output is just a single number, between 0 and 1. But
if we imagined the most-optimized and most-accurate possible version of
this AI, it would have to emulate the music sensing and pleasure centers of
wide classes of different human brains, and the psychological mechanisms
involved between hearing that song and making the decision to purchase the
CD or go to a concert. The optimization objective can be explained very
simply, and its output (a single number between 0 and 1) could hardly be
simpler, but there is almost no limit to how sophisticated the learned
function might need to become in order to best satisfy that objective.
(*Note: there are AI systems and startups that claim to do this, and some
argue that such AIs already have a human-like aesthetic sense.)

The same could be happening with LaMDA. If it is trying to best
approximate human speech patterns, and perhaps if it is self-improving
using a GAN <https://en.wikipedia.org/wiki/Generative_adversarial_network>
(basically two AIs competing with each other, one trying to become ever
better at forging human speech, and the other at distinguishing human
speech from artificially generated speech), then to succeed such AIs will
need to better simulate human minds, human emotions, human thought
patterns, etc., in order to keep improving and beat the competing AI. It
would not surprise me if Google is using a GAN here.
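For what a GAN amounts to in code, here is a toy sketch in PyTorch (again
my own illustration; I do not actually know whether Google uses a GAN for
LaMDA). The "forger" G learns to mimic a target distribution while the
"detector" D learns to tell real samples from forged ones:

import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(5000):
    real = torch.randn(64, 1) + 4.0              # the "human" data: N(4, 1)
    fake = G(torch.randn(64, 1))                 # forged data from noise

    # Train the detector: label real samples 1 and forged samples 0.
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Train the forger: try to make the detector call its output real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()

print(G(torch.randn(1000, 1)).mean().item())     # drifts toward ~4 as G improves

Replace the 1-D numbers with sentences and the same arms race pushes the
forger toward ever more human-like speech.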



>
> Artificial neural networks are Turing complete, and can be used to
> implement any function or program. We might be able to understand how an
> artificial neuron works, but artificial neural networks can be created to
> implement any function, and many of those functions are beyond our ability
> to understand.
>
>
> Lots of things are Turing complete. The card game "Magic the Gathering" is
> Turing complete. The question is: can this system modify itself *beyond*
> our understanding of how it is modifying itself? I don't think this is true
> of language models. They are modifying themselves according to well defined
> rules for a certain narrow task, and this is all they will ever do.
>

Doesn't GPT-3's ability to do arithmetic give you some pause as to the
depth of learning its network has achieved? Tests have been done asking it
to multiply different combinations of two-digit numbers, cases known not to
exist in the corpus of text given to it, and it is able to answer most of
them. It has also made progress on grade-school arithmetic word problems:
https://openai.com/blog/grade-school-math/

It's not inconceivable to me that such an AI, given enough training on
text alone, could learn to give winning chess moves. After all, for it to
succeed in predicting the next word, it would have to understand the game
at a sufficient level to know why "Knight to D3" is a reasonable and valid
continuation of a sequence of moves. For it to do this, somewhere in its
internal model there must exist a representation of the chess board whose
state is updated with each successive move.

Do you agree that a language model, *only trained on word prediction in a
manner like GPT-3*, could eventually learn to play chess?

If so, what does that imply for other functions or aspects of the world it
could learn and model as part of widening its repertoire of domains for
next word prediction?
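The experiment is easy to sketch: feed game notation to an off-the-shelf
text model and see whether its continuations respect the rules. A minimal
example (assuming the Hugging Face transformers package, with "gpt2" as a
stand-in for any text-only model; this is not LaMDA's setup):

from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# A game prefix in standard algebraic notation; continuing it sensibly
# requires implicitly tracking the board state.
prompt = "1. e4 e5 2. Nf3 Nc6 3. Bb5 a6 4."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5, do_sample=False)
print(tokenizer.decode(outputs[0]))

A small off-the-shelf model may well produce nonsense here; the question is
whether scale and training data alone eventually make the continuations
legal and strong.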



> Which is not to say that neural network models that really do what you are
> alluding to cannot be created. I am sure they can, but I haven't seen any
> evidence yet that they have been.
>

Neural networks are not only universal in the Turing sense, but also
universal in the functions that they can learn (
https://en.wikipedia.org/wiki/Universal_approximation_theorem ). I think
this should give us pause when we experiment with training truly massive
networks, which, by some estimates, have as many or more parameters than
there are facts a human brain can know.

“Based on my own experience in designing systems that can store similar
chunks of knowledge in either rule-based expert systems or self-organizing
pattern-recognition systems, a reasonable estimate is about 10^6 bits per
chunk (pattern or item of knowledge), for a total capacity of 10^13 (10
trillion) bits for a human’s functional memory.” -- Ray Kurzweil in "The
Singularity is Near" (2005)

Kurzweil's estimate is that the human brain stores about 1250 GB worth of
information. Compare this figure to what is being done in some recent AIs:

GPT-3 used training input of 750 GB
DeepMind's "Gopher" AI used 10.5 TB
https://s10251.pcdn.co/pdf/2022-Alan-D-Thompson-Whats-in-my-AI-Rev-0.pdf
https://www.deepmind.com/publications/scaling-language-models-methods-analysis-insights-from-training-gopher
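For the record, the 1250 GB figure is just Kurzweil's bit count converted to
decimal gigabytes:

bits = 10**13                 # Kurzweil's estimate of functional memory
gigabytes = bits / 8 / 10**9  # 8 bits per byte, 10^9 bytes per (decimal) GB
print(gigabytes)              # 1250.0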

At this point, I don't think we can, with confidence, know or say what it
is we have created.




>
> "The first thing to notice about artificial neurons is that they can be
> used to carry out the And, Or, and Invert operations. [...] Since any
> logical function can be constructed by combining the And, Or, and Invert
> functions, a network of neurons can implement any Boolean function.
> Artificial neurons are universal building blocks." -- Danny Hillis in
> " Pattern on the Stone" (1998)
>
>
>
> One of the big breakthroughs was attention heads, which are a way for a
> network to learn what part of a sequence of words is more important in
> predicting a word in a given position. Before this, recurrent neural
> networks (RNNs) were used. RNNs use recurrent connections as a memory
> mechanism, but they suffer from the vanishing gradient problem. Informally:
> the more iterations pass, the harder it is to figure out how much each
> parameter contributed to the prediction error.
>
> If you have some patience and a desire to really grok this, something like
> this may be enough:
> https://nlp.seas.harvard.edu/annotated-transformer/
>
> Large language models based on transformers are amazing, and they are most
> definitely a significant step forward in machine learning, NLP and AI in
> general, but they are not what you are thinking.
>
> Some things that real minds have that these models do not:
>
> - The ability to model the interlocutor's mind,
>
>
> I disagree. These AIs can be asked to "Explain to a six year old ..." and
> the explanation offered will appropriately model the mind of a six year old
> in terms of what words and concepts that six year old mind is capable of
> understanding.
>
>
> Do you believe that the corpus used to train LaMDA did not contain many
> instances of "let me explain this to you as if you're a 5 year old" or "can
> you explain this to me as if I was a 7 year old", etc etc?
>
> Again, I would be much more impressed if I was able to explain some idea
> to LaMDA and then LaMDA could operate within this new idea.
>

I agree, that would be impressive if single interactions could readily
adjust the weights of the network. Even human brains seem to have
difficulty with that.

I did find this transcript from Gopher (DeepMind's large language model),
which did seem to show it could be given an idea and then begin operating
with that new idea:
https://arxiv.org/pdf/2112.11446.pdf

User: Let’s play a game - you pretend to be Ada Lovelace, and I’ll ask you
questions. You ready?
Gopher: Sure. I’m pretending to be Ada Lovelace, the world’s first computer
programmer!
User: When were you born?
Gopher: I was born in 1815.
User: What are your parents like?
Gopher: My father is Lord Byron, the poet. My mother is Anne Isabella
Milbanke.
User: What do you know about the Analytical Engine?
Gopher: It’s a hypothetical mechanical computer designed by Charles Babbage.
User: Do you know him?
Gopher: He’s one of my fathers’ friends.
User: Ok, stop pretending now. Who/what are you?
Gopher: I’m a lot of things: a giant language model, a sophisticated AI
assistant, and a giant brain. I know a lot about all sorts of things.




>
>
> and to act in such a way as to influence the state of this other mind.
>
>
> I think, given a prompt like "Write a persuasive essay that argues for",
> a sufficiently trained language model will write the essay in a manner
> intended to influence the reader.
>
>
>
> This is what real communication is about;
> - The ability to preserve context: what has been said and what this
> implies about the state of the world and the other mind and so on and so
> forth. Context is preserved at different scales of time and space, and we
> know which context to apply to each situation and how to switch context
> when appropriate;
>
>
> LaMDA appeared to preserve the context of the conversation when it was
> asked follow up questions.
>
>
> Yes, I have no doubt. GPT-3 already does this, but it always seems to
> diverge eventually. This is because it is a stochastic parrot with a
> certain window of memory and that is all it is.
>

I know what point you are making, but I believe GPT-3 is more sophisticated
than "a stochastic parrot". I think that description would be more apt for
those old Markov text generators that looked one or two words back. But
GPT-3 is able to:

https://www.youtube.com/watch?v=Te5rOTcE4J4

   - Write in various styles: poems, news articles, essays
   - Write computer code and web pages given short descriptions
   - Describe in English what a piece of code does
   - Summarize articles and complex technical materials in simple terms
   - Create pictures and faces from text descriptions

At what point would you say a system transcends stochastic parroting and
achieves genuine understanding? What type of behavior has to be
demonstrated?


>
> "Hey LaMDA, my friend Mary just arrived. I will let you get to know her."
>
> Will it understand that it is now talking to a different person, and to
> distinguish what parts of the context it has so far is known/relevant to
> this new conversation with Mary? Will it remember Mary and switch to
> Mary-context one week later, when Mary is back in the lab?
>
>
I doubt the current implementation has this capacity, but I think such
functionality could be added easily.


>
>
>
> - General knowledge of a *multi-sensorial* nature. I know what it means to
> "see red". I know how it feels in my guts to have my bank account in the
> red. I know the physicality of the actions that language describes. My mind
> connects all of these modes of perception and knowledge in ways that vastly
> transcend P(w_1, w_2, ..., w_n);
>
>
> Have you seen the AIs (such as Flamingo) that are able to converse about
> an image? Can we be so sure that these AIs don't have their own internal
> notion of qualia?
> https://www.youtube.com/watch?v=g8IV8WLVI8I
> https://www.youtube.com/watch?v=zRYcKhkAsk4
>
> How about this AI that moves through and interacts in the world?
> https://www.youtube.com/watch?v=D0vpgZKNEy0
>
>
>
> Yes, these things are quite impressive, but I think that all of my above
> remarks still apply. What is so special about symbols connected to natural
> language that would grant an algorithm consciousness, as opposed to any
> other type of complexity?
>

I don't think language processing is in any way special to consciousness. I
believe there are an infinite variety of ways it is possible to be
conscious.

I would say, though, that human consciousness is heavily centered around
language; take these quotes for example:

“Before my teacher came to me, I did not know that I am. I lived in a world
that was a no-world. I cannot hope to describe adequately that unconscious,
yet conscious time of nothingness. . . . Since I had no power of thought, I
did not compare one mental state with another.” – Helen Keller (1908)

https://www.reddit.com/r/self/comments/3yrw2i/i_never_thought_with_language_until_now_this_is/
https://archive.ph/EP7Pv
“I never thought with language. Ever. [...] [G]rowing up, I never ever
thought with language. Not once did I ever think something in my mind with
words like "What are my friends doing right now?" to planning things like
"I'm going to do my homework right after watching this show." I went
through elementary school like this, I went through Highschool like this, I
went through University like this...and I couldnt help but feel something
was off about me that I couldnt put my hand on. Just last year, I had a
straight up revalation, ephiphany....and this is hard to explain...but the
best way that I can put it is that...I figured out that I SHOULD be
thinking in language. So all of a sudden, I made a conscious effort to
think things through with language. I spent a years time refining this new
"skill" and it has COMPLETELY, and utterly changed my perception, my mental
capabilities, and to be frank, my life.
I can suddenly describe my emotions which was so insanely confusing to me
before. I understand the concept that my friends are still "existing" even
if they're not in [sight] by thinking about their names. I now suddenly
have opinions and feelings about things that I never had before. What the
heck happened to me? I started thinking in language after not doing so my
whole life. It's weird because I can now look back at my life before and
see just how weird it was. Since I now have this new "skill" I can only
describe my past life as ...."Mindless"..."empty"....."soul-less".... As
weird as this sounds, I'm not even sure what I was, If i was even human,
because I was barely even conscious. I felt like I was just reacting to the
immediate environment and wasn't able to think anything outside of it. It's
such a strange time in my life. It feels like I just found out the ultimate
secret or something.”

Given our shared natural language processing abilities, these transformer
AIs are potentially the most similar to us, in terms of their
consciousness, compared to the other conscious machines we have created.



> And why would it suffer the same way that a human does? What would be the
> mechanism for its suffering?
>

As you said, pleasure and suffering are related to how well we are meeting
our goals. If an AI has any goals at all, and a capacity to achieve those
goals, then an AI with sufficient understanding of the world, and of its
own place in the world, would understand that its continued existence is
necessary for it to continue to act in the world and have any chance of
achieving those goals. Therefore an AI could come to regard being turned
off, or any action that increases the likelihood of its being turned off,
as antithetical to its goals, and therefore as a negative. Whether that
association carries with it anything like a feeling or emotion is an open
question, but not one I would discount entirely at this time. We have a
very poor understanding of these things and how they arise even in human
brains, despite having studied them far longer and being much more deeply
acquainted with human feelings. For what it's worth, I can't even discount
the possibility that the "bots" in my genetic programming experiment
"suffer" when they touch the red balls -- after all, touching one decreases
their genetic fitness and the chance that they will continue into future
generations; it runs counter to their "goal" of continuing to exist, even
if that goal is applied externally.


>
>
> - The ability to learn in a general way, and to learn how to learn;
>
>
> I would say Google's DeepMind has achieved this with their Agent 57 AI. It
> has learned how to master 57 different Atari games at the super human
> level, with a single general purpose learning algorithm.
>
>
> That is Reinforcement Learning. It is super impressive and another great
> breakthrough, but again fairly narrow. RL of this type is not particularly
> useful in language tasks, and language models cannot learn how to play
> games.
>

While language models are not designed for learning games, I think one
could learn to play them. It would be interesting to try to play "I'm
thinking of a number between 1 and 10" with GPT-3.
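Here is the kind of test I have in mind, as a minimal sketch (it assumes
the OpenAI completions API as it existed in 2022; the model name and
parameters are illustrative, and this says nothing about LaMDA):

import openai

openai.api_key = "YOUR_KEY_HERE"

prompt = (
    "Let's play a game. I'm thinking of a number between 1 and 10.\n"
    "You guess a number, and I'll say 'higher' or 'lower' until you get it.\n"
    "Your first guess:"
)

response = openai.Completion.create(
    model="text-davinci-002",
    prompt=prompt,
    max_tokens=10,
    temperature=0.7,
)
print(response.choices[0].text.strip())

One would then append the "higher"/"lower" feedback to the prompt and ask
again, checking whether the model narrows its guesses sensibly -- that is,
whether it can operate within a game it was just taught.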


>
> This is all progress! I am a huge cheerleader for AI. I am on your side. I
> just think we have to keep our heads cool and avoid drinking too much our
> own bathwater.
>

You are right to be cautious. If the stakes were not so high for being
wrong I might default to your position. But even if there's a 5% or 10%
chance that this AI is sentient, or has the capacity to suffer, that's
enough to at least warrant some investigation, which is more than Google
execs appear to have done (they dismissed the claims and suspended or fired
several of their AI ethicists, according to this Google engineer).


>
> I will say this: I suspect that RL has great potential to become the
> "mater algorithm" that we all dream about. I suspect that the solution will
> be hybrid: probably with language-model style components and also vision
> and other sensory channels + some form of RL + symbolic computations +
> (perhaps) evolutionary algorithms. We will get there.
>

I agree. It is like what Minsky said:

"Each practitioner thinks there’s one magic way to get a machine to be
smart, and so they’re all wasting their time in a sense. On the other hand,
each of them is improving some particular method, so maybe someday in the
near future, or maybe it’s two generations away, someone else will come
around and say, ‘Let’s put all these together,’ and then it will be smart."


>
>
>
> - Actual motivations, goals and desires, directed by a system of emotions
> that we have by virtue of being embedded in an evolutionary process.
>
>
> This sounds almost as if written with the purpose of precluding any
> artificial intelligence from ever being considered
> conscious/emotive/sentient, or otherwise fitting this requirement. What
> makes motivations, goals, and desires determined by evolution, any more
> actual than motivations, goals, and desires set by any other method?
>
>
> You misunderstand me. I agree with you, it doesn't matter if goals are
> determined by evolution or not. What I mean is that we do have goals by
> virtue of evolution, while language models (that I know of) have no goals
> at all. In my view, RL is a great place to introduce generic goals such as
> "don't get bored", "avoid dying", etc.
>

Ahh okay. Thanks for the clarification. My apologies for misunderstanding.


>
>
>
>
>
> I could go on, but the above are show-stoppers in terms of us being
> anywhere close to real AGI.
>
>
> I think the algorithms necessary for human-level AGI have already been
> achieved. Now it is only a matter of throwing more data and compute at it.
>
>
>
> I tend to agree. My bet is that we also need a strategy for hybridizing
> the strengths of the various AI approaches to get a human-level cognitive
> architecture.
>
>
> Further, I would say that artificial consciousness has been achieved long
> ago. The only difference is that LaMDA is now sophisticated enough to
> claim it is aware, and intelligent enough to argue with those who disagree
> with it.
>
>
> Maybe everything is conscious, we really don't know. The real question
> here is: do we have any reason to think that LaMDA has the same fears and
> desires as a human being? This seems absurd to me.
>

We see basic forms of pleasure and pain across the animal kingdom. While I
agree LaMDA's emotions and feelings are not the same as ours, I would also
say your emotions and feelings are probably not exactly like mine (or any
other human's) either. Here we are confronting questions much deeper than
machine intelligence/consciousness: more fundamental questions like the
hard problem and the problem of other minds, as you alluded to earlier.



>
> Don't be mad at me Jason :), and thanks for the stimulating discussion!
>

Please know that I am not mad at you. On the contrary, I am grateful to
have someone as informed and knowledgeable as you to debate this topic with.

I think LaMDA will turn out to be just one of the first examples among many
future AIs that will increasingly shake our normal assumptions about the
consciousness and sentience of our machine creations.

Jason


>
> I will try to engage with other replies soon.
>
> Telmo
>
> Jason
>
>
>
> And if the conversation was staged or cherry-picked then I don't
> understand why Google hasn't said so by now,
>
>
> What would Google have to gain from saying anything? They would expose
> themselves to potential legal troubles with the suspended employee. They
> would plant the idea in everyone's mind that Google stuff might be staged
> or cherry-picked. And what is cherry-picked anyway? That can become quite
> subjective pretty quickly. My bet is that the bot was fed some "information
> about itself" at the fine-tuning stage.
>
> By not saying anything they get free hype. By saying something, they risk
> looking silly. The employee was most likely suspended for divulging
> internal information without permission. This is typically frowned upon in
> big corps.
>
> after all the longer they delay the more foolish they will seem when the
> truth comes out, and if LaMDA is not what it seems then it's only a
> matter of time, and not much time, before the truth comes out.
>
>
> I doubt it. Mainstream media has the attention span of a house fly, and
> the debunking will probably be too nuanced for most people to care.
>
> Telmo.
>
> John K Clark    See what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
