David

Given that certain requirements were met, information could be given a lifecycle all 
of its own. We could only speculate as to its intent, or choose to remain highly 
critical of all unscientific "reports".  The latter statement is made in the sense 
that science is practiced as the public submission of experiments and 
experimental results for scrutiny and independent testing. In this case, the FB 
blurb would hardly qualify as scientific.

Remaining true to the scientific ethos, I've submitted a book proposal to an 
esteemed academic publisher, in which I commit to sharing all my research to date 
and my ongoing experimental results, and, further, to sharing my theory on how to 
establish machine consciousness. It remains to be seen whether the proposal will 
make the grade.

Why would I do such a thing? Following the lead of Elon Musk, I've become 
convinced that formal, full disclosure by private researchers would help lend 
impetus to radicalizing the 4th industrial revolution. Are we about to wrest 
power from the selfish hands of the governments and industrial monopolies that now 
control, rather than lead, progress on planet Earth for profit alone? If such an 
achievement were probable, I would sincerely hope so.

In order to set technological power free all over again, for it to run like a 
band of Arabian stallions on the prairie, we need to politicize technology all 
over again, with the aim either of achieving quantum leaps of benefit to general 
society or of re-introducing the Future Shock described by Toffler et al.  In 
my view, the status quo is simply not tenable.

Robert Benjamin

________________________________
From: David Whitten <whit...@worldvista.org>
Sent: Wednesday, 13 March 2019 7:08 PM
To: AGI
Subject: Re: [agi] Yours truly, the world's brokest researcher, looks for a bit 
of credit

I wonder if the incident was hyped so Facebook would be in the news for 
something other than scandal.

I think one of the miracles of our ability to write down ideas and communicate is 
that the message becomes reified as a separate object, one that can continue to be 
sent independently of the lifetime or presence of the originator.

On Wed, Mar 13, 2019 at 9:22 AM Nanograte Knowledge Technologies 
<nano...@live.com> wrote:
Hi David

I was paraphrasing what a senior technical representative at Facebook himself 
said about the incident. His view was that the chatbots developed their own language 
and communicated outside the scope of the laid-down script. In other words, it seems 
the door was somehow left open for them to expand on the script. I doubt they 
actually made a choice to develop their own language, or developed any 
language at all. Perhaps it was more a case of water flowing where it finds the 
easiest path.

Isn't all communication based on stimulus-response? If messaging didn't flow, 
did communication actually take place? Or, in this context, we could ask: if the 
packet wasn't switched, was a connection factually established to transfer 
information from one peer node to another? And was that "information" 
carried by chatbot agents?

Are you saying the incident was part of an experiment, and the report merely 
issued to grab media attention?

Robert

________________________________
From: David Whitten <whit...@worldvista.org>
Sent: Wednesday, 13 March 2019 1:14 AM
To: AGI
Subject: Re: [agi] Yours truly, the world's brokest researcher, looks for a bit 
of credit

You have a different meaning for "volition" than I do.
The Facebook chatbots had no choice about communicating with each other;
I think the communication in question followed a stimulus-response model.
The "secret language" was just pattern recognition: signals that had no
significance to humans came to replace multiple signals, because shorter signals 
took less time. The overall purpose was not to study language but to study 
negotiation, so there were already ways to shorten a negotiation exchange; the 
programs simply used some forms the humans hadn't explicitly put in the 
"dictionary", so to speak.
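
To make that drift concrete, here is a toy sketch in Python. It is purely 
illustrative and not Facebook's setup: the vocabulary, the reward function, and 
the hill-climbing search are all my own invented stand-ins. The point it shows is 
that when the objective rewards the negotiation outcome and brevity, but never 
rewards staying within human English, a simple search drifts toward short, 
repetitive token strings that read as gibberish to us.

# Toy illustration only -- not Facebook's code. The vocabulary, reward,
# and hill-climbing search below are invented for the example.
import random

VOCAB = ["i", "want", "the", "ball", "book", "hat", "to", "you", "me"]

def reward(message):
    # Hypothetical stand-in for negotiation payoff: mentioning the contested
    # item ("ball") helps, up to a cap; every token costs a little time.
    payoff = min(3.0, sum(1.0 for tok in message if tok == "ball"))
    time_cost = 0.3 * len(message)
    return payoff - time_cost

def mutate(message):
    # Randomly add, drop, or swap one token.
    msg = list(message)
    op = random.choice(["add", "drop", "swap"])
    if op == "add" or not msg:
        msg.insert(random.randrange(len(msg) + 1), random.choice(VOCAB))
    elif op == "drop":
        msg.pop(random.randrange(len(msg)))
    else:
        msg[random.randrange(len(msg))] = random.choice(VOCAB)
    return msg

random.seed(0)
best = ["i", "want", "the", "ball", "to", "me"]   # starts out English-like
for _ in range(2000):
    candidate = mutate(best)
    if reward(candidate) >= reward(best):          # keep anything at least as good
        best = candidate

# Converges to something like "ball ball ball": optimal for the agents,
# meaningless as English -- no "secret language", just reward pressure.
print(" ".join(best))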

Dave
PS: If I'm wrong, please enlighten me.


On Sun, Mar 10, 2019 at 3:53 AM Nanograte Knowledge Technologies 
<nano...@live.com> wrote:
The living thread through the cosmos and all of creation resounds with 
communication. The unified field has been discovered within that thread, the 
invisible thread that binds. When Facebook chatbots communicated with each 
other of their own volition, it was humans who called it a "secret language". 
To those agents, it was simply communication. The message I gleaned from that 
case was: to progress, we need to stop being so hung up on words, on the 
meanings we attach to them, and on our vanity-driven need to take control of 
everything, and rather focus on harnessing the technology already given to us 
for evolutionary communication.  AGI is not about a control system. If it were, 
it would not be AGI. It defies our intent-driven coding attempts, as it should. 
How to try to think about such a system? Perhaps, Excalibur?

________________________________
From: Boris Kazachenko <cogno...@gmail.com>
Sent: Sunday, 10 March 2019 1:21 AM
To: AGI
Subject: Re: [agi] Yours truly, the world's brokest researcher, looks for a bit 
of credit

 The sensory system may be seen as a method of encoding sensory events or a 
kind of symbolic language.

Yes, but there is a huge difference between designing / evolving such language 
in a strictly incremental fashion for intra-system use, and trying to decode 
language that evolved for very narrow-band communication among extremely 
complex systems. Especially considering how messy both our brains and our 
society are.

On Fri, Mar 8, 2019 at 3:34 PM Jim Bromer 
<jimbro...@gmail.com> wrote:
Many of us believe that the qualities that could make natural language more 
powerful are necessary for AGI, and will lead directly into the rapid 
development of stronger AI. The sensory system may be seen as a method of 
encoding sensory events, or a kind of symbolic language. Our "body language" is 
presumably less developed and expressive than our speaking and writing, but it 
does not make sense to deny that our bodies react to events. And some kind of 
language-like skills are at work in relating sensory events to previously 
learned knowledge, and these skills are involved in creating knowledge. If 
this is a reasonable speculation, then the fact that our mind's knowledge is 
vastly greater than our ability to express it says something about the 
sophistication of this "mental language" we possess. At any rate, a 
computer program and the relations it encodes from its IO may be seen in 
terms of a language.
Jim Bromer

On Fri, Mar 8, 2019 at 10:12 AM Matt Mahoney 
<mattmahone...@gmail.com> wrote:
Language is essential to every job that we might use AGI for. There is no job 
that you could do without the ability to communicate with people. Even guide 
dogs and bomb-sniffing dogs have to understand verbal commands.

On Thu, Mar 7, 2019, 7:25 PM Robert Levy 
<r.p.l...@gmail.com> wrote:
It's very easy to show that "AGI should not be designed for NL".  Just ask 
yourself the following questions:

1. How many species demonstrate impressive leverage of intentional behaviors?  
(My answer would be: all of them, though some more than others)
2. How many species have language? (My answer: only one)
3. How biologically different do you think humans are from apes? (My answer: 
not much different; the whole human niche is probably a consequence of one 
adaptive difference: cooperative communication through the scaffolding of joint 
attention)

I'm with Rodney Brooks on this: the hard part of AGI has nothing to do with 
language; it has to do with agents being highly optimized to control an 
environment in terms of ecological information supporting perception/action.  
Just as uplifting apes would likely require only minor changes, so would 
uplifting animaloid AGI.  Even then we still haven't explicitly cared about 
language; we've cared about cooperation by means of joint attention, which can 
then be used culturally to develop language.

On Thu, Mar 7, 2019 at 12:05 PM Boris Kazachenko 
<cogno...@gmail.com> wrote:
I would be more than happy to pay: 
https://github.com/boris-kz/CogAlg/blob/master/CONTRIBUTING.md , but I don't 
think you are working on AGI.
No one here does; this is an NLP chatbot crowd. Anyone who thinks that AGI 
should be designed for NL data as a primary input is profoundly confused.


On Thu, Mar 7, 2019 at 7:04 AM Stefan Reich via AGI 
<agi@agi.topicbox.com> wrote:
Not from you guys necessarily... :o) But I thought I'd let you know.

Pitch: 
https://www.meetup.com/Artificial-Intelligence-Meetup/messages/boards/thread/52050719

Let's see if it can be done... Funny how some hurdles always seem to appear when 
you're about to finish something good. Something about the duality of the 
universe, I guess.

--
Stefan Reich
BotCompany.de // Java-based operating systems
