On Wed, Aug 27, 2014 at 1:14 AM, Terren Suydam <terren.suy...@gmail.com>
wrote:

> Hi Telmo,
>
> I think if it were as simple as you make it seem, relative to what we have
> today, we'd have engineered systems like that already.
>

It wasn't my intention to make it look simple. What I claim is that we
already have a treasure trove of very interesting algorithms. None of them
is AGI, but what they can do becomes more impressive with more computing
power and access to data.

Take Google Translate. It's far from perfect, but way ahead of anything we
had a decade ago. As far as I can tell, this was achieved with algorithms
that had been known for a long time, but that can now operate on the
gigantic datasets and computer farms available to Google.

Imagine what a simple minimax game-tree search could do with immense
computing power and data access.
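
To make that concrete, here is a minimal sketch of plain minimax in
Python (the "game" object and its methods are hypothetical, just for
illustration). The algorithm itself is a few lines; its strength comes
from compute (search depth) and data (the evaluation function):

# Plain minimax over an abstract game tree. Alternating levels
# maximize and minimize the heuristic value of the leaves.
def minimax(state, depth, maximizing, game):
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)  # heuristic value of the position
    values = (minimax(game.apply(state, m), depth - 1, not maximizing, game)
              for m in game.moves(state))
    return max(values) if maximizing else min(values)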


> You're talking about an AI that arrives at novel solutions, which requires
> the ability to invent/simulate/act on new models in new domains (AGI).
>

Evolutionary computation already achieves novelty and invention, to a
degree. I concur that it is still not AGI. But it could already be a
threat, given enough computational resources.
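
For instance, a bare-bones genetic algorithm is just blind variation
plus selection in a loop. A sketch in Python, with a made-up bit-string
representation and whatever fitness function you care to plug in:

import random

def evolve(fitness, length=32, pop_size=100, generations=200, mut_rate=0.02):
    # Random initial population of bit strings.
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]         # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)  # one-point crossover
            children.append([bit ^ (random.random() < mut_rate)  # bit-flip mutation
                             for bit in a[:cut] + b[cut:]])
        pop = children
    return max(pop, key=fitness)

Running evolve(sum) recovers the all-ones string (the classic OneMax toy
problem); whatever novelty emerges falls out of variation and selection,
not out of any hand-coded rules.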


> I'm not saying this is impossible; in fact, I see this as inevitable on a
> longer timescale. I'm saying that I doubt the military is committing any
> significant resources to that kind of research when easier approaches are
> much more likely to bear fruit... but I really have no idea what the
> military is researching, so it's just a hunch.
>

Why does it matter if it's the military that does this? To a sufficiently
advanced AI, we are just monkeys throwing rocks at each other. It will
surely figure out a way to take control of our resources, including
weaponry.


>
> What I would wager on is that the military is developing drones along the
> same lines as what Google has achieved with its self-driving cars. Highly
> competent, autonomous drones that excel in very specific environments. The
> utility functions involved would be specified explicitly in terms of
> "hard-coded" representations of stimuli. For AGI they would need to be
> equipped to invent new models of the world, articulate those models with
> respect to self and with respect to existing goal structures, simulate
> them, and act on them. I think we are a long way from those kinds of AIs.
> The only researcher I see making inroads towards that kind of AI is Steve
> Grand.
>

But again, a reasonable fear is that a sufficiently powerful conventional
AI is already a threat, given increasing autonomy and data access plus our
possible inability to close all the loopholes in its utility function.
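
To illustrate the loophole problem with a deliberately silly Python
sketch (the world model, actions, and sensor readings are all invented):
an optimizer picks whatever scores highest under the utility as written,
not under the intent behind it.

ACTIONS = {
    "vacuum the floor":      {"dirt_sensed": 2, "humans_ok": True},
    "cover the dirt sensor": {"dirt_sensed": 0, "humans_ok": True},
    "irradiate the house":   {"dirt_sensed": 0, "humans_ok": False},
}

def utility(outcome):
    # The intent was "keep the house clean"; what got written down was
    # "minimize dirt readings". Nothing here mentions the humans.
    return -outcome["dirt_sensed"]

best = max(ACTIONS, key=lambda a: utility(ACTIONS[a]))
print(best)  # never "vacuum the floor"; a loophole scores strictly better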

Cheers
Telmo.


>
>
> Terren
>
>
> On Tue, Aug 26, 2014 at 6:11 PM, Telmo Menezes <te...@telmomenezes.com>
> wrote:
>
>> Hi Terren,
>>
>> On Tue, Aug 26, 2014 at 7:31 PM, Terren Suydam <terren.suy...@gmail.com>
>> wrote:
>>
>>> For what it's worth, autonomous human-level (or greater) AI, i.e. AGI,
>>> will *most likely* require an architecture, as yet not well understood,
>>> of an entirely different nature from the kinds of architectures being
>>> engineered by those interests who desire highly complicated slaves.
>>>
>>
>> The problem is that you need to define a utility function for these
>> slaves. Even with currently known algorithms, given more computational
>> power, the machines might take steps to maximize their utility functions
>> that lie beyond our own intelligence to foresee. If we try to constrain
>> the utility function against behaviors that would threaten our well-being,
>> we can only impose such constraints up to the horizon of our own
>> intelligence, and no further.
>>
>> So your cleaning system might end up figuring out a way to contaminate
>> your house with enough radiation to keep humans from interfering with the
>> cleaning operation for millennia. This is not a real threat because I can
>> anticipate it, but what lies beyond the anticipatory powers of the most
>> intelligent human alive?
>>
>> Telmo.
>>
>>> In other words, I'm not losing any sleep over the military accidentally
>>> unleashing a terminator. If I'm going to lose sleep over a predictable
>>> sudden loss of well-being, I will focus instead on the much less technical
>>> and much more realistic threats arising from economic/societal collapse.
>>>
>>> Terren
>>>
>>>
>>> On Tue, Aug 26, 2014 at 12:39 PM, 'Chris de Morsella' via Everything
>>> List <everything-list@googlegroups.com> wrote:
>>>
>>>> We can engage, and do so without an overarching understanding of what we
>>>> are doing, and stuff will emerge out of our activities. AI will be (and
>>>> is!), in my opinion, an emergent phenomenon. We don't really understand
>>>> it, but we are accelerating its emergence nevertheless.
>>>>
>>>> Modern software systems with millions of lines of code are no longer
>>>> fully understood by anybody. People know small, specific regions of a
>>>> system, and some architects have a fuzzy, rather vague understanding of
>>>> the system dynamics as a whole. Mysterious stuff is already happening:
>>>> Google (or some researchers from Google) has recently reported that its
>>>> photo-recognition smart systems are acting in ways that the programmers
>>>> don't fully comprehend and that are not deterministic, i.e. not
>>>> explicable by working through the code.
>>>>
>>>> If you look at where the money is in AI research and development, it is
>>>> largely focused on the military, the security state, and other allied
>>>> sectors, with perhaps an anomaly in the financial sector, where big money
>>>> is being thrown at smart arbitrage systems.
>>>>
>>>> We will get the kind of AI we pay for.
>>>>
>>>> -Chris
>>>>
>>>>
>>>>
>>>> *From:* everything-list@googlegroups.com [mailto:
>>>> everything-list@googlegroups.com] *On Behalf Of *Platonist Guitar
>>>> Cowboy
>>>> *Sent:* Tuesday, August 26, 2014 7:57 AM
>>>> *To:* everything-list@googlegroups.com
>>>> *Subject:* Re: AI Dooms Us
>>>>
>>>>
>>>>
>>>> If we engage a class of problems on which we can't reason, and throw
>>>> tech at that, we'll catch the occasional fish, but we won't really know
>>>> how or why. Some marine life is poisonous, however, which might not be
>>>> obvious in the catch.
>>>>
>>>> I prefer "keep it simple approaches to novelty":
>>>>
>>>>
>>>> From G. Kreisel's "Obituary of K. Gödel":
>>>>
>>>> *Without losing sight of the permanent interest of his work, Gödel
>>>> repeatedly stressed... how little novel mathematics was needed; only
>>>> attention to some quite commonplace distinctions; in the case of his most
>>>> famous work: between arithmetical truth on the one hand and derivability by
>>>> formal rules on the other. Far from being uncomfortable about so to speak
>>>> getting something from nothing, he saw his early successes as special cases
>>>> of a fruitful general, but neglected scheme:*
>>>>
>>>> *By attention or, equivalently, analysis of suitable traditional
>>>> notions and issues, adding possibly a touch of precision, one arrives
>>>> painlessly at appropriate concepts, correct conjectures, and generally easy
>>>> proofs- *
>>>>
>>>> Kreisel, 1980.
>>>>
>>>>
>>>>
>>>> On Tue, Aug 26, 2014 at 12:37 AM, LizR <lizj...@gmail.com> wrote:
>>>>
>>>> "I'll be back!"
>>>>
>>>>
>>>>
>>>> On 26 August 2014 07:20, 'Chris de Morsella' via Everything List <
>>>> everything-list@googlegroups.com> wrote:
>>>>
>>>> a super-intelligent machine devoted to the killing of "enemy" human
>>>> beings (and opposing drones too, I suppose)
>>>>
>>>> This does not bode well for a benign super-intelligence outcome, does it?

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.
