I kind of feel this way too. It should be easy to get neural nets
embedded in VR to achieve the intelligence of, say, magpies or finches.
But the same approaches you might use, top-down ones, may not scale to
human level.
Given a 100x increase in workstation capacity I don't see why we can't
start
comments below...
--- On Sat, 8/23/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> The last post by Eliezer provides handy imagery for this point
> ( http://www.overcomingbias.com/2008/08/mirrors-and-pai.html ). You
> can't have an AI of perfect emptiness, without any goals at all,
> because
Just wanted to add something, to bring it back to feasibility of
embodied/unembodied approaches. Using the definition of embodiment I described,
it needs to be said that it is impossible to specify the goals of the agent,
because in so doing, you'd be passing it information in an unembodied way.
No worries, that's why I heartily advocate doing exactly what you did, but not
sending it. It's a lesson I've learned the hard way more times than I care to
admit.
--- On Sat, 8/23/08, Eric Burton <[EMAIL PROTECTED]> wrote:
> Thanks Terren, I shouldn't have got angry so fast. One thing I worry
Thanks Terren, I shouldn't have got angry so fast. One thing I worry
about constantly when going places or discussing anything is the
quality of discourse.
Eric,
You lower the quality of this list with comments like that. It's the kind of
thing that got people wondering a month ago whether moderation is necessary on
this list. If we're all adults, moderation shouldn't be necessary.
Jim, do us all a favor and don't respond to that, as tempting as
Yeah, that's where the misunderstanding is... "low level input" is too fuzzy a
concept.
I don't know if this is the accepted mainstream definition of embodiment, but
this is how I see it. The thing that distinguishes an embodied agent from an
unembodied one is whether the agent is given pre-st
Stupid fundamentalist troll garbage
On 8/22/08, Jim Bromer <[EMAIL PROTECTED]> wrote:
> I just discovered that I made a very obvious blunder in my theory
> about Logical Satisfiability last November. It was a "what was I
> thinking" kind of error. No sooner did I discover this error a
> couple
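[As background for readers who don't follow the reference: Boolean satisfiability (SAT) asks whether some truth assignment makes a propositional formula in conjunctive normal form true. Jim's actual approach isn't shown in the snippet; the following is only a minimal brute-force illustration of the problem itself, using the DIMACS signed-integer convention for literals.]

```python
from itertools import product

# A CNF formula is a list of clauses; each clause is a list of signed
# integers (3 means x3, -3 means NOT x3) -- the DIMACS convention.
def satisfiable(clauses, n_vars):
    """Brute-force SAT check: try all 2^n assignments (exponential time)."""
    for bits in product([False, True], repeat=n_vars):
        # A literal is true if its variable's bit matches its sign.
        assign = lambda lit: bits[abs(lit) - 1] ^ (lit < 0)
        # The formula holds if every clause has at least one true literal.
        if all(any(assign(lit) for lit in clause) for clause in clauses):
            return True
    return False

# (x1 OR x2) AND (NOT x1 OR x2) AND (x1 OR NOT x2) -- satisfiable (x1=x2=True)
print(satisfiable([[1, 2], [-1, 2], [1, -2]], 2))  # True
# (x1) AND (NOT x1) -- unsatisfiable
print(satisfiable([[1], [-1]], 1))  # False
```

[The exponential loop is exactly why SAT is the canonical NP-complete problem, and why a polynomial-time method would be a major result.]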
>These have profound impacts on AGI design. First, AIXI is (provably) not
>computable, which means there is no easy shortcut to AGI. Second, universal
>intelligence is not computable because it requires testing in an infinite
>number of environments. Since there is no other well accepted test
2008/8/23 Matt Mahoney <[EMAIL PROTECTED]>:
> Valentina Poletti <[EMAIL PROTECTED]> wrote:
>> I was wondering why no-one had brought up the information-theoretic aspect
>> of this yet.
>
> It has been studied. For example, Hutter proved that the optimal strategy of
> a rational goal seeking agent