Re: [agi] Vector processing and AGI

2008-12-12 Thread Steve Richfield
Andi and Ben,

On 12/12/08, wann...@ababian.com  wrote:
>
> I don't remember what references there were earlier in this thread, but I
> just saw a link on reddit to some guys in Israel using a GPU to greatly
> accelerate a Bayesian net.  That's certainly an AI application:
>
> http://www.cs.technion.ac.il/~marks/docs/SumProductPaper.pdf
>
> http://www.reddit.com/r/programming/comments/7j1gr/accelerating_bayesian_network_200x_using_a_gpu/


My son was trying to get me interested in doing this ~3 years ago, but I
blew him off because I couldn't see a workable business model around it. It
is 100% dependent on pasting together a bunch of hardware that is designed
to do something ELSE, and even a tiny product change would throw software
compatibility and other things out the window.

Also, the architecture I am proposing promises ~3 orders of magnitude more
speed, along with a really fast global memory that completely obviates
the complex caching they are proposing.

Steve Richfield





Re: [agi] Vector processing and AGI

2008-12-12 Thread Steve Richfield
Ben,

On 12/12/08, Ben Goertzel  wrote:
>
> >> > There isn't much that an MIMD machine can do better than a similar-sized
> >> > SIMD machine.
> >>
> >> Hey, that's just not true.
> >>
> >> There are loads of math theorems disproving this assertion...
> >
> >
> > Oops, I left out the presumed adjective "real-world". Of course there are
> > countless Diophantine equations and other math trivia that aren't
> > vectorizable.
> >
> > However, anything resembling a brain, in that the processing can be done by
> > billions of slow components, must by its very nature be vectorizable. Hence, in
> > the domain of our discussions, I think my statement still holds.
>
> I'm not so sure, but for me to explore this area would require a lot
> of time and I don't
> feel like allocating it right now...


No need, so long as
1.  You see some possible future path to vectorizability, and
2.  My vector processor chips, or similar ones, aren't a reality yet.

> I'm also not so sure our current models of brain mechanisms or
> dynamics are anywhere near
> accurate, but that's another issue...


I finally cracked the "theory of everything in cognition" puzzle discussed
here ~4 months ago, which comes with an understanding of the super-fast
learning observed in biological systems, e.g. visual systems that tune
themselves up in the first few seconds after an animal's eyes open for the
first time. I am now trying to translate it from "Steveze" into readable
English, which hopefully should be done in a week or so. Also, insofar as
possible, I am translating all formulas into grammatically correct English
statements for the mathematically challenged readers. Unless I missed
something really BIG, it will change everything from AGI to NN to ???. Most
especially, AGI is largely predicated on the INability to perform such fast
learning, which is where experts enter the picture. With this theory,
modifying present AGI approaches to learn fast shouldn't be all that
difficult.

After any off-line volunteers have first had their crack, I'll post it here
for everyone to beat it up.

Do I hear any volunteers out there in Cyberspace who want to help "hold my
feet to the fire" off-line regarding those pesky little details that so
often derail grand theories?

> >> Indeed, AGI and physics simulation may be two of the app areas that have
> >> the easiest times making use of these 80-core chips...
> >
> >
> > I don't think Intel is even looking at these. They are targeting embedded
> > applications.
>
> Well, my bet is that a main app of multicore chips is ultimately gonna
> be gaming ...
> and gaming will certainly make use of fancy physics simulation ...


Present gaming video chips have special processors that are designed to
perform the 3D-to-2D transformations needed for gaming and for maintaining
3D models. It is hard (though not impossible) to compete with custom hardware
that has been refined for a particular application.
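For reference, the core of that 3D-to-2D transform is itself just bulk
arithmetic over batches of points. Here is a minimal pinhole-projection
sketch in NumPy (the points and focal length are made-up values, purely
illustrative):

import numpy as np

points = np.array([[ 0.0, 0.0, 5.0],
                   [ 1.0, 2.0, 4.0],
                   [-1.0, 0.5, 3.0]])   # 3D points in camera space
focal = 2.0                             # hypothetical focal length

# x' = f*x/z, y' = f*y/z, computed for every point at once.
projected = focal * points[:, :2] / points[:, 2:3]

The custom hardware's edge is everything wrapped around arithmetic like this
(rasterization, texturing, memory paths), not the arithmetic itself.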

Also, it would seem to be a terrible waste of tens of teraflops just to
operate a video game.

> and I'm betting it will
> also make use of early-stage AGI...


There is already some of that creeping into some games, including actors who
perform complex jobs in changing virtual environments.

Steve Richfield





Re: [agi] Vector processing and AGI

2008-12-12 Thread wannabe
I don't remember what references there were earlier in this thread, but I
just saw a link on reddit to some guys in Israel using a GPU to greatly
accelerate a Bayesian net.  That's certainly an AI application:

http://www.cs.technion.ac.il/~marks/docs/SumProductPaper.pdf
http://www.reddit.com/r/programming/comments/7j1gr/accelerating_bayesian_network_200x_using_a_gpu/
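For a rough sense of why sum-product vectorizes so well on a GPU: the inner
step is just a factor (a conditional probability table) being marginalized
against an incoming message, which is a dense matrix-vector product. A
minimal NumPy sketch of that one step (names and sizes are illustrative
only, not code from the paper):

import numpy as np

rng = np.random.default_rng(0)

n_parent, n_child = 8, 4                        # illustrative state counts
cpt = rng.random((n_parent, n_child))           # P(child | parent), unnormalized
cpt /= cpt.sum(axis=1, keepdims=True)

msg_from_parent = rng.random(n_parent)          # incoming message over parent states
msg_from_parent /= msg_from_parent.sum()

# Sum-product message to the child: sum_p P(c | p) * msg(p).
# On a GPU, many such factors can be batched into one large product.
msg_to_child = msg_from_parent @ cpt            # shape (n_child,)
msg_to_child /= msg_to_child.sum()

Run over thousands of factors at once, that product is exactly the kind of
regular, data-parallel arithmetic a GPU is built for.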

andi


Ben:
> Well, my bet is that a main app of multicore chips is ultimately gonna
> be gaming ...
> and gaming will certainly make use of fancy physics simulation ... and
> I'm betting it will
> also make use of early-stage AGI...






Re: [agi] Vector processing and AGI

2008-12-12 Thread Ben Goertzel
Hi,

>> > There isn't much that an MIMD machine can do better than a similar-sized
>> > SIMD machine.
>>
>> Hey, that's just not true.
>>
>> There are loads of math theorems disproving this assertion...
>
>
> Oops, I left out the presumed adjective "real-world". Of course there are
> countless Diophantine equations and other math trivia that aren't
> vectorizable.
>
> However, anything resembling a brain, in that the processing can be done by
> billions of slow components, must by its very nature be vectorizable. Hence, in
> the domain of our discussions, I think my statement still holds.

I'm not so sure, but for me to explore this area would require a lot
of time and I don't
feel like allocating it right now...

I'm also not so sure our current models of brain mechanisms or
dynamics are anywhere near
accurate, but that's another issue...

>> Indeed, AGI and physics simulation may be two of the app areas that have
>> the easiest times making use of these 80-core chips...
>
>
> I don't think Intel is even looking at these. They are targeting embedded
> applications.

Well, my bet is that a main app of multicore chips is ultimately gonna
be gaming ...
and gaming will certainly make use of fancy physics simulation ... and
I'm betting it will
also make use of early-stage AGI...

ben g




Re: [agi] Vector processing and AGI

2008-12-11 Thread Steve Richfield
Ben,

On 12/11/08, Ben Goertzel  wrote:
>
> > There isn't much that an MIMD machine can do better than a similar-sized
> > SIMD machine.
>
> Hey, that's just not true.
>
> There are loads of math theorems disproving this assertion...


Oops, I left out the presumed adjective "real-world". Of course there are
countless Diophantine equations and other math trivia that aren't
vectorizable.

However, anything resembling a brain, in that the processing can be done by
billions of slow components, must by its very nature be vectorizable. Hence, in
the domain of our discussions, I think my statement still holds.
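To make that concrete with a toy example (an illustration only, not my
proposed chip): one synchronous update of a layer of simple neuron-like
units is a single matrix-vector product plus an elementwise nonlinearity,
with no per-unit control flow, so the identical instruction stream applies
to every unit.

import numpy as np

rng = np.random.default_rng(0)
n_units = 1_000                  # stand-in for "billions" of slow components
weights = rng.standard_normal((n_units, n_units)) * 0.01
state = rng.standard_normal(n_units)

# Update every unit at once: ideal SIMD work, since each unit performs the
# same operation on its own slice of the data.
state = np.tanh(weights @ state)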

>>
> >> OO and generic design patterns do buy you *something* ...
> >
> >
> > OO is often impossible to vectorize.
>
> The point is that we've used OO design to wrap up all
> processor-intensive code inside specific objects, which could then be
> rewritten to be vector-processing friendly...


As long as the OO is at a high enough level so as not to gobble up a bunch
of time in the SISD control processor, then no problem.

> > There is an 80-core chip due out any time now. Intel has had BIG problems
> > finding anything to run on them, so I suspect that they would be more than
> > glad to give you a few if you promise to do something with them.
>
> Indeed, AGI and physics simulation may be two of the app areas that have
> the easiest times making use of these 80-core chips...


I don't think Intel is even looking at these. They are targeting embedded
applications.

Steve Richfield





Re: [agi] Vector processing and AGI

2008-12-11 Thread Ben Goertzel
Hi,

> There isn't much that an MIMD machine can do better than a similar-sized
> SIMD machine.

Hey, that's just not true.

There are loads of math theorems disproving this assertion...

>>
>> OO and generic design patterns do buy you *something* ...
>
>
> OO is often impossible to vectorize.

The point is that we've used OO design to wrap up all
processor-intensive code inside specific objects, which could then be
rewritten to be vector-processing friendly...
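As a rough sketch of the pattern (hypothetical class names, nothing from the
actual OCP/NCE codebase): the object's interface stays fixed while the
processor-intensive kernel behind it gets swapped for a vector-friendly one.

import numpy as np

class SimilarityKernel:
    """Stable OO interface; only the internals change per architecture."""
    def pairwise(self, vectors):
        raise NotImplementedError

class ScalarKernel(SimilarityKernel):
    def pairwise(self, vectors):
        # Naive per-element loops: fine on a SISD processor, hostile to
        # vector hardware.
        n = len(vectors)
        return np.array([[sum(a * b for a, b in zip(vectors[i], vectors[j]))
                          for j in range(n)] for i in range(n)])

class VectorKernel(SimilarityKernel):
    def pairwise(self, vectors):
        # Same result as one dense matrix product -- the SIMD-friendly
        # rewrite, behind the same interface.
        v = np.asarray(vectors)
        return v @ v.T

vecs = np.random.default_rng(0).random((4, 3))
assert np.allclose(ScalarKernel().pairwise(vecs), VectorKernel().pairwise(vecs))

Callers never change; only the kernel object they are handed does.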

> There is an 80-core chip due out any time now. Intel has had BIG problems
> finding anything to run on them, so I suspect that they would be more than
> glad to give you a few if you promise to do something with them.

Indeed, AGI and physics simulation may be two of the app areas that have
the easiest times making use of these 80-core chips...

> I listened to an inter-processor communications plan for the 80 core chip
> last summer, and it sounded SLOW - like there was no reasonable plan for
> global memory.

I haven't put in the time to assess this for myself

> I suspect that your plan in effect requires FAST global
> memory (to avoid crushing communications bottlenecks),

True

>and this is NOT
> entirely simple on MIMD architectures.

True also

> My SIMD architecture will deliver equivalent global memory speeds of ~100x
> the clock speed, which still makes it a high-overhead operation on a machine
> that peaks out at ~20K operations per clock cycle.

Well, we're writing our code to run on the hardware we have now, while
making the design as flexible & modular as possible so as to minimize
the pain and suffering that will be incurred if/when radically
different hardware becomes the smartest option to use...

ben g




Re: [agi] Vector processing and AGI

2008-12-11 Thread Steve Richfield
Ben,

Before I comment on your reply, note that my former posting was about my
PERCEPTION rather than the REALITY of your understanding, with the
difference being taken up in the answer being less than 1.00 bit of
information.

Anyway, that said, on with a VERY interesting (to me) subject.

On 12/11/08, Ben Goertzel  wrote:
>
> Well, the conceptual and mathematical algorithms of NCE and OCP
> (my AI systems under development) would go more naturally on MIMD
> parallel systems than on SIMD (e.g. vector) or SISD systems.


There isn't much that an MIMD machine can do better than a similar-sized
SIMD machine. The usual problem is in finding a way to make such a large
SIMD machine. Anyway, my proposed architecture (now under consideration at
AMD) also provides for limited MIMD operation, where the processors could be
at different places in a single complex routine.

Anyway, I was looking at a 10,000:1 speedup over SISD, and then giving up
~10:1 to go from probabilistic logic equations to matrices that do the same
things, which is how I came up with the 1000:1 from the prior posting.
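To put that trade in toy form (an illustration only, not the actual
architecture): a noisy-OR style rule evaluated case by case in scalar code
versus the same rule evaluated for every case at once as array arithmetic.

import numpy as np

rng = np.random.default_rng(0)
n_cases, n_conditions = 1_000, 16
weights = rng.random(n_conditions)              # per-condition link strengths
evidence = rng.random((n_cases, n_conditions))  # P(condition) for each case

# Scalar-style probabilistic logic: one case at a time.
def noisy_or_single(p_row, w):
    prob = 1.0
    for p, wi in zip(p_row, w):
        prob *= (1.0 - wi * p)
    return 1.0 - prob

# Matrix-style equivalent: every case in one vectorized expression.
batch = 1.0 - np.prod(1.0 - evidence * weights, axis=1)

assert np.isclose(noisy_or_single(evidence[0], weights), batch[0])

The matrix form wastes some work, which is where the ~10:1 give-back comes
from, but it is the form a wide SIMD machine can execute at full rate.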

> I played around a bunch with MIMD parallel code on the Connection Machine
> at ANU, back in the 90s


The challenge is in geometry - figuring out how to get the many processors
to communicate and coordinate with each other without spending 99% of their
cycles in coordination and communication.

> However, indeed the specific software code we've written for NCE and OCP
> is intended for contemporary {distributed networks of multiprocessor
> machines}
> rather than vector machines or Connection Machines or whatever...
>
> If vector processing were to become a superior practical option for AGI,
> what would happen to the code in OCP or NCE?
>
> That would depend heavily on the vector architecture, of course.
>
> But one viable possibility is: the AtomTable, ProcedureRepository and
> other knowledge stores remain the same ... and the math tools like the
> PLN rules/formulas and Reduct rules remain the same ... but the MindAgents
> that use the former to carry out cognitive processes get totally
> rewritten...


I presume that everything is table driven, so the code could be completely
vectorized to execute the table on any sort of architecture, including SIMD.

However, if you are actually executing CODE, e.g. as compiled from a reality
representation, then things would be difficult for an SIMD architecture,
though again, you could also interpret tables containing the same
information at the usual 10:1 slowdown, which is what I was expecting
anyway.
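A minimal sketch of what interpreting such tables looks like (illustrative
only): each table entry names a whole-array operation, so the dispatch cost
is paid once per entry rather than once per element, which is what keeps the
scheme SIMD-friendly even with the interpretation slowdown.

import numpy as np

# Hypothetical operation table: (opcode, parameter) pairs.
table = [("scale", 2.0), ("offset", -1.0), ("clamp_min", 0.0)]

ops = {
    "scale":     lambda x, p: x * p,
    "offset":    lambda x, p: x + p,
    "clamp_min": lambda x, p: np.maximum(x, p),
}

def run_table(x, table):
    for opcode, param in table:
        x = ops[opcode](x, param)   # each step is one vector-wide operation
    return x

data = np.linspace(-1.0, 1.0, 8)
result = run_table(data, table)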

> This would be a big deal, but not the kind of thing that means you have to
> scrap all your implementation work and go back to ground zero


That's what I figured.

> OO and generic design patterns do buy you *something* ...


OO is often impossible to vectorize.

> Vector processors aside, though ... it would be a much *smaller*
> deal to tweak my AI systems to run on the 100-core chips Intel
> will likely introduce within the next decade.


There is an 80-core chip due out any time now. Intel has had BIG problems
finding anything to run on them, so I suspect that they would be more than
glad to give you a few if you promise to do something with them.

I listened to an inter-processor communications plan for the 80 core chip
last summer, and it sounded SLOW - like there was no reasonable plan for
global memory. I suspect that your plan in effect requires FAST global
memory (to avoid crushing communications bottlenecks), and this is NOT
entirely simple on MIMD architectures.

My SIMD architecture will deliver equivalent global memory speeds of ~100x
the clock speed, which still makes it a high-overhead operation on a machine
that peaks out at ~20K operations per clock cycle.
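Reading those figures at face value (global memory delivering on the order
of 100 accesses per clock against a ~20K-operation-per-clock peak), each
global access costs the equivalent of roughly 200 peak operations:

peak_ops_per_clock = 20_000          # peak operations per clock cycle
global_accesses_per_clock = 100      # "~100x the clock speed"
ops_per_global_access = peak_ops_per_clock / global_accesses_per_clock  # ~200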

Steve Richfield


