Re: Savants and user-interfaces [was Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-01 Thread Linas Vepstas
2008/6/30 Vladimir Nesov [EMAIL PROTECTED]:
 On Tue, Jul 1, 2008 at 8:31 AM, Linas Vepstas [EMAIL PROTECTED] wrote:

 Why binary?

 I once skimmed a biography of Ramanujan; he started
 multiplying numbers in his head as a pre-teen. I suspect
 it was grindingly boring but, given the surroundings, it might
 have been the most fun thing he could think of.  If you're
 autistic, then focusing obsessively on some task might
 be a great way to pass the time, but if you're more or less
 normal, I doubt you'll get very far with obsessive-compulsive
 self-training -- and that's the problem, isn't it?


 If the signals have properties of their own, I'm afraid they will
 start interfering with each other, which won't allow the circuit to
 execute in real time. Binary signals, on the other hand, can be
 encoded by the activation of nodes of the circuit, active/inactive. If
 you have an AND gate that leads from symbols S1 and S2 to S3, you
 learn to remember S3 only when you see both S1 and S2

What are you trying to accomplish here? I don't see where
you are trying to go with this.

I don't think a human can consciously train one or two neurons
to do something; we train millions at a time. I'm guessing
savants employ only a few tens of millions of neurons (give or take a
few orders of magnitude) to do their stuff.

Still, an array of 1K by 1K electrodes is well within current
technology; we just don't know where to hook it up, apart from
simple motor areas, the retina, and a bit of the auditory circuitry.

--linas




Re: [agi] the uncomputable

2008-07-01 Thread Linas Vepstas
2008/6/16 Abram Demski [EMAIL PROTECTED]:
 I previously posted here claiming that the human mind (and therefore
 an ideal AGI) entertains uncomputable models, counter to the
 AIXI/Solomonoff model. There was little enthusiasm about this idea. :)

I missed your earlier posts. However, I believe that there
are models of computation that can compute things that Turing
machines cannot, and that this is not arcane, just not widely
known or studied.  Here is a quick sketch:

Topological finite automata, or geometric finite automata
(of which quantum finite automata are a special case),
generalize the notion of a non-deterministic finite automaton
by replacing its powerset of states with a general topological
or geometric space (complex projective space in the quantum
case). It is important to note that these spaces are, in
general, uncountable (they have the cardinality of the continuum).

It is well known that the languages accepted by quantum
finite automata are not regular languages; they are bigger
and more complex in some ways. I am not sure what is
known about the languages accepted by quantum push-down
automata, but intuitively these are clearly different from (and
bigger than) the class of context-free languages.
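
To make the "continuum of states" point concrete, here is a minimal numerical
sketch (an editor's illustration in Python/NumPy, not something from the
original post) of a measure-once quantum finite automaton: each input symbol
applies a unitary to a state vector, and acceptance is the probability of
measuring an accepting basis state at the end.

import numpy as np

class MeasureOnceQFA:
    """State lives in C^n; each symbol applies a unitary; accept by measuring."""
    def __init__(self, unitaries, start_index, accepting):
        self.unitaries = unitaries                # symbol -> n x n unitary matrix
        n = next(iter(unitaries.values())).shape[0]
        self.start = np.zeros(n, dtype=complex)
        self.start[start_index] = 1.0
        self.accepting = accepting                # set of accepting basis indices

    def accept_probability(self, word):
        psi = self.start.copy()
        for symbol in word:
            psi = self.unitaries[symbol] @ psi    # evolve through the continuum
        return float(sum(abs(psi[i]) ** 2 for i in self.accepting))

# One qubit, one symbol 'a' that rotates the state by 1 radian.  Since the
# angle is an irrational multiple of pi, the acceptance probabilities over
# a, aa, aaa, ... take infinitely many distinct values and never become
# periodic -- a small taste of what the uncountable state space buys you
# over the finite powerset of states of an NFA.
theta = 1.0
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)
qfa = MeasureOnceQFA({'a': U}, start_index=0, accepting={0})
for k in range(5):
    print(k, round(qfa.accept_probability('a' * k), 4))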

I believe the concept of topological finite automata extends
just fine to a generalization of Turing machines, but I also
believe this is a poorly explored area of mathematics.
I believe such machines can compute things that Turing
machines can't ... this should not be a surprise, since,
after all, these systems have, in general, an uncountably
infinite number of internal states (the cardinality of the
continuum!) and, as a side effect of the definition,
perform infinite-precision addition and multiplication
in finite time.

So yes, I think there are perfectly fine, rather simple
definitions of computing machines that can (it seems)
perform calculations that Turing machines cannot.
It should be noted that quantum computers fall
into this class.

Considerably more confusing is the relationship of
such machines (and the languages they accept) to
lambda calculus, or first-order (or higher-order) logic.
This is where the rubber hits the road, and even for
the simplest examples, the systems are poorly
understood, or not even studied.  So, yeah, I think
there's plenty of room for the uncomputable in
some rather simple mathematical models of generalized
computation.

--linas




Re: Savants and user-interfaces [was Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-01 Thread Vladimir Nesov
On Tue, Jul 1, 2008 at 10:02 AM, Linas Vepstas [EMAIL PROTECTED] wrote:

 What are you trying to accomplish here? I don't see where
 you are trying to go with this.

 I don't think a human can consciously train one or two neurons
 to do something; we train millions at a time. I'm guessing
 savants employ only a few tens of millions of neurons (give or take a
 few orders of magnitude) to do their stuff.

 Still, an array of 1K by 1K electrodes is well within current
 technology; we just don't know where to hook it up, apart from
 simple motor areas, the retina, and a bit of the auditory circuitry.


Certainly nothing to do with individual neurons. Basically, it's
possible to train a finite state automaton in the mind through
association: you see a certain combination of properties, you think
the symbol that describes this combination. If such an automaton is
trained not just on natural data (such as language) but on a
specifically designed circuit plan, it'll probably be possible to use
it as a directly accessible 'add-on' to the brain that implements a
specific simple function efficiently, such as some operation on
numbers using a clever algorithm, in a way alien to normal deliberative
learning. You don't learn to perform a task, but to execute the
individual steps of an algorithm that performs the task.
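
As a concrete illustration of "executing individual steps of an algorithm"
(an editor's sketch, not Vladimir's actual scheme -- the symbols and the tiny
gate set are made up), a designed circuit plan can be written as a table of
associations, and running it means recalling one association at a time in a
fixed order:

from itertools import product

# A half adder on the input symbols A and B.  Each row is one association a
# person would drill: "given these two node symbols, think this output symbol."
GATES = [
    ("SUM",   "xor", ("A", "B")),
    ("CARRY", "and", ("A", "B")),
]
OPS = {"xor": lambda x, y: x ^ y, "and": lambda x, y: x & y}

def run_circuit(inputs):
    wires = dict(inputs)                  # e.g. {"A": 1, "B": 0}
    for out, op, (x, y) in GATES:         # each step = recall one association
        wires[out] = OPS[op](wires[x], wires[y])
    return wires

for a, b in product((0, 1), repeat=2):
    print(run_circuit({"A": a, "B": b}))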

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: Savants and user-interfaces [was Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-01 Thread Brad Paulsen
I was nearly kicked out of school in seventh grade for coming up with a method 
of manipulating (multiplying, dividing) large numbers in my head using what I 
later learned was a shift-reduce method.  It was similar to this:


http://www.metacafe.com/watch/742717/human_calculator/

My seventh-grade math teacher was so upset with me that he almost struck me
(physically -- you could get away with that back then).  His reason?  Wasting
valuable math class time.


The point is, you can train yourself to do this type of thing and look very 
savant-like.  The above link is just one in a series of videos where the teacher 
presents this system.  It takes practice, but not much more than learning the 
standard multiplication table.
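
For concreteness, here is a sketch of one such digit-at-a-time scheme, the
criss-cross (cross-multiplication) method -- an editor's illustration, and an
assumption about what the video actually teaches: you only ever hold one
column sum and a small carry in your head, emitting the answer digit by digit.

def crisscross_multiply(a_digits, b_digits):
    # a_digits, b_digits: most-significant digit first, e.g. 87 -> [8, 7]
    a, b = a_digits[::-1], b_digits[::-1]        # work from the units column
    result, carry = [], 0
    for col in range(len(a) + len(b) - 1):
        # cross-multiply every digit pair whose positions sum to this column
        total = carry + sum(a[i] * b[col - i]
                            for i in range(col + 1)
                            if i < len(a) and col - i < len(b))
        result.append(total % 10)                # this column's answer digit
        carry = total // 10                      # the small number carried along
    while carry:
        result.append(carry % 10)
        carry //= 10
    return result[::-1]

print(crisscross_multiply([8, 7], [9, 6]))       # 87 * 96 = 8352 -> [8, 3, 5, 2]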


Cheers,

Brad


Vladimir Nesov wrote:

Interesting: is it possible to train yourself to run a specially
designed nontrivial inference circuit based on low-base
transformations (e.g. binary)? You start by assigning unique symbols
to its nodes, train yourself to stably perform the associations
implementing its junctions, and then assemble it all by training
yourself to encode a problem as a temporal sequence (a request) that
the overall circuit can handle, and to read out the answer and convert
it to a sequence of, e.g., base-10 digits or base-100 words keying
pairs of digits (as in mnemonics). Has anyone heard of this being
attempted? At least the initial steps look straightforward enough;
what kinds of obstacles could this kind of experiment run into?

On Tue, Jul 1, 2008 at 7:43 AM, Linas Vepstas [EMAIL PROTECTED] wrote:

2008/6/30 Terren Suydam [EMAIL PROTECTED]:

savant

I've always theorized that savants can do what they do because
they've been able to get direct access to, and train, a fairly
small number of neurons in their brain, to accomplish highly
specialized (and thus rather unusual) calculations.

I'm thinking specifically of Ramanujan, the Indian mathematician.
He appears to have had access to a multiply-add type circuit
in his brain, and could do symbolic long division and
multiplication as a result -- I base this on studying some of
the things he came up with; after a while, it becomes
clear how he came up with it (even if the feat is clearly not
reproducible).

In a sense, similar feats are possible by using a modern
computer with a good algebra system.  Simon Plouffe seems
to be a modern-day example of this: he noodles around with
his systems, and finds various interesting relationships that
would otherwise be obscure/unknown.  He does this without
any particularly deep or expansive training in math (whence
some of his friction with real academics).  If Simon could
get a computer-algebra chip implanted in his brain (i.e.,
with a very, very user-friendly user interface), so that he
could work the algebra system just by thinking about it,
I bet his output would resemble that of Ramanujan a whole
lot more than it already does -- as it is, he's hobbled by
a crappy user interface.

Thus, let me theorize: by studying savants with MRI and
what-not, we may find a way of getting a much better
man-machine interface.  That is, currently, electrodes
are always implanted in motor neurons (or the visual cortex, etc.),
i.e., in places of the brain with very low levels of abstraction
from the real world. It would be interesting to move up the
level of abstraction, and I think that studying how savants
access the magic circuits in their brains will open up a
method for high-level interfaces to external computing
machinery.

--linas




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-01 Thread William Pearson
2008/6/30 Terren Suydam [EMAIL PROTECTED]:

 Hi Will,

 --- On Mon, 6/30/08, William Pearson [EMAIL PROTECTED] wrote:
 The only way to talk coherently about purpose within
 the computation is to simulate self-organized, embodied
 systems.

 I don't think you are quite getting my system. If you had a bunch of
 programs that did the following:

 1) created new programs, by trial and error and taking statistics of
 variables or getting arbitrary code from the outside.
 2) communicated with each other to try and find programs that perform
 services they need.
 3) bid for computer resources; if a program loses its memory resources,
 it is selected against, in a way.

 Would this be sufficiently self-organised? If not, why not? And the
 computer programs would be as embodied as your virtual creatures. They
 would just be embodied within a tacit economy, rather than an
 artificial chemistry.

 It boils down to your answer to the question: how are the resources 
 ultimately allocated to the programs?  If you're the one specifying it, via 
 some heuristic or rule, then the purpose is driven by you. If resource 
 allocation is handled by some self-organizing method (this wasn't clear in 
 the article you provided), then I'd say that the system's purpose is 
 self-defined.

I'm not sure how the system qualifies. It seems to be halfway between
the two definitions you gave. The programs can have special
instructions that bid for a specific resource with as much credit
as they want (see my recent message replying to Vladimir Nesov for
more information about banks, bidding and credit). The instructions
can be removed or left unexecuted, and the amount of credit bid can be
changed. The credit is given to some programs by a fixed function, but they
have instructions they can execute (or not) to give it to other
programs, forming an economy. What say you, self-organised or not?
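
For what it's worth, here is a minimal sketch of the kind of credit economy
described above (an editor's illustration with made-up names and numbers; the
actual bank/bidding details are in Will's other message):

import random

class Program:
    def __init__(self, name, credit):
        self.name, self.credit, self.memory = name, credit, 0

    def bid(self):
        # each program decides how much credit to stake on keeping its memory
        return random.uniform(0, self.credit)

def allocation_round(programs, memory_units, reward):
    for p in programs:                   # credit handed out by a fixed function
        p.credit += reward.get(p.name, 0.0)
    bids = sorted(((p.bid(), p) for p in programs),
                  key=lambda pair: pair[0], reverse=True)
    for rank, (amount, p) in enumerate(bids):
        if rank < memory_units:          # highest bidders keep their memory...
            p.credit -= amount
            p.memory += 1
        else:                            # ...the rest lose it: selected against
            p.memory = 0

programs = [Program("p%d" % i, credit=10.0) for i in range(4)]
payoff = {"p0": 2.0, "p1": 1.0}          # hypothetical fixed payoff function
for _ in range(5):
    allocation_round(programs, memory_units=2, reward=payoff)
# A program may also pass credit to another whose services it used,
# which is what turns the fixed payoff into an economy:
programs[0].credit -= 1.0
programs[2].credit += 1.0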

 As for embodiment, my question is, how do your programs receive input?  
 Embodiment, as I define it, requires that inputs are merely reflections of 
 state variables, and not even labeled in any way... i.e. we can't pre-define 
 ontologies. The embodied entity starts from the most unstructured state 
 possible and self-structures whatever inputs it receives.

Bits and bytes from the outside world, or bits and bytes from reading
other programs' programming and data. No particular ontology.

 That said, you may very well be doing that and be creating embodied programs 
 in this way... if so, that's cool because I hadn't considered that 
 possibility and I'll be interested to see how you fare.

It is going to take a while. Virtual machine writing is very
unrewarding programming. I have other things to do right now; I'll get
back to the rest of the message in a bit.

  Will Pearson




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-01 Thread Mike Tintner
Terren: It's to make the larger point that we may be so immersed in our own
conceptualizations of intelligence - particularly because we live in our 
models and draw on our own experience and introspection to elaborate them - 
that we may have tunnel vision about the possibilities for better or 
different models. Or, we may take for granted huge swaths of what makes us 
so smart, because it's so familiar, or below the radar of our conscious 
awareness, that it doesn't even occur to us to reflect on it.


No. 2 is more relevant - AI-ers don't seem to introspect much. It's ironic
that the way AI-ers think when creating a program bears very little
resemblance to the way programmed computers think. (Matt started to broach
this when he talked a while back of computer programming as an art.) But
AI-ers seem to have no interest in the discrepancy - which again is ironic,
because analysing it would surely help them with their programming, as well
as with the small matter of understanding how general intelligence actually
works.


In fact - I just looked - there is a longstanding field of psychology of
programming. But it seems to share the deficiency of psychology and
cognitive science generally, which is: no study of the stream of conscious
thought, especially conscious problem-solving. The only AI figure I know of
who took some interest here was Herbert Simon, who helped establish the use
of verbal protocols.







Re: [agi] Paper rec: Complex Systems: Network Thinking

2008-07-01 Thread Kingma, D.P.
Ditto.

On Mon, Jun 30, 2008 at 10:33 PM, Daniel Allen [EMAIL PROTECTED] wrote:

 Thanks.  I have downloaded the paper and pre-ordered the book.






Re: Savants and user-interfaces [was Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-01 Thread Linas Vepstas
2008/7/1 Vladimir Nesov [EMAIL PROTECTED]:
 On Tue, Jul 1, 2008 at 10:02 AM, Linas Vepstas [EMAIL PROTECTED] wrote:

 What are you trying to accomplish here? I don't see where
 you are trying to go with this.

 I don't think a human can consciously train one or two neurons
 to do something; we train millions at a time. I'm guessing
 savants employ only a few tens of millions of neurons (give or take a
 few orders of magnitude) to do their stuff.

 Still, an array of 1K by 1K electrodes is well within current
 technology; we just don't know where to hook it up, apart from
 simple motor areas, the retina, and a bit of the auditory circuitry.


 Certainly nothing to do with individual neurons. Basically, it's
 possible to train a finite state automaton in the mind through
 association: you see a certain combination of properties, you think
 the symbol that describes this combination. If such an automaton is
 trained not just on natural data (such as language) but on a
 specifically designed circuit plan, it'll probably be possible to use
 it as a directly accessible 'add-on' to the brain that implements a
 specific simple function efficiently, such as some operation on
 numbers using a clever algorithm, in a way alien to normal deliberative
 learning. You don't learn to perform a task, but to execute the
 individual steps of an algorithm that performs the task.

Yes, but isn't the interesting case in the other direction?
We have ordinary computers that can already do quite
well computationally. What we *don't* have is a good
man-machine interface.  For example, modern disk drives
hold more bytes than the human mind can.  I don't want
to train myself for feats of memorization; I want automatic
and instant access to a disk drive.

So perhaps, by studying savants who are capable of
memorization feats, we can find the sort of neural
circuitry needed to interface to a disk drive. It is precisely
because savants have these unusual abilities that they may shed
light on the kind of wiring that would be needed for electrodes.

--linas




[agi] Simple example of the complex systems problem, for those in a hurry

2008-07-01 Thread Richard Loosemore

John G. Rose wrote:

Could you say that it takes a complex system to know a complex system? If an
AGI is going to try to, say, predict the weather, it doesn't have infinite CPU
cycles to simulate it, so it'll have to come up with something better. Sure, it
can build a probabilistic historical model, but that is kind of cheating. So
for it to emulate the weather, I think, or to semi-understand it, there has
to be some complex-systems activity going on in its cognition. No?

I'm not sure that this is what Richard is talking about, but an AGI is going to
bump into complex systems all over the place. Also, it will encounter what
seems to be complex and later on may determine that it is not. And
perhaps, for it to understand complexity differentials in systems from a
relationist standpoint, a key component in the cognition engine would need
some sort of complexity ... not a comparator but a ... sort of harmonic
leverage. Can't think of the right words.

Either way, this complexity thing is getting rather annoying, because on one
hand you think it can drastically enhance an AGI and is required, and on the
other hand you think it is unnecessary - I'm not talking about creativity or
thought emergence or similar, but complexity as an integral component in a
computational cognition system.


There has always been a lot of confusion about what exactly I mean by 
the complex systems problem (CSP), so let me try, once again, to give 
a quick example of how it could have an impact on AGI, rather than what 
the argument is.


(One thing to bear in mind is that the complex systems problem is about 
how researchers and engineers should go about building an AGI.  The 
whole point of the CSP is to say that IF intelligent systems are of a 
certain sort, THEN it will be impossible to build intelligent systems 
using today's methodology).


What I am going to do is give an example of how the CSP might make an
impact on intelligent systems.  This is only a made-up example, so try
to see it as just an illustration.


Suppose that when evolution was trying to make improvements to the 
design of simple nervous systems, it hit upon the idea of using 
mechanisms that I will call concept-builder units, or CB units.  The 
simplest way to understand the CB units is to say that each one is 
forever locked into a peculiar kind of battle with the other units.  The 
CBs spend a lot of energy engaging in the battle with other CB units, 
but they also sometimes do other things, like fall asleep (in fact, most 
of them are asleep at any given moment), or have babies (they spawn new 
CB units) and sometimes they decide to lock onto a small cluster of 
other CB units and become obsessed with what those other CBs are doing.


So you should get the idea that these CB units take part in what can 
only be described as organized mayhem.


Now, if we were able to look inside a CB system and see what the CBs are 
doing [Note:  we can do this, to a limited extent:  it is called 
introspection], we would notice many aspects of CB behavior that were 
quite regular and sensible.  We would say, for example, that the CB 
units appear to be representing concepts like [chair] and [upside-down] 
and [desperation], and we would also say that when some CB units have 
babies, it looks rather like a couple of existing concepts being 
combined to form a new concept.


In fact, we might notice so many regular, ordered, understandable things 
happening in the CB-system that we would start to believe that the CB 
units were not engaging in what I just called organized mayhem at all! 
 We might say that the whole thing was pretty comprehensible and ordered.


In fact, we might be tempted to try to build a version of the system in 
which the behaviors were tidied up and cleaned  -  a system in which the 
'meaning' of each CB unit was precisely defined, and in which the 
building of new CBs always proceeded in a very precise, understandable 
way.  And then, after we started our project to build a cleaned-up 
version of a CB system, we would say that all we were doing was 
eliminating a lot of wasteful noise and inefficiency in the original CB 
system that was built by evolution.


But now, here is a little problem that we have to deal with.  It turns 
out that the CB system built by evolution was functioning *because* of 
all that chaotic, organized mayhem, *not* in spite of it.  It was not 
really a nice, organized, understandable mechanism plus a bit of noise 
and wastefulness ... it was a mechanism whose proper functioning
absolutely depended on a proper balance of those fighting CB units.  In 
fact, the overall intelligence of the system would drop like a stone if 
some of those mechanisms were taken away.  It was like an ecology:  all 
the competing species are in perfect balance, not because they are 
cooperating so that everyone gets the resources they need, but because 
nobody is cooperating with anyone else at all.


Now, here comes a crucial idea that many 

Re: [agi] Approximations of Knowledge

2008-07-01 Thread Russell Wallace
On Mon, Jun 30, 2008 at 8:10 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
 My scepticism comes mostly from my personal observation that each complex
 systems scientist I come across tends to know about one breed of complex
 system, and have a great deal to say about that breed, but when I come to
 think about my preferred breed (AGI, cognitive systems) I cannot seem to
 relate their generalizations to my case.

That's not very surprising if you think about it. Suppose we postulate
the existence of a grand theory of complexity. That's a theory of
everything that is not simple (in the sense being discussed here) -
but a theory that says something about _every nontrivial thing in the
entire Tegmark multiverse_ is rather obviously not going to say very
much about any particular thing.




RE: [agi] Simple example of the complex systems problem, for those in a hurry

2008-07-01 Thread Derek Zahn
 
Thanks again Richard for continuing to make your view on this topic clear to 
those who are curious.
 
As somebody who has tried in good faith and with limited but nonzero success to 
understand your argument, I have some comments.  They are just observations 
offered with no sarcasm or insult intended.
 
1) The presentations would be a LOT clearer if you did not always start with
"Suppose that..." and then make up a hypothetical situation.  As a reader I
don't care about the hypothetical situation, and it is frustrating to be forced
into trying to figure out whether it is somehow a metaphor for what I *am*
interested in, or what exactly the reason behind it is.  In this case, if you
are actually talking about a theory of how evolution produced a significant
chunk of human cognition (a society of CBs), then just say so and lead us to
the conclusions about the actual world.  If you are not theorizing that the
evolution/CBs thing is how human minds work, then I do not see the benefit of
walking down the path.  Note that the basic CB idea you use here strikes me as
a good one; it resonates with things like Minsky's Society of Mind, as well as
the intent behind things like Hall's Sigmas and Goertzel's subgraphs.
 
2) Similarly, when you say

 if we were able to look inside a CB system and see what the CBs are doing
 [Note: we can do this, to a limited extent: it is called introspection],
 we would notice many aspects of CB behavior ...

It would be a lot better if you left out the "if" and the "would".  Say "when
we look inside this CB system..." and "we do notice many aspects..." if that is
what you mean.  If again this is some sort of strange hypothetical universe, as
a reader I am not very interested in speculations about it.
 
3) When you say
 
 But now, here is a little problem that we have to deal with. It turns  out 
 that the CB system built by evolution was functioning *because* 
 of all that chaotic, organized mayhem, *not* in spite of it.
 
Assuming that you are actually talking about human minds instead of a 
hypothetical universe, this is a very strong statement.  It is a theory about 
human intelligence that needs some support.  It is not necessarily a theory 
about intelligence-in-general; linking it to intelligence in general would be 
another theory requiring support.  You may or may not think that intelligence
in general is a coherent concept; given your recent statements that there can 
be no formal definition of intelligence, it's hard to say whether 
intelligence that is not isomorphic to human intelligence can exist in your 
view.
 
4) Regarding:
 
 Evolution explored the space of possible intelligent mechanisms. In the
 course of doing so, it discovered a class of systems that work, but it may
 well be that the ONLY systems in the whole universe that can function as
 well as a human intelligence involve a small percentage of weirdness that
 just balances out to make the system work. There may be no cleaned-up
 versions that work.
 
The natural response is:  sure, this may well be, but it just as easily may 
well not be.  This is addressed in your concluding points, which say that it 
is not definite, but is very likely.  As a reader, I do not see a reason to 
suppose that this is true.  You offer only the circumstantial evidence that AI 
has failed for 50 years, but there are many other possible reasons for this:
 
- Maybe it's just hard.  Many aspects of the universe took more than 50 years
to understand; many are still not understood.  I personally think that if this
is true we are unlikely to be just a few years from the solution, but it does
seem like a reasonable viewpoint.

- Maybe logic just stinks as a tool for modeling the world.  It seemed
natural, but looking at the things and processes in the human universe
logically seems like a pretty poor idea to me.  Maybe probabilistic logic of
one sort or another will help.  But the point here is that it might not be a
complex systems issue; it might just be a knowledge representation and
reasoning issue.  Perhaps generated or evolved program fragments will fare
better; perhaps things that look like neural clusters will work; perhaps we
haven't discovered a good way to model the universe yet.

- Maybe we haven't ripped the kitchen sink out of the wall yet... maybe
intelligence will turn out to be a conglomeration of 1543 different
representation schemes and reasoning tricks, but we've only put a fraction
together so far and therefore only covered a small section of what intelligence
needs to do.
 
5) Of course, the argument would be strengthened by a somewhat detailed 
suggestion of how AI research *should* proceed; you give some arguments for why 
certain (unspecified) approaches *might* not work, but nothing beyond the 
barest hint of what to do about it, which doesn't motivate anybody to give much 
more than a shrug to your comments.  I wonder what it is that you expect people 
to do in response to 

RE: [agi] Simple example of the complex systems problem, for those in a hurry

2008-07-01 Thread Derek Zahn
Oh, one last point:
 
I find your thoughts in this message quite interesting personally because I 
think that puzzling out exactly what concept builders need to do, and how 
they might be built to do it, is the most interesting thing in the whole world. 
 I am resistant to the idea that it is impossible because all efforts to do so 
must be destined to result in insufficient results.  I admit to stubbornness on 
this point, and it will take strong deprogramming to stop me from taking an 
interest in recipes for the philosophers' stone.




RE: [agi] Simple example of the complex systems problem, for those in a hurry

2008-07-01 Thread Derek Zahn
Sorry for three messages in short succession.  Regarding concept builders, I 
have been writing in my bumbling way about this (and will continue to muse on 
fundamental issues) in my little blog:
 
http://agiblog.net




RE: [agi] Simple example of the complex systems problem, for those in a hurry

2008-07-01 Thread John G. Rose
Well, I could spend a lot of time replying to this since it is a tough subject.
The CB system is a good example; my thinking doesn't involve CBs yet, so the
organized mayhem would be of a different form, and I was thinking of the
complexity being integrated differently.

What you are saying makes sense in terms of evolution finding the right
combination. The reliance on the complexity, yes, sure, possible. What I
think of the system you describe is this: if you design a complicated
electronic circuit with much theory but little hands-on experience, you run
into complexity issues from component-value deviations and environmental
factors that need to be tamed and filtered out before your theoretical
electronic emergence comes to life. In that case the result is highly
dependent on the interoperating components' clean design. BUT there are some
circuits, I believe (can't think of any offhand), where the opposite is true:
the circuit just kind of works based on the complex subsystems'
interoperational functionality, and it was discovered, not designed
intentionally.

If the CS problem is as you describe, then there is a serious obstacle. I
personally think that getting close to the human brain isn't going to do it.
A monkey brain is close. Can we get closer with a simulation? Also, I think
there are other designs that Earth evolution just didn't get to. Those other
designs may have the same reliance on complexity.

Building a complexity-based intelligence much different from the human brain
design, but still basically dependent on complexity, is not impossible, just
formidable. Working with software systems that have designed complexity and
getting predicted emergence - in this case cognition - well, that is
something that takes special talent. We have tools now that nature and
evolution didn't have. We understand things through collective knowledge
accumulated over time. It can be more than trial and error. And the existing
trial and error can be narrowed down.

The part that I wonder about is why this complexity ingredient is there (if
it is). Is it because of the complexity spectrum inherent in nature? Is it
fully non-understandable, or can it be derived from nature's complexity
structure? Or is there such a computational resource barrier that it is just
prohibitively inefficient to calculate? Or are we perhaps using the wrong
mathematics to try to understand it? Can it be estimated, and does it
converge to anything we know of, or is it just so randomish and exact?

I feel, though, that the human brain had to evolve through that messy data
space of nature, and what we have is a momentary semi-reflection of that
historical environmental complexity. So our form of intelligence is somewhat
optimized for that. And if you take an intersecting subset with other
theoretical forms of intelligence, would the complexity properties somehow
correlate, or are they highly dependent on the environment of the evolution?
Or does our atom-based universe define what that evolutionary cognitive
complexity dependency is? I suppose that is the basis of arguments for or
against.

John


 From: Richard Loosemore [mailto:[EMAIL PROTECTED]
 
 There has always been a lot of confusion about what exactly I mean by
 the complex systems problem (CSP), so let me try, once again, to give
 a quick example of how it could have an impact on AGI, rather than what
 the argument is.
 
 (One thing to bear in mind is that the complex systems problem is about
 how researchers and engineers should go about building an AGI.  The
 whole point of the CSP is to say that IF intelligent systems are of a
 certain sort, THEN it will be impossible to build intelligent systems
 using today's methodology).
 
 What I am going to do is give an example of how the CSP might make an
 impact on intelligent systems.  This is only a made-up example, so try
 to see it is as just an illustration.
 
 Suppose that when evolution was trying to make improvements to the
 design of simple nervous systems, it hit upon the idea of using
 mechanisms that I will call concept-builder units, or CB units.  The
 simplest way to understand the CB units is to say that each one is
 forever locked into a peculiar kind of battle with the other units.  The
 CBs spend a lot of energy engaging in the battle with other CB units,
 but they also sometimes do other things, like fall asleep (in fact, most
 of them are asleep at any given moment), or have babies (they spawn new
 CB units) and sometimes they decide to lock onto a small cluster of
 other CB units and become obsessed with what those other CBs are doing.
 
 So you should get the idea that these CB units take part in what can
 only be described as organized mayhem.
 
 Now, if we were able to look inside a CB system and see what the CBs are
 doing [Note:  we can do this, to a limited extent:  it is called
 introspection], we would notice many aspects of CB behavior that were
 quite regular and sensible.  We would say, for example, that the CB
 units appear to be representing 

Re: [agi] Simple example of the complex systems problem, for those in a hurry

2008-07-01 Thread Richard Loosemore

Derek Zahn wrote:





Thanks again Richard for continuing to make your view on this topic
clear to those who are curious.

As somebody who has tried in good faith and with limited but nonzero
success to understand your argument, I have some comments.  They are
just observations offered with no sarcasm or insult intended.


Thanks for the thoughtful comments.

I wonder if it would help if I reiterated that this was supposed to be 
an illustration of the *manner* in which the CSP is likely to manifest 
itself, rather than the reasons why we should believe it will manifest 
itself?


In other words, what I was trying to achieve was an illustration of the 
kind of situation that would arise *if* the argument itself was sound. 
I was trying to do this because there are many people who misinterpret 
the argument's supposed impact.  In particular, many people assume that 
what I am saying is that intelligence is a completely emergent property 
of the human mind ... so drastically emergent that it just springs out 
of apparent chaos.  By attempting to give a more detailed example I was 
hoping to show that the type of situation that could arise might be very 
subtle - little obvious evidence of 'complexity' - and yet at the same 
time quite devastating in its impact.  It was that contrast between the 
small complexity footprint and big kick that I was trying to bring out.


Alas, most of your observations and questions bring us back to the 
background arguments and reasoning (which is what I was trying to leave 
out).


Let me try to address some of them.


1) The presentations would be a LOT clearer if you did not always
start with Suppose that... and then make up a hypothetical
situation.  As a reader I don't care about the hypothetical
situation, and it is frustrating to be forced into trying to figure
out if it is somehow a metaphor for what I *am* interested in, or
what exactly the reason behind it is.  In this case, if you are
actually talking about a theory of how evolution produced a
significant chunk of human cognition (a society of CBs), then just
say so and lead us to the conclusions about the actual world.  If you
are not theorizing that the evolution/CBs thing is how human minds
work, then I do not see the benefit of walking down the path.  Note
that the basic CB idea you user here strikes me as a good one; it
resonates with things like Minsky's Society of Mind, as well as the
intent behind things like Hall's Sigmas and Goertzel's subgraphs.


My strategy was as follows.  (1) Suppose that the human mind is built in 
such-and-such a way.  (2) One consequence of it being that way would be 
that it would be critically dependent on some mechanisms that give it 
stability without their contribution being at all obvious.  (3) Although 
this hypothetical mind design is just a guess, it illustrates an entire 
class of designs that can be very, very different from one another, but 
which all share the common feature that their stability would be 
dependent on mechanisms that supply stability without doing so in a way 
that is understandable.  (4) Systems in this general class are, of 
course, the ones that are called complex, and the reason that I chose 
a simple example to illustrate the class is that there are many other 
examples in which it is much harder to see the linkage between global 
behavior and local mechanisms ... so I was just trying to pick an 
example where it becomes as easy to comprehend as possible.  (5) One 
thing we know for sure is that the human mind has all the ingredients 
that normally give rise to complexity of the 'mild' sort shown in this 
example, and so it would be a truly astonishing fact if the human mind 
did not, in some way, have some global-local disconnects tucked away 
somewhere.  (6) I do not necessarily believe that the particular type of 
global-local disconnect that I used in my example is exactly the one 
that manifests itself in the human mind, but if I avoid specific 
examples and instead talk in the abstract, people find it very hard to 
imagine what it might mean to say that a small amount of complexity 
might make it impossible to build an intelligence as good as the human mind.


Unfortunately, my example is a little ambiguous:  do I really think
this is true in the human case, or is it just a made-up example?  Well, 
it is a little bit of both.  I actually think that it could be true, but 
I am not in a position to claim it to be true.  So it is partly a 
metaphor and partly real.  I can see how that might be frustrating from 
the reader's point of view.  My bad.


It is important to understand, though, that in creating this hypothetical
example I was merely trying to illustrate an abstract concept that would 
otherwise leave many people perplexed.



2) Similarly, when you say

if we were able to look inside a CB system and see what the CBs are
 doing [Note: we can do this, to a limited extent: it is called 
introspection], we would notice many aspects 

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-01 Thread Terren Suydam

Will,

I think the original issue was about purpose. In your system, since a human is 
the one determining which programs are performing the best, the purpose is 
defined in the mind of the human. Beyond that, it certainly sounds as if it is 
a self-organizing system. 

Terren

--- On Tue, 7/1/08, William Pearson [EMAIL PROTECTED] wrote:
 I'm not sure how the system qualifies. It seems to be halfway between
 the two definitions you gave. The programs can have special
 instructions that bid for a specific resource with as much credit as
 they want (see my recent message replying to Vladimir Nesov for more
 information about banks, bidding and credit). The instructions can be
 removed or left unexecuted, and the amount of credit bid can be
 changed. The credit is given to some programs by a fixed function, but
 they have instructions they can execute (or not) to give it to other
 programs, forming an economy. What say you, self-organised or not?




  




[agi] You Say Po-tay-toe, I Sign Po-toe-tay...

2008-07-01 Thread Brad Paulsen

Greetings Fellow Knowledge Workers...

WHEN USING GESTURES, RULES OF GRAMMAR REMAIN THE SAME
http://www.physorg.com/news134065200.html

The link title is a bit misleading.  You'll see what I mean when you read it.

Enjoy,

Brad




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-01 Thread Terren Suydam

Hi Mike,

My points about the pitfalls of theorizing about intelligence apply to any and 
all humans who would attempt it - meaning, it's not necessary to characterize 
AI folks in one way or another. There are any number of aspects of intelligence 
we could highlight that pose a challenge to orthodox models of intelligence, 
but the bigger point is that there are fundamental limits to the ability of an 
intelligence to observe itself, in exactly the same way that an eye cannot see 
itself. 

Consciousness and intelligence are present in every possible act of 
contemplation, so it is impossible to gain a vantage point of intelligence from 
outside of it. And that's exactly what we pretend to do when we conceptualize 
it within an artificial construct. This is the principal conceit of AI: that we
can understand intelligence in an objective way, and model it well enough to 
reproduce by design.

Terren

--- On Tue, 7/1/08, Mike Tintner [EMAIL PROTECTED] wrote:

 Terren: It's to make the larger point that we may be so immersed in our own
 conceptualizations of intelligence - particularly because we live in our
 models and draw on our own experience and introspection to elaborate them -
 that we may have tunnel vision about the possibilities for better or
 different models. Or, we may take for granted huge swaths of what makes us
 so smart, because it's so familiar, or below the radar of our conscious
 awareness, that it doesn't even occur to us to reflect on it.

 No. 2 is more relevant - AI-ers don't seem to introspect much. It's ironic
 that the way AI-ers think when creating a program bears very little
 resemblance to the way programmed computers think. (Matt started to broach
 this when he talked a while back of computer programming as an art.) But
 AI-ers seem to have no interest in the discrepancy - which again is ironic,
 because analysing it would surely help them with their programming, as well
 as with the small matter of understanding how general intelligence actually
 works.

 In fact - I just looked - there is a longstanding field of psychology of
 programming. But it seems to share the deficiency of psychology and
 cognitive science generally, which is: no study of the stream of conscious
 thought, especially conscious problem-solving. The only AI figure I know of
 who took some interest here was Herbert Simon, who helped establish the use
 of verbal protocols.
 
 
 
 


  




Re: [agi] Approximations of Knowledge

2008-07-01 Thread Terren Suydam

Nevertheless, generalities among different instances of complex systems have
been identified; see, for instance:

http://en.wikipedia.org/wiki/Feigenbaum_constants
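
For the curious, here is a short sketch (an editor's addition, not Terren's) of
where that first Feigenbaum constant comes from: locate the "superstable"
parameters r_n of the logistic map at which x = 1/2 is periodic with period
2^n, and the ratios of successive gaps approach delta = 4.6692...

def g(r, n):
    """f_r iterated 2**n times from x = 1/2, minus 1/2; zero exactly at the
    superstable parameter of period 2**n."""
    x = 0.5
    for _ in range(2 ** n):
        x = r * x * (1.0 - x)
    return x - 0.5

def superstable(n, r_prev, gap):
    # The next superstable parameter sits roughly gap/4.7 above the previous
    # one, safely inside this bracket, so plain bisection finds it.
    lo, hi = r_prev + 0.02 * gap, r_prev + 0.25 * gap
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if g(lo, n) * g(mid, n) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

R = [2.0, 1.0 + 5 ** 0.5]        # periods 1 and 2 are known in closed form
for n in range(2, 9):
    R.append(superstable(n, R[-1], R[-1] - R[-2]))
    delta = (R[-2] - R[-3]) / (R[-1] - R[-2])
    print("period 2**%d: r = %.8f, delta estimate = %.5f" % (n, R[-1], delta))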

Terren

--- On Tue, 7/1/08, Russell Wallace [EMAIL PROTECTED] wrote:
 On Mon, Jun 30, 2008 at 8:10 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
  My scepticism comes mostly from my personal observation that each
  complex systems scientist I come across tends to know about one breed
  of complex system, and have a great deal to say about that breed, but
  when I come to think about my preferred breed (AGI, cognitive systems)
  I cannot seem to relate their generalizations to my case.

 That's not very surprising if you think about it. Suppose we postulate
 the existence of a grand theory of complexity. That's a theory of
 everything that is not simple (in the sense being discussed here) -
 but a theory that says something about _every nontrivial thing in the
 entire Tegmark multiverse_ is rather obviously not going to say very
 much about any particular thing.
 
 


  




RE: [agi] Simple example of the complex systems problem, for those in a hurry

2008-07-01 Thread Terren Suydam

--- On Tue, 7/1/08, John G. Rose [EMAIL PROTECTED] wrote:

 BUT there are some circuits, I believe (can't think of any offhand), where
 the opposite is true: the circuit just kind of works based on the complex
 subsystems' interoperational functionality, and it was discovered, not
 designed intentionally.

Perhaps you are thinking of this:

http://www.damninteresting.com/?p=870

It's the story of a guy who evolved FPGAs to detect specific audio tones. After
4000 generations, his simple 10 by 10 array of logic gates could perfectly
discriminate the tones. But the best part, from the article:

Dr. Thompson peered inside his perfect offspring to gain insight into its 
methods, but what he found inside was baffling. The plucky chip was utilizing 
only thirty-seven of its one hundred logic gates, and most of them were 
arranged in a curious collection of feedback loops. Five individual logic cells 
were functionally disconnected from the rest– with no pathways that would allow 
them to influence the output– yet when the researcher disabled any one of them 
the chip lost its ability to discriminate the tones. Furthermore, the final 
program did not work reliably when it was loaded onto other FPGAs of the same 
type.

Turns out the evolutionary process incorporated electromagnetic field effects 
unique to that particular FPGA chip. I love this story because it illustrates 
perfectly what I've been saying about the limitations of design versus the 
creativity of the evolved approach.

Terren




