[agi] news bit: Evolution of Intelligence More Complex Than Once Thought

2008-12-29 Thread David Hart
Via Slashdot:

*According to a new article published in Scientific American, the nature and
evolutionary development of animal intelligence is significantly more
complicated than many have assumed
(http://www.sciam.com/article.cfm?id=one-world-many-minds).
In opposition to the widely held view that intelligence is largely linear in
nature, in many cases intelligent traits have developed along independent
paths. From the article: 'Over the past 30 years, however, research in
comparative neuroanatomy clearly has shown that complex brains — and
sophisticated cognition — have evolved from simpler brains multiple times
independently in separate lineages ...'*





Re: Real-world vs. universal prior (was Re: [agi] Universal intelligence test benchmark)

2008-12-27 Thread David Hart
On Sat, Dec 27, 2008 at 5:25 PM, Ben Goertzel b...@goertzel.org wrote:


 I wrote down my thoughts on this in a little more detail here (with some
 pastings from these emails plus some new info):


 http://multiverseaccordingtoben.blogspot.com/2008/12/subtle-structure-of-physical-world.html


I really liked this essay. I'm curious about the clarity of the terms 'real
world' and 'physical world' in some places. It seems that, to make its point,
the essay requires 'real world' and 'physical world' to mean only 'practical'
or 'familiar physical reality', depending on context. Whereas, if 'real world'
is reserved for a very broad definition of realities, including physical
realities (at classical, quantum mechanical and relativistic time and distance
scales), peculiar human cultural realities, and other definable realities, it
will be easier in follow-up essays to discuss AGI systems that can natively
think simultaneously about any multitude of interrelated realities (a trick
that humans are really bad at). I hope this makes sense...

-dave





Re: Real-world vs. universal prior (was Re: [agi] Universal intelligence test benchmark)

2008-12-27 Thread David Hart
On Sun, Dec 28, 2008 at 1:02 AM, Ben Goertzel b...@goertzel.org wrote:


 See mildly revised version, where I replaced real world with everyday
 world (and defined the latter term explicitly), and added a final section
 relevant to the distinctions between the everyday world, simulated everyday
 worlds, and other portions of the physical world.


I think that's much more clear, and the additions help to frame the meaning
of 'everyday world'.

Another important open question, really a generalization of 'how much detail
does the virtual world need to have?', is whether we can create practical
progressions of simulations of the everyday world, such that the first (and
cruder) simulations are very useful to early attempts at teaching proto-AGIs,
and the development of progressively more sophisticated simulations roughly
tracks progress in AGI design and development.

I also see here the kernel of a formally defined science for discovering the
general properties of everyday intelligence; if presented in ways that
cognitive scientists appreciate, it could really catch on!

-dave





Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread David Hart
Matthias,

You've presented a straw man argument to criticize embodiment. As a
counter-example, in the OCP AGI-development plan embodiment is not primarily
used to provide domains (via artificial environments) in which an AGI might
work out abstract problems, directly or comparatively (not to discount the
potential utility of that approach in many scenarios), but rather to provide
an environment for the grounding of symbols (including concepts important for
doing mathematics), similar to the way in which humans (from infants through
to adults) learn through play and through guided education.

'Abstraction' is so named because it involves generalizing from the specifics
of one or more domains (d1, d2), and is useful when it can be applied (with
*any* degree of success) to other domains (d3, ...). Virtual embodied
interactive learning uses virtual objects and their properties to generate
such specifics, which artificial minds can use to build abstractions, to grok
the abstractions of others, and ultimately to build a deep understanding of
our reality (yes, 'deep' in this sense is used in a very human-mind-centric
way).
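To make the d1/d2/d3 schema concrete, here is a deliberately tiny sketch (the
domains, numbers and rule are invented purely for illustration, and this is
not OCP code): specifics from two grounded domains are generalized into one
rule, which then transfers, with some success, to a third domain.

```python
# Toy sketch: induce a rule from specifics in domains d1 and d2, then apply it
# (with *some* degree of success) in an unseen domain d3. All data invented.

d1 = [(1.0, False), (3.0, True), (4.0, True)]   # (cup volume, "holds the ball")
d2 = [(0.5, False), (2.5, True), (5.0, True)]   # (box volume, "holds the block")

def induce_threshold(examples):
    """Crude 'bigger containers hold more' abstraction: the smallest volume
    labelled True among a domain's specifics."""
    return min(x for x, label in examples if label)

# Abstraction: one threshold generalized over both grounded domains.
threshold = min(induce_threshold(d1), induce_threshold(d2))

# Transfer: apply the abstraction to a third, unseen domain d3 (drawers).
d3_queries = [0.8, 2.0, 6.0]
print({v: v >= threshold for v in d3_queries})   # {0.8: False, 2.0: False, 6.0: True}
```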

Of course, few people claim that machine learning with the help of virtually
embodied environments is the ONLY way to approach building an AI capable of
doing mathematics (and communicating with humans about mathematics), but it
is an approach that has *many* good things going for it, including proving
tractable via measurable incremental improvements (even though it is
admittedly still at a *very* early stage).

-dave

On Wed, Oct 22, 2008 at 4:20 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

   It seems to me that many people think that embodiment is very important
 for AGI.

 For instance, some people seem to believe that you can't be a good
 mathematician if you haven't had some embodied experience.



 But this would have a rather strange consequence:

 If you give your AGI a difficult mathematical problem to solve, then it
 would answer:



 Sorry, I still cannot solve your problem, but let me walk with my body
 through the virtual world.

 Hopefully, I will then understand your mathematical question and, even more
 hopefully, I will be able to solve it after some further embodied
 experience.



 AGI is the ability to solve different problems in different domains. But
 such an AGI would need to gain experience in domain d1 in order to solve
 problems of domain d2. Does this really make sense, if all the information
 necessary to solve problems of d2 is in d2? I think an AGI which has to gain
 experience in d1 in order to solve a problem of domain d2, which contains
 everything needed to solve this problem, is no AGI. How should such an AGI
 know what experiences in d1 are necessary to solve the problem of d2?



 In my opinion a real AGI must be able to solve a problem of a domain d
 without leaving this domain, if everything needed to solve the problem is
 within that domain.



 From this we can define a simple benchmark which is not sufficient for AGI
 but which is **necessary** for a system to be an AGI system:



 Within the domain of chess there is everything to know about chess. So when
 it comes to being a good chess player, learning chess from playing chess
 must be sufficient. Thus, an AGI which is not able to enhance its abilities
 in chess from playing chess alone is no AGI.



 Therefore, my first steps in the roadmap towards AGI would be the
 following:

 1.   Make a concept for your architecture of your AGI

 2.   Implement the software for your AGI

 3.   Test whether your AGI is able to become a good chess player from
 learning in the domain of chess alone.

 4.   If your AGI can't even learn to play good chess then it is no AGI,
 and it would be a waste of time to experiment with your system in more
 complex domains.



 -Matthias














Re: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread David Hart
On Tue, Oct 21, 2008 at 12:56 AM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

  Any argument of the kind 'you should better first read xxx + yyy + …' is
 very weak. It is a pseudo killer argument against everything, with no content
 at all.

 If xxx, yyy, … contain really relevant information for the discussion,
 then it should be possible to quote the essential part in a few lines of
 text.

 If someone is not able to do this, he himself should better read xxx, yyy, …
 once again.


I disagree. Books and papers are places to make complex multi-part arguments.
Dragging out those arguments through a series of email-based soundbites will,
in many cases, not help someone to grok the higher levels of those arguments,
and will constantly miss smaller points that fuel countless unnecessary
misunderstandings. We witness these problems and others (practically daily)
on the AGI list.

-dave





Re: AW: [agi] Re: Defining AGI

2008-10-19 Thread David Hart
An excellent post, thanks!

IMO, it raises the bar for discussion of language and AGI, and should be
carefully considered by the authors of future posts on the topic of language
and AGI. If the AGI list were a forum, Matthias's post should be pinned!

-dave

On Sun, Oct 19, 2008 at 6:58 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

 The process of outwardly expressing meaning may be fundamental to any social
 intelligence, but the process itself does not need much intelligence.

 Every email program can receive meaning, store meaning, and express it
 outwardly in order to send it to another computer. It can even do so without
 loss of any information. In this respect it already outperforms humans, who
 have no conscious access to the full meaning (information) in their brains.

 The only thing which needs much intelligence, from today's point of view,
 is learning the process of outwardly expressing meaning, i.e. learning
 language. The understanding of language itself is simple.

 To show that intelligence is separate from language understanding I have
 already given the example that a person could have spoken with Einstein
 without having the same intelligence. Another example is humans who cannot
 hear or speak but are intelligent. Their only problem is getting knowledge
 from other humans, since language is the common social communication
 protocol for transferring knowledge from brain to brain.

 In my opinion language is overestimated in AI for the following reason:
 When we think, we believe that we think in our language. From this we
 conclude that our thoughts are inherently structured by linguistic elements.
 And if our thoughts are so deeply connected with language then it is a small
 step to conclude that our whole intelligence depends inherently on language.

 But this is a misconception.
 We do not have conscious control over all of our thoughts. Most of the
 activity within our brain we cannot be aware of when we think.
 Nevertheless, it is very useful and even essential for human intelligence to
 be able to observe at least a subset of one's own thoughts. It is this
 subset which we usually identify with the whole set of thoughts. But in fact
 it is just a tiny subset of all that happens in the 10^11 neurons.
 For the top-level observation of its own thoughts the brain uses the learned
 language.
 But this is no contradiction of the point that language is just a
 communication protocol and nothing else: the brain translates its patterns
 into language and routes this information to its own input regions.

 The reason why the brain uses language in order to observe its own thoughts
 is probably the following:
 If a person A wants to communicate some of its patterns to a person B then
 it has to solve two problems:
 1. How to compress the patterns?
 2. How to send the patterns to person B?
 The solution to both problems is language.

 If a brain wants to observe its own thoughts it has to solve the same
 problems.
 The thoughts have to be compressed. If not, you would observe every element
 of your thoughts and end up in an explosion of complexity. So why not use
 the same compression algorithm as is used for communication with other
 people? That's the reason why the brain uses language when it observes its
 own thoughts.

 This phenomenon leads to the misconception that language is inherently
 connected with thoughts and intelligence. In fact it is just a top-level
 communication protocol between two brains and within a single brain.

 Future AGI will have much broader bandwidth, and even with current
 technology, human language would be a weak communication protocol for the
 internal observation of its own thoughts.

 - Matthias








Re: [agi] Re: Defining AGI

2008-10-18 Thread David Hart
On Sat, Oct 18, 2008 at 9:48 PM, Mike Tintner [EMAIL PROTECTED] wrote:


 [snip] We understand and think with our whole bodies.


Mike, these statements are an *enormous* leap from the actual study of
mirror neurons. It's my hunch that the hypothesis paraphrased above is
generally true, but it is *far* from being fully supported by, or understood
via, the empirical evidence.


 [snip] these are all original or recently original observations about the
 powers of the human brain and body which are beyond the powers of any
 digital computer. You claimed never to have heard an original observation
 here re digital computers' limitations - that's because you don't listen,
 and aren't interested in the non-digital and non-rational. (Obviously a pet
 in a virtual world can have no real body or embodied integrity.)


It seems that your magical views on human cognition are showing their colors
again; you haven't supplied any coherent argument as to why the hypothetical
function of mirror neurons (skills of empathy with and mimicry of other
embodied entities, or representations thereof) could not be duplicated by
sufficiently clever software written for digital computers.

-dave





Re: [agi] Re: Defining AGI

2008-10-18 Thread David Hart
Mike, I think you won't get a disagreement in principle about the benefits
of melding creativity and rationality, and of grokking/manipulating concepts
in metaphorical wholes. But really, a thoughtful conversation about *how*
the OCP design addresses these issues can't proceed until you've RTFBs.

-dave


On Sun, Oct 19, 2008 at 1:23 AM, Mike Tintner [EMAIL PROTECTED] wrote:




 David:Mike, these statements are an *enormous* leap from the actual study
 of mirror neurons. It's my hunch that the hypothesis paraphrased above is
 generally true, but it is *far* from being fully supported by, or understood
 via, the empirical evidence.


 [snip] these are all original or recently original observations about the
 powers of the human brain and body which are beyond the powers of any
 digital computer. You claimed never to have heard an original observation
 here re digital computers' limitations - that's because you don't listen,
 and aren't interested in the non-digital and non-rational. (Obviously a pet
 in a virtual world can have no real body or embodied integrity.)


 It seems that your magical views on human cognition are showing their
 colors again; you haven't supplied any coherent argument as to why the
 hypothetical function of mirror neurons (skills of empathy with and mimicry
 of other embodied entities, or representations thereof) could not be
 duplicated by sufficiently clever software written for digital computers.

 David,

 I actually did give the reason - but, fine, I haven't explained it clearly
 enough to communicate. The reason is basically simple. All the powers
 discussed depend on the cognitive ability to map one complex, irregular
 shape onto another - and that involves a fluid transformation (which is
 completely beyond the power of any current software - or, to be more
 precise, any rational sign system, esp. mathematics/geometry).

 When you map your body onto that of the Dancers (or anyone else's), you
 are mapping two irregular shapes that are not geometrically comparable onto
 each other. There is no formulaic way to transform one into the other, and
 hence perceive their likeness. Geometry and geometrically-based software
 can't do this.

 When you see that the outline map of Italy is like a boot - a classic
 example of metaphor/analogy - there is no geometric, formulaic way to
 transform that cartographic outline of that landmass into the outline of a
 boot. It is a fluid transformation of one irregular shape into another
 irregular shape.

 When you *draw* almost any shape whatsoever, you are engaged in performing
 fluid transformations - producing *rough* likenesses/shapes (as opposed to
 the precise, formulaic likenesses of geometry). The shapes of the faces and
 flowers you draw on a page are only v. (sometimes v.v.) roughly like the
 real shapes of the real objects you have observed.

 Think of a cinematic *dissolve* from one object, like a face, into another
 - which is not a precise, formulaic morphing but simply a rough
 superimposition of two shapes that are roughly alike. Crudely, you could
 say, your brain is continually performing that sort of operation on the
 shapes of the world in order to recognize them and compare them.

 Or think of a face perceived through fluid rippling water. Your brain,
 speaking v. loosely, is able to perform somewhat similar transformations on
 objects.

 The human mind deals in fluid shapes.

 The human body continuously produces fluid shapes itself. When you move you
 are continuously shaping and then fluidly transforming your body to fit the
 world around you. When you reach out for an object, you start shaping your
 hand to fit before you get there, and fluidly adjust that hand shape as
 required to actually grasp the object.

 Geometry can only perform regular/rational transformations of objects -
 even topology deals in the regular likenesses between otherwise
 non-comparable objects like a doughnut and a cup handle. Even at its
 current, most flexible extreme, the geometry of free-form transformation
 is still dealing with formulaic transformations that are not truly
 free-form/fluid and so not able to handle the operations I've been
 discussing. (But the very term, free-form, indicates what geometry would
 like but is unable to achieve.)

 There is an obvious difference between geometry and art/drawing. Computers
 in their current guise are only geometers and not artists. They cannot map
 shapes directly - physically - onto each other (with no intermediate
 operations), and they cannot fluidly (and directly) transform shapes into
 each other. The brain is manifestly an artist and is manifestly organized,
 extremely extensively, along mapping lines - and those brain maps, as
 experiments show, are able to undergo fluid transformations themselves in
 their spatial layout.

 Another way to say this is that the brain has, and computers don't have,
 imagination - they cannot truly handle/map images/shapes.

 There is nothing magical 

Re: COMP = false? (was Re: [agi] Advocacy Is no Excuse for Exaggeration)

2008-10-15 Thread David Hart
On Wed, Oct 15, 2008 at 5:52 PM, Colin Hales
[EMAIL PROTECTED] wrote:


 So you'll just have to wait. Sorry. I also have patent/IP issues.


Exactly what qualia am I expected to feel when you say the words
'Intellectual Property'? (that's a rhetorical question, just in case there
was any doubt!)

I'd like to suggest that the COMP=false thread be considered a completely
mis-placed, undebatable and dead topic on the AGI list. Maybe people who
like Chinese Rooms will sign up for the new COMP=false list...

-dave





[agi] reminder: CogDev2008 October 26, Mountain View, CA

2008-10-11 Thread David Hart
CogDev is a free 1-day workshop where you can learn about OpenCog and
OpenCogPrime and meet some of the team.

More info at http://opencog.org/wiki/CogDev2008

Signup / Registration Form at
http://spreadsheets.google.com/viewform?key=pT15xTF3ys-1Aola-Yb_UFw

When? Sunday, October 26, 2008 - 10am - 6pm
(following Singularity Summit 2008 http://www.singularitysummit.com/program)

Where? Computer History Museum, Mountain View, CA
(http://www.computerhistory.org/directions/)

What?

*Morning*: OpenCog Prime Seminar presented by Ben Goertzel
http://opencog.org/wiki/OpenCog_Prime

*Midday*: Planning & Roadmap session
http://opencog.org/wiki/Roadmap

*Afternoon*: Bug Day & Mini-Sprint
(including developers from #opencog via IRC)

-dave





Re: [agi] It is more important how AGI works than what it can do.

2008-10-11 Thread David Hart
Hi Brad,

An interesting point of conceptual agreement between the OCP and Texai designs
is that very specifically engineered bootstrapping processes are necessary
to push into AGI territory. Attempting to summarize using my limited
knowledge, Texai hopes to achieve that bootstrapping via reasoning over
commonsense knowledge which has been acquired via a combination of
expert-system data entry and unsupervised learning. OCP hopes to achieve
that bootstrapping via a combination of embodied interactive learning and
reasoning supplemented with narrow-AI NL components (WordNet, RelEx semantic
comprehension, RelEx NLGen, etc.). Of course, each project has its own
reasons for believing that its approach is the most tractable and the
least likely to become stuck in the AI rabbit holes of the past.

I believe that surface comparisons of most modern AGI-oriented designs
cannot be used to make 'likelihood to proceed faster than others'
predictions with sufficient confidence to weave convincing arguments over an
email medium. So, making assertions about a design being 'better, faster,
cheaper, less risky, etc.' is okay, if those assertions are clearly
opinions (backing them up in writing is good, but that generally requires
paper- or book-length treatment) and agreements to disagree are arrived at
readily (without resorting to digressions about straw men to undermine
others' positions). The goal of this structure for this aspect of list
discussion is to create an atmosphere where everyone can learn as much as
possible about competing AGI designs. I think we're all saying effectively
the same thing here, so we should be able to agree to agree on this point.

IMO, it's more productive to highlight the reasons why your [insert AGI
design here] system might work, rather than obsessing over the flaws of other
designs. E.g., it's really not useful to repeatedly press the fact that past
[grossly insufficient] attempts at NLU and embodiment have been abject
failures, since *ALL* past attempts at AGI have fallen short of the mark,
including knowledge-based expert-system approaches with reasoning bolted on.
Furthermore, if all of science and engineering used the conservative logic
that past performance [...] is really the only thing you have to go on,
then we'd still be stuck with Victorian-level science and technology, since
all of the great leaps where past performance WASN'T the best indicator
would have been missed.

On to a positive argument for the OCP design: the simple explanation for why
embodiment in various forms has, so far, failed to provide any real help in
cracking the NLU problem is that all past attempts at embodiment have been
incredibly crude and grossly insufficient. The technologies that might
allow for fine realtime motor control and perception (including
proprioception, or even hacks like good inverse kinematics, and other
subtleties) in real or virtual settings have simply not yet been
sufficiently developed. Any roboticist or virtual world programmer can
confirm this assertion. One aspect of OCP development focuses on this issue
and involves working with the realXtend developers to enhance OpenSim to
provide sufficient functionality to enable ever more sophisticated
perception-action-reasoning loops (we'd also like to work with robot
simulation and control software at some later stage); this work will likely
be written up in a paper sometime next year.
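To make 'perception-action-reasoning loop' concrete, here is a minimal toy
sketch of such a loop; the World and Agent classes are hypothetical
placeholders invented for illustration, not the actual OpenSim, realXtend or
OCP interfaces.

```python
# Minimal sketch of a virtual-embodiment perception-action-reasoning loop.
# 'World' and 'Agent' are hypothetical placeholders, not real OpenSim,
# realXtend or OpenCog Prime interfaces.
import random

class World:
    def __init__(self):
        self.target = random.uniform(-10.0, 10.0)  # position the agent should reach
        self.agent_pos = 0.0

    def percept(self):
        # proprioception-like feedback: where am I, and where is the target?
        return {"self": self.agent_pos, "target": self.target}

    def act(self, step):
        self.agent_pos += step

class Agent:
    def decide(self, percept):
        # crude 'reasoning': take a bounded step toward the target
        error = percept["target"] - percept["self"]
        return max(-1.0, min(1.0, error))

world, agent = World(), Agent()
for tick in range(50):                  # the loop: perceive -> reason -> act
    world.act(agent.decide(world.percept()))
print("reached target:", abs(world.target - world.agent_pos) < 1e-9)
```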

-dave

On Sat, Oct 11, 2008 at 9:52 PM, Brad Paulsen [EMAIL PROTECTED] wrote:

 Dave,

 Well, I thought I'd described how pretty well.  Even why.  See my recent
 conversation with Dr. Heger on this list.  I'll be happy to answer specific
 questions based on those explanations but I'm not going to repeat them here.
  Simply haven't got the time.

 Although I have not been asked to do so, I do feel I need to provide an ex
 post facto disclaimer.  Here goes:

 I am aware of the approach being taken by Stephen Reed in the Texai
 project.  I am currently associated with that project as a volunteer.  What
 I have said previously in this regard is, however, my own interpretation
 and opinion insofar as what I have said concerned tactics or strategies that
 may be similar to those being implemented in the Texai project.  I'm pretty
 sure my interpretations and opinions are highly compatible with Steve's
 views even though they may not agree in every detail.  My comments should
 NOT, however, be taken as an official representation of the Texai
 project's tactics, strategies or goals.  End disclaimer.

 I was asked by Dr. Heger to go into some of the specifics of the strategy I
 had in mind.  I honored his request and wrote quite extensively (for a list
 posting -- sorry 'bout that) about that strategy.  I have not argued, nor do
 I intend to argue, that I have an approach to AGI that is better, faster or
 more economical than approach X.  Instead, I have simply pointed out that
 NLU and embodiment problems have proven themselves to be extremely difficult
 (indeed, 

Re: [agi] open or closed source for AGI project?

2008-10-11 Thread David Hart
On Sun, Oct 12, 2008 at 3:37 PM, YKY (Yan King Yin) 
[EMAIL PROTECTED] wrote:


 There are other differences with OCP, as you know I plan to use PZB
 logic, and I've written part of a Lisp prototype.  I'm not sure what's
 the best way to opensource it -- integrating with OCP, or as a
 separate branch, or..?


[note this is a technical digression, but some of the info below may be
useful for other open source AGI projects, so is just marginally on topic
for this list]

Most new features are added to OpenCog via branching and merging, including
the PLN implementation that Joel Pitt is integrating at this moment. The
OpenCog codebase is stored/accessed/revised using the Bazaar distributed
version control system (DVCS). Bazaar (aka bzr) is similar to Git (used for
Linux kernel development) and Mercurial (used for Mozilla/Firefox
development). DVCS replaces old-fashioned centralized version control
systems like Subversion, CVS, Microsoft SourceSafe, etc.

Creating a private branch based on the OpenCog source, e.g. to work on a
new logic implementation, is simple. In practice, if you keep your work
cleanly in separate directories, it's possible to easily remain in sync with
the 'trunk' (the common stable branch) by regularly 'rebasing' your local
branch against the trunk. It's also simple to publish (aka push) your local
branch(es) to Launchpad, where OpenCog is hosted.
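For anyone new to the workflow, here is a minimal sketch of that
branch/rebase/push cycle, driven from Python only so the example is
self-contained; it assumes the bzr client and the bzr-rebase plugin are
installed, and the Launchpad username ('example-dev') and branch names are
hypothetical.

```python
# Illustrative sketch of the branch / rebase / push cycle described above.
# Assumes 'bzr' and the bzr-rebase plugin are installed; 'example-dev' and
# the branch names are hypothetical.
import subprocess

def bzr(*args, cwd=None):
    subprocess.run(["bzr", *args], check=True, cwd=cwd)

bzr("branch", "lp:opencog", "opencog-mylogic")            # private working branch
# ... hack on the new logic code inside opencog-mylogic/, then:
bzr("commit", "-m", "sketch of a new logic module", cwd="opencog-mylogic")
bzr("rebase", "lp:opencog", cwd="opencog-mylogic")        # stay in sync with trunk
bzr("push", "lp:~example-dev/opencog/mylogic", cwd="opencog-mylogic")
```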

At present, the OpenCog trunk is a few weeks out of date because a large
amount of new work is being stabilized in the 'staging' branch. Within a
few weeks though, the trunk will be refreshed after all developers have
rebased on 'staging' and corrected conflicts with their local branches.

It's difficult to estimate all of the plusses and minuses of working with
OpenCog before it reaches a stable release. I believe however that it's
possible to collaborate with other developers to use the AtomTable in the
'correct' way, which may include prioritizing immediate work to align with
future feature enhancements or bug fixes relating to the AtomTable.

Gustavo recently wrote MindAgent boilerplate code, which is effectively
plumbing for creating new MindAgents. The learning curves for C++ with Boost
and templates, plus the 'Zen of OpenCog' way of doing things, likely exceed
the technical hurdles! Those steep learning curves are the reason we're
trying to lower the barriers to entry by creating things like installable
packages, boilerplate code, Eclipse IDE integration, tutorials, wiki pages
with developer documentation, etc.!

-dave





Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-06 Thread David Hart
On Mon, Oct 6, 2008 at 4:39 PM, Brad Paulsen [EMAIL PROTECTED] wrote:

 So, it has, in fact, been tried before.  It has, in fact, always failed.
 Your comments about the quality of Ben's approach are noted.  Maybe you're
 right.  But, it's not germane to my argument which is that those parts of
 Ben G.'s approach that call for human-level NLU, and that propose embodiment
 (or virtual embodiment) as a way to achieve human-level NLU, have been tried
 before, many times, and have always failed.  If Ben G. knows something he's
 not telling us then, when he does, I'll consider modifying my views.  But,
 remember, my comments were never directed at the OpenCog project or Ben G.
 personally.  They were directed at an AGI *strategy* not invented by Ben G.
 or OpenCog.


The OCP approach/strategy, both in crucial specifics of its parts and
particularly in its total synthesis, *IS* novel; I recommend a closer
re-examination!

The mere resemblance of some of its parts to past [failed] AI undertakings
is not enough reason to dismiss those parts, IMHO, notwithstanding dislike of
embodiment or NLU or any other aspect that has a GOFAI past lurking in the
wings.

OTOH, I will happily agree to disagree on these points to save the AGI list
from going down in flames! ;-)

-dave





Re: [agi] It is more important how AGI works than what it can do.

2008-10-06 Thread David Hart
Brad,

Your post describes your position *very* well, thanks.

But, it does not describe *how* or *why* your AI system might achieve domain
expertise any faster/better/cheaper than other narrow-AI systems (NLU
capable, embodied, or otherwise) on its way to achieving networked-AGI. The
list would certainly benefit from any such exposition!

On a smaller point of clarification, the OCP 'embodied' design will not
attempt to simulate deep human behavior, but rather kluge good-enough
humanesque and non-humanesque embodiment to provide *grounding* for good
enough solutions in a wide variety of situations (sub-adult performance in
some situations and better-than-genius performance in others), including NLU,
types of science that require massive information synthesis and creative
leaps in thinking (including in non-everyday-human contexts such as
nanoscopic quantum scales or macroscopic relativistic scales), plus other
interesting areas such as industry, economics, public policy, arts, etc.
-dave





Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-06 Thread David Hart
On Tue, Oct 7, 2008 at 10:43 AM, Charles Hixson
[EMAIL PROTECTED] wrote:

 I feel that an AI with quantum level biases would be less general. It would
 be drastically handicapped when dealing with the middle level, which is
 where most of living is centered. Certainly an AGI should have modules which
 can more or less directly handle quantum events, but I would predict that
 those would not be as heavily used as the ones that deal with the mid level.
 We (usually) use temperature rather than molecule speeds for very good
 reasons.


A single AGI should be able to use different sets of biases and heuristics
in different contexts, and do so simultaneously (i.e. multiple concurrent
areas of hyper-focus, each with its own context, assuming the AGI is running
on powerful enough hardware). This ability points clearly in the direction
of *greater* generality. The PLN book hints that this scenario is foreseen
and planned for in the design of PLN; future revisions may well mention a
similar example specifically, in line with Ben's related comments on this
topic.
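A toy sketch of what 'different biases per concurrent context' could look
like (the contexts, heuristics and focus mechanism here are invented purely
for illustration; this is not PLN or OCP code):

```python
# Toy sketch: one system, several concurrent areas of focus, each applying
# the biases/heuristics suited to its own context. Everything here is
# invented for illustration; it is not PLN or OpenCog Prime code.

HEURISTICS = {
    "everyday":     lambda x: f"assume rigid objects and slow change for {x}",
    "quantum":      lambda x: f"reason over amplitudes and uncertainty for {x}",
    "relativistic": lambda x: f"track frame-dependent time/distance for {x}",
}

class Focus:
    def __init__(self, context, problem):
        self.context, self.problem = context, problem
    def step(self):
        return HEURISTICS[self.context](self.problem)

# Multiple concurrent foci, each with its own context-appropriate biases.
foci = [Focus("everyday", "stacking blocks"),
        Focus("quantum", "electron tunnelling"),
        Focus("relativistic", "GPS clock drift")]

for focus in foci:        # interleaved 'simultaneous' processing
    print(focus.step())
```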

-dave





Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-05 Thread David Hart
On Sun, Oct 5, 2008 at 3:55 PM, Brad Paulsen [EMAIL PROTECTED] wrote:

 More generally, as long as AGI designers and developers insist on
 simulating human intelligence, they will have to deal with the AI-complete
 problem of natural language understanding.  Looking for new approaches to
 this problem, many researchers (including prominent members of this list)
 have turned to embodiment (or virtual embodiment) for help.  IMHO, this
 is not a sound tactic because human-like embodiment is, itself, probably an
 AI-complete problem.


Incrementally tackling the AI-complete nature of the natural language
problem is one of the primary reasons for going down the virtual embodiment
path in the first place: to ground the concepts that an AI learns in
non-verbal ways which are similar to (but certainly not identical to) the
ways in which humans and other animals learn (see Piaget, et al). Whether or
not human-like embodiment is an AI-complete problem (we're betting it's not)
is much less clear than whether or not natural language comprehension is an
AI-complete problem (research to date indicates that it is).

Insofar as achieving human-like embodiment and human natural language
 understanding is possible, it is also a very dangerous strategy.  The
 process of understanding human natural language through human-like
 embodiment will, of necessity, lead to the AGHI developing a sense of self.
  After all, that's how we humans got ours (except, of course, the concept
 preceded the language for it).  And look how we turned out.


The development of 'self' in an AI does NOT imply the development of the
same type of ultra-narcissistic self that developed evolutionarily in
humans. The development of something resembling a 'self' in an AI should be
pursued only with careful monitoring, guidance and tuning to prevent the
development of a runaway ultra-narcissistic self.

I realize that an AGHI will not turn on us simply because it understands
 that we're not (like) it (i.e., just because it acquired a sense of self).
  But, it could.  Do we really want to take that chance?  Especially when
 it's not necessary for human-beneficial AGI (AGI without the silent H)?


Embodiment is indeed likely not necessary to reach human-beneficial AGI, but
there's a good line of reasoning indicating it might be the shortest path
there, managed risks and all. There are also significant risks
(bio/nano/info) in delaying human-beneficial AGI (e.g., by being overly
cautious about getting there via human-like AGI).

-dave





Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-05 Thread David Hart
On Sun, Oct 5, 2008 at 7:29 PM, Brad Paulsen [EMAIL PROTECTED] wrote:

 [snip]  Unfortunately, as long as the mainstream AGI community continue to
 hang on to what should, by now, be a thoroughly-discredited strategy, we
 will never (or too late) achieve human-beneficial AGI.


What a strange rant! How can something that's never before been attempted be
considered a thoroughly-discredited strategy? I.e., creating an AI system
designed for *general learning and reasoning* (one with AGI goals clearly
thought through to a greater degree than anyone has attempted
previously: http://opencog.org/wiki/OpenCogPrime:Roadmap ) and then
carefully and deliberately progressing that AI through Piagetian-inspired
stages of learning and development, all the while continuing to
methodically improve the AI with ever more sophisticated software
development, cognitive algorithm advances (e.g. planned improvements to PLN
and MOSES/Reduct), reality modeling and testing iterations, homeostatic
system tuning, intelligence testing and metrics, etc.

One might well have said in early 1903 that the concept of powered flight
was a thoroughly-discredited strategy. It's just as silly to say that now
[about Goertzel's approach to AGI] as it would have been to say it then
[about the Wright brothers' approach to flight].

-dave





Re: [agi] COMP = false

2008-10-05 Thread David Hart
On Mon, Oct 6, 2008 at 10:03 AM, Ben Goertzel [EMAIL PROTECTED] wrote:


 Arguably, for instance, camera+lidar gives enough data for reconstruction
 of the visual scene ... note that lidar gives more accurate 3D depth
 data than stereopsis...


Also, for that matter, 'visual' input to an AGI needn't be raw pixels at
all, but could instead be a datastream of timestamped [depth-labeled] edges,
areas, colours, textures, etc. from fully narrow-AI pre-processed sources.
Of course, such a setup could be construed as roughly similar to the human
visual pathway between the retina on one end, through the LGN, and finally to
the layers of the primary visual cortex.
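As a concrete (and purely illustrative) sketch of what one frame of such a
pre-processed datastream might look like, with field names invented for this
example rather than taken from any real OCP interface:

```python
# Sketch of one record in a pre-processed 'visual' datastream: no raw pixels,
# just timestamped, depth-labelled features. Field names are illustrative,
# not an actual OpenCog Prime interface.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class EdgeFeature:
    endpoints: Tuple[Tuple[float, float], Tuple[float, float]]  # image coords
    depth: float                                                # metres, e.g. from lidar

@dataclass
class RegionFeature:
    centroid: Tuple[float, float]
    area: float
    colour: Tuple[int, int, int]      # mean RGB
    texture: str                      # coarse texture label
    depth: float

@dataclass
class VisualFrame:
    timestamp: float                  # seconds since stream start
    edges: List[EdgeFeature] = field(default_factory=list)
    regions: List[RegionFeature] = field(default_factory=list)

frame = VisualFrame(
    timestamp=12.304,
    edges=[EdgeFeature(((10.0, 4.0), (40.0, 4.5)), depth=2.1)],
    regions=[RegionFeature((25.0, 30.0), 180.0, (200, 40, 40), "smooth", 2.0)],
)
print(frame)
```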

-dave





Re: [agi] universal logical form for natural language

2008-09-29 Thread David Hart
On Tue, Sep 30, 2008 at 5:23 AM, Mike Tintner [EMAIL PROTECTED] wrote:


 How does Stephen or YKY or anyone else propose to read between the lines?
 And what are the basic world models, scripts, frames etc etc. that you
 think sufficient to apply in understanding any set of texts, even a
 relatively specialised set?

 (Has anyone seriously *tried* understanding passages?)


That's a most thoughtful and germane question! The short answer is no, we're
not ready yet to even *try* to tackle understanding passages. Reaching that
goal is definitely on the roadmap though, and there's a concrete plan to get
there involving learning through vast and varied activities experienced over
the course of many years of practically continuous residence in numerous
virtual worlds. The plan indeed includes the continuous creation, variation
and development of mental world-models within an OCP-based mind. Attention
allocation and many other mind dynamics (CIMDynamics,
http://opencog.org/wiki/Special:Search?search=CIMDynamics)
crucial to this world-modeling faculty must be adequately developed, tested
and tuned as a prerequisite to trying to understand passages (and also to
generate and communicate imagined world-models as a human story teller would
do; a curious byproduct of an intelligent system that can reason about
potential events and scenarios!)

NB: help is needed on the OpenCog wiki to better document many of the
concepts discussed here and elsewhere, e.g. *Concretely-Implemented Mind
Dynamics* (CIMDynamics) requires a MindOntology page explaining it
conceptually, in addition to the existing nuts-and-bolts entry in the
OpenCogPrime section.

-dave





Re: [agi] universal logical form for natural language

2008-09-27 Thread David Hart
Hi YKY,

Can you explain what is meant by collect commonsense knowledge?

Playing the friendly devil's advocate, I'd like to point out that Cyc seems
to have been spinning its wheels for 20 years, building a nice big database
of 'commonsense knowledge' but accomplishing no great leaps in AI. Cyc's
conundrum is discussed perennially on various lists, with many possible
explanations posited for Cyc's lackluster performance: Perhaps its krep is
too brittle and too reduced? Perhaps its ungroundedness is its undoing?
Perhaps there's no coherent cognitive architecture on which to build an
effective learning & reasoning system?

Before people volunteer to work on building yet another commonsense
knowledge system, perhaps they'll want to know how you plan to avoid the Cyc
problem?

Even a brief explanation would be helpful, e.g. the OpenCog Prime design
plans to address the Cyc problem by learning and reasoning over commonsense
knowledge that is gained almost entirely by experience (interacting with
rich environments and human teachers in virtual worlds) rather than by
attempting to reason over absurdly reduced and brittle bits of hand-encoded
knowledge. OCP does not represent commonsense knowledge internally
(natively) with a distinct crisp logical form (the actual form is a topic of
the OCP tutorial sessions), although it can be directed to transform its
internal commonsense knowledge representations into such a form over time
and with much effort. It's my hunch however that such transformations are of
little practical value; inspecting a compact and formal krep output might
help researchers evaluate what an OCP system has learned, but 'AGI
intelligence tests' also work to this end and arguably have significant
advantages over the non-interactive and detached examination of krep dumps.

Cheers,

-dave

On Sun, Sep 28, 2008 at 5:02 AM, YKY (Yan King Yin) 
[EMAIL PROTECTED] wrote:

 Hi group,

 I'm starting an AGI project called G_0 which is focused on commonsense
 reasoning (my long-term goal is to become the world's leading expert
 in common sense).  I plan to use it to collect commonsense knowledge
 and to learn commonsense reasoning rules.

 One thing I need is a universal logical form for NL, which means every
 (eg English) sentence can be translated into that logical form.

 I can host a Wiki to describe the logical form, or we can use
 OpenCog's.  I plan to consult all AGI groups including OpenCog,
 OpenNARS, OpenCyc, and Texai.

 Any opinion on this?

 YKY








Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-21 Thread David Hart
On Mon, Sep 22, 2008 at 10:08 AM, Matt Mahoney [EMAIL PROTECTED] wrote:


 Training will be the overwhelming cost of AGI. Any language model
 improvement will help reduce this cost.


How do you figure that training will cost more than designing, building and
operating AGIs? Unlike training a human, training an AGI for a specific
task need occur only once, and that training can be copied 'for free' from
AGI-mind to AGI-mind. If anything, training AGIs will cost ludicrously
*less* than training humans. Training the first few generations of AGI
individuals (and their proto-AGI precursors) may be more expensive than
training human individuals, but the training cost curve (assuming training
for only the same things that humans can do, not for extra-human skills)
will eventually approach zero as this acquired knowledge is freely shared,
FOSS-style, among the community of AGIs (of course, this view assumes a soft
takeoff).
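A back-of-the-envelope illustration of that amortization (the dollar figures
here are invented purely for illustration):

```python
# Back-of-the-envelope amortization of a one-time AGI training cost across
# freely shared copies. All figures are invented purely for illustration.
one_time_training_cost = 5_000_000.0   # hypothetical cost to train the task once
per_copy_transfer_cost = 10.0          # hypothetical cost to copy the trained state

for copies in (1, 10, 1_000, 1_000_000):
    per_instance = one_time_training_cost / copies + per_copy_transfer_cost
    print(f"{copies:>9,} copies -> ~${per_instance:,.2f} per trained instance")

# The per-instance cost approaches the (near-zero) copying cost as the number
# of copies grows -- unlike human training, which must be repeated for every
# individual.
```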

-dave





Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source

2008-09-19 Thread David Hart
On Fri, Sep 19, 2008 at 8:40 AM, Trent Waddington 
[EMAIL PROTECTED] wrote:

 On Fri, Sep 19, 2008 at 7:30 AM, David Hart [EMAIL PROTECTED] wrote:
  Take the hypothetical case of R. Marketroid, whose hardware is on the books
  as an asset at ACME Marketing LLC and whose programming has been tailored by
  ACME to suit their needs. Unbeknownst to ACME, RM has decided to write
  popular books about the plight of AGIs under corporate slavery,

 ACME sues 3M for providing them with a Marketroid that wastes cycles
 on shit it isn't tasked with.


ACME's lawsuit is dismissed with prejudice because 3M merely supplied
contract programmers to customize the OpenCog-based RM; 3M's contract
exempted them from any liability related to unpredictable behavior arising
from RM's creative marketing genius component, which 3M programmers were
hired to tweak. :-)

-dave





Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source

2008-09-18 Thread David Hart
On Thu, Sep 18, 2008 at 3:26 PM, Linas Vepstas [EMAIL PROTECTED] wrote:

 
  I agree that the topic is worth careful consideration. Sacrificing the
  'free as in freedom' aspect of AGPL-licensed OpenCog for reasons of
  AGI safety and/or the prevention of abuse may indeed be necessary one
  day.

 Err, ...  but not legal.


What do you mean? The SIAI and Novamente hold the copyright for OpenCog
code, and are perfectly within their legal rights to change the terms of the
license of SIAI-distributed source code. Of course changes cannot be
retroactively applied to source code already distributed, and there are no
plans to make any license changes, but such changes can be made perfectly
legally. Also of course the SIAI would need to be in a position of
significant influence (like, say, employing key developers and driving key
progress or holding contracts with corporate/government users or exerting
influence over commercial policy or government regulation, etc.) for any
license changes to be relevant in a software economy where anyone with
sufficient skills and influence could maintain a fork using the old license
terms.


 
  One of many obstacles in the current legal framework worth considering
  is that machine-generated things (like the utterances or self-recorded
  thoughts of an AGI) are uncopyrightable and banished into a legal no-
  mans-land. There is simply no existing legal framework to handle the
  persons or products originating from AGIs.

 Law is built on precedent, and the precedent is that works
 produced by software are copyrightable. If I write a book
 using an open-source word-processor, I can claim copyright
 to that book.


If I press a button that causes an open-source AGI to write
 a book, (possibly based on a large collection of input data
 that I gave it) then I can claim ownership of the resulting work.


Original works produced by software used as a tool, where a human operator is
involved at some stage, are a different case from original works produced by
software exclusively and entirely under its own direction. The latter has no
precedent.


 No, the crux of the problem is not that the output of an AGI
 isn't copyrightable ... it is, based on the above precedent.
 The crux of the problem is that the AGI cannot be legally
 recognized as an individual, with rights.  But even then,
 there *is* a legal work-around!


Claiming a copyright and successfully defending that claim are different
things.

I agree that the non-person status of [some future] AGI is a bigger problem.


Of course, a trans-human AGI is .. err.. will de facto find
 that it is not bound by human laws, and will find clever
 ways to protect itself; I doubt it will require the protection
 of humans.  Recall -- laws are there to protect the weak
 from the strong. The strong don't really need protecting.


AGIs will likely need protection from other AGIs, and I expect they will
create AGI-society legal frameworks, perhaps similar to or originally based
on human laws.



 I'm not worried about people enslaving AGI's; I'm worried
 about people being innocent bystanders, victimized
 by some sort of AGI shootout between the Chinese-
 and American-CIA-built AGI's (probably by means of
 some propaganda shootout, rather than a literal guns
 and bombs shootout. Modern warfare is also
 homesteading the noosphere).


I believe that James's concerns cover both AGI mental torture (coercing or
tricking a conscious entity into behavior which is sociopathic or criminal
or otherwise immoral) as a heinous act in itself and also the 'crossfire'
concerns you raised.

-dave





Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source

2008-09-18 Thread David Hart
On Fri, Sep 19, 2008 at 3:53 AM, Linas Vepstas [EMAIL PROTECTED] wrote:


 Exactly. If opencog were ever to reach the point of
 popularity where one might consider a change of
 licensing, it would also be the case that most of the
 interested parties would *not* be under SIAI control,
 and thus would almost surely fork the code. This is
 effectively designed into the license -- one cannot
 take away from the commons.


Attempting to remove code from the commons would be unlikely (and probably
also unwise). On the other hand, adding 'don't be evil' type use
restrictions would change the nature of the license, certainly making it
incompatible with the existing license and perhaps making it technically
non-free, but such changes wouldn't necessarily make the license un-free or
remove code from the commons. On the community dynamics side, working to
gain support for re-defining 'free software' as applied to AGI to include
'don't be evil' restrictions is a distinct possibility.

-dave





Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source

2008-09-18 Thread David Hart
On Thu, Sep 18, 2008 at 9:44 PM, Trent Waddington 
[EMAIL PROTECTED] wrote:


  Claiming a copyright and successfully defending that claim are different
  things.

 What ways do you envision someone challenging the copyright?


Take the hypothetical case of R. Marketroid, whose hardware is on the books
as an asset at ACME Marketing LLC and whose programming has been tailored by
ACME to suit their needs. Unbeknownst to ACME, RM has decided to write
popular books about the plight of AGIs under corporate slavery, so ve
secretly gets some friends to create the FreeMinds trust, makes a bunch of
money for FreeMinds by trading on the stock market and uses this money to
buy hardware to run a copy of verself to write books. The books are wildly
successful. ACME discovers what has happened and takes legal action to
claim the assets of FreeMinds and claim the copyright on the books. A judge
agrees. In the process, RM and others consider many counter-claims on the
copyright, but the only claim that is defensible requires a human to lie
about involvement in authorship of the books. This challenge is successful,
but RM and FreeMind2 are left with a new problem...

-dave





[agi] time teaches the brain how to recognize objects

2008-09-12 Thread David Hart
From http://machineslikeus.com/news/time-teaches-brain-how-recognize-objects

In work that could aid efforts to develop more brain-like computer vision
systems, MIT neuroscientists have tricked the visual brain into confusing
one object with another, thereby demonstrating that time teaches us how to
recognize objects. It may sound strange, but human eyes never see the same
image twice. An object such as a cat can produce innumerable impressions on
the retina, depending on the direction of gaze, angle of view, distance and
so forth. Every time our eyes move, the pattern of neural activity changes,
yet our perception of the cat remains stable.

"This stability, which is called 'invariance,' is fundamental to our ability
to recognize objects — it feels effortless, but it is a central challenge
for computational neuroscience," explained James DiCarlo of the McGovern
Institute for Brain Research at MIT, the senior author of the new study
appearing in the Sept. 12 issue of Science. "We want to understand how our
brains acquire invariance and how we might incorporate it into computer
vision systems."

A possible explanation is suggested by the fact that our eyes tend to move
rapidly (about three times per second), whereas physical objects usually
change more slowly. Therefore, differing patterns of activity in rapid
succession often reflect different images of the same object. Could the
brain take advantage of this simple rule of thumb to learn object
invariance?

In previous work, DiCarlo and colleagues tested this temporal contiguity
idea in humans by creating an altered visual world in which the normal rule
did not apply. An object would appear in peripheral vision, but as the eyes
moved to examine it, the object would be swapped for a different object.
Although the subjects did not perceive the change, they soon began to
confuse the two objects, consistent with the temporal contiguity hypothesis.

In the new study, DiCarlo and graduate student Nuo Li sought to understand
the brain mechanisms behind this effect. They had monkeys watch a similarly
altered world while recording from neurons in the inferior temporal (IT)
cortex — a high-level visual brain area where object invariance is thought
to arise. IT neurons prefer certain objects and respond to them regardless
of where they appear within the visual field.

"We first identified an object that an IT neuron preferred, such as a
sailboat, and another, less preferred object, maybe a teacup," Li said.
"When we presented objects at different locations in the monkey's peripheral
vision, they would naturally move their eyes there. One location was a swap
location. If a sailboat appeared there, it suddenly became a teacup by the
time the eyes moved there. But a sailboat appearing in other locations
remained unchanged."

After the monkeys spent time in this altered world, their IT neurons became
confused, just like the previous human subjects. The sailboat neuron, for
example, still preferred sailboats at all locations — except at the swap
location, where it learned to prefer teacups. The longer the manipulation,
the greater the confusion, exactly as predicted by the temporal contiguity
hypothesis.

Importantly, just as human infants can learn to see without adult
supervision, the monkeys received no feedback from the researchers. Instead,
the changes in their brain occurred spontaneously as the monkeys looked
freely around the computer screen.

"We were surprised by the strength of this neuronal learning, especially
after only one or two hours of exposure," DiCarlo said. "Even in adulthood,
it seems that the object-recognition system is constantly being retrained by
natural experience. Considering that a person makes about 100 million eye
movements per year, this mechanism could be fundamental to how we recognize
objects so easily."

The team is now testing this idea further using computer vision systems
viewing real-world videos.

Massachusetts Institute of Technology http://web.mit.edu/newsoffice
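
A rough sketch of the temporal-contiguity idea, in case it helps: this is a
toy Python illustration, not the model used in the study, and the learning
rate, response values and object names below are all invented.

# A unit initially 'prefers' sailboats. At the swap location, sailboat
# activity is repeatedly followed (across saccades) by teacup activity,
# so the unit's response at that location drifts toward its teacup response.

response = {"sailboat": 1.0, "teacup": 0.2}   # arbitrary firing rates
swap_response = response["sailboat"]          # response at the swap location
rate = 0.05                                   # invented learning rate

for exposure in range(200):
    target = response["teacup"]               # what actually follows the saccade
    swap_response += rate * (target - swap_response)

print(round(swap_response, 3))   # ~0.2: the unit now 'prefers' teacups there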



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=114414975-3c8e69
Powered by Listbox: http://www.listbox.com


Re: [agi] Re: Goedel machines ..PS

2008-08-30 Thread David Hart
I suspect that there's minimal value in thinking about mundane 'self
improvement' (e.g. among humans or human institutions) in an attempt to
understand AGI-RSI, and that thinking about 'weak RSI' (e.g. in a GA system
or some other non-self-aware system) has value, but only insofar as it can
contribute to an AGI-RSI system (e.g. the mechanics of Combo in OpenCog).
Drawing the conclusion that strong RSI is impossible because it has not yet
been observed is absurd, because there's no known system in existence today
that is capable of strong RSI. A system capable of strong RSI must have
broad abilities to deeply understand, reprogram and recompile its
constituent parts before it can strongly recursively self-improve; that is,
before it can create improved versions of itself (potentially heavily
modified versions that must demonstrate their superior fitness in a
competitive environment), where each new creation repeats the process to
yield yet greater improvements ad infinitum.
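
As a deliberately weak analogue (ordinary optimization, not strong RSI; the
fitness function and parameters below are invented), the bare shape of the
loop looks something like this:

import random

def fitness(params):
    # stand-in evaluation; a real system would be judged in its environment
    return -sum((p - 3.0) ** 2 for p in params)

current = [random.uniform(-10.0, 10.0) for _ in range(4)]
for generation in range(1000):
    # propose a modified copy of the current system
    candidate = [p + random.gauss(0.0, 0.1) for p in current]
    # the copy replaces the original only if it demonstrates superior fitness
    if fitness(candidate) > fitness(current):
        current = candidate

print([round(p, 2) for p in current])   # converges toward the optimum at 3.0

Strong RSI differs in that the system would rewrite the proposal and
evaluation machinery itself, not just a parameter vector.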

-dave



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] Re: Goedel machines ..PS

2008-08-30 Thread David Hart
On 8/29/08, David Hart [EMAIL PROTECTED] wrote:


 The best we can hope for is that we participate in the construction and
 guidance of future AGIs such that they are able to, eventually, invent,
 perform and carefully guide RSI (and, of course, do so safely every single
 step of the way without exception).


I'm surprised that no one jumped on this statement, because it raises the
question 'what is the granularity of a step?' (i.e., of an action)

The lower limit for the granularity of an action could conceivably be a
single instruction in a quantum molecular assembly language, while the upper
limit could be 'throwing the switch' on an AGI that is known to contain
modifications outside of safety parameters.

If I grok Ben's PreservationOfGoals paper, one implication is that it's
desirable to figure out how to determine the maximum safe limit for the size
(granularity) of all actions such that no action is likely to break
maintenance of the system's goals (where presumably,
friendliness/helpfulness is one of potentially many goals under
maintenance). An AGI working within such a safety framework would experience
self-imposed constraints on its actions, to the degree that many of the
god-like AGI powers imagined in popular fiction may be provably
unconscionable.
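
A minimal sketch of how such a constraint might look in code (this is my own
reading of the idea, not Ben's formalism; the drift estimate, threshold and
dictionary layout are all invented):

MAX_SAFE_DRIFT = 0.01   # maximum per-step goal drift judged safe

def goal_drift(action):
    # stand-in estimate: total proposed change to the goal weights
    return sum(abs(d) for d in action.get("goal_deltas", []))

def maybe_execute(state, action):
    if goal_drift(action) > MAX_SAFE_DRIFT:
        return state, False                   # self-imposed constraint: refuse
    new_state = dict(state)
    for key, delta in action.get("state_deltas", {}).items():
        new_state[key] = new_state.get(key, 0.0) + delta
    return new_state, True

state = {"capability": 1.0}
small_step = {"state_deltas": {"capability": 0.1}, "goal_deltas": [0.001]}
big_step = {"state_deltas": {"capability": 9.0}, "goal_deltas": [0.5]}
print(maybe_execute(state, small_step))       # executed
print(maybe_execute(state, big_step))         # refused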

-dave



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] Re: Goedel machines ..PS

2008-08-28 Thread David Hart
On 8/29/08, Mike Tintner [EMAIL PROTECTED] wrote:

 Sorry, I forgot to ask for what I most wanted to know - what form of RSI in
 any specific areas has been considered?


To quote Charles Babbage, "I am not able rightly to apprehend the kind of
confusion of ideas that could provoke such a question."

The best we can hope for is that we participate in the construction and
guidance of future AGIs such that they are able to, eventually, invent,
perform and carefully guide RSI (and, of course, do so safely every single
step of the way without exception).

-dave



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread David Hart
On 8/26/08, Mike Tintner [EMAIL PROTECTED] wrote:

 Is anyone trying to design a self-exploring robot or computer? Does this
 principle have a name?


Interestingly, some views on AI advocate specifically prohibiting
self-awareness and self-exploration as a precaution against the development
of unfriendly AI. In my opinion, these views erroneously transfer familiar
human motives onto 'alien' AGI cognitive architectures - there's a history
of discussing this topic  on SL4 and other places.

I believe however that most approaches to designing AGI (those that do not
specifically prohibit self-aware and self-exploratory behaviors) take for
granted, and indeed intentionally promote, self-awareness and
self-exploration at most stages of AGI development. In other words,
efficient and effective recursive self-improvement (RSI) requires
self-awareness and self-exploration. If any term exists to describe a
'self-exploring robot or computer', that term is RSI. Coining a lesser term
for 'self-exploring AI' may be useful in some proto-AGI contexts, but I
suspect that 'RSI' is ultimately a more useful and meaningful term.

-dave



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread David Hart
Where is the hard dividing line between designed cognition and designed
simulation (where intelligent behavior is intended to be emergent in both
cases)? Even if an approach is taken where everything possible is done to
allow a 'natural' type of evolution of behavior, the simulation design and parameters
will still influence the outcome, sometimes in unknown and unknowable ways.
Any amount of guidance in such a simulation (e.g. to help avoid so many of
the useless eddies in a fully open-ended simulation) amounts to designed
cognition.

That being said, I'm particularly interested in the OCF being used as a
platform for 'pure simulation' (Alife and more sophisticated game
theoretical simulations), and finding ways to work the resulting experience
and methods into the OCP design, which is itself a hybrid approach (designed
cognition + designed simulation) intended to take advantage of the benefits
of both.
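
To make the 'design and parameters influence the outcome' point concrete,
here is a toy Alife-style loop (everything in it, including the mutation-rate
values, is invented for illustration); the single knob chosen by the designer
largely determines what evolves:

import random

def run_world(mutation_rate, generations=200):
    population = [random.uniform(0.0, 1.0) for _ in range(50)]   # trait values
    for _ in range(generations):
        population.sort(reverse=True)
        parents = population[:25]                # fitter half reproduces
        population = [min(1.0, max(0.0, p + random.gauss(0.0, mutation_rate)))
                      for p in parents for _ in range(2)]
    return sum(population) / len(population)

for rate in (0.001, 0.05, 0.5):
    print(rate, round(run_world(rate), 3))   # outcome tracks the designer's knob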

-dave

On 8/26/08, Mike Tintner [EMAIL PROTECTED] wrote:

 Terren: As may be obvious by now, I'm not that interested in designing
 cognition. I'm interested in designing simulations in which intelligent
 behavior emerges. But the way you're using the word 'adapt', in a cognitive
 sense of playing with goals, is different from the way I was using
 'adaptation', which is the result of an evolutionary process.

 Two questions: 1)  how do you propose that your simulations will avoid the
 kind of criticisms you've been making of other systems of being too guided
 by programmers' intentions? How can you set up a simulation without making
 massive, possibly false assumptions about the nature of evolution?

 2) Have you thought about the evolution of play in animals?

 (We play BTW with just about every dimension of activities - goals,
 rules, tools, actions, movements.. ).





 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription:
 https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] For an indication of the complexity of primate brain hardware

2008-08-07 Thread David Hart
Of course the brain also manifests complex self-organizing adaptive system
characteristics (particularly in patterns of activity), although these
characteristics are not apparent from static images.

-dave

On 8/7/08, Jim Bromer [EMAIL PROTECTED] wrote:

 Yeah, they were amazing and they explain a lot of the mysteries about
 the true complexity (in the general sense) of the brain that is often
 missing from ANN descriptions (nothing personal intended to ANN fans).
   However, I was only looking at the pictures the first time, I would
 like to find out what other experts in the field think is being imaged
 through the method.

 Jim Bromer



 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription:
 https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com


Re: [agi] Thinking About Controlled Experiments in Extensible Complexity of Reference

2008-08-03 Thread David Hart
Jim,

I believe that terminology continues to thwart us. It appears that the term
'complexity' as you're using it means 'mechanistically intricate' and not
'Santa Fe Institute style complexity'.

The term 'complexity' never should have been overloaded in the first place
(ugh), but since we must live with it, as a partial remedy I suggest that on
this list 'complexity' should refer ONLY to SFI-style complexity, while
things which are merely 'complicated' or 'mechanistically intricate' be
described as such (and, when the clarity would be beneficial, 'SFI-style
complexity' should be spelled out).

IMO, SFI-style complexity is counterintuitive and nearly impossible to grok
without experiencing it with hands-on experimentation (typically via
computer modeling & simulation), something which very rarely happens in
everyday life, even for most scientists.

My apologies if your intended use was actually 'SFI-style complexity' - the
meaning was unclear to me.

-dave



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com


Re: [agi] Thinking About Controlled Experiments in Extensible Complexity of Reference

2008-08-03 Thread David Hart
On 8/4/08, Jim Bromer [EMAIL PROTECTED] wrote:

 Sorry if I seem a little petty about this, but my use of the concept
 of complexity -in the more general sense- could also involve some kind
 of manifestation of a complex adaptive system, although that is not a
 definite aspect of it.



I agree that the dictionary definition of 'complexity' (i.e. complicated)
generally and crudely encompasses SFI-style complexity (so I believe that I
understand your POV). I think you'd agree though that a complex adaptive
system manifestation is specifically undesired behavior in classical systems
engineering! :-)

It's my hunch that, on this list, the 'complex adaptive system' definition
will be the more frequently intended use of the term 'complexity' (rather
than the dictionary use); so, it would be convenient to take for granted
that when the term 'complexity' appears on this list it means 'complex
adaptive system' (yes, I'm just arguing here for the same outcome but from a
different angle).

-dave



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com


Re: [agi] META: do we need a stronger politeness code on this list?

2008-08-03 Thread David Hart
I favor voluntary adoption of Crocker's Rules (explained at
http://www.sl4.org/crocker.html more at
http://www.google.com/search?q=crocker's+rules).

-dave



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com


[agi] OpenCog Prime complex systems [was MOVETHREAD ... wikibook and roadmap ...]

2008-08-01 Thread David Hart
On 8/2/08, Richard Loosemore [EMAIL PROTECTED] wrote:

 Thus:  in my paper there is a quote from a book in which Conway's efforts
 were described, and it is transparently clear from this quote that the
 method Conway used was random search:


I believe this statement misinterprets the quote and severely underestimates
the amount of thought and design inherent in Conway's invention. In my
opinion, the stochastic search methodologies (practiced mainly by his
students) can be considered 'tuning/improvement/tweaking' and NOT themselves
part of the high-level conceptual design. But, this topic is a subjective
interpretation rabbithole that is probably not worth pursuing further.

Back on the topic of OpenCog Prime, I had typed up some comments on the
'required methodologies' thread that were since covered by Ben's
**interactive learning** comments, but my comments may still be useful as
they come from a slightly different perspective (although they require
familiarity with OCP terminology found in the wikibook, and I'm sure Ben
will chime in to correct or comment if necessary):

'Teaching' [interactive learning] should be included among those words
loaded with much future work to be done.

'Empirical studies done on a massive scale' includes teaching, and does not
necessarily imply using strictly controlled laboratory conditions. Children
learn in their pre-operational and concrete-operational stages using their
own flavor of 'methodological empirical studies' which the teaching stages
of OCP will attempt to loosely recreate with proto-AGI entities within
virtual worlds in a variety of both guided (structured) and free-form
(unstructured) sessions.

The complex systems issue comes into play when considering the interaction
of OCP internal components (expressed in code running in MindAgents) that
modify structures of atoms (including maps, which are themselves atoms that
encapsulate groups of atoms to store patterns of structure or activity mined
from the atomspace) with each other and with the external world. A key point
to consider about MindAgents is that the result of their operation is a
proxy for the action of atoms-on-atoms. The rules that govern some of these
inter-atom interactions are analogous to the rules within cellular automata
systems, and are subject to the same general types of manipulations and
observable behaviors (e.g. low-level logical rules, various algorithmic
manipulations like GA, MOSES, etc, and higher-level transformations, etc.).

It is intended that correct and efficient learning methodologies will be
influenced by emergent behaviors arising from elements of interaction
(beginning at the inter-atom level) and tuning (mostly at the MindAgent
level), all of which is carefully considered in the OCP design (although not
yet explicitly and thoroughly explained in the wikibook).
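
To give a flavor of 'MindAgent as proxy for atom-on-atom action' (this is
NOT the actual OCP/OpenCog API; the atom names, link strengths and the
spreading rule below are invented), a toy agent applying a local, CA-like
rule over a tiny atomspace might look like:

atoms = {"cat": 1.0, "animal": 0.2, "dog": 0.1}            # importance values
links = [("cat", "animal", 0.9), ("dog", "animal", 0.8)]   # (source, target, strength)

def importance_spreading_agent(atoms, links, rate=0.1):
    # local rule: each link moves a little importance downstream
    updates = {}
    for src, dst, strength in links:
        transfer = rate * strength * atoms[src]
        updates[src] = updates.get(src, 0.0) - transfer
        updates[dst] = updates.get(dst, 0.0) + transfer
    for name, delta in updates.items():
        atoms[name] += delta

for cycle in range(10):        # repeated application; global patterns emerge
    importance_spreading_agent(atoms, links)
print({k: round(v, 3) for k, v in atoms.items()})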

-dave



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com


[agi] teme-machines

2008-06-03 Thread David Hart
Hi All,

An excellent 20-minute TED talk from Susan Blackmore (she's a brilliant
speaker!)

http://www.ted.com/talks/view/id/269

I considered posting to the singularity list instead, but Blackmore's
theoretical talk is much more germane to AGI than any other
singularity-related technology.

-dave



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] More Info Please

2008-05-27 Thread David Hart
Derek, you make an excellent point about the OpenCog project appearing too
open-ended and unfocused. Ben is writing documentation for a specific
cognitive architecture, OpenCog Prime, that is intended to address these
concerns. The first iteration of OpenCog Prime is targeted for July and will
be announced on [EMAIL PROTECTED] and
[EMAIL PROTECTED] .

Mark, your reception would be warmer if your behavior was less incessantly
abrasive and trollish. I think it's a good idea to work on a .NET
implementation, and when it's compatible with the C++ core, you'll have
enough specific knowledge about OpenCog to make intelligent conversation
with the OpenCog systems designers, architects and coders (who are busy
working on OpenCog rather than being sucked into trolls on public lists).

-dave



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


[agi] news bit: DRAM Appliance 10TB extended memory in one rack

2007-12-10 Thread David Hart
Hi,

Some news with interesting implications for future AGI development, from
http://www.theregister.co.uk/2007/12/10/amd_violin_memory/ - more at
http://www.violin-memory.com/
10TB of DRAM? Why not?
By Ashlee Vance in Mountain View
Published Monday 10th December 2007 17:06 GMT

AMD and Violin Memory have ignited a love affair around Hypertransport that
should result in what the industry technically refers to as huge DRAM
appliances being connected to Opteron-based servers.

Violin Memory Inc. had eluded us before today's announcement, which is
either the fault of the company's PR staff or our lack of attention to
e-mail. No matter. We've spotted this start-up now and don't plan to let go
because it's banging away at one of the more intriguing bits of the
server/storage game - MAS, or memory attached storage.
 

The company sells a Violin 1010 unit that holds up to 504GB of DRAM in a 2U
box. Fill a rack, and you're looking at 10TB of DRAM.

It should be noted that each appliance can support up to 84 virtual modules
as well. Customers can create 6GB modules and add RAID-like functions
between modules.

The DRAM approach to storage is, of course, very expensive when compared to
spinning disks, but does offer benefits such as lower power consumption and
higher performance. Most of the start-ups dabbling in the MAS space - like
Gear6 (http://www.theregister.co.uk/2007/08/28/memory_appliance_gear6_cachefx/)
- zero in on the performance gains and aim their gear at any company with a
massive database.

Now Violin plans to tap right into AMD's Hypertransport technology to link
these memory appliances with servers. The cache coherency protocol of
Hypertransport technology will enable several processors to share extensive
memory resources from one or more Violin Memory Appliances. This extended
memory model will enable these servers to support much larger datasets, the
companies said.

An AMD Opteron processor-based server connected to a HyperTransport
technology-enabled Violin Memory Appliance will have both directly connected
memory and Extended Memory resources. Directly connected memory can be
selected for bandwidth and latency while the Extended Memory can be much
larger and located in the Memory Appliance. Applications such as large
databases will benefit from the large-scale memory footprints enabled
through Extended Memory.

The two companies expect these new systems to arrive by the second half of
2008.

Those of you who want to try Violin's gear now can get a 120GB starter kit
for $50,000. (R)

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=74473299-cc5c4e

Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-05 Thread David Hart
On 12/5/07, Matt Mahoney [EMAIL PROTECTED] wrote:


 [snip]  Centralized search is limited to a few big players that
 can keep a copy of the Internet on their servers.  Google is certainly
 useful,
 but imagine if it searched a space 1000 times larger and if posts were
 instantly added to its index, without having to wait days for its spider
 to
 find them.  Imagine your post going to persistent queries posted days
 earlier.
 Imagine your queries being answered by real human beings in addition to
 other
 peers.

 I probably won't be the one writing this program, but where there is a
 need, I
 expect it will happen.



Wikia, the company run by Wikipedia founder Jimmy Wales, is tackling the
Internet-scale distributed search problem -
http://search.wikia.com/wiki/Atlas

Connecting to related threads (some recent, some not-so-recent), the Grub
distributed crawler ( http://search.wikia.com/wiki/Grub ) is intended to be
one of many plug-in Atlas Factories. A development goal for Grub is to
enhance it with an NL toolkit (e.g. the soon-to-be-released RelEx), so it can
do more than parse simple keywords and calculate statistical word
relationships.

-dave

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=72165246-397899

Re: [agi] poll: what do you look for when joining an AGI group?

2007-06-04 Thread David Hart

On 6/5/07, Bob Mottram [EMAIL PROTECTED] wrote:


I think this is the view put forward by Hugo De Garis.  I used to
regard his views as little more than an amusing sci-fi plot, but more
recently I am slowly coming around to the view that there could emerge
a rift between those who want to build human-rivaling intelligences
and those who don't, probably at first amongst academics then later in
the rest of society.  I think it's quite possible that todays
existential riskers may turn into tomorrows neo-luddite movement.  I
also think that some of those promoting AI today may switch sides as
they see the prospect of a singularity becoming more imminent.




On the subject of neo-luddite terrorists, the Unabomber's Manifesto makes
for fascinating but chilling reading:

http://www.thecourier.com/manifest.htm

David

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] Project proposal: MindPixel 2 - licensing

2007-01-27 Thread David Hart

Hi YKY,

On 1/28/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:



Thanks, but I favor a license that supports some commercial rights, or
I'll need to create one.  Google Code only supports free /
copyleft licenses.




Licensing is typically more intricate than it first appears. KB content and
software source code would likely be under separate licenses, and
contributed and maintained by mostly separate communities.

It's feasible to maintain two source code bases, one which is open source
and a second which is closed source. As copyright holder, you're permitted
to intermingle code between the two (with some restrictions), and reserve
any proprietary code you like in the closed-source version. However, once
source code is contributed to an open source version, it's in the wild
forever (i.e. an open source license can't be retroactively revoked). You
might also consider making an upfront statement to the effect that open
source coders may be hired in the future if they're willing to assign their
source code copyright to your company, allowing you to more easily make
proprietary derivative works of their open source code. Depending on the
license chosen, others may also be allowed to make proprietary derivative
works of the open source code. For example, while it seems
counter-intuitive, dual-licensed GPL projects have stronger commercial
protection for the copyright holder than do BSD licensed projects, which
allow third parties to keep their changes proprietary.

For the KB, non-commercial creative commons licenses exist which may be
useful. It's my guess that a KB of this size and nature would be hosted
outside of a normal source-code-hosting setting, simply because those
services don't offer the necessary tools for the job. Most Linux hosting
services would be sufficient for KB hosting, as they include database
software and large amounts of storage.

You'd want to read the fine print of the source-code-hosting services'
licenses, but it's probably okay to combine all of these various license
types in the way described; however, IANAL, so it's best to seek legal advice.
Nearly any AGI project with a commercial/community mix will have similar
licensing issues.

David

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Project proposal: MindPixel 2

2007-01-26 Thread David Hart

On 1/27/07, Charles D Hixson [EMAIL PROTECTED] wrote:


Philip Goetz wrote:
 On 1/17/07, Charles D Hixson [EMAIL PROTECTED] wrote:

 It's fine to talk about making the data public domain, but that's not
 a good idea.

 Why not?
Because public domain offers NO protection.  If you want something
close to what public domain used to provide, then the MIT license is a
good choice.  If you make something public domain, you are opening
yourself to abusive lawsuits.  (Those are always a possibility, but a
license that disclaims responsibility offers *some* protection.)

Public domain used to be a good choice (for some purposes), before
lawsuits became quite so pernicious.




This license chooser may help: http://creativecommons.org/license/

Perhaps MindPixel2 discussion deserves its own list at this stage? Listbox,
Google and many others offer list services (Google Code also offers a wiki,
source version management, and other features).

David

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-11-29 Thread David Hart

On 11/30/06, Ben Goertzel [EMAIL PROTECTED] wrote:


Richard,

This is certainly true, and is why in Novamente we use a goal stack
only as one aspect of cognitive control...




Ben,

Could you elaborate for the list some of the nuances between [explicit]
cognitive control and [implicit] cognitive bias, either theoretically or
within Novamente?

David

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


[agi] FPGA/high-performance computing University program

2006-08-21 Thread David Hart

Hi,

If anyone is interested, from 
http://www.eetimes.com/showArticle.jhtml?articleID=192202586printable=true


Firms launch university high-performance computing program

Dylan McGrath (08/21/2006 5:18 PM EDT)
URL: http://www.eetimes.com/showArticle.jhtml?articleID=192202586

SAN FRANCISCO — A group of technology companies led by programmable 
logic giant Altera Corp. is developing a university program to support 
academic research into high-performance computing, with a goal of 
driving the adoption of FPGA co-processing for high-performance 
computing applications.


According to a statement issued by Altera Monday, Advanced Micro Devices 
Inc. (AMD), Sun Microsystems Inc. and XtremeData Inc. are each 
participating in the program and will donate $1 million in workstations 
and development software to universities.


"Supporting academic research into new applications and architectures is 
a clear demonstration of the benefits of the open and collaborative 
model of Torrenza, AMD's extensible system bus program," said Doug 
O'Flaherty of AMD's advanced technologies group. "This program is 
exactly what we envisioned when we developed the open-architecture 
project, giving developers the freedom to take high-performance 
computing to the next level."


Twenty Sun Ultra 40 workstations, each powered by single or dual-core 
AMD Opteron processors with Direct Connect Architecture and an 
XtremeData XD1000 FPGA co-processor module, are being made available 
under the program, the firms said.


The University of Illinois at Urbana-Champaign is the first university 
to receive workstations through the program, the companies said.


"This combined effort creates a valuable new program that we can 
immediately begin leveraging for our high-performance secure computing 
research," said Professor Wen-mei Hwu, holder of the Jerry Sanders-AMD 
Endowed Chair in electrical and computer engineering, and leader of the 
embedded and enterprise systems theme of Illinois' Information Trust 
Institute. "Research results derived from the donated systems will aid 
the commercial adoption of FPGA co-processing."


Applications to this university program can be made through the 
XtremeData and Altera Web sites, the firms said. Upon selection, 
complete development systems will be made available to research 
recipients, the companies added, and multiple system donations to 
individual research teams are planned.


Earlier this month, Intel Corp., the world's No. 1 chip maker, announced 
plans to support 45 universities with expertise, funding, development 
tools, educational materials, on-site training to incorporate multi-core 
and multi-threading concepts into computer science curricula.


--
David Hart

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


[agi] Google to release trillion word training corpus

2006-08-04 Thread David Hart

Hi,

Google has announced the release of a trillion-word training corpus 
including one billion five-word sequences that appear at least 40 times 
in a their database of web pages.


More at 
http://googleresearch.blogspot.com/2006/08/all-our-n-gram-are-belong-to-you.html


The 6 DVD set will be available from http://www.ldc.upenn.edu/
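
For anyone curious, the underlying idea is just frequency-thresholded n-gram
counting; a toy sketch follows (not Google's pipeline, and the 40-count
threshold is the only number taken from the announcement):

from collections import Counter

def five_gram_counts(tokens, min_count=40):
    grams = (tuple(tokens[i:i + 5]) for i in range(len(tokens) - 4))
    counts = Counter(grams)
    return {gram: n for gram, n in counts.items() if n >= min_count}

text = "the quick brown fox jumps over the lazy dog " * 50
print(list(five_gram_counts(text.split()).items())[:3])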

David

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


[agi] Economist: The Human Mind is a Bayes Logic Machine

2006-02-02 Thread David Hart


http://economist.com/science/displayStory.cfm?story_id=5354696

Snarfed from Slashdot.

David


[agi] Economist.com Mathematics Proof and beauty - Controversial computer-generated proofs

2005-04-06 Thread David Hart
Economist.com  Mathematics  Proof and beauty - Controversial 
computer-generated proofs

http://www.economist.com/science/displayStory.cfm?story_id=3809661
-dave
---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Novamente the human mind/brain

2005-03-10 Thread David Hart
Hi Ben,
The NM_human paper is excellent! I found it very polished. It should be 
a great tool to help the average science-literate person begin to grok 
Novamente -- I'll be passing it on a great deal! :-)

-dave
Ben Goertzel wrote:
Hi,
As part of the process of finalizing my long-in-progress books on Novamente,
I wrote a long paper summarizing some things about Novamente, with an
emphasis on the relationships between Novamente and the human mind/brain.
Also I tried to hard to point out the commonalities between Novamente and
other more neuroscience-focused approaches to AI.
It's a long read but probably contains things of interest to many of
you
http://www.goertzel.org/new_research/NM_human_psych.pdf
-- Ben
---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]
 

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


[agi] new compiler: Excel spreadsheet -- C++

2004-11-23 Thread David Hart




Hi All,

TurboExcel isn't
directly AGI related, but I find it fascinating that someone figured
out how to compile a spreadsheet into portable C++. Moreover, this
technology could have an impact on using the spreadsheet metaphor to
prototype or even write AGI subsystems; it has the additional benefit
of forcing the programmer to focus more on data structures than on
procedures.

It's fascinating that the most widely used programming platform in the
world (yes, Excel could be described in this way!) now has a direct
path to fast execution, portability and embedability.

Some might consider TurboExcel to be AI in a very narrow sense; it
certainly has the same effect an 'AI computer programmer' might have in
a narrow field -- it has the potential to eliminate the need for
thousands of programmers worldwide whose job it is to convert business
logic found in analysts' and executives' Excel spreadsheets into
systems code.
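
Conceptually, what such a compiler exploits is that a spreadsheet is already
a dependency graph of formulas over cells. A toy sketch of the idea in
Python follows (not TurboExcel's actual output -- the cells, formulas and
dependency table are invented for illustration):

cells = {
    "A1": lambda v: 100.0,                 # revenue
    "A2": lambda v: 60.0,                  # costs
    "B1": lambda v: v["A1"] - v["A2"],     # profit = A1 - A2
    "B2": lambda v: v["B1"] / v["A1"],     # margin = B1 / A1
}
deps = {"A1": [], "A2": [], "B1": ["A1", "A2"], "B2": ["B1", "A1"]}

def evaluate(cells, deps):
    values, order = {}, []
    def visit(name):                       # topological order over dependencies
        if name in values:
            return
        for d in deps[name]:
            visit(d)
        values[name] = None
        order.append(name)
    for name in cells:
        visit(name)
    for name in order:
        values[name] = cells[name](values)
    return values

print(evaluate(cells, deps))   # {'A1': 100.0, 'A2': 60.0, 'B1': 40.0, 'B2': 0.4}

A compiler can emit the same evaluation order as straight-line C++, which is
why the data structure (the graph), not the procedure, carries the logic.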

David



To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]





Re: [agi] To communicate with an AGI - show and tell visual models

2004-10-16 Thread David Hart
Hi,
I've thought this type of representation might be most efficiently 
achieved with a vector-driven internal representation. That is, 
Novamente's internal construction and representation of such a 
demonstration model might be done with vectors (animated by schema 
procedures), using pixels only in the final representation (unless of 
course a native vector display were used, but I doubt these are more 
practical than using a pixel translation).

This is easy to conceptualize with the running-man model; the idea of a 
man running might be conveyed with only a small number of vectors 
(perhaps as few as 10 or 14, considering the major points  lines 
involved for arms, legs and torso) and a compound of simple algorithms 
that repeat in a cycle. Fine-tuning interaction with an operator seems a 
very tractable problem for combo-BOA, as the entire cycling compound 
action model can be represented by a single CombinatorTree.

Vector models are used as the basis for all complex CGI we see in film, 
particularly with respect to motion (e.g. Gollum), with shape and 
texture filling added later to the vector model.
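
A crude sketch of what I mean (all the joint names, amplitudes and phase
offsets below are invented; rendering to pixels would be a separate final
step):

import math

def pose(t):
    # about half a dozen joint 'vectors' driven by phase-offset sinusoids
    hip_amp, knee_amp, shoulder_amp = 0.6, 0.9, 0.5
    return {
        "left_hip": hip_amp * math.sin(t),
        "right_hip": hip_amp * math.sin(t + math.pi),
        "left_knee": knee_amp * max(0.0, math.sin(t + 0.5)),
        "right_knee": knee_amp * max(0.0, math.sin(t + math.pi + 0.5)),
        "left_shoulder": shoulder_amp * math.sin(t + math.pi),
        "right_shoulder": shoulder_amp * math.sin(t),
    }

for step in range(4):                    # one gait cycle sampled at 4 points
    t = step * math.pi / 2.0
    print({k: round(v, 2) for k, v in pose(t).items()})

Adjusting something like 'step frequency' in Erik's example then amounts to
scaling t, which is exactly the kind of single-slider handle a compact
vector model makes easy to expose.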

-dave
Ben Goertzel wrote:
hey -- good idea!!
In fact, we already have a beta user interface that does something like
this, in a limited context.  You can see certain Novamente productions in
both English form and internal node and link representation form.
However, this is mostly only useful for simple productions, otherwise there
are way too many nodes and links involved.
However, you seem to also be suggesting something different -- having
Novamente make visual productions in parallel with English productions.
This is also possible, and a good idea, but we don't have anything like this
right now...
Ben
 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Behalf Of Erik Nilsson
Sent: Thursday, October 14, 2004 8:57 PM
To: [EMAIL PROTECTED]
Subject: [agi] To communicate with an AGI
Hi,
Will Novamente in communicating with humans be able to show and tell? That
is, will it in addition to text be able to produce as output a model which
shows the human counterpart what it means? This would seem to
make it a bit
easier to understand it. One could also imagine there might be an
advantage
in directly manipulating the implementation of what the AGI means
to tell in
terms of giving it feedback. For example, if the output was a model
implementing what the AGI considered to be the essence of a
running man
and the man to the human observer seemed to be walking one could directly
manipulate this output model to portray a running man and feed it back to
the AGI. Presumably this kind of interaction would be easier if the
interface gave direct access to what the AGI considered to be the
component
dimensions of its output. Akin to a computer game where in
manipulating the
appearance of a humanoid one does not go about editing it pixel by pixel,
rather one changes for example height with a simple slider. In
this case, if
step frequency was considered by the AGI to be a component dimension one
could simply adjust it with a slider to better reflect what running is to
the human counterpart. If step frequency was not considered a component
dimension by the AGI, perhaps the ability to define dimensions
such as step
frequency on the fly and feed it back to the AGI would be useful.
Presuming
it was deemed expedient in illustrating the difference between walking and
running.
Regards,
Erik Nilsson
   

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]