Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Ben Goertzel
Mike, Google has had basically no impact on the AGI thinking of myself or
95% of the other serious AGI researchers I know...

On Fri, Sep 19, 2008 at 10:00 AM, Mike Tintner <[EMAIL PROTECTED]>wrote:

>  [You'll note that arguably the single greatest influence on people's
> thoughts about AGI here is Google -  basically Google search - and that
> still means to most text search. However, video search & other kinds of
> image search [along with online video broadcasting] are already starting to
> transform the way we think about the world in an equally powerful way - and
> will completely transform thinking about AGI. This is from the Google blog].
>
>  The future of online video  9/16/2008 06:25:00 AM
> The Internet has had an enormous impact on people's lives around the world
> in the ten years since Google's founding. It has changed politics,
> entertainment, culture, business, health care, the environment and just
> about every other topic you can think of. Which got us to thinking, what's
> going to happen in the next ten years? How will this phenomenal technology
> evolve, how will we adapt, and (more importantly) how will it adapt to us?
> We asked ten of our top experts this very question, and during September
> (our 10th anniversary month) we are presenting their responses. As
> computer scientist Alan Kay has famously observed, the best way to predict
> the future is to invent it, so we will be doing our best to make good on our
> experts' words every day. - Karen Wickre and Alan Eagle, series editors
>
> Ten years ago the world of online video was little more than an idea. It
> was used mostly by professionals like doctors or lawyers in limited and
> closed settings. Connections were slow, bandwidth was limited, and video
> gear was expensive and bulky. There were many false starts and outlandish
> promises over the years about the emergence of online video. It was really
> the dynamic growth of the Internet (in terms of adoption, speed and
> ubiquity) that helped to spur the idea that online video - millions of
> people around the world shooting it, uploading it, viewing it via broadband
> - was even possible.
>
> Today, there are thousands of different video sites and services. In fact
> it's getting to be unusual not to find a video component on a news,
> entertainment or information website. And in less than three years, YouTube
> has united hundreds of millions of people who create, share, and watch video
> online. What used to be a gap between "professional" entertainment companies
> and home movie buffs has disappeared. Everyone from major broadcasters and
> networks to vloggers and grandmas are taking to video to capture events,
> memories, stories, and much more in real time.
>
> Today, 13 hours of video are uploaded to YouTube every minute, and we
> believe the volume will continue to grow exponentially. Our goal is to allow
> every person on the planet to participate by making the upload process as
> simple as placing a phone call. This new video content will be available on
> any screen - in your living room, or on your device in your pocket. YouTube
> and other sites will bring together all the diverse media which matters to
> you, from videos of family and friends to news, music, sports, cooking and
> much, much more.
>
> In ten years, we believe that online video broadcasting will be the most
> ubiquitous and accessible form of communication. The tools for video
> recording will continue to become smaller and more affordable. Personal
> media devices will be universal and interconnected. Even more people will
> have the opportunity to record and share even more video with a small group
> of friends or everyone around the world.
>
> Over the next decade, people will be at the center of their video and media
> experience. More and more consumers will become creators. We will continue
> to help give people unlimited options and access to information, and the
> world will be a smaller place.
>
> Posted by Chad Hurley, CEO and Co-Founder, YouTube



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome " - Dr Samuel Johnson





Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Ben Goertzel
On Fri, Sep 19, 2008 at 10:46 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> --- On Fri, 9/19/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> Mike, Google has had basically no impact on the AGI thinking of myself or
> 95% of the other serious AGI researchers I know...
>
> Which is rather curious, because Google is the closest we have to AI at the
> moment.



Obviously, the judgment of distance between various non-AGI programs and
hypothetical AGI programs is very theory-dependent...

To say that Google is closer to AGI than a Roomba is, is to express a certain
theory of mind, to which not all AGI researchers adhere...


ben





Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Ben Goertzel
Yes of course, as I have been working on this stuff since way before Google
existed... or before the Web existed...

Anyway, use of Google as an information resource is distinct from use of
Google as a metaphor or inspiration for AGI ... after all, I would not even
know about AI had I never encountered paper, yet the properties of paper
have really not been inspirational in my AGI design efforts...

ben

On Fri, Sep 19, 2008 at 11:31 AM, Mike Tintner <[EMAIL PROTECTED]>wrote:

>
>
> Mike, Google has had basically no impact on the AGI thinking of myself or
> 95% of the other serious AGI researchers I know...
>
> Ben,
>
> Come again. Your thinking about a superAGI, and AGI takeoff, is not TOTALLY
> dependent on Google? You would still argue that a superAGI is possible
> WITHOUT access to the information resources of Google?
>
> I suggest that you have made a blind claim above - and a classic
> illustration of McLuhan's argument that most people, including
> intellectuals, do tend to be blind to how the media they use massively shape
> their thinking about the world - and reshape their nervous system.
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome " - Dr Samuel Johnson





Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Ben Goertzel
>
> That's the main reason why you think logic, maths and language are all you
> really need for intelligence - paper.
>

Just for clarity: while I think that in principle one could make a
maths-only AGI, my present focus is on building an AGI that is embodied in
virtual robots and potentially real robots as well ... in addition to
communicating via language and internally utilizing logic on various
levels...

ben





Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-19 Thread Ben Goertzel
Matt wrote,


> There seems to be a lot of effort to implement reasoning in knowledge
> representation systems, even though it has little to do with how we actually
> think.


Please note that not all of us in the AGI field are trying to closely
emulate human thought.  Human-level thought does not imply closely
human-like thought.



> We focus on problems like:
>
> All men are mortal. Socrates is a man. Therefore ___?
>
> The assumed solution is to convert it to a formal representation and apply
> the rules of logic:
>
> For all x: man(x) -> mortal(x)
> man(Socrates)
> => mortal(Socrates)
>
> which has 3 steps: convert English to a formal representation (hard AI),
> solve the problem (easy), and convert back to English (hard AI).


This is a silly example, because it is already solvable using existing AI
systems.  We solved problems like this using RelEx+PLN, in a prototype
system built on top of the NCE, a couple of years ago.  Soon OpenCog will have
the mechanisms to do that sort of thing too.
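
To make the "easy" middle step of that breakdown concrete, here is a minimal
illustrative sketch in Python (plain forward chaining over the formal
representation -- this is not how RelEx+PLN or OpenCog actually represent or
process knowledge, just the textbook step in isolation):

    # Naive forward chaining over Matt's example; purely illustrative.
    rules = [("man", "mortal")]      # For all x: man(x) -> mortal(x)
    facts = {("man", "Socrates")}    # man(Socrates)

    changed = True
    while changed:                   # iterate to a fixed point
        changed = False
        for pre, post in rules:
            for pred, arg in list(facts):
                if pred == pre and (post, arg) not in facts:
                    facts.add((post, arg))
                    changed = True

    print(("mortal", "Socrates") in facts)   # True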



>
>
> Sorry, that is not a solution. Consider how you learned to convert natural
> language to formal logic. You were given lots of examples and induced a
> pattern:
>
> Frogs are green = for all x: frog(x) -> green(x).
> Fish are animals = for all x: fish(x) -> animal(x).
> ...
> Y are Z: for all x: Y(x) -> Z(x).
>
> along with many other patterns. (Of course, this requires learning
> semantics first, so you don't confuse examples like "they are coming").
>
> But if you can learn these types of patterns then with no additional effort
> you can learn patterns that directly solve the problem...
>
> Frogs are green. Kermit is a frog. Therefore Kermit is green.
> Fish are animals. A minnow is a fish. Therefore a minnow is an animal.
> ...
> Y are Z. X is a Y. Therefore X is a Z.
> ...
> Men are mortal. Socrates is a man. Therefore Socrates is mortal.
>
> without ever going to a formal representation. People who haven't studied
> logic or its notation can certainly learn to do this type of reasoning.



One hypothesis is that the **unconscious** human mind is carrying out
operations that are roughly analogous to logical reasoning steps.  If this
is the case, then even humans who have never studied logic or its notation
would unconsciously and implicitly be doing "logic-like stuff".  See e.g. my
talk at

http://www.acceleratingfuture.com/people-blog/?p=2199

(which has a corresponding online paper as well)


>
>
> So perhaps someone can explain why we need formal knowledge representations
> to reason in AI.
>

I for one don't claim that we need it for AGI, only that it's one
potentially very useful strategy.

IMO, formal logic is a cleaner and simpler way of doing part of what the
brain does via Hebbian-type modification of synaptic bundles between neural
clusters.

Google does not need anything like formal logic (or formal-logic-like
Hebbian learning, etc.) because it is not trying to understand, reason,
generalize, etc.  It is just trying to find information in a large
knowledge-store, which is a much narrower and very different problem.

-- Ben G





Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Ben Goertzel
Right now the virtual pets and bots don't use vision processing except in a
fairly trivial sense: they do see objects, but they don't need to identify
the objects using vision processing, they're just "given" the locations and
shapes of the objects by the virtual world server.  But future versions will
do real vision processing ... we just haven't gotten there yet ... (small
team, big job!!) ... we do have a detailed design for incorporating vision,
audition etc. into the OpenCog architecture... and are in fact collaborating
w/ a vision team in China who are doing vision processing work that is
compatible with the OpenCog architecture...

ben g

On Fri, Sep 19, 2008 at 4:40 PM, Mike Tintner <[EMAIL PROTECTED]>wrote:

>
>
> Ben:Just for clarity: while I think that in principle one could make a
> maths-only AGI, my present focus is on building an AGI that is embodied in
> virtual robots and potentially real robots as well ... in addition to
> communicating via language and internally utilizing logic on various
> levels...
>
> Are your virtual robots any different from your virtual pets, as per that
> demo?  And do either virtual/real robots use vision?
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome " - Dr Samuel Johnson





Re: [agi] NARS probability

2008-09-20 Thread Ben Goertzel
Abram,

I think the best place to start, in exploring the relation between NARS
and probability theory, is with Definition 3.7 in the paper

"From Inheritance Relation to Non-Axiomatic Logic"
<http://www.cogsci.indiana.edu/pub/wang.inheritance_nal.ps>
[International Journal of Approximate Reasoning, 11(4), 281-319, 1994]

which is downloadable from

http://nars.wang.googlepages.com/nars%3Apublication

It is instructive to look at specific situations, and see how this
definition
leads one to model situations differently from the way one traditionally
uses
probability theory to model such situations.

The next place to look, in exploring this relation, is at the semantics that
3.7 implies for the induction and abduction rules.  Note that unlike in PLN
there are no term (node) probabilities in NARS, so that induction and
abduction cannot rely on Bayes rule or any close analogue of it.  They must
be justified on quite different grounds.  If you can formulate a
probabilistic
justification of NARS induction and abduction truth value formulas, I'll be
quite interested.   I'm not saying it's impossible, just that it's not
obvious ...
one has to grapple with 3.7 and the fact that the NARS relative frequency
w+/w is combining intension and extension in a manner that is unusual
relative to ordinary probabilistic treatments.
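
For concreteness, here is a rough sketch of the evidence-counting idea as I
read Definition 3.7 (treat the details as my paraphrase rather than Pei's
exact formulation; the confidence formula c = w/(w+k) is the standard NARS
one):

    # Rough sketch of NARS-style evidence for "S --> P": shared instances
    # (extension) plus shared properties (intension) count as positive
    # evidence; |ext(S)| + |int(P)| is the total evidence.
    def nars_truth(ext_S, ext_P, int_S, int_P, k=1.0):
        w_plus = len(ext_S & ext_P) + len(int_P & int_S)
        w = len(ext_S) + len(int_P)
        f = w_plus / w       # relative frequency, mixing extension and intension
        c = w / (w + k)      # confidence grows with total evidence
        return f, c

    # Toy situation for "swan --> bird"
    ext_swan, ext_bird = {"s1", "s2", "s3"}, {"s1", "s2", "s3", "r1"}
    int_swan, int_bird = {"feathered", "flies", "white"}, {"feathered", "flies"}
    print(nars_truth(ext_swan, ext_bird, int_swan, int_bird))   # (1.0, ~0.83)

The point is that f folds shared instances and shared properties into a single
ratio, which is what makes a direct probabilistic reading non-obvious.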

The math here is simple enough that one does not need to do hand-wavy
philosophizing ;-) ... it's just elementary algebra.  The subtle part is
really
the semantics, i.e. the way the math is used to model situations.

-- Ben G



On Sat, Sep 20, 2008 at 2:22 PM, Abram Demski <[EMAIL PROTECTED]> wrote:

> It has been mentioned several times on this list that NARS has no
> proper probabilistic interpretation. But, I think I have found one
> that works OK. Not perfectly. There are some differences, but the
> similarity is striking (at least to me).
>
> I imagine that what I have come up with is not too different from what
> Ben Goertzel and Pei Wang have already hashed out in their attempts to
> reconcile the two, but we'll see. The general idea is to treat NARS as
> probability plus a good number of regularity assumptions that justify
> the inference steps of NARS. However, since I make so many
> assumptions, it is very possible that some of them conflict. This
> would show that NARS couldn't fit into probability theory after all,
> but it is still interesting even if that's the case...
>
> So, here's an outline. We start with the primitive inheritance
> relation, A inh B; this could be called "definite inheritance",
> because it means that A inherits all of B's properties, and B inherits
> all of A's instances. B is a superset of A. The truth value is 1 or 0.
> Then, we define "probabilistic inheritance", which carries a
> probability that a given property of B will be inherited by A and that
> a given instance of A will be inherited by B. Probabilistic
> inheritance behaves somewhat like the full NARS inheritance: if we
> reason about likelihoods (the probability of the data assuming (A
> prob_inh B) = x), the math is actually the same EXCEPT we can only use
> primitive inheritance as evidence, so we can't spread evidence around
> the network by (1) treating prob_inh with high evidence as if it were
> primitive inh or (2) attempting to use deduction to accumulate
> evidence as we might want to, so that evidence for "A prob_inh B" and
> evidence for "B prob_inh C" gets combined to evidence for "A prob_inh
> C".
>
> So, we can define a second-order-probabilistic-inheritance "prob_inh2"
> that is for prob_inh what prob_inh is for inh. We can define a
> third-order over the second-order, a fourth over the second, and so
> on. In fact, each of these are generalizations: simple inheritance can
> be seen as a special case of prob_inh (where the probability is 1),
> prob_inh is a special case of prob_inh2, and so on. This means we can
> define an infinite-order probabilistic inheritance, prob_inh_inf,
> which is a generalization of any given level. The truth value of
> prob_inh_inf will be very complicated (since each prob_inhN has a more
> complicated truth value than the last, and prob_inh_inf will include
> the truth values from each level).
>
> My proposal is to add 2 regularity assumptions to this structure.
> First, we assume that the prior over probability values for prob_inh
> is even. This givens us some permission to act like the probability
> and the likelihood are the same thing, which brings the math closer to
> NARS. Second, assume that a "high" truth value on one level strongly
> implies a high one on th

[agi] Convergence08 future technology conference...

2008-09-20 Thread Ben Goertzel
  Convergence08 <http://www.convergence08.org>

Join a historic convergence of leading long term organizations and thought
leaders. Two days with people at the forefront of world-changing
technologies that may reshape our career, body and mind – that challenge our
perception of what can and should be done.

Convergence08 is an Unconference: each day starts and ends with an
eye-opening debate or keynote to inspire us, and the remaining agenda is
created by YOU.

Join in freewheeling discussions on topics below, or – better yet – *convene
your own group* focused on exactly what *you* think is most important:

   - Neurotechnology
   - Artificial general intelligence
   - Synthetic biology
   - Human enhancement
   - Space tourism
   - Social software
   - Prediction markets
   - Nanotechnology
   - Smart drugs


   - Bioethics
   - Cleantech
   - NBIC startup tips
   - Reputation systems
   - Life extension / anti-aging
   - Accelerating change
   - Biotechnology
   - Open source everything
   - Sousveillance / privacy

Don't see your topics here? Add them to the Convergence08 Wiki
<http://convergence08.pbwiki.com/>!
You just may get a new startup or film project crystallizing around your
topic before the conference is over.

All this takes place at the Computer History Museum in the heart of Silicon
Valley – where new technological revolutions grow like weeds. Come help
plant the next one! <http://www.convergence08.org/register/>
Our website is officially up at www.convergence08.org.

We have a companion wiki site, at http://convergence08.pbwiki.com/ where you
can volunteer to help out at the event or sign up to discuss a topic:
http://convergence08.pbwiki.com/Speaker-Sign-Up

*TO ADD YOUR TOPIC TO THIS LIST* click on "[edit]" at the upper right of
this page, or email James Clement <[EMAIL PROTECTED]>. We really do
encourage everyone attending to speak or lead a discussion on a topic of
interest. All you need is a session title, your name, a link to your bio or
website, and perhaps a picture of you.

Right before the event we'll put up a time-slot sign up list at the Computer
History Museum. Time-slots will be based on a 50-minute turnover per room.
You can sign up for more than one time slot, back-to-back, or at different
times during the day on different subjects.

Audio/Video equipment: Rather than relying on us to provide audio/video
equipment, *please bring your own!* Please ensure you have what you need to
make your presentation.

We encourage you to talk about new, interesting ideas percolating in your
head, projects you are involved in or thinking of starting, or other topics
that would interest our savvy audience. If you are unsure about your subject
matter, please feel free to run the idea by co-organizer James
Clement<[EMAIL PROTECTED]>









-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome " - Dr Samuel Johnson





Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Ben Goertzel
On Sat, Sep 20, 2008 at 4:44 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> --- On Fri, 9/19/08, Jan Klauck <[EMAIL PROTECTED]> wrote:
>
> > Formal logic doesn't scale up very well in humans. That's why this
> > kind of reasoning is so unpopular. Our capacities are that
> > small and we connect to other human entities for a kind of
> > distributed problem solving. Logic is just a tool for us to
> > communicate and reason systematically about problems we would
> > mess up otherwise.
>
> Exactly. That is why I am critical of probabilistic or uncertain logic.
> Humans are not very good at logic and arithmetic problems requiring long
> sequences of steps, but duplicating these defects in machines does not help.
> It does not solve the problem of translating natural language into formal
> language and back. When we need to solve such a problem, we use pencil and
> paper, or a calculator, or we write a program. The problem for AI is to
> convert natural language to formal language or a program and back. The
> formal reasoning we already know how to do.



If formal reasoning were a solved problem in AI, then we would have
theorem-provers that could prove deep, complex theorems unassisted.   We
don't.  This indicates that formal reasoning is NOT a solved problem,
because no one has yet gotten "history guided adaptive inference control" to
really work well.  Which is IMHO because formal reasoning guidance
ultimately requires the same kind of analogical, contextual commonsense
reasoning as guidance of reasoning about everyday life...

Also, you did not address my prior point that Hebbian learning at the neural
level is strikingly similar to formal logic...

In probabilistic term logic we do deduction such as

A --> B
B --> C
|-
A --> C

and use probability theory to determine the truth value for the conclusion
based on the truth values of the premises.
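
For instance, one simple independence-based strength formula of the kind used
in PLN (a sketch only, ignoring confidence / weight of evidence, and assuming
the term probabilities P(B) and P(C) are available) is:

    # Sketch: independence-based deduction strength, PLN-style.
    # sAB = P(B|A), sBC = P(C|B); pB, pC are term ("node") probabilities.
    def deduction_strength(sAB, sBC, pB, pC):
        s_C_given_notB = (pC - pB * sBC) / (1.0 - pB)        # estimate of P(C|~B)
        s_C_given_notB = min(1.0, max(0.0, s_C_given_notB))  # clamp to [0,1]
        return sAB * sBC + (1.0 - sAB) * s_C_given_notB      # P(C|A) under independence

    print(deduction_strength(sAB=0.8, sBC=0.9, pB=0.3, pC=0.4))   # ~0.76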

On the other hand, if A, B and C represent neuronal assemblies and the -->'s
are synaptic bundles, then Hebbian learning does approximately the same
thing ... an observation that ties in nicely with recent work on Bayesian
neuronal population coding.
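
To see the analogy in the crudest possible terms: simple co-activation
counting over assemblies (a toy stand-in for Hebbian strengthening, not a
neural model) yields conditional frequencies of the same shape the deduction
rule manipulates:

    import random
    random.seed(0)

    # Toy "assemblies": B tends to fire when A fires, C when B fires.
    # Co-activation counts approximate P(B|A) and P(C|B); chaining them
    # gives a first approximation of P(C|A).
    def trial():
        a = random.random() < 0.5
        b = (a and random.random() < 0.8) or (not a and random.random() < 0.1)
        c = (b and random.random() < 0.9) or (not b and random.random() < 0.1)
        return a, b, c

    nA = nB = nAB = nBC = nAC = 0
    for _ in range(100000):
        a, b, c = trial()
        nA += a
        nB += b
        nAB += a and b
        nBC += b and c
        nAC += a and c

    pBA, pCB = nAB / nA, nBC / nB
    print(pBA, pCB, pBA * pCB, nAC / nA)   # chained estimate vs. measured P(C|A)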

Formal logic is not something drastically different from what the brain
does.  It's an abstraction from what the brain does, but there are very
clear ties btw formal logic and neurodynamics, of which I've roughly
indicated one in the prior paragraph (and have indicated others in
publications).

Mapping knowledge btw language and internal representations is not a problem
independent of inference, it is a problem that is solved by inference in the
brain, and must ultimately be solved by inference in AI's.  The fact that
the brain implements its unconscious inferences in terms of Hebbian
adjustment of synaptic bundles btw cell assemblies, rather than in terms of
explicit symbolic operations, shouldn't blind us to the fact that it's still
an inferencing process going on...

Google Search is not doing much inferencing but it's a search engine not a
mind.  There is a bit of inferencing inside AdSense, more so than Google
Search, but it's still of a pretty narrow and simplistic kind (based on EM
clustering and Bayes nets) compared to what is needed for human-level AGI.

-- Ben G






Re: [agi] NARS probability

2008-09-20 Thread Ben Goertzel
>
> I haven't read the PLN book yet (though I downloaded a copy, thanks!),
> but at present I don't see why term probabilities are needed... unless
> inheritance relations "A inh B" are interpreted as conditional
> probabilities "A given B". I am not interpreting them that way-- I am
> just treating inheritance as a reflexive and transitive relation that
> (for some reason) we want to reason about probabilistically.


Well, one question is whether you want to be able to do inference like

A --> B
|-
B --> A

Doing that without term probabilities is pretty hard...
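
The reason is essentially Bayes' rule: recovering P(A|B) from P(B|A) requires
the marginal (term) probabilities. A trivial sketch:

    # Inversion "A --> B" |- "B --> A" via Bayes' rule:
    #   P(A|B) = P(B|A) * P(A) / P(B)
    # Without the term probabilities P(A) and P(B), the inverted strength
    # is simply not determined by P(B|A) alone.
    def invert(p_B_given_A, p_A, p_B):
        return p_B_given_A * p_A / p_B

    print(invert(p_B_given_A=0.9, p_A=0.2, p_B=0.6))   # 0.3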

Another interesting approach would be to investigate which of
Cox's axioms (for probability) are violated in NARS, in what semantic
interpretation, and why...

ben





Re: [agi] NARS probability

2008-09-20 Thread Ben Goertzel
> And the definition 3.7 that you mentioned *does* match up, perfectly,
> when the {w+, w} truth-value is interpreted as a way of representing
> the likelihood density function of the prob_inh. Easy! The challenge
> is section 4.4 in the paper you reference: syllogisms. The way
> evidence is spread around there doesn't match with definition 3.7, not
> without further probabilistic assumptions.



which seems to be because the semantic interpretation of evidence
in 3.7 is different in NARS than in PLN or most probabilistic treatments...

this is why I suggested looking at how 3.7 is used to model a real situation,
versus how that situation would be modeled in probability theory...

having a good test situation in mind might help to think about the
syllogistic rules more clearly

it needs to be a situation where the terms and relations are grounded in
a system's experience, as that is what NARS and PLN semantics are both
all about...

ben





Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Ben Goertzel
On Sat, Sep 20, 2008 at 6:24 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> --- On Sat, 9/20/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> >If formal reasoning were a solved problem in AI, then we would have
> theorem-provers that could prove deep, complex theorems unassisted.   We
> don't.  This indicates that formal reasoning is NOT a solved problem,
> because no one has yet gotten "history guided adaptive inference control" to
> really work well.  Which is IMHO because formal reasoning guidance
> ultimately requires the same kind of analogical, contextual commonsense
> reasoning as guidance of reasoning about everyday life...
>
> I mean that formal reasoning is solved in the sense of executing
> algorithms, once we can state the problems in that form. I know that some
> problems in CS are hard. I think that the intuition that mathematicians use
> to prove theorems is a language modeling problem.



It seems a big stretch to me to call theorem-proving guidance a "language
modeling problem" ... one may be able to make sense of this statement, but
only by treating the concept of language VERY abstractly, differently from
the commonsense use of the word...

Lakoff and Nunez have made strong arguments that mathematical reasoning is
guided by embodiment-related intuition.

Of course, one can model all of physical reality using formal language
theory, in which case all of intelligence becomes language modeling ... but
it's not clear to me what is gained by adopting this terminology and
perspective.


>
>
> >Also, you did not address my prior point that Hebbian learning at the
> neural level is strikingly similar to formal logic...
>
> I agree that neural networks can model formal logic. However, I don't think
> that formal logic is a good way to model neural networks.
>

I'm not talking about either of those.  Of course logic and NN's can be used
to model each other (as both are Turing-complete formalisms), but that's not
the point I was making.

The point I was making is that certain NN's and certain logic systems are
highly analogous to each other in the kinds of operations they carry out and
how they organize these operations.  Both implement very similar cognitive
processes.


>
> Language learning consists of learning associations between concepts
> (possibly time-delayed, enabling prediction) and learning new concepts by
> clustering in context space. Both of these operations can be done
> efficiently and in parallel with neural networks. They can't be done
> efficiently with logic.
>

I disagree that association-learning and clustering cannot be done
efficiently in a logic system.

I also disagree that these are the hard parts of cognition, though I do
think they are necessary parts.

>
> There is experimental evidence to back up this view. The top two
> compressors in my large text benchmark use dictionaries in which
> semantically related words are grouped together and the groups are used as
> context. In the second place program (paq8hp12any), the grouping was done
> mostly manually. In the top program (durilca4linux), the grouping was done
> by clustering in context space.


In my view, current text compression algorithms, which are essentially based
on word statistics, have fairly little to do with AGI ... so looking at
which techniques are best for statistical text compression is not very
interesting to me.

I understand that

1)
there is conceptual similarity btw text compression and AGI, in that both
involve recognition of probabilistic patterns

2)
ultimately, an AGI will be able to compress text way better than our current
compression algorithms

But nevertheless, I don't think that the current best-of-breed text
processing approaches have much to teach us about AGI.

To pursue an overused metaphor, to me that's sort of like trying to
understand flight by carefully studying the most effective high-jumpers.
OK, you might learn something, but you're not getting at the crux of the
problem...

-- Ben G





Re: [agi] NARS probability

2008-09-20 Thread Ben Goertzel
> Beside the problem you mentioned, there are other issues. Let me start
> at the basic ones:
>
> (1) In probability theory, an event E has a constant probability P(E)
> (which can be unknown). Given the assumption of insufficient knowledge
> and resources, in NARS P(A-->B) would change over time, when more and
> more evidence is taken into account. This process cannot be treated as
> conditioning, because, among other things, the system can neither
> explicitly list all evidence as condition, nor update the probability
> of all statements in the system for each piece of new evidence (so as
> to treat all background knowledge as a default condition).
> Consequently, at any moment P(A-->B) and P(B-->C) may be based on
> different, though unspecified, data, so it is invalid to use them in a
> rule to calculate the "probability" of A-->C --- probability theory
> does not allow cross-distribution probability calculation.
>
> (2) For the same reason, in NARS a statement might get different
> "probability" attached, when derived from different evidence.
> Probability theory does not have a general rule to handle
> inconsistency within a probability distribution.



Of course, these issues can be handled in probability theory via introducing
higher-order probabilities ...
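
One standard device (a sketch of the general idea, not a claim about how NARS
or PLN actually implement it) is to keep a second-order distribution over the
first-order probability, e.g. a Beta distribution whose parameters are
incremented as evidence arrives, so the point estimate changes over time
without pretending P(A-->B) is a fixed known constant:

    # Sketch: Beta(alpha, beta) as a second-order distribution over P(A-->B).
    class SecondOrderEstimate:
        def __init__(self, alpha=1.0, beta=1.0):   # uniform prior over [0,1]
            self.alpha, self.beta = alpha, beta

        def observe(self, positive):               # one piece of evidence
            if positive:
                self.alpha += 1
            else:
                self.beta += 1

        def mean(self):                            # current point estimate
            return self.alpha / (self.alpha + self.beta)

    est = SecondOrderEstimate()
    for outcome in (True, True, False, True):
        est.observe(outcome)
    print(est.mean())   # 4/6 after 3 positive and 1 negative observations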

ben





Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Ben Goertzel
> >
> >To pursue an overused metaphor, to me that's sort of like trying to
> understand flight by carefully studying the most effective high-jumpers.
> OK, you might learn something, but you're not getting at the crux of the
> problem...
>
> A more appropriate metaphor is that text compression is the altimeter by
> which we measure progress.
>

An extremely major problem with this idea is that, according to this
"altimeter", gzip is vastly more intelligent than a chimpanzee or a two year
old child.

I guess this shows there is something profoundly wrong with the idea...
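
To be concrete about what that altimeter actually measures -- bits per
character on a reference text, which a compressor scores on regardless of
whether anything mind-like is happening inside. A trivial sketch:

    import zlib

    # The compression "altimeter": bits per character on some reference text.
    text = ("the quick brown fox jumps over the lazy dog " * 200).encode("utf-8")
    compressed = zlib.compress(text, 9)            # DEFLATE, as used by gzip
    print(8.0 * len(compressed) / len(text))       # bits per character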

ben g





Re: [agi] NARS probability

2008-09-20 Thread Ben Goertzel
> > (2) For the same reason, in NARS a statement might get different
> > "probability" attached, when derived from different evidence.
> > Probability theory does not have a general rule to handle
> > inconsistency within a probability distribution.
>
> The same statement holds for PLN, right?


PLN handles inconsistency within probability distributions using
higher-order probabilities... both explicitly and, more simply, by allowing
multiple inconsistent estimates of the same distribution to exist attached
to the same node or link...



>
> > If you work out a detailed solution along your path, you will see that
> > it will be similar to NARS when both are doing deduction with strong
> > evidence. The difference will show up (1) in cases where evidence is
> > rare, and (2) in non-deductive inferences, such as induction and
> > abduction. I believe this is also where NARS and PLN differ most.
>
> Guilty as charged! I have only tried to justify the deduction rule,
> not any of the others. I seriously didn't think about the blind spot
> until you mentioned it. I'll have to go back and take a closer look...


NARS deduction rule closely approximates the PLN deduction rule for the case
where all the premise terms have roughly the same node probability.  It
particularly closely approximates the "concept geometry based" variant of
the PLN deduction rule, which is interesting: it means NARS deduction
approximates the PLN deduction rule  variant one gets if one assumes
concepts are approximately spherically-shaped rather than being random sets.

NARS induction and abduction rules do not closely approximate the PLN
induction and abduction rules...

-- Ben G





Re: [agi] NARS probability

2008-09-20 Thread Ben Goertzel
>
>
> Think about a concrete example: if from one source the system gets
> P(A-->B) = 0.9, and P(P(A-->B) = 0.9) = 0.5, while from another source
> P(A-->B) = 0.2, and P(P(A-->B) = 0.2) = 0.7, then what will be the
> conclusion when the two sources are considered together?



There are many approaches to this within the probabilistic framework,
one of which is contained within this paper, for example...

http://cat.inist.fr/?aModele=afficheN&cpsidt=16174172

(I have a copy of the paper but I'm not sure where it's available for
free online ... if anyone finds it please post the link... thx)
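
Just to make the example concrete, one naive treatment -- a linear opinion
pool weighted by the stated meta-probabilities, which is emphatically not the
maximum-entropy method of the paper above, just an illustration that the
question has answers inside probability theory -- gives:

    # Pei's example: P(A-->B) = 0.9 reported with weight 0.5, and
    # P(A-->B) = 0.2 reported with weight 0.7.  A naive confidence-weighted
    # linear pool of the two point estimates:
    estimates = [(0.9, 0.5), (0.2, 0.7)]
    pooled = sum(p * w for p, w in estimates) / sum(w for _, w in estimates)
    print(round(pooled, 3))   # ~0.492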

Ben





Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Ben Goertzel
>
> (And can you provide an example of a single surprising metaphor or analogy
> that have ever been derived logically? Jiri said he could - but didn't.)



It's a bad question -- one could derive surprising metaphors or analogies by
random search, and that wouldn't prove anything useful about the AGI
potential of random search ...


ben





Re: [agi] NARS probability

2008-09-20 Thread Ben Goertzel
The approach in that paper doesn't require any special assumptions, and
could be applied to your example, but I don't have time to write up an
explanation of how to do the calculations ... you'll have to read the paper
yourself if you're curious ;-)

That approach is not implemented in PLN right now but we have debated
integrating it with PLN as in some ways it's subtler than what we currently
do in the code...

ben

On Sat, Sep 20, 2008 at 10:02 PM, Pei Wang <[EMAIL PROTECTED]> wrote:

> I didn't know this paper, but I do know approaches based on the
> principle of maximum/optimum entropy. They usually require much more
> information (or assumptions) than what is given in the following
> example.
>
> I'd be interested to know what the solution they will suggest for such
> a situation.
>
> Pei
>
> On Sat, Sep 20, 2008 at 9:53 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> >
> >>
> >>
> >> Think about a concrete example: if from one source the system gets
> >> P(A-->B) = 0.9, and P(P(A-->B) = 0.9) = 0.5, while from another source
> >> P(A-->B) = 0.2, and P(P(A-->B) = 0.2) = 0.7, then what will be the
> >> conclusion when the two sources are considered together?
> >
> > There are many approaches to this within the probabilistic framework,
> > one of which is contained within this paper, for example...
> >
> > http://cat.inist.fr/?aModele=afficheN&cpsidt=16174172
> >
> > (I have a copy of the paper but I'm not sure where it's available for
> > free online ... if anyone finds it please post the link... thx)
> >
> > Ben



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome " - Dr Samuel Johnson





Re: [agi] NARS probability

2008-09-20 Thread Ben Goertzel
On Sat, Sep 20, 2008 at 10:32 PM, Pei Wang <[EMAIL PROTECTED]> wrote:

> I found the paper.
>
> As I guessed, their update operator is defined on the whole
> probability distribution function, rather than on a single probability
> value of an event. I don't think it is practical for AGI --- we cannot
> afford the time to re-evaluate every belief on each piece of new
> evidence. Also, I haven't seen a convincing argument on why an
> intelligent system should follow the ME Principle.
>

I agree their method is not practical for most cases in AGI, which is why
we didn't use it within PLN ;-)  ... we use a simpler revision rule...
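
For the record, the simple revision rule is essentially evidence-weighted
averaging -- a sketch of the basic idea only; the actual PLN implementation
has further details:

    # Pool two (strength, evidence-count) estimates of the same statement.
    def revise(s1, n1, s2, n2, k=1.0):
        n = n1 + n2                        # evidence accumulates
        s = (n1 * s1 + n2 * s2) / n        # strengths weighted by evidence
        c = n / (n + k)                    # confidence from total evidence
        return s, c

    print(revise(s1=0.9, n1=5.0, s2=0.2, n2=7.0))   # (~0.49, ~0.92)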



> Also this paper doesn't directly solve my example, because it doesn't
> use second-order probability.


That is true, but it could be straightforwardly extended to that case...

Ben





Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Ben Goertzel
Mike,

I understand that "my task" is to create an AGI system, and I'm working on
it ...

The fact that my in-development, partial AGI system has not yet demonstrated
advanced intelligence, does not imply that it will not do so once completed.

No, my AGI system has not yet discovered surprising metaphors, because it is
still at an early stage of development.  So what.  An airplane not yet fully
constructed doesn't fly anywhere either.

My point was that asking whether a certain type of software system has ever
produced a surprising metaphor -- is not a very interesting question.  I am
quite sure that the chatbot MegaHAL has produced many surprising metaphors.
For instance, see his utterances on

http://megahal.alioth.debian.org/Classic.html

including

AMAZING GRACE, HOW SWEET THE SOUND OF ONE OR MORE NUMBERS REPRESENTED IN
DIGITAL FORM.

HAL IS A CRAZY COW WHEN IT SINGS HALLELUJA

LIFE'S BUT A GREEN DUCK WITH SOY SAUCE

CHESS IS A FUN SPORT, WHEN PLAYED WITH SHOT GUNS.

KEN KESEY WROTE "ONE FLEW OVER THE CENTER OF THE CUCKOLDED LIZARD MAN, WHO
STRAYED FROM HIS MISTAKES WHEN HE IS A MADEUP WORD.

COWS FLY LIKE CLOUDS BUT THEY ARE NEVER COMPLETELY SUCCESSFUL

JESUS IS THE BEST RADIO PRODUCER IN THE BEANS.

MegaHAL is kinda creative and poetic, and he does generate some funky and
surprising metaphors ...  but alas he is not an AGI...

-- Ben


On Sat, Sep 20, 2008 at 11:30 PM, Mike Tintner <[EMAIL PROTECTED]>wrote:

>
>
> Ben: Mike:
> (And can you provide an example of a single surprising metaphor or analogy
> that have ever been derived logically? Jiri said he could - but didn't.)
>
>
> It's a bad question -- one could derive surprising metaphors or analogies
> by random search, and that wouldn't prove anything useful about the AGI
> potential of random search ...
>
> Ben,
>
> When has random search produced surprising metaphors ? And how did or would
> the system know that it has been done - how would it be able to distinguish
> valid from invalid metaphors, and surprising from unsurprising ones?
>
> You have just put forward, I suggest, a hypothetical/false and
> evasive argument.
>
> Your task, as Pei's, is surely to provide an argument, or some evidence, as
> to how the logical system you use can lead in any way to the crossing/
> connection of previously uncrossed/unconnected domains - the central task
> and problem of  AGI.   Surprising metaphors and analogies are just two
> examples of such crossing of domains. (And jokes another)
>
> You have effectively tried to argue  via the (I suggest) false random
> search example, that it is impossible to provide such an argument..
>
> The truth is - I'm betting - that, you're just making excuses -   neither
> you nor Pei have ever actually proposed an argument as to how logic can
> solve the problem of AGI and, after all these years, simply don't have
> one. If you have or do, please link me.
>
> P.S. The counterargument is v. simple. A connection of domains via
> metaphor/analogy or any other means is surprising if it does not follow from
> any known premises and  rules. There were no known premises and rules for
> Matt to connect altimeters and the measurement of progress, or, if you
> remember my visual pun, for connecting the head of a clarinet and the head
> of a swan. Logic depends on inferences from known premises and rules. Logic
> is therefore quite incapable of - and has always been expressly prohibited
> from - making surprising connections (and therefore solving AGI). It is
> dedicated to the maintenance not the breaking of rules.
>
> "As for Logic, its syllogisms and the majority of its other precepts are of
> avail rather in the communication of what we already know, or... even in
> speaking without judgment of things of which we are ignorant, than in the
> investigation of the unknown."
> Descartes
>
>  If I and Descartes are right - and there is every reason to think so,
> (incl. the odd million, logically inexplicable metaphors not to mention many
> millions of logically inexplicable jokes)  - you surely should be addressing
> this matter urgently, not evading it..
>
> P.P.S. You should also bear in mind that a vast amount of jokes (which
> involve the surprising crossing of domains) explicitly depend on
> ILLOGICALITY. Take the classic Jewish joke about the woman who, told that
> her friend's son has the psychological problem of an Oedipus Complex, says:
> "Oedipus Schmoedipus, what does it matter as long as he loves his mother?"
> And your logical explanation is..?
>

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Ben Goertzel
and not to forget...

SATAN GUIDES US TELEPATHICLY THROUGH RECTAL THERMOMETERS. WHY DO YOU THINK
ABOUT META-REASONING?

On Sat, Sep 20, 2008 at 11:38 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:

>
> Mike,
>
> I understand that "my task" is to create an AGI system, and I'm working on
> it ...
>
> The fact that my in-development, partial AGI system has not yet
> demonstrated advanced intelligence, does not imply that it will not do so
> once completed.
>
> No, my AGI system has not yet discovered surprising metaphors, because it
> is still at an early stage of development.  So what.  An airplane not yet
> fully constructed doesn't fly anywhere either.
>
> My point was that asking whether a certain type of software system has ever
> produced a surprising metaphor -- is not a very interesting question.  I am
> quite sure that the chatbot MegaHAL has produced many surprising metaphors.
> For instance, see his utterances on
>
> http://megahal.alioth.debian.org/Classic.html
>
> including
>
> AMAZING GRACE, HOW SWEET THE SOUND OF ONE OR MORE NUMBERS REPRESENTED IN
> DIGITAL FORM.
>
> HAL IS A CRAZY COW WHEN IT SINGS HALLELUJA
>
> LIFE'S BUT A GREEN DUCK WITH SOY SAUCE
>
> CHESS IS A FUN SPORT, WHEN PLAYED WITH SHOT GUNS.
>
> KEN KESEY WROTE "ONE FLEW OVER THE CENTER OF THE CUCKOLDED LIZARD MAN, WHO
> STRAYED FROM HIS MISTAKES WHEN HE IS A MADEUP WORD.
>
> COWS FLY LIKE CLOUDS BUT THEY ARE NEVER COMPLETELY SUCCESSFUL
>
> JESUS IS THE BEST RADIO PRODUCER IN THE BEANS.
>
> MegaHAL is kinda creative and poetic, and he does generate some funky and
> surprising metaphors ...  but alas he is not an AGI...
>
> -- Ben
>
>
>
> On Sat, Sep 20, 2008 at 11:30 PM, Mike Tintner <[EMAIL PROTECTED]>wrote:
>
>>
>>
>> Ben: Mike:
>> (And can you provide an example of a single surprising metaphor or analogy
>> that have ever been derived logically? Jiri said he could - but didn't.)
>>
>>
>> It's a bad question -- one could derive surprising metaphors or analogies
>> by random search, and that wouldn't prove anything useful about the AGI
>> potential of random search ...
>>
>> Ben,
>>
>> When has random search produced surprising metaphors ? And how did or
>> would the system know that it has been done - how would it be able to
>> distinguish valid from invalid metaphors, and surprising from unsurprising
>> ones?
>>
>> You have just put forward, I suggest, a hypothetical/false and
>> evasive argument.
>>
>> Your task, as Pei's, is surely to provide an argument, or some evidence,
>> as to how the logical system you use can lead in any way to the crossing/
>> connection of previously uncrossed/unconnected domains - the central task
>> and problem of  AGI.   Surprising metaphors and analogies are just two
>> examples of such crossing of domains. (And jokes another)
>>
>> You have effectively tried to argue  via the (I suggest) false random
>> search example, that it is impossible to provide such an argument..
>>
>> The truth is - I'm betting - that, you're just making excuses -   neither
>> you nor Pei have ever actually proposed an argument as to how logic can
>> solve the problem of AGI and, after all these years, simply don't have
>> one. If you have or do, please link me.
>>
>> P.S. The counterargument is v. simple. A connection of domains via
>> metaphor/analogy or any other means is surprising if it does not follow from
>> any known premises and  rules. There were no known premises and rules for
>> Matt to connect altimeters and the measurement of progress, or, if you
>> remember my visual pun, for connecting the head of a clarinet and the head
>> of a swan. Logic depends on inferences from known premises and rules. Logic
>> is therefore quite incapable of - and has always been expressly prohibited
>> from - making surprising connections (and therefore solving AGI). It is
>> dedicated to the maintenance not the breaking of rules.
>>
>> "As for Logic, its syllogisms and the majority of its other precepts are
>> of avail rather in the communication of what we already know, or... even in
>> speaking without judgment of things of which we are ignorant, than in the
>> investigation of the unknown."
>> Descartes
>>
>>  If I and Descartes are right - and there is every reason to think so,
>> (incl. the odd million, logically inexplicable metaphors not to mention many
>> millions of logically inexplicable jokes)  - you surely should be addressing
>> this matter urgently, not evading it..



Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-21 Thread Ben Goertzel
> Now if you want to compare gzip, a chimpanzee, and a 2 year old child using
> language prediction as your IQ test, then I would say that gzip falls in the
> middle. A chimpanzee has no language model, so it is lowest. A 2 year old
> child can identify word boundaries in continuous speech, can semantically
> associate a few hundred words, and recognize grammatically correct phrases
> of 2 or 3 words. This is beyond the capability of gzip's model (substituting
> text for speech), but not of some of the top compressors.



Hmmm, I am pretty strongly skeptical of intelligence tests that do not
measure the actual functionality of an AI system, but rather measure the
theoretical capability of the structures or processes or data inside the
system...

The only useful way I know how to define intelligence is **functionally**,
in terms of what a system can actually do ...

A 2 year old cannot get itself to pay attention to predicting language for
more than a few minutes, so in a functional sense, it is a much stupider
language predictor than gzip ...

ben g





Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-21 Thread Ben Goertzel
>
>
> I'm not building AGI. (That is a $1 quadrillion problem). I'm studying
> algorithms for learning language. Text compression is a useful tool for
> measuring progress (although not for vision).


OK, but the focus of this list is supposed to be AGI, right ... so I suppose
I should be forgiven for interpreting your statements in an AGI context ;-)

Text compression is IMHO a terrible way of measuring incremental progress
toward AGI.  Of course it  may be very valuable for other purposes...

ben g





Re: [agi] Re: AGI for a quadrillion

2008-09-21 Thread Ben Goertzel
That seems a pretty sketchy anti-AGI argument, given the coordinated
advances in computer hardware, computer science and cognitive science during
the last couple of decades, which put AGI designers in quite a different
position from where we were in the '80s ...

ben

On Sun, Sep 21, 2008 at 7:56 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> --- On Sun, 9/21/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>
> > Hence the question: you are making a very strong assertion by
> > effectively saying that there is no shortcut, period (in the
> > short-term perspective, anyway). How sure are you in this
> > assertion?
>
> I can't prove it, but the fact that thousands of smart people have worked
> on AI for decades without results suggests that an undiscovered shortcut is
> about as likely as proving P = NP. Not that I expect people to stop trying
> to solve either of these...
>
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome " - Dr Samuel Johnson





Re: [agi] Re: AGI for a quadrillion

2008-09-21 Thread Ben Goertzel
yes, but your cost estimate is based on some very odd and specialized
assumptions regarding AGI architecture!!!

On Sun, Sep 21, 2008 at 8:12 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> --- On Sun, 9/21/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> >That seems a pretty sketchy anti-AGI argument, given the coordinated
> advances in computer hardware, computer science and cognitive science during
> the last couple decades, which put AGI designers in a quite different
> position from where we were in the 80's ...
>
> I don't claim AGI can't be solved. I'm just estimating its cost.
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome " - Dr Samuel Johnson





Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-21 Thread Ben Goertzel
On Sun, Sep 21, 2008 at 8:08 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> --- On Sun, 9/21/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> >Text compression is IMHO a terrible way of measuring incremental progress
> toward AGI.  Of course it  may be very valuable for other purposes...
>
> It is a way to measure progress in language modeling, which is an important
> component of AGI


That is true, but I think that measuring progress in AGI **components** is a
very poor approach to measuring progress toward AGI.

Focusing on testing individual system components tends to lead AI developers
down a path of refining system components for optimum functionality on
isolated, easily-defined test problems that may not have much to do with
general intelligence.

It is possible of course that the right path to AGI is to craft excellent
components (as verified on various isolated test problems) and then glue
them together in the right way.

On the other hand, if intelligence is in large part a systems phenomenon
that has to do with the interconnection of reasonably-intelligent components
in a reasonably-intelligent way (as I have argued in many prior
publications), then testing the intelligence of individual system components
is largely beside the point: it may be better to have moderately-intelligent
components hooked together in an AGI-appropriate way than
extremely-intelligent components that are unable to cooperate with the other
components usefully enough.



> as well as many NLP applications such as speech recognition, language
> translation, OCR, and CMR. It has been used in speech recognition research
> since the early 1990's and correlates well with word error rate.
>
> Training will be the overwhelming cost of AGI. Any language model
> improvement will help reduce this cost. I estimate that each one-byte
> improvement in compression on a 1 GB text file will lower the cost of AGI by
> a fraction 10^-9 of the total, or roughly $1 million.
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>
>
>
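(As a sanity check on the scale of that estimate, taking Matt's ~$1
quadrillion figure from earlier in this thread at face value:

  1 byte out of a 1 GB corpus  =  1/10^9 of the training data
  10^-9 * $10^15  =  $10^6,  i.e. roughly $1 million per byte of improvement.)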



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome " - Dr Samuel Johnson





Re: [agi] re: NARS probability

2008-09-21 Thread Ben Goertzel
I don't see how you get the NARS induction and abduction truth value
formulas out of this, though...

ben g

On Sun, Sep 21, 2008 at 10:10 PM, Abram Demski <[EMAIL PROTECTED]>wrote:

> Attached is my attempt at a probabilistic justification for NARS.
>
> --Abram
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome " - Dr Samuel Johnson





Re: [agi] Re: AGI for a quadrillion

2008-09-21 Thread Ben Goertzel
Here's the thing ... it might wind up costing trillions of dollars to
ultimately replace all aspects of the human economy with AGI-based labor ...
but this cost doesn't need to occur **before** the first human-level AGI is
created ...

We'll create the human-level AGI first, without such a high cost --- and
then, perhaps it will cost a lot of $$ to refit all the McDonalds and
Walmarts and auto factories to to run on AGI power rather than human power
... but that is not part of the cost of creating the AGI.  (And of course,
the AGI may achieve Singularity before the McDonalds' are refit ... ;-)

ben g

On Sun, Sep 21, 2008 at 9:54 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> --- On Sun, 9/21/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> >yes, but your cost estimate is based on some very odd and specialized
> assumptions regarding AGI architecture!!!
>
> As I explained, my cost estimate is based on the value of the global
> economy and the assumption that AGI would automate it by replacing human
> labor. It is independent of the technology, other than the assumption that
> there won't be fundamental breakthroughs, such as discovering that P = NP or
> a new species of intelligent bacteria.
>
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome " - Dr Samuel Johnson





Re: [agi] re: NARS probability

2008-09-21 Thread Ben Goertzel
On Sun, Sep 21, 2008 at 10:43 PM, Abram Demski <[EMAIL PROTECTED]>wrote:

> The calculation in which I sum up a bunch of pairs is equivalent to
> doing NARS induction + abduction with a final big revision at the end
> to combine all the accumulated evidence. But, like I said, I need to
> provide a more explicit justification of that calculation...



As an example inference, consider

Ben is an author of a book on AGI <tv1>
This dude is an author of a book on AGI <tv2>
|-
This dude is Ben <tv3>

versus

Ben is odd <tv1>
This dude is odd <tv2>
|-
This dude is Ben <tv4>

(Here each of the English statements is a shorthand for a logical
relationship that in the AI systems in question is expressed in a formal
structure; and the notations like <tv1> indicate uncertain truth values
attached to logical relationships.  In both NARS and PLN, uncertain truth
values have multiple components, including a "strength" value that denotes a
frequency, and other values denoting confidence measures.  However, the
semantics of the strength values in NARS and PLN are not identical.)

Doing these two inferences in NARS you will get

tv3.strength = tv4.strength

whereas in PLN you will not, you will get

tv3.strength >> tv4.strength

The difference between the two inference results in the PLN case results
from the fact that

P(author of book on AGI) << P(odd)

and the fact that PLN uses Bayes rule as part of its approach to these
inferences.

So, the question is, in your probabilistic variant of NARS, do you get

tv3.strength = tv4.strength

in this case, and if so, why?

thx
ben





Re: [agi] re: NARS probability

2008-09-22 Thread Ben Goertzel
Of course ... but then you are not doing NARS inference anymore...

On Mon, Sep 22, 2008 at 8:25 AM, Abram Demski <[EMAIL PROTECTED]> wrote:

> It would be possible to get what you want in the setting, by allowing
> some probabilistic manipulations not done in NARS. The node
> probability you want in this case could be simulated by talking about
> the probability distribution of sentences of the form "X is the author
> of a book". We can give this a low prior probability. Since the system
> manipulates likelihoods, it won't notice; but if we manipulate
> probabilities, it would.
>
> Perhaps a more satisfying answer would be to introduce a new operator
> into the system, {A}, that simulates the node probability; or more
> specifically, it represents the average truth-value distribution of
> statements that have A on one side or the other. So, it has a 'par'
> value just like inheritance statements do. If there was evidence for a
> low par, there would be an effect in the direction you want. (It might
> be way too small, though?)
>
> --Abram
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome " - Dr Samuel Johnson





Re: [agi] re: NARS probability

2008-09-22 Thread Ben Goertzel
The {A} statements are consistent with NARS, but the existing NARS inference
rules don't use these statements...

A related train of thought has occurred to me...

In PLN we explicitly have both intensional and extensional inheritance links
(though with semantics nonidentical to that used in NARS, and fundamentally
probabilistic in nature) ... so the "probabilistic quasi-NARS" logic you're
describing could potentially be used as a sort of "NARS on top of PLN" ...

I'm not sure how useful such a thing is, but it might be interesting...

ben



On Mon, Sep 22, 2008 at 12:18 PM, Abram Demski <[EMAIL PROTECTED]>wrote:

> Sure, but it is a consistent extension; {A}-statements have a strongly
> NARS-like semantics, so we know they won't just mess everything up.
>

Re: [agi] re: NARS probability

2008-09-22 Thread Ben Goertzel
It's not always a problem in principle, but I'd need to think about the
specific case more carefully...




On Mon, Sep 22, 2008 at 4:56 PM, Abram Demski <[EMAIL PROTECTED]> wrote:

> Possibly, but how would you mix infinite-order probabilities with
> regular probabilities?
>
> -Abram
>

Re: [agi] NARS vs. PLN [Was: NARS probability]

2008-09-22 Thread Ben Goertzel
Hi Pei,


> Assuming 4 input judgments, with the same default confidence value (0.9):
>
> (1) {Ben} --> AGI-author <1.0;0.9>
> (2) {dude-101} --> AGI-author <1.0;0.9>
> (3) {Ben} --> odd-people <1.0;0.9>
> (4) {dude-102} --> odd-people <1.0;0.9>
>
> From (1) and (2), by abduction, NARS derives (5)
> (5) {dude-101} --> {Ben} <1.0;0.45>


> Since (3) and (4) gives the same evidence, they derives the same conclusion
> (6) {dude-102} --> {Ben} <1.0;0.45>
>

One interesting observation is that these truth values approximate
relatively uninformative points on the probability distributions that PLN
would attach to these relationships.

That is, <1.0;0.45>, if interpreted as a probabilistic truth value, would
indicate a fairly wide interval of probabilities containing 1.0

Which is not necessarily wrong, but is not maximally interesting ... there
might be a narrower interval centered somewhere besides 1.0

(the confidence 0.45, in a PLN-like interpretation, is inverse to
probability interval width)



>
>
> That information can be added in several different forms. For example,
> after NARS learns some math, from the information that there are only
> about 100 AGI authors but 100 odd people (a conservative
> estimation, I guess), plus Ben is in both category, and the principle
> of indifference, the system should have the following knowledge:
> (7) AGI-author --> {Ben} <0.01;0.9>
> (8) odd-people --> {Ben} <0.01;0.9>
>
> Now from (2) and (7), by deduction, NARS gets
> (9) {dude-101} --> {Ben} <0.01;0.81>
>
> and from (4) and (8), also by deduction, the conclusion is
> (10) {dude-102} --> {Ben} <0.01;0.81>



This is all correct, but the problem I have is that something which should
IMO be very simple and instinctive is being done in an overly
complicated way.  Knowledge of math should not be needed to
do an inference this simple...
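For concreteness, here is a tiny Python sketch of the NARS abduction and
deduction truth-value functions as I recall them from the published
definitions (my own reconstruction, with evidential horizon K = 1, not code
taken from any NARS implementation); it reproduces the numbers in (5) and
(9) above, modulo rounding:

K = 1.0

def nars_abduction(tv1, tv2):
    # From P -> M <f1;c1> and S -> M <f2;c2>, derive S -> P.
    # Note that the conclusion frequency depends on only one premise's
    # frequency, which is the property complained about later in this thread.
    (f1, c1), (f2, c2) = tv1, tv2
    w_plus = f1 * f2 * c1 * c2      # positive evidence
    w_total = f1 * c1 * c2          # total evidence
    f = w_plus / w_total if w_total > 0 else 0.0
    return (f, w_total / (w_total + K))

def nars_deduction(tv1, tv2):
    # From S -> M <f1;c1> and M -> P <f2;c2>, derive S -> P.
    (f1, c1), (f2, c2) = tv1, tv2
    return (f1 * f2, c1 * c2 * (f1 + f2 - f1 * f2))

print(nars_abduction((1.0, 0.9), (1.0, 0.9)))    # (5): (1.0, ~0.45)
print(nars_deduction((1.0, 0.9), (0.01, 0.9)))   # (9): (0.01, 0.81)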


>
> The same result can be obtained in other ways. Even if NARS doesn't
> know math, if the system has met AGI author many times, and only in
> one percent of the times the person happens to be Ben, the system will
> also learn something like (7). The same for (8).



But also, observations of Ben should not be needed to do this inference...

>
> What does this mean? To me, it once again shows what I've been saying
> all the time: NARS doesn't always give better results than PLN or
> other probability-based approaches, but it does assume less knowledge
> and resources. In this example, from knowledge (1)-(4) alone, NARS
> derives (5)-(6), but probability-based approaches, including PLN, cannot
> derive anything until knowledge is obtained (or assumptions are made) on
> the involved "node probabilities". For NARS, when this information
> becomes available, it may be taken into consideration to change the
> system's conclusions, though they are not demanded in all cases.


It is simple enough, in PLN, to assume that all terms have equal
probability ... in the absence of knowledge to the contrary.

Algebraically, the NARS deduction truth value formula closely approximates
the special case of the PLN deduction truth value formula obtained by
assuming
all terms in the deduction premises have equal probability.



>
>
> This example also shows why NARS and PLN are similar on deduction, but
> very different in abduction and induction.


Yes.  One of my biggest practical complaints with NARS is that the induction
and abduction truth value formulas don't make that much sense to me.  I
understand
their mathematical/conceptual derivation using boundary conditions, but to
me
they seem to produce generally uninteresting conclusion truth values,
corresponding
roughly to "suboptimally informative points on the conclusion truth value's
probability
distribution" ...


> In my opinion, what called
> "abduction" and "induction" in PLN are special forms of deductions,
> which produce solid conclusion, but also demand more evidence to start
> with. Actually probability theory is about (multi-valued) deduction
> only. It doesn't build tentative conclusions first, then using
> additional evidence to revise or override them, which is how
> non-deductive inference works.
>

Different theorists use the words induction and abduction in different ways,
of course...

Regarding the "proposal of tentative conclusions": I'm not sure exactly what
you mean by this ... but, I note that in OpenCogPrime
we use other methods for hypothesis generation, then use probability theory
for estimating the truth values of these hypotheses...

PLN is able to make judgments, in every case, using *exactly* the same
amount of evidence that NARS is.  It does not require additional evidence.
Sometimes it may make simplistic "default assumptions" to work around the
relative paucity of evidence, but in those cases, it still reaches
conclusions,
just as NARS does...


-- Ben G




[agi] Intelligence testing for AGI systems aimed at human-level, roughly human-like AGI

2008-09-22 Thread Ben Goertzel
See

http://goertzel.org/agiq.pdf

for an essay I just wrote on this topic...

Comments actively solicited!!

ben g


-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome " - Dr Samuel Johnson





Re: [agi] uncertain logic criteria

2008-09-23 Thread Ben Goertzel
PLN can do inference on crisp-truth-valued statements ... and on this
subset, it's equivalent to ordinary predicate logic ...

About resolution and inference: resolution is a single inference step.  To
make a theorem-prover, you must couple resolution with some search
strategy.  For a search strategy, Prolog uses backtracking, which is
extremely crude.  My beef is not with resolution but with backtracking.

Another comment: even if one's premises and conclusion are
crisp-truth-valued, it may still be worthwhile to deal with
uncertain-truth-valued statements in the course of doing inference.
Guesses, systematically managed, may help on the way from definite premises
to definite conclusions...
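To illustrate the point, here is a toy Python sketch of a single
propositional resolution step (a hypothetical illustration only, not code
from Prolog, PLN, or any other system discussed here); everything
interesting in a theorem-prover is the search loop wrapped around a step
like this:

def resolve(c1, c2):
    # One resolution step on two clauses, each a frozenset of literals,
    # where a literal is a string like 'p' or its negation '~p'.
    resolvents = set()
    for lit in c1:
        neg = lit[1:] if lit.startswith('~') else '~' + lit
        if neg in c2:
            resolvents.add(frozenset((c1 - {lit}) | (c2 - {neg})))
    return resolvents

# From {p, q} and {~p, r} a single step yields {q, r}.
print(resolve(frozenset({'p', 'q'}), frozenset({'~p', 'r'})))

# A prover repeats this step under some search policy (Prolog's
# backtracking, breadth-first saturation, heuristic selection, ...)
# until the empty clause appears or resources run out.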

ben g

On Tue, Sep 23, 2008 at 3:31 AM, YKY (Yan King Yin) <
[EMAIL PROTECTED]> wrote:

> On Thu, Sep 18, 2008 at 3:06 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> >
> > Prolog is not fast, it is painfully slow for complex inferences due to
> > using backtracking as a control mechanism
> >
> > The time-complexity issue that matters for inference engines is
> > inference-control ... i.e. dampening the combinatorial explosion (which
> > backtracking does not do)
> >
> > Time-complexity issues within a single inference step can always be
> > handled via mathematical or code optimization, whereas optimizing
> > inference control is a deep, deep AI problem...
> >
> > So, actually, the main criterion for the AGI-friendliness of an inference
> > scheme is whether it lends itself to flexible, adaptive control via
> >
> > -- taking long-term, cross-problem inference history into account
> >
> > -- learning appropriately from noninferential cognitive mechanisms (e.g.
> > attention allocation...)
>
> (I've been busy implementing my AGI in Lisp recently...)
>
> I think optimization of single inference steps and using global
> heuristics are both important.
>
> Prolog uses backtracking, but in my system I use all sorts of search
> strategies, not to mention abduction and induction.  Also, currently
> I'm using general resolution instead of SLD resolution, which is for
> Horn clauses only.  But one problem I face is that when I want to deal
> with equalities I have to use paramodulation (or some similar trick).
> This makes things more complex and as you know, I don't like it!
>
> I wonder if PLN has a binary-logic subset, or is every TV
> probabilistic by default?
>
> If you have a binary logic subset, then how does that subset differ
> from classical logic?
>
> People have said many times that resolution is inefficient, but I have
> never seen a theorem that says resolution is "slower" than other
> deduction methods such as natural deduction or tableaux.  All such
> talk is based on anecdotal impressions.  Also, I don't see why other
> deduction methods are that much different from resolution since their
> inference steps correspond to resolution steps very closely.  Also, if
> you can apply heuristics in other deduction methods you can do the
> same with resolution.  All in all, I see no reason why resolution is
> inferior.
>
> So I'm wondering if there is some novel way of doing binary logic that
> somehow makes inference faster than with classical logic.  And exactly
> what is the price to be paid?  What aspects of classical logic are
> lost?
>
> YKY
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome " - Dr Samuel Johnson





Re: [agi] NARS vs. PLN [Was: NARS probability]

2008-09-23 Thread Ben Goertzel
Note that formally, the

c = n/(n+k)

equation also exists in the math of the beta distribution, which is used
in Walley's imprecise probability theory and also in PLN's indefinite
probabilities...

So there seems some hope of making such a correspondence, based on
algebraic evidence...
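To spell out the algebraic connection (this is my own gloss using standard
conjugate-prior algebra, not a claim about either framework's official
semantics): with a beta prior of total weight k and prior mean a, after n
observations with observed frequency f the posterior mean is

  (n*f + k*a) / (n + k)  =  c*f + (1 - c)*a,   with  c = n/(n+k)

so c = n/(n+k) is exactly the weight the posterior mean gives to the
observed frequency relative to the prior.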

ben

On Tue, Sep 23, 2008 at 4:29 PM, Pei Wang <[EMAIL PROTECTED]> wrote:

> Abram,
>
> Can your approach give the Confidence measurement a probabilistic
> interpretation? It is what really differentiates NARS from the other
> approaches.
>
> Pei
>
> On Mon, Sep 22, 2008 at 11:22 PM, Abram Demski <[EMAIL PROTECTED]>
> wrote:
> >>> This example also shows why NARS and PLN are similar on deduction, but
> >>> very different in abduction and induction.
> >>
> >> Yes.  One of my biggest practical complaints with NARS is that the
> induction
> >> and abduction truth value formulas don't make that much sense to me.
> >
> > Interesting in the context of these statements that my current
> > "justification" for NARS probabilistically justifies induction and
> > abduction but isn't as clear concerning deduction. (I'm working on
> > it...)
> >
> > --Abram Demski
> >
> >



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome " - Dr Samuel Johnson





Re: [agi] NARS vs. PLN [Was: NARS probability]

2008-09-23 Thread Ben Goertzel
> > Yes.  One of my biggest practical complaints with NARS is that the
> induction
> > and abduction truth value formulas don't make that much sense to me.
>
> I guess since you are trained as a mathematician, your "sense" has
> been formalized by probability theory to some extent. ;-)
>

Actually, the main reason the NARS induction and abduction truth value
formulas
seem counterintuitive to me has nothing to do with my math training... it
has to do
with the fact that, in each case, the strength of the conclusion relies on
the strength
of only **one** of the premises.  This just does not feel right to me, quite
apart
from any mathematical intuitions or knowledge of probability theory.  It
happens
that in this case probability theory agrees with my naive, pretheoretic
intuition...



> > in OpenCogPrime
> > we use other methods for hypothesis generation, then use probability
> theory
> > for estimating the truth values of these hypotheses...
>
> Many people have argued that "hypotheses generation" and "hypotheses
> evaluation" should be separated. I strongly think that is wrong,
> though I don't have the time to argue on that now.


Neither one is frequently useful without the other.  I treat them as
in-principle separable processes which, however, in practice are nearly
always coupled.


>
>
> > PLN is able to make judgments, in every case, using *exactly* the same
> > amount of evidence that NARS is.
>
> Without assumptions on "node probability"? In your example, what is
> the conclusion from PLN if it is only given (1)-(4) ?


PLN needs to make assumptions about node probability in this case; but NARS
also makes assumptions, it's just that NARS's assumptions are more deeply
hidden in the formalism...

Without making some assumptions no inference is possible, I guess Hume
showed
that a long time ago ;-)

ben





Re: [agi] Call yourself mathematicians? [O/T]

2008-09-23 Thread Ben Goertzel
On Tue, Sep 23, 2008 at 7:48 PM, Mike Tintner <[EMAIL PROTECTED]>wrote:

> So can *you* understand credit default swaps?
>


Yes I can, having a PhD in math and having studied a moderate amount of
mathematical finance ...

But, in a couple of decades an AGI will surely understand them (and more
complicated derivatives) way better than any human can...

This is one very plausible route by which the first human-level AGIs will
achieve world power ... kicking human ass on the international financial
markets.  Something that can be done with nothing but a brain and a net
connection...

-- Ben G



>
> "Here's the scary part of today's testimony everyone seems to have missed:
> SEC chairman Chris Cox's statement that the Credit Default Swap (CDS) market
> is "completely unregulated." It's size? Somewhere in the $50 TRILLION
> range."
>
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome " - Dr Samuel Johnson





Re: [agi] NARS vs. PLN [Was: NARS probability]

2008-09-23 Thread Ben Goertzel
> > PLN needs to make assumptions about node probability in this case; but
> NARS
> > also makes assumptions, it's just that NARS's assumptions are more deeply
> > hidden in the formalism...
>
> If you means assumptions like "insufficient knowledge and resources",
> you are right, but that is not at the same level as assumptions about
> the values of node probability.


I mean assumptions like "symmetric treatment of intension and extension",
which are technical mathematical assumptions...


>
>
> I guess my previous question was not clear enough: if the only domain
> knowledge PLN has is
>
> > Ben is an author of a book on AGI <tv1>
> > This dude is an author of a book on AGI <tv2>
>
> and
>
> > Ben is odd <tv1>
> > This dude is odd <tv2>
>
> Will the system derive anything?


Yes, via making default assumptions about node probability...

ben





Re: [agi] NARS vs. PLN [Was: NARS probability]

2008-09-23 Thread Ben Goertzel
On Tue, Sep 23, 2008 at 9:28 PM, Pei Wang <[EMAIL PROTECTED]> wrote:

> On Tue, Sep 23, 2008 at 7:26 PM, Abram Demski <[EMAIL PROTECTED]>
> wrote:
> > Wow! I did not mean to stir up such an argument between you two!!
>
> Abram: This argument has been going on for about 10 years, with some
> "on" periods and "off" periods, so don't feel responsible for it ---
> you just raised the right topic in the right time to turn it "on"
> again. ;-)



Correct ... Pei and I worked together on the same AI project for a few years
(1998-2001) and had related arguments in person many times during that
period, and have continued the argument off and on over email...

It has been an interesting and worthwhile discussion, from my view any way,
but neither of us has really convinced the other...

I remain convinced that probability theory is a proper foundation for
uncertain
inference in an AGI context, whereas Pei remains convinced of the opposite
...

So, this is really the essential issue, rather than the particularities of
the
algebra...

The reason this is a subtle point is roughly as follows (in my view, Pei's
surely differs).

I think it's mathematically and conceptually clear that for a system with
unbounded resources, probability theory is the right way to reason.  However,
if you look at Cox's axioms

http://en.wikipedia.org/wiki/Cox%27s_theorem

you'll see that the third one (consistency) cannot reasonably be expected of
a system with severely bounded computational resources...

So the question, conceptually, is: If a cognitive system can only
approximately obey Cox's third axiom, then is it really sensible for the
system to explicitly approximate probability theory ... or not?  Because
there is no way for the system to *exactly* follow probability theory...

There is not really any good theory of what reasoning math a system should
(implicitly or explicitly) emulate given limited resources... Pei has his
hypothesis,
I have mine ... I'm pretty confident I'm right, but I can't prove it ... nor
can he
prove his view...

Lacking a comprehensive math theory of these things, the proof is gonna be
in the pudding ...

And, it is quite possible IMO that both approaches can work, though they
will not fit into the same AGI systems.  That is, an AGI system in which
NARS would be an effective component would NOT necessarily look the same as
an AGI system in which PLN would be an effective component...

Along these latter lines:
One thing I do like about using a reasoning system with a probabilistic
foundation
is that it lets me very easily connect my reasoning engine with other
cognitive
subsystems also based on probability theory ... say, a Hawkins style
hierarchical
perception network (which is based on Bayes nets) ... MOSES for
probabilistic
evolutionary program learning etc.   Probability theory is IMO a great
"lingua
franca" for connecting different AI components into an integrative whole...

-- Ben G





Re: [agi] NARS vs. PLN [Was: NARS probability]

2008-09-23 Thread Ben Goertzel
> I think it's mathematically and conceptually clear that for a system with
> unbounded
> resources probability theory is the right way to reason.   However if you
> look
> at Cox's axioms
>
> http://en.wikipedia.org/wiki/Cox%27s_theorem
>
> you'll see that the third one (consistency) cannot reasonably be expected
> of
> a system with severely bounded computational resources...
>
> So the question, conceptually, is: If a cognitive system can only
> approximately
> obey Cox's third axiom, then is it really sensible for the system to
> explicitly
> approximate probability theory ... or not?  Because there is no way for the
> system
> to *exactly* follow probability theory
>


I believe one could show that: For all epsilon, there exists a delta so that

IF the deviation from Cox's axioms is less than delta
THEN using probability theory will deviate from the optimal inference
strategy by less than epsilon

(not that this would be trivial to prove but it seems very likely to me, and
I read Cox's
proofs carefully with this in mind)
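In symbols, the conjecture would be something like (my loose phrasing;
making the notions of "deviation" and "loss" precise is of course the
nontrivial part):

  for every epsilon > 0 there is a delta > 0 such that
  (deviation from Cox's axioms) < delta  implies
  (loss relative to the optimal inference strategy when explicitly
   approximating probability theory) < epsilon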

But that doesn't help too much really (which is why I never worked out the
details)

ben g





Re: [agi] Call yourself mathematicians? [O/T]

2008-09-23 Thread Ben Goertzel
The difficult part is not understanding what credit default swaps are, but
figuring out how to *value* them.

This involves complex math which can only be done via computer simulations
and numerical-analysis calculations ... and experts don't really agree on
the right assumptions to make in doing these simulations/calculations

So, part of the reason the banks can't unload these financial instruments
right now is that no one can really agree on what they're worth ... because
the current world financial situation lies outside the scope of assumptions
that have generally been made when doing the relevant valuation
calculations.  The financial world has not yet gone through the process of
agreeing on how to value these financial instruments in a global economic
regime like this one ... and going through this process will take at least
months ... and in that time a real crisis could erupt ... so the US gov't
may simply buy the financial instruments at a best-guess price and hope for
the best ;-p

ben

On Tue, Sep 23, 2008 at 11:30 PM, Mike Tintner <[EMAIL PROTECTED]>wrote:

>  Ben,
>
> Are CDS significantly complicated then - as an awful lot of professional,
> highly intelligent people are claiming?
>
> So can *you* understand credit default swaps?
>  Yes I can, having a PhD in math and having studied a moderate amount of
> mathematical finance ...
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome " - Dr Samuel Johnson





Re: [agi] NARS vs. PLN [Was: NARS probability]

2008-09-24 Thread Ben Goertzel
>
> >
> > I mean assumptions like "symmetric treatment of intension and extension",
> > which are technical mathematical assumptions...
>
> But they are still not assumptions about domain knowledge, like node
> probability.
>


Well, in PLN the balance between intensional and extensional knowledge is
calculated based on domain knowledge, in each case...

So, from a PLN view, this symmetry assumption of NARS's **is** effectively
an assumption about domain knowledge

What constitutes domain knowledge, versus an a priori assumption, is
not very clear really... the distinction seems to be theory-laden and
dependent on the semantics of the inference system in question...

ben g





Re: [agi] NARS vs. PLN [Was: NARS probability]

2008-09-24 Thread Ben Goertzel
On Wed, Sep 24, 2008 at 11:43 AM, Pei Wang <[EMAIL PROTECTED]> wrote:

> The distinction between object-level and meta-level knowledge is very
> clear in NARS, though I won't push this issue any further.


yes, but some of the things you push into the meta-level knowledge in NARS,
seem more like the things we consider object-level knowledge in PLN

ben g





Re: [agi] NARS vs. PLN [Was: NARS probability]

2008-09-24 Thread Ben Goertzel
>
> >
> > Yes, via making default assumptions about node probability...
>
> Then what are the conclusions, with their truth-values, in each of the
> two cases?
>


Without node probability tv's, PLN actually behaves pretty similarly
to NARS in this case...

If we have

Ben ==> AGI-author <s1>
Dude ==> AGI-author <s2>
|-
Dude ==> Ben <s3>

the PLN abduction rule would yield

s3 = s1 s2 + w (1-s1)(1-s2)

where w is a parameter of the form

w = p/ (1-p)

and if we set w=1, which is a principle-of-indifference type
assumption, then we just have

s3 = 1 - s1 - s2 + 2s1s2

In any case, regardless of w, s1=s2=1 implies s3=1
in this formula, which is the same answer NARS gives
in this case (of crisp premises)

Similar to NARS, PLN also gives a fairly low confidence
to this case, but the confidence formula is a pain and I
won't write it out here...  (i.e., PLN assigns this a beta
distribution with 1 in its support, but a pretty high variance...)

So, similar to NARS, without node probability info PLN cannot
distinguish the two inference examples I gave ... no system could...

However, PLN incorporates the node probabilities when available,
immediately and easily, without requiring knowledge of math on
the part of the system... and it incorporates them according to Bayes
rule, which I believe is the right approach ...

What is counterintuitive to me is having an inference engine that
does not immediately and automatically use the node probability info
when it is available...

As evidence about Bayesian neural population coding in the brain suggests,
use of Bayes rule is probably MORE cognitively primary than use of
these other more complex inference rules...

-- ben g


p.s.
details:

In PLN,
simple abduction consists of the inference problem:
Given P(A), P(B), P(C), P(B|A) and P(B|C), find P(C|A).

and the simplest, independence-assumption + Bayes rule based formula
for this is

abdAC:=(sA,sB,sC,sAB,sCB)->(sAB*sCB*sC/sB+(1-sAB)*(1-sCB)*sC/(1-sB))

[or, more fully, including all consistency conditions,

abdAC:=
(sA,sB,sC,sAB,sCB)->(sAB*sCB*sC/sB+(1-sAB)*(1-sCB)*sC/(1-sB))*(Heaviside(sAB-max((sA+sB-1)/sA,0))-Heaviside(sAB-min(1,sB/sA)))*(Heaviside(sCB-max((sB+sC-1)/sC,0))-Heaviside(sCB-min(1,sB/sC)));

]

(This is Maple notation...)
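To make the contrast with the AGI-author/odd example concrete, here is a
small Python illustration of the formulas above; the node probability values
are invented purely for illustration (nothing in the thread pins them down):

def pln_abduction_strength(sA, sB, sC, sAB, sCB):
    # Given P(A), P(B), P(C), P(B|A), P(B|C), estimate P(C|A) under the
    # independence assumption, as in the simple abdAC formula above
    # (P(A) only enters via the consistency conditions, omitted here).
    return sAB * sCB * sC / sB + (1 - sAB) * (1 - sCB) * sC / (1 - sB)

# A = "this dude", C = "Ben"; B is the shared predicate.
s_dude = s_ben = 1e-7          # hypothetical node probabilities
s_agi_author = 1e-5            # being an AGI-book author is rare...
s_odd = 0.5                    # ...being odd is not

tv3 = pln_abduction_strength(s_dude, s_agi_author, s_ben, 1.0, 1.0)
tv4 = pln_abduction_strength(s_dude, s_odd, s_ben, 1.0, 1.0)
print(tv3, tv4)   # ~0.01 vs ~2e-7, i.e. tv3.strength >> tv4.strength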





Re: [agi] NARS vs. PLN [Was: NARS probability]

2008-09-24 Thread Ben Goertzel
OK, we're done with AGI, time to move on to discussion of psychic powers 8-D

On Wed, Sep 24, 2008 at 12:17 PM, Pei Wang <[EMAIL PROTECTED]> wrote:

> Thanks for the detailed answer. Now I'm happy, and we can turn to
> something else. ;-)
>
> Pei
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome " - Dr Samuel Johnson





Re: [agi] NARS vs. PLN [Was: NARS probability]

2008-09-24 Thread Ben Goertzel
>
> If we have
>
> Ben ==> AGI-author <s1>
> Dude ==> AGI-author <s2>
> |-
> Dude ==> Ben <s3>
>
> the PLN abduction rule would yield
>
> s3 = s1 s2 + w (1-s1)(1-s2)
>


But ... before we move on to psychic powers, let me note that this PLN
abduction strength rule (simplified for the case of equal node
probabilities) does depend on both s1 and s2.

The NARS abduction strength rule depends on only one of the premise
strengths, which intuitively and pretheoretically does not feel right to
me...

ben g





Re: [agi] universal logical form for natural language

2008-09-27 Thread Ben Goertzel
>
> IMO Cyc's problem is due to:
> 1.  the lack of a well-developed probabilistic/fuzzy logic (thus
> brittleness)



Cyc has local Bayes nets within their knowledge base...



>
> 2.  the emphasis on ontology (plain facts) rather than "production rules"
>

While I agree that formulating knowledge in terms of production rules is
more cognitively natural, it would be a relatively simple matter to
transform Cyc's relationships into a production-rule format...


ben g





Re: [agi] universal logical form for natural language

2008-09-27 Thread Ben Goertzel
YKY,


>
> Example of a commonsense fact:  "apples are red"
>
> Example of a commonsense rule:  "if X is female X has an above-average
> chance of having long hair"
>
>

Cyc already has loads of these rules.

If you have a problem with Cyc's format, I **strongly** suggest that you first
play around with writing scripts to translate Cyc's knowledge base into a
format you like better.  This would almost surely be vastly easier than
re-coding a comparable amount of knowledge!!

Also, if you think what Cyc is missing is fuzziness (they already have
probabilistic uncertainty) then surely it would be easier to just add your
desired style of fuzziness to Cyc rather than to create a whole new
knowledge base.

I am aware that Cyc is incomplete.  And, I tend to think that encoding a lot
of knowledge by hand is the wrong approach for AGI anyway.  Quite apart from
my interest in embodiment, I think you'd be better off focusing on acquiring
relationships via NLP than via manual human encoding.  However, if you **are**
going to try to accumulate a manually-encoded KB, it seems clear you'd be MUCH
better off to build atop Cyc rather than throwing it out and starting over!!

Take a look at SUMO.  It's nicer than Cyc, but way smaller.  IMHO they would
have done better to start by writing scripts to cast Cyc into SUMO-like form,
and then impose additional SUMO-like structure on this.  However, I suppose
there were commercial issues there: SUMO was developed by Teknowledge, a
competitor of Cycorp ...

-- Ben G





Re: [agi] universal logical form for natural language

2008-09-28 Thread Ben Goertzel
FYI, Cyc has a natural language front end and a lot of folks have been
working on it for the last 5+ years...

-- Ben G

On Sun, Sep 28, 2008 at 5:12 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> IMHO the reason Cyc failed is that it lacks a natural language model. Thus,
> there was no alternative to using formal language for entering and extracting
> data. Even after building a huge knowledge base, Cyc is mostly useless in
> applications because it cannot adapt to its environment and cannot
> communicate with users who don't speak Cycl.
>
> The approach was understandable in the 1980's because we lacked then (and
> maybe still lack) the computing power to implement natural language. An
> adult human model consists of a sparse matrix of around 10^8 associations
> between 10^5 concepts, and an eidetic (short term) memory of about 7
> concepts. Language learning consists of learning associations between active
> concepts in eidetic memory and learning new concepts by clustering in
> context space. This structure allows learning concepts of arbitrary
> complexity in a hierarchical fashion. Concepts consist of phonemes, phoneme
> groups, words, phrases, parts of speech constrained by grammar and semantics
> (nouns, animals, etc), and grammatical structures. A neural implementation
> [1] would require on the order of tens of gigabytes of memory and hundreds
> of gigaflops. This is without grounding in sensory or motor I/O. So far, we
> have not discovered a more efficient implementation in spite of decades of
> research.
>
> A natural language model should be capable of learning any formal language
> that a human can learn, such as mathematics, first order logic, C++, or
> Cycl. Learning is by induction, by giving lots of examples. For example, to
> teach the commutative law of addition:
>
> "5 + 3" -> "3 + 5"
> "a + b" -> "b + a"
> "sin 3x^2 + 1" -> "1 + sin 3x^2"
> etc.
>
> Likewise, translation between natural and formal language is taught by
> example. For example, to teach applications of subtraction:
>
> "There are 20 cookies. I take 2. How many are left?" -> "20 - 2 = ?"
> "I pay $10 for a $3.79 item. What is my change?" -> "10.00 - 3.79 = ?"
> etc.
>
> I believe that formal models of common sense (probabilistic or not) would
> be a mistake. This type of knowledge is best left to the language model.
> Rather than probabilistic rules like:
>
> "if it is cloudy, then it will rain (p = 0.6)"
> "if it rains, then I will get wet (p = 0.8)"
>
> such knowledge can be represented by associations between concepts in the
> language model, e.g. two entries in our huge matrix:
>
> (clouds ~ rain, 0.6)
> (rain ~ wet, 0.8)
>
> People invented mathematics and formal languages to solve problems that
> require long sequences of exact steps. Before computers, we had to execute
> these steps using grammar rules, sometimes with the help of pencil and
> paper. However, this process is error prone and slow. Now we only have to
> convert the problem to a formal language and input it into a calculator or
> computer. AI should replicate this process.
>
> 1. Rumelhart, David E., James L. McClelland, and the PDP Research Group,
> Parallel Distributed Processing, Cambridge MA: MIT Press, 1986. The authors
> described plausible, hierarchical connectionist models, although they lacked
> the computing power to implement them.
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson





Re: [agi] universal logical form for natural language

2008-09-28 Thread Ben Goertzel
On Sun, Sep 28, 2008 at 3:09 AM, YKY (Yan King Yin) <
[EMAIL PROTECTED]> wrote:

> On Sun, Sep 28, 2008 at 1:52 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> > Cyc already has loads of these rules.
>
> I wasn't aware of those, but I'll check it out.  My hypothesis is that
> when the number of rules reaches critical mass, the AGI will be able
> to perform human-like reasoning.



That is basically the same hypothesis that Lenat had when founding Cycorp
20+ years ago

You have not made clear how your hypothesis differs substantially from
his...


>
>
> > If you have a problem with Cyc's format, I **strongly** suggest that you
> > first
> > play around with writing scripts to translate Cyc's knowledge base into a
> > format you like better.  This would almost surely be vastly easier than
> > re-coding
> > a comparable amount of knowledge!!
>
> Is that also the approach taken by OCP?
>


We are not focusing on hand-coded knowledge in OCP, but if someone wanted
to import hand-coded knowledge into OCP, it's certainly the approach I'd
suggest exploring first...



>
> I want to do that (build the KB on top of Cyc's), but the OpenCyc
> folks aren't very responsive when it comes to collaborating.


OpenCyc is freely available so you don't need anyone's responsiveness
really...

However, ResearchCyc is much more extensive and would be more useful for
an AGI.  But it's not open-source, and to use it for commercial apps one has
to strike a deal with Cycorp (which, based on my discussions with them, is
likely to be reasonable about licensing terms).


-- Ben G





Re: [agi] universal logical form for natural language

2008-09-28 Thread Ben Goertzel
Yes, the big weakness of the whole Cyc framework is learning.  Their logic
engine seems to be pretty poor at incremental, experiential learning ... in
linguistics as in every other domain.

I don't think they have a workable approach to NL understanding or
generation ... I was just pointing out that they *are* explicitly devoting a
lot of resources to the problem ...

ben g

On Sun, Sep 28, 2008 at 9:38 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> --- On Sun, 9/28/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> >FYI, Cyc has a natural language front end and a lot of folks have been
> >working on it for the last 5+ years...
>
> It still needs work. I found this undated (2004 or later) white paper which
> is apparently not linked from cyc.com.
> http://www.cyc.com/doc/white_papers/KRAQ2005.pdf
>
> And also this overview.
> http://www.cyc.com/cyc/cycrandd/areasofrandd_dir/nlu
>
> The overview claims to be able to convert natural language sentences into
> Cycl assertions, and to convert questions to Cycl queries. So I wonder why
> the knowledge base is still not being built this way. And I wonder why there
> is no public demo of the interface, and no papers giving verifiable
> experimental results.
>
> It seems to me the main limitation is that the language model has to be
> described formally in Cycl, as a lexicon and rules for parsing and
> disambiguation. There seems to be no mechanism for learning natural language
> by example. For example, if Cyc receives a sentence it cannot parse, or is
> ambiguous, or has a word not in its vocabulary or used in a different way,
> then there is no mechanism to update the model, which is something humans
> easily do. Given the complexity of English, I think this is a serious
> limitation with no easy solution.
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson





Re: [agi] universal logical form for natural language

2008-09-28 Thread Ben Goertzel
On Sun, Sep 28, 2008 at 10:00 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> --- On Sun, 9/28/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> >Yes, the big weakness of the whole Cyc framework is learning.  Their logic
> engine seems to be pretty poor at incremental, experiential learning ... in
> linguistics as in every other domain.
> >
> >I don't think they have a workable approach to NL understanding or
> generation ... I was just pointing out that they *are* explicitly devoting a
> lot of resources to the problem ...
>
> I agree. You need to build the language model first. And then you don't
> need to build an inference engine and knowledge base because you already
> have them.


I don't agree fully with the above ... I believe one can create the language
model, experiential-learning engine and inference engine together, so that
they are built to help each other ... and that hand-coded commonsense
knowledge can potentially be helpful (though not necessary) in this
context.  But Cyc didn't do it this way: they began with a knowledge-store
and a crisp inference engine only, and are now trying to graft other
components onto this framework, which IMO isn't working very well...



> But Cyc has too much invested to start over. The best we can do is urge
> others (YKY) not to make the same mistake.
>

YKY feels he is doing something quite different from Cyc due to his use of
fuzzy logic and a different knowledge format, but IMO these differences are
not so dramatic, and his project will run into the same problems Cyc has
unless he changes strategy significantly...


ben g





Re: [agi] universal logical form for natural language

2008-09-29 Thread Ben Goertzel
On Mon, Sep 29, 2008 at 4:23 AM, YKY (Yan King Yin) <
[EMAIL PROTECTED]> wrote:

> On Mon, Sep 29, 2008 at 4:10 AM, Abram Demski <[EMAIL PROTECTED]>
> wrote:
> >
> > How much will you focus on natural language? It sounds like you want
> > that to be fairly minimal at first. My opinion is that chatbot-type
> > programs are not such a bad place to start-- if only because it is
> > good publicity.
>
> I plan to make use of Steven Reed's Texai -- he's writing a dialog
> system that can translate NL to logical form.  If it turns out to be
> unfeasible, I can borrow a simple NL interface from somewhere else.
>


Whether using an NL interface like Stephen's is feasible or not, really
depends on your expectations for it.

Parsing English sentences into sets of formal-logic relationships is not
extremely hard given current technology.

But the only feasible way to do it, without making AGI breakthroughs
first, is to accept that these formal-logic relationships will then embody
significant ambiguity.

Pasting some text from a PPT I've given...

***
Syntax parsing, using the NM/OpenCog narrow-AI RelEx system, transforms

Guard my treasure with your life

into

_poss(life,your)
_poss(treasure,my)
_obj(Guard,treasure)
with(Guard,life)
_imperative(Guard)

Semantic normalization, using the RelEx rule engine and the FrameNet
database, transforms this into

Protection:Protection(Guard, you)
Protection:Asset(Guard, treasure)
Possession:Owner(treasure, me)
Protection:Means(Guard, life)
Possession:Owner(life,you)
_imperative(Guard)

But, we also get

Guard my treasure with your sword.

Protection:Protection(Guard, you)
Protection:Asset(Guard, treasure)
Possession:Owner(treasure, me)
Protection:Means(Guard, sword)
Possession:Owner(sword,you)
_imperative(Guard)

Guard my treasure with your uncle.

Protection:Protection(Guard, you)
Protection:Protection(Guard, uncle)
Protection:Asset(Guard, treasure)
Possession:Owner(treasure, me)
Protection:Means(Guard, uncle)
Possession:Owner(uncle,you)
_imperative(Guard)

***

The different senses of the word "with" are not currently captured by the
RelEx NLP
system, and that's a hard problem for current computational linguistics
technology
to grapple with.

I think it can be handled via embodiment, i.e. via having an AI system
observe
the usage of various senses of "with" in various embodied contexts.

Potentially it could also be handled via statistical-linguistics methods
(where the
contexts are then various documents the senses of "with" have occurred in,
rather
than embodied situations), though I'm more skeptical of this method.

In a knowledge entry context, this means that current best-of-breed NL
interpretation systems will parse

People eat food with forks

People eat food with friend

People eat food with ketchup

into similarly-structured logical relationships.

This is just fine, but what it tells you is that **reformulating English
into logical
formalism does not, in itself, solve the disambiguation problem**.

The disambiguation problem remains, just on the level of disambiguating
formal-logic structures into less ambiguous ones.
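
To make the residual problem concrete, here is a toy sketch of it in Python
(hypothetical code, not RelEx's actual data structures; the role labels
Means / Accompaniment / Accessory are illustrative stand-ins, not FrameNet's
exact names):

# All three sentences come out with the same surface relation with(eat, X);
# the remaining work is mapping that relation to a specific frame role.
parses = {
    "People eat food with forks":   ("eat", "food", "forks"),
    "People eat food with friends": ("eat", "food", "friends"),
    "People eat food with ketchup": ("eat", "food", "ketchup"),
}

# Hypothetical background knowledge a disambiguator would need:
kind = {"forks": "instrument", "friends": "person", "ketchup": "condiment"}
role = {"instrument": "Means", "person": "Accompaniment", "condiment": "Accessory"}

for sentence, (verb, obj, with_arg) in parses.items():
    print(sentence, "->", role[kind[with_arg]])

# The hard part, of course, is filling in tables like 'kind' reliably for
# arbitrary words and contexts -- that *is* the disambiguation problem.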

Using a formal language like CycL to enter knowledge is one way of largely
circumventing this problem ... using Lojban would be another ...

(Again I stress that having humans encode knowledge is NOT my favored
approach to AGI, but I'm just commenting on some of the issues involved
anyway...)

-- Ben G





Re: [agi] universal logical form for natural language

2008-09-29 Thread Ben Goertzel
Stephen,

Yes, I think your spreading-activation approach makes sense and has plenty
of potential.

Our approach in OpenCog is actually pretty similar, given that our
importance-updating dynamics can be viewed as a nonstandard sort of
spreading activation...

I think this kind of approach can work, but I also think that getting it to
work generally and robustly -- not just in toy examples like the one I gave
-- is going to require a lot of experimentation and trickery.

Of course, if the AI system has embodied experience, this provides extra
links for the spreading activation (or analogues) to flow along, thus
increasing the odds of meaningful results...

Also, I think that spreading-activation type methods can only handle some
cases, and that for other cases one needs to use explicit inference to do
the disambiguation.

My point for YKY was (as you know) not that this is an impossible problem
but that it's a fairly deep AI problem which is not provided out-of-the-box
in any existing NLP toolkit.  Solving disambiguation thoroughly is AGI-hard
... solving it usefully is not ... but solving it usefully for
*prepositions* is cutting-edge research going beyond what existing NLP
frameworks do...
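
For intuition only, a toy sketch of the kind of spreading-activation
disambiguation Stephen describes in the message quoted below (hypothetical
Python; the node names and weights are made up, and this is not Texai's or
OpenCog's actual code):

# Links carry signed weights; negative links join competing interpretations.
links = {
    ("fork_is_instrument", "with=by_means_of"):       0.9,
    ("eat_is_group_activity", "with=in_company_of"):  0.9,
    ("with=by_means_of", "with=in_company_of"):      -1.0,  # rival senses inhibit each other
}

def spread(activation, links, decay=0.5, steps=10):
    for _ in range(steps):
        new = {n: decay * a for n, a in activation.items()}
        for (a, b), w in links.items():
            new[b] = new.get(b, 0.0) + w * activation.get(a, 0.0)
            new[a] = new.get(a, 0.0) + w * activation.get(b, 0.0)
        activation = {n: max(0.0, min(1.0, v)) for n, v in new.items()}
    return activation

# Seed with what the utterance itself makes salient ("forks" is mentioned):
result = spread({"fork_is_instrument": 1.0}, links)
senses = ["with=by_means_of", "with=in_company_of"]
print(max(senses, key=lambda s: result.get(s, 0.0)))   # -> with=by_means_of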

-- Ben G

On Mon, Sep 29, 2008 at 1:25 PM, Stephen Reed <[EMAIL PROTECTED]> wrote:

> Ben gave the following examples that demonstrate the ambiguity of the
> preposition "with":
>
> People eat food with forks
>
> People eat food with friend[s]
>
> People eat food with ketchup
>
> The Texai bootstrap English dialog system, whose grammar rule engine I'm
> currently rewriting, uses elaboration and spreading activation to perform
> disambiguation and pruning of alternative interpretations.  Let's step
> through how Texai would process Ben's examples.  According to 
> Wiktionary<http://en.wiktionary.org/wiki/with>,
> "with" has among its word senses the following:
>
>- as an instrument; by means of
>
>
>- in the company of; alongside; along side of; close to; near to
>
>
>- in addition to, as an accessory to
>
> It's clear when I make these substitutions which word sense is to be
> selected:
>
> People eat food by means of forks
>
> People eat food in the company of friends
>
> People eat ketchup as an accessory to food
>
> Elaboration of the Texai discourse context provides additional entailed
> propositions with respect to the objects actually referenced in the
> utterance.   The elaboration process is efficiently performed by spreading
> activation <http://en.wikipedia.org/wiki/Spreading_activation> over the KB
> from the focal terms with respect to context.  The links explored by this
> process can be formed by offline deductive inference, or learned from
> heuristic search and reinforcement learning, or simply taught by a mentor.
>
> Relevant elaborations I would expect Texai to make for the example
> utterances are:
>
> a fork is an instrument
>
> there are activities that a person performs as a member of a group of
> friends; to eat is such an activity
>
> ketchup is a condiment; a condiment is an accessory with regard to food
>
> Texai considers all interpretations simultaneously, in a transient
> spreading activation network whose nodes are the semantic propositions
> contained within the elaborated discourse context and whose links are formed
> when propositions share an argument concept.  Negative links are formed
> between propositions from alternative interpretations.   At 
> AGI-09<http://www.agi-09.org/>I hope to demonstrate this technique in which 
> the correct word sense of
> "with" can be determined from the highest activated nodes in the elaborated
> discourse context after spreading activation has quiesced.
>
> -Steve
>
> Stephen L. Reed
> Artificial Intelligence Researcher
> http://texai.org/blog
> http://texai.org
> 3008 Oak Crest Ave.
> Austin, Texas, USA 78704
> 512.791.7860
>
> - Original Message 
> From: Ben Goertzel <[EMAIL PROTECTED]>
> To: agi@v2.listbox.com
> Sent: Monday, September 29, 2008 8:18:30 AM
> Subject: Re: [agi] universal logical form for natural language
>
>
>
> On Mon, Sep 29, 2008 at 4:23 AM, YKY (Yan King Yin) <
> [EMAIL PROTECTED]> wrote:
>
>> On Mon, Sep 29, 2008 at 4:10 AM, Abram Demski <[EMAIL PROTECTED]>
>> wrote:
>> >
>> > How much will you focus on natural language? It sounds like you want
>> > that to be fairly minimal at first. My opinion is that chatbot-type
>> > programs are not such a bad place to start-- if only because it is
>> > good publicity.
>>
>> I plan to make use of Steven Reed's Texai -- he's writing a dialog
>> syst

Re: [agi] universal logical form for natural language

2008-09-29 Thread Ben Goertzel
On Mon, Sep 29, 2008 at 6:28 PM, Lukasz Stafiniak <[EMAIL PROTECTED]>wrote:

> On Mon, Sep 29, 2008 at 11:33 PM, Eric Burton <[EMAIL PROTECTED]> wrote:
> >
> > It uses something called MontyLingua. Does anyone know anything about
> > this? There's a site at 
> > http://web.media.mit.edu/~hugo/montylingua/
> > and it is for Python.
> >
> The NLTK toolkit is actively developed, have a look at it and its
> contributed packages.



Yes, we've used a number of its packages, but it doesn't do language
understanding
(mapping of language into logic)

ben





Re: [agi] universal logical form for natural language

2008-09-29 Thread Ben Goertzel
On Mon, Sep 29, 2008 at 6:03 PM, YKY (Yan King Yin) <
[EMAIL PROTECTED]> wrote:

> On Mon, Sep 29, 2008 at 9:18 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> > Parsing English sentences into sets of formal-logic relationships is not
> > extremely hard given current technology.
> >
> > But the only feasible way to do it, without making AGI breakthroughs
> > first, is to accept that these formal-logic relationships will then
> embody
> > significant ambiguity.
>
> We are talking about 2 things:
> 1.  Using an "ad hoc" parser to translate NL to logic
> 2.  Using an AGI to parse NL
>
> I think I've already formulated how to do #2, and will try to
> implement it soon.  But it *still* requires a lot of training (not
> surprisingly).


I'm not sure what you mean by "parse" in step 2


>
>
> Yes, if we use #1 then we have to deal with ambiguities.  But #2
> provides some ready-to-run components right now.
>
> > _poss(life,your)
> > _poss(treasure,my)
> > _obj(Guard,treasure)
> > with(Guard,life)
> > _imperative(Guard)
>
> This logical form is somewhat similar to Rus form...  does it have a name?


I don't know what Rus form is...

This is just RelEx's output, it doesn't have a name...



>
>
> > I think it can be handled via embodiment, i.e. via having an AI system
> > observe
> > the usage of various senses of "with" in various embodied contexts.
>
> I'm afraid the crux is not in embodiment.  It's in abduction  =)


I'm afraid there is no single crux ... the idea that there's a single
cognitive
mechanism that is the crux to intelligence is a common and major mistake
in AI theory IMHO ... if there's any one crux it lies in the emergent
structures
of self and reflective awareness... not in any single cognitive mechanism...

But, yah, abduction is one of the key inference steps you'd use to transfer
knowledge from embodied experience to language ... as well as from
one linguistic context to another...


ben





Re: [agi] Dangerous Knowledge

2008-09-29 Thread Ben Goertzel
>
>
> I mean that a more productive approach would be to try to understand why
> the problem is so hard.



IMO Richard Loosemore is half-right ... the reason AGI is so hard has to do
with Santa Fe Institute style
complexity ...

Intelligence is not fundamentally grounded in any particular mechanism but
rather in emergent structures
and dynamics that arise in certain complex systems coupled with their
environments ...

Characterizing what these emergent structures/dynamics are is hard, and then
figuring out how to make these
structures/dynamics emerge from computationally feasible knowledge
representation and creation structures/
dynamics is hard ...

It's hard for much the same reason that systems biology is hard: it rubs
against the grain of the reductionist approach to science that has become
prevalent ... and there's insufficient data to do it fully rigorously, so you
gotta cleverly and intuitively fill in some big gaps ... (until a few decades
from now, when better bio data may provide a lot more info for cog sci, AGI
and systems biology...)

-- Ben





Re: [agi] universal logical form for natural language

2008-09-29 Thread Ben Goertzel
 world-models as
>>> a
>>> human story teller would do; a curious byproduct of an intelligent system
>>> that can reason about potential events and scenarios!)
>>>
>>>  NB: help is needed on the OpenCog wiki to better document many of the
>>> concepts discussed here and elsewhere, e.g. Concretely-Implemented Mind
>>> Dynamics (CIMDynamics) requires a MindOntology page explaining it
>>> conceptually, in addtion to the existing nuts-and-bolts entry in the
>>> OpenCogPrime section.
>>>
>>>  -dave
>>>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson





Re: [agi] universal logical form for natural language

2008-09-29 Thread Ben Goertzel
>
> Cognitive linguistics also lacks a true deveopmental model of language
> acquisition that goes beyond the first few years of life, and can embrace
> all those several - and, I'm quite sure, absolutely necessary - stages of
> mastering language and building a world picture.
>


Tomasello's theory of language acquisition specifically embraces the
phenomena you describe.  What don't you like about it?

ben





Re: [agi] universal logical form for natural language

2008-09-29 Thread Ben Goertzel
es.  According to 
>> Wiktionary<http://en.wiktionary.org/wiki/with>,
>> "with" has among its word senses the following:
>>
>>- as an instrument; by means of
>>
>>
>>- in the company of; alongside; along side of; close to; near to
>>
>>
>>- in addition to, as an accessory to
>>
>> It's clear when I make these substitutions which word sense is to be
>> selected:
>>
>> People eat food by means of forks
>>
>> People eat food in the company of friends
>>
>> People eat ketchup as an accessory to food
>>
>> Elaboration of the Texai discourse context provides additional entailed
>> propositions with respect to the objects actually referenced in the
>> utterance.   The elaboration process is efficiently performed by spreading
>> activation <http://en.wikipedia.org/wiki/Spreading_activation> over the
>> KB from the focal terms with respect to context.  The links explored by this
>> process can be formed by offline deductive inference, or learned from
>> heuristic search and reinforcement learning, or simply taught by a mentor.
>>
>> Relevant elaborations I would expect Texai to make for the example
>> utterances are:
>>
>> a fork is an instrument
>>
>> there are activities that a person performs as a member of a group of
>> friends; to eat is such an activity
>>
>> ketchup is a condiment; a condiment is an accessory with regard to food
>>
>> Texai considers all interpretations simultaneously, in a transient
>> spreading activation network whose nodes are the semantic propositions
>> contained within the elaborated discourse context and whose links are formed
>> when propositions share an argument concept.  Negative links are formed
>> between propositions from alternative interpretations.   At 
>> AGI-09<http://www.agi-09.org/>I hope to demonstrate this technique in which 
>> the correct word sense of
>> "with" can be determined from the highest activated nodes in the elaborated
>> discourse context after spreading activation has quiesced.
>>
>> -Steve
>>
>> Stephen L. Reed
>>  Artificial Intelligence Researcher
>> http://texai.org/blog
>> http://texai.org
>> 3008 Oak Crest Ave.
>> Austin, Texas, USA 78704
>> 512.791.7860
>>
>>  - Original Message 
>> From: Ben Goertzel <[EMAIL PROTECTED]>
>> To: agi@v2.listbox.com
>> Sent: Monday, September 29, 2008 8:18:30 AM
>> Subject: Re: [agi] universal logical form for natural language
>>
>>
>>
>> On Mon, Sep 29, 2008 at 4:23 AM, YKY (Yan King Yin) <
>> [EMAIL PROTECTED]> wrote:
>>
>>> On Mon, Sep 29, 2008 at 4:10 AM, Abram Demski <[EMAIL PROTECTED]>
>>> wrote:
>>> >
>>> > How much will you focus on natural language? It sounds like you want
>>> > that to be fairly minimal at first. My opinion is that chatbot-type
>>> > programs are not such a bad place to start-- if only because it is
>>> > good publicity.
>>>
>>> I plan to make use of Steven Reed's Texai -- he's writing a dialog
>>> system that can translate NL to logical form.  If it turns out to be
>>> unfeasible, I can borrow a simple NL interface from somewhere else.
>>>
>>
>>
>> Whether using an NL interface like Stephen's is feasible or not, really
>> depends on your expectations for it.
>>
>> Parsing English sentences into sets of formal-logic relationships is not
>> extremely hard given current technology.
>>
>> But the only feasible way to do it, without making AGI breakthroughs
>> first, is to accept that these formal-logic relationships will then embody
>> significant ambiguity.
>>
>> Pasting some text from a PPT I've given...
>>
>> ***
>> Syntax parsing, using the NM/OpenCog narrow-AI RelEx system, transforms
>>
>> Guard my treasure with your life
>>
>> into
>>
>> _poss(life,your)
>> _poss(treasure,my)
>> _obj(Guard,treasure)
>> with(Guard,life)
>> _imperative(Guard)
>>
>> Semantic normalization, using the RelEx rule engine and the FrameNet
>> database, transforms this into
>>
>> Protection:Protection(Guard, you)
>> Protection:Asset(Guard, treasure)
>> Possession:Owner(treasure, me)
>> Protection:Means(Guard, life)
>> Possession:Owner(life,you)
>> _imperative(Guard)
>>
>> But, we also get
>>
>> Guard my treasure with your sword.
>>
>> Protection:Pr

Re: [agi] universal logical form for natural language

2008-09-29 Thread Ben Goertzel
As I recall Tomasello's "Constructing a Language" deals with all the phases
of grammar learning including complex recursive phrase structure grammar...

But it doesn't trace language learning from the teens into the twenties,
no...

From a psychological point of view, that is an interesting topic, and I'd
bet you a lot of money that there are loads of research papers on it (you
are always quick to assume a topic has never been studied by anyone, but
nearly always it has been, though maybe using different terminology than
you're used to...).

However, from an **AGI** perspective, I think that making an AGI that can
converse in English at the level of an average human 5 year old is the hard
part.  If we can get there, then getting our AGIs to the level of more
advanced humans will be relatively easy

So I don't see the topic you mention as critical to AGI ...

-- Ben G

On Mon, Sep 29, 2008 at 9:52 PM, Mike Tintner <[EMAIL PROTECTED]>wrote:

>
>
> Ben,
>
> Er, you seem to be confirming my point. Tomasello from Wiki is an early
> child development psychologist. I want a model that keeps going to show the
> stages of language acquisition from say 7-13, on through teens, and into the
> twenties - that shows at what stages we understand progressively general and
> abstract concepts like, say, government, philosophy, relationships,  etc
> etc. - and why different, corresponding texts are only understandable at
> different ages.
>
> There is nothing like this because there is no true *embedded* cognitive
> science that looks at how long it takes to build up a picture of the world,
> and how language is embedded in our knowledge of the world. [The only thing
> that comes at all close to it, that I know, is Margaret Donaldson's work, if
> I remember right].
>
> Re  rhetorical structure theory - many thanks for the intro - & it looks
> interesting. But again this is not an embedded approach:
>
> "RST is intended to describe texts, rather than the processes of creating
> or reading and understanding them"
>
> For example, to understand sentences they quote like
>
> "He tends to be viewed now as a biologist, but in his 5 years on the Beagle
> his main work was geology, and he saw himself as a geologist. His work
> contributed significantly to the field."
>
> requires a considerable amount of underlying knowledge about Darwin's life,
> and an extraordinary ability to handle timelines - and place
> events/sentences in time.
>
> I can confidently bet that no one is attempting this type of
> text/structural analysis because no one, as I said, is taking an embedded
> approach to language. [Embedded is to embodied in the analysis of language
> use and thought as environment is to nature in the analysis of behaviour
> generally].
>
>
> Ben,
>
>
>
>>  Cognitive linguistics also lacks a true deveopmental model of language
>> acquisition that goes beyond the first few years of life, and can embrace
>> all those several - and, I'm quite sure, absolutely necessary - stages of
>> mastering language and building a world picture.
>>
>
>
> Tomassello's theory of language acquisition specifically embraces the
> phenomena you describe.  What don't you like about it?
>
> ben
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson





Re: [agi] universal logical form for natural language

2008-09-29 Thread Ben Goertzel
> My guess is that Schank and AI generally start from a technological POV,
> conceiving of *particular* approaches to texts that they can implement,
> rather than first attempting a *general* overview.



I can't speak for Schank, who was however working a long time ago when
cognitive science was less clearly understood ... but, your guess is
radically wrong as applied to most folks currently working toward AGI,
Human-Level AI, etc. ...


Ben G





Re: [agi] universal logical form for natural language

2008-09-30 Thread Ben Goertzel
On Tue, Sep 30, 2008 at 2:58 AM, YKY (Yan King Yin) <
[EMAIL PROTECTED]> wrote:

> On Tue, Sep 30, 2008 at 6:43 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> >
> >> We are talking about 2 things:
> >> 1.  Using an "ad hoc" parser to translate NL to logic
> >> 2.  Using an AGI to parse NL
> >
> > I'm not sure what you mean by "parse" in step 2
>
> Sorry, to put it more accurately:
>
> #1 is using an "ad hoc" NLP subsystem to translate NL to logic
>
> #2 is building a language model entirely in the AGI's logical
> language, thus reducing the language understanding & production
> problems to inference problems.  Which also allows life-long learning
> of language in the AGI.
>
> I think #2 is not that hard.  The theoretical basis is already there.
> Currently one of the mainstream methods to translate NL to logic, is
> to use FOL + lambda calculus.  Lambda expressions are used to
> represent "partial" logical entities such as a verb phrase.



How hard #2 is, depends on how hard the resulting inference problems are!!

Reducing NLP problems to very hard inference problems, is fairly easy.

Reducing NLP problems to tractably easy inference problems, is very hard!!!
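
For readers who haven't seen the FOL + lambda calculus approach YKY mentions,
a standard toy example (a sketch in Python, Montague-style; not any particular
system's code):

# A transitive verb as a "partial" logical entity: a function awaiting arguments.
loves = lambda obj: lambda subj: "loves(%s,%s)" % (subj, obj)

loves_mary = loves("Mary")     # the verb phrase "loves Mary" is itself a lambda
print(loves_mary("John"))      # -> loves(John,Mary)

The composition step is the easy part; as noted above, the hard part is making
the inference problems everything gets reduced to actually tractable.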

-- Ben G





Re: [agi] universal logical form for natural language

2008-09-30 Thread Ben Goertzel
Markov chains are one way of doing the math for spreading activation, but
e.g.
neural nets are another...
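
To spell out the Markov-chain reading Linas raises below (a sketch only, not
anyone's production code): if M is a row-normalized matrix of link weights,
one spreading-activation step is just multiplying the activation vector by the
transpose of M; decay, clamping or negative links are where it stops being a
pure Markov chain.

import numpy as np

M = np.array([[0.0, 1.0, 0.0],    # row-normalized link weights (node -> node)
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
a = np.array([1.0, 0.0, 0.0])     # all activation starts on node 0

for _ in range(5):
    a = M.T @ a                   # pure Markov-chain-style spreading step
print(a)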

On Tue, Sep 30, 2008 at 1:23 AM, Linas Vepstas <[EMAIL PROTECTED]>wrote:

> 2008/9/29 Ben Goertzel <[EMAIL PROTECTED]>:
> >
> > Stephen,
> >
> > Yes, I think your spreading-activation approach makes sense and has
> plenty
> > of potential.
> >
> > Our approach in OpenCog is actually pretty similar, given that our
> > importance-updating dynamics can be viewed as a nonstandard sort of
> > spreading activation...
>
> Well, I'd like to point out that algo's like the page-rank algo
> are not unlike spreading activation, and that its danged-near
> to being a Markov chain too.   Actually, upon skimming the
> Wikipedia article http://en.wikipedia.org/wiki/Spreading_activation
> it seems to be *identical* to a Markov chain, so I'm a bit
> confused by that article. -- or I skimmed too fast, Dunno.
>
> --linas
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson





Re: [agi] Dangerous Knowledge

2008-09-30 Thread Ben Goertzel
I don't want to recapitulate that whole long tedious thread again!!

However, a brief summary of my response to Loosemore's arguments is here:

http://opencog.org/wiki/OpenCogPrime:FAQ#What_about_the_.22Complex_Systems_Problem.3F.22

(that FAQ is very incomplete which is why it hasn't been publicized yet ...
but it does already
address this particular issue...)

ben

On Tue, Sep 30, 2008 at 12:23 PM, Terren Suydam <[EMAIL PROTECTED]> wrote:

>
> Hi Ben,
>
> If Richard Loosemore is half-right, how is he half-wrong?
>
> Terren
>
> --- On *Mon, 9/29/08, Ben Goertzel <[EMAIL PROTECTED]>* wrote:
>
> From: Ben Goertzel <[EMAIL PROTECTED]>
> Subject: Re: [agi] Dangerous Knowledge
> To: agi@v2.listbox.com
> Date: Monday, September 29, 2008, 6:50 PM
>
>
>
>>
>> I mean that a more productive approach would be to try to understand why
>> the problem is so hard.
>
>
>
> IMO Richard Loosemore is half-right ... the reason AGI is so hard has to do
> with Santa Fe Institute style
> complexity ...
>
> Intelligence is not fundamentally grounded in any particular mechanism but
> rather in emergent structures
> and dynamics that arise in certain complex systems coupled with their
> environments ...
>
> Characterizing what these emergent structures/dynamics are is hard, and
> then figuring out how to make these
> structures/dynamics emerge from computationally feasible knowledge
> representation and creation structures/
> dynamics is hard ...
>
> It's hard for much the reason that systems biology is hard: it rubs against
> the grain of the reductionist
> approach to science that has become prevalent ... and there's insufficient
> data to do it fully rigorously so
> you gotta cleverly and intuitively fill in some big gaps ... (until a few
> decades from now, when better bio
> data may provide a lot more info for cog sci, AGI and systems biology...
>
> -- Ben
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson





Re: [agi] Dangerous Knowledge

2008-09-30 Thread Ben Goertzel
On Tue, Sep 30, 2008 at 12:45 PM, Mike Tintner <[EMAIL PROTECTED]>wrote:

>  Ben: the reason AGI is so hard has to do with Santa Fe Institute style
> complexity ...
>
> Intelligence is not fundamentally grounded in any particular mechanism but
> rather in emergent structures
> and dynamics that arise in certain complex systems coupled with their
> environments
>
> Characterizing what these emergent structures/dynamics are is hard,
>
> Ben,
>
> Maybe you could indicate how complexity might help solve any aspect of
> *general* intelligence - how it will help in any form of crossing domains,
> such as analogy, metaphor, creativity, any form of resourcefulness  etc.-
> giving some example.
>
>
>
Personally,  I don't think it has any connection  - and it doesn't sound
> from your last sentence, as if you actually see a connection :).
>


You certainly draw some odd conclusions from the wording of people's
sentences.  I not only see a connection, I wrote a book on this subject,
published by Plenum Press in 1997: "From Complexity to Creativity."

Characterizing these things at the conceptual and even mathematical level is
not as hard as realizing them at the software level... my 1997 book was
concerned with the former.

I don't have time today to cut and paste extensively from there to satisfy
your curiosity, but you're free to read the thing ;-) ... I still agree with
most of it ...

To give a brief answer to one of your questions: analogy is mathematically a
matter of finding mappings that match certain constraints.   The traditional
AI approach to this would be to search the constrained space of mappings
using some search heuristic.  A complex systems approach is to embed the
constraints into a dynamical system and let the dynamical system evolve into
a configuration that embodies a mapping matching the constraints.  Based on
this, it is provable that complex systems methods can solve **any** analogy
problem, given appropriate data, and using for example asymmetric Hopfield
nets (as described in Amit's book on Attractor Neural Networks back in the
80's).  Whether they are the most resource-efficient way to solve such
problems is another issue.  OpenCog and the NCE seek to hybridize
complex-systems methods with probabilistic-logic methods, thus alienating
almost everybody ;=>
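
As a cartoon of the "embed the constraints in a dynamical system" idea -- a
toy sketch only, far simpler than an asymmetric Hopfield net and not the
NCE/OpenCog code -- here is a tiny network whose stable states are the
consistent mappings for a sun/planet :: nucleus/electron analogy:

import random

# Units are candidate correspondences; compatible ones excite, rivals inhibit.
units = ["sun->nucleus", "planet->electron", "sun->electron", "planet->nucleus"]
W = {
    ("sun->nucleus", "planet->electron"):     1.0,   # jointly consistent
    ("sun->electron", "planet->nucleus"):     1.0,   # the rival consistent pairing
    ("sun->nucleus", "sun->electron"):       -2.0,   # each object maps to one image...
    ("planet->electron", "planet->nucleus"): -2.0,
    ("sun->nucleus", "planet->nucleus"):     -2.0,   # ...and each image to one object
    ("sun->electron", "planet->electron"):   -2.0,
}
weight = lambda a, b: W.get((a, b), W.get((b, a), 0.0))

state = {u: 0 for u in units}
state["sun->nucleus"] = 1    # the "appropriate data": a seed correspondence

for _ in range(100):         # asynchronous updates; the network settles
    u = random.choice(units)
    net = 0.5 + sum(weight(u, v) * state[v] for v in units if v != u)
    state[u] = 1 if net > 0 else 0

print(sorted(u for u in units if state[u]))   # -> ['planet->electron', 'sun->nucleus']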

-- Ben G





Re: [agi] Dangerous Knowledge

2008-09-30 Thread Ben Goertzel
On Tue, Sep 30, 2008 at 2:08 PM, Jim Bromer <[EMAIL PROTECTED]> wrote:

> From: "Ben Goertzel" <[EMAIL PROTECTED]>
> To give a brief answer to one of your questions: analogy is
> mathematically a matter of finding mappings that match certain
> constraints.   The traditional AI approach to this would be to search
> the constrained space of mappings using some search heuristic.  A
> complex systems approach is to embed the constraints into a dynamical
> system and let the dynamical system evolve into a configuration that
> embodies a mapping matching the constraints.  Based on this, it is
> provable that complex systems methods can solve **any** analogy
> problem, given appropriate data, and using for example asymmetric
> Hopfield nets (as described in Amit's book on Attractor Neural
> Networks back in the 80's).  Whether they are the most
> resource-efficient way to solve such problems is another issue.
> OpenCog and the NCE seek to hybridize complex-systems methods with
> probabilistic-logic methods, thus alienating almost everybody ;=>
> -- Ben G
> --
>
> The problem is that you are still missing what should be the main
> focus of your efforts.  It's not whether or not your program does good
> statistical models, or uses probability nets, or hybrid technology of
> some sort, or that you have solved some mystery to analogy that was
> not yet understood.



I am getting really, really tired of a certain conversational pattern that
often occurs on this list!!

It goes like this...

Person A asks some question about topic T, which is a small
part of the overall AGI problem

Then, I respond to them about topic T

Then, Person B says "You are focusing on the wrong thing,
which shows you don't understand the AGI problem."

But of course, all that I did to bring on that criticism is to
answer someone's question about a specific topic, T ...

Urrggghh...

My response to Tintner's question had nothing to do with the main
focus of my efforts.  It was an attempt to compactly answer
his question ... it may have failed, but that's what it was...



>
>
> An effective program has to be able to learn how to structure its
> interrelated and interactive knowledge effectively according to both
> the meaning of realtively sophisticated linguistic (or linguistic like
> communication) and to its own experience with other less sophisticated
> data experiences (like sensory input of various kinds.)
>

Yes.  Almost everyone working in the field agrees with this.


>
> The most important thing that is missing is the answer to the
> question: how does the program learn about ideological structure?  If
> it weren't for ambiguity (in all of its various forms) then this
> knowledge would be easy for a programmer to acquire through gradual
> experience.  But sophisticated input like language and making sense of
> less sophisticated input, like simple sensory input, is highly
> ambiguous and confusing to the AI programmer.
>
> It is as if you are revving up the engine and trying to show off by
> the roar of your engine, the flames and smoke shooting out the
> exhaust, and the squeals and smoke of your tires burning, but then
> that is all there is to it.  You will just be spinning your wheels
> until you deal with the problem of ideological structure in the
> complexity of highly ambiguous content.
>
> So far, it seems like very few people have any idea what I am talking
> about, because they almost never mention the problem as I see it.
> Very few people have actually responded intelligibly to this kind of
> criticism, and for those who do, their answer is usually limited to
> explaining that this is what we are all trying to get at, or that this
> was done in the old days, and then dropping it.  So I will understand
> if you don't reply to this.



On the contrary, I strongly suspect
nearly everyone working in the AGI field thoroughly
understands the problem you are talking about, although they may
not use your chosen terminology ("ideological structure" is a weird
phrase in this context).

But I don't quite understand your use of verbiage in the phrase

"
ideological structure in the
complexity of highly ambiguous content.
"

What is it that you really mean here?  Just that an AGI has to
pragmatically understand
the relationships between concepts, as implied by ambiguous, complex
uses of language and as related to the relevance of concepts to the
nonlinguistic world?

I believe that OpenCogPrime will be able to do this, but I don't have
a one-paragraph explanation of how.  A complex task requires a complex
solution.  My proposed solution is documented online.

-- Ben G





Re: [agi] universal logical form for natural language

2008-09-30 Thread Ben Goertzel
On Tue, Sep 30, 2008 at 2:43 PM, Lukasz Stafiniak <[EMAIL PROTECTED]>wrote:

> On Tue, Sep 30, 2008 at 3:38 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> >
> > Markov chains are one way of doing the math for spreading activation, but
> > e.g.
> > neural nets are another...
> >
>
> But these are related things, neural nets can be considered as
> approximate methods for (some sorts of) graphical models. This leaves
> me wondering on how graphical models relate to economics
> interpretations of spreading activation.



For sure, there are close mathematical relationships between all these
things...

For instance, Hebbian-type
neural net learning methods and Markov chains may both be usefully
modeled in terms of probabilistic iterated function systems ...

But still, that doesn't mean that every spreading-activation method is "just
a
Markov chain" in the most obvious interpretation of the phrase...

ben





Re: [agi] Dangerous Knowledge

2008-09-30 Thread Ben Goertzel
> And if you look at your "brief answer" para, you will find that while you
> talk of mappings and constraints, (which are not necessarily AGI at all),
> you make no mention in any form of how complexity applies to the crossing of
> hitherto unconnected "domains" [or matrices, frames etc], which, of course,
> are.
>


It is true that I did not mention that in my brief email ... but I have
mentioned this in prior publications and just have no more time for
quasi-recreational emailing today!! sorry...

ben





Re: [agi] Dangerous Knowledge

2008-09-30 Thread Ben Goertzel
It doesn't have any application...

My proof has two steps

1)
Hutter's paper
The Fastest and Shortest Algorithm for All Well-Defined Problems
http://www.hutter1.net/ai/pfastprg.htm

2)
I can simulate Hutter's algorithm (or *any* algorithm)
using an attractor neural net, e.g. via Mikhail Zak's
neural nets with Lipschitz-discontinuous threshold
functions ...


This is all totally useless as it requires infeasibly much computing power
... but at least, it's funny, for those of us who get the joke ;-)

ben



On Tue, Sep 30, 2008 at 3:38 PM, Mike Tintner <[EMAIL PROTECTED]>wrote:

>  Can't resist, Ben..
>
>  "it is provable that complex systems methods can solve **any** analogy
> problem, given appropriate data"
>
> Please indicate how your proof applies to the problem of developing an AGI
> machine. (I'll allow you to specify as much "appropriate data" as you like
> - any data,  of course, *currently* available).
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson





Re: [agi] Dangerous Knowledge

2008-09-30 Thread Ben Goertzel
On Tue, Sep 30, 2008 at 4:18 PM, Mike Tintner <[EMAIL PROTECTED]>wrote:

>  Ben,
>
> Well, funny perhaps to some. But nothing to do with AGI -  which has
> nothing to with "well-defined problems."
>
>

I wonder if you are misunderstanding his use of terminology.

How about the problem of gathering as much money as possible while upsetting
people as little as possible?

That could be well defined in various ways, and would require AGI to solve
as far as I can see...
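
(Just to illustrate that it can be made well-defined without becoming narrow:
fix a time horizon T and maximize something like

  E[ money earned by T ]  -  lambda * E[ number of people upset by T ]

for some tradeoff constant lambda.  Writing the objective down is easy;
building a system that can actually pursue it in the messy, open-ended real
world is the AGI-hard part.)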



> The one algorithm or rule that can be counted on here is that AGI-ers
> won't deal with the problem of AGI -  how to cross domains (in ill-defined,
> ill-structured problems).
>


I suggest the OpenCogPrime design can handle this, and it's outlined in
detail at

http://www.opencog.org/wiki/OpenCogPrime:WikiBook

You are not offering any counterarguments to my suggestion, perhaps (I'm not
sure)
because you lack the technical expertise or the time to read about the
design
in detail.

At least, Richard Loosemore did provide a counterargument, which I disagreed
with ... but you provide
no counterargument, you just repeat that you don't believe the design
addresses the problem ...
and I don't know why you feel that way except that it intuitively doesn't
seem to "feel right"
to you...

-- Ben G





[agi] This NSF solicitation might be interesting to some of you...

2008-09-30 Thread Ben Goertzel
Encouraging Submission of Proposals involving Complexity and Interacting
Systems to Programs in the Social, Behavioral and Economic Sciences :

http://www.nsf.gov/pubs/2008/nsf08014/nsf08014.jsp

-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson





Re: [agi] Dangerous Knowledge

2008-09-30 Thread Ben Goertzel
> You have already provided one very suitable example of a general AGI
> problem -  how is your pet having learnt one domain - to play "fetch", - to
> use that knowledge to cross into another domain -  to learn/discover the
> game of "hide-and-seek."?  But I have repeatedly asked you to give me your
> ideas how your system will deal with this problem. And you have always
> avoided it. I don't think, frankly, you have an idea how it will make the
> connection in an AGI way. I am extremely confident you couldn't begin to
> explain how a complex approach will make the cross-domain connection between
> fetching and hiding/seeking.
>


You are wrong, but persistently arguing with you is not seeming
worthwhile...

What you're talking about is called "transfer learning", and was one of the
technical topics Joel Pitt and I talked about during his visit to my house a
few weeks ago.  We were discussing a particular technical approach to this
problem using PLN abduction -- which is implicit in the OpenCogPrime design
and the PLN book, but not spelled out in detail.

However, I don't have time to write down our ideas in detail for this list
right now.

The examples we were talking about were stuff like ... if an agent has
learned to play tag, how can it then generalize this knowledge to make it
easier for it to learn to play hide-and-seek ... simple stuff like that ...
and then, if it has learned to play hide-and-seek, how can it then
generalize this knowledge to learn how to hide valued items so its friends
can't find them ... etc.  Simple examples of transfer learning,
admittedly... but we did sketch out specifics of how to do this sorta stuff
using PLN...
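
To make the term concrete, here is a toy sketch of the general idea in Python
(invented names, and nothing to do with PLN abduction itself): rules learned
in one game are expressed over abstract roles, then re-instantiated in a
related game and used as a starting hypothesis instead of learning from
scratch.

    # Hypothetical toy of transfer learning via shared abstract roles
    # (names invented for illustration; this is not PLN abduction).
    TAG_RULES = [
        # (condition, action) pairs learned while playing tag
        ("I am the PURSUER and I can see the TARGET", "move toward the TARGET"),
        ("I am the TARGET and I can see the PURSUER", "move away from the PURSUER"),
    ]

    def transfer(rules, role_map):
        """Re-instantiate abstract rules under a role mapping for a new game."""
        out = []
        for cond, act in rules:
            for old, new in role_map.items():
                cond = cond.replace(old, new)
                act = act.replace(old, new)
            out.append((cond, act))
        return out

    # Hide-and-seek shares the pursuit structure of tag under this mapping,
    # so the transferred rules serve as a prior rather than a blank slate.
    prior = transfer(TAG_RULES, {"PURSUER": "SEEKER", "TARGET": "HIDER"})
    for cond, act in prior:
        print(f"IF {cond} THEN {act}")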

This is stuff Joel may get to in 2009 in the OpenCog project, if things go
well... right now he's working on fixing up the PLN implementation...

-- Ben G





[agi] OpenCogPrime for Dummies [NOT]

2008-09-30 Thread Ben Goertzel
Without trying to be pejorative at all, it seems that the only real way for
me to address a lot of the questions being asked here would be to write a
sort of "OpenCogPrime for Dummies"

[ ... note that the "... for Dummies" books are not actually written for
dumb people, they just assume little background in the areas they cover]

This would be fun to write, but I just barely squeezed out the time this
year to put together the OpenCogPrime wiki, which is more like "OpenCogPrime
for hardcore AGI geeks"

In general, it should not be assumed that just because something has never
been described in lay prose, with examples and illustrations, it has never
been covered in the technical literature.  Ideas can be slow to move from
the technical literature to the nontechnical literature.

If OCP is successful, then eventually there will be "OCP for Dummies" ... I
just don't have time to write it right now   I'm more motivated
personally to spend time writing new technical stuff than writing better
expositions of stuff I already wrote down ;-)

ben g

-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson





Re: [agi] Dangerous Knowledge

2008-10-01 Thread Ben Goertzel
  I was saying that most
> people don't have any idea what I mean when I talk about things like
> interrelated ideological structures in an ambiguous environment, and
> that this issue was central to the contemporary problem,



Maybe the reason people don't know what you mean, is that your manner
of phrasing the issue is so unusual?

Could you elaborate the problem you refer to, perhaps using some
examples?

It's easier to explain how an AGI design would deal with a certain example
situation or issue, than how it will address some general,
hard-to-disambiguate
verbal description of a problem area...

-- Ben G





Re: [agi] universal logical form for natural language

2008-10-01 Thread Ben Goertzel
>
>
> No, the mainstream method of extracting knowledge from text (other than
> manually) is to ignore word order. In artificial languages, you have to
> parse a sentence before you can understand it. In natural language, you have
> to understand the sentence before you can parse it.



More exactly: in natural language, you have to understand the sentence
before you can disambiguate amongst the roughly 1-50
(syntactically-correct-but-not-necessarily-meaningful) parses that
contemporary parsers provide.
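
A concrete toy illustration of that kind of ambiguity, in Python, using NLTK's
chart parser and the classic "Groucho Marx" grammar (wide-coverage parsers
produce far more candidate parses on longer, realistic sentences):

    # Toy illustration of syntactic ambiguity (requires the nltk package):
    # one short sentence, two structurally different but valid parses.
    import nltk

    grammar = nltk.CFG.fromstring("""
    S -> NP VP
    PP -> P NP
    NP -> Det N | Det N PP | 'I'
    VP -> V NP | VP PP
    Det -> 'an' | 'my'
    N -> 'elephant' | 'pajamas'
    V -> 'shot'
    P -> 'in'
    """)

    parser = nltk.ChartParser(grammar)
    for tree in parser.parse("I shot an elephant in my pajamas".split()):
        print(tree)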

-- Ben





Re: [agi] OpenCogPrime for Dummies [NOT]

2008-10-01 Thread Ben Goertzel
On Wed, Oct 1, 2008 at 2:07 PM, Steve Richfield
<[EMAIL PROTECTED]>wrote:

> Ben,
>
> I have been eagerly awaiting such a document. However, the Grand Technical
> Guru (i.e. you) is usually NOT the person to write such a thing. Usually, an
> associate, user, author, or some such person who is on the user side rather
> than on the implementing side. Separated from the lock washers and solder,
> these people usually paint the picture and portray the dream in clearer
> language.
>

Interesting.  That is what Michael Rae helped Aubrey de Grey do in their
coauthored book "Ending Aging", I would say...

As I have some experience doing science journalism myself (I wrote newspaper
articles for a German paper for a while), I think I'm well qualified to
write such a thing ... but I just don't have the time right now   And,
of course, it can be worthwhile for the purposes of writing such a thing, to
have a little more mental distance from the details than I would ever
achieve...


> Is there such a person for OpenCogPrime? If not, then I guess I'll just
> have to go on waiting for your creation.
>
>

There are several people with the *ability* to write such a book associated
with OpenCog Prime ... but none at the moment with both the ability and the
free time ... due to the relevant folks having the need to work for a living
... but who knows, maybe a person with the requisite combination of
ability/time will emerge ... time will tell...

ben





Re: [agi] Let's face it, this is just dumb.

2008-10-02 Thread Ben Goertzel
I hope not to sound like a broken record here ... but ... not every narrow
AI advance is actually a step toward AGI ...

On Thu, Oct 2, 2008 at 12:35 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> So here is another step toward AGI, a hard image classification problem
> solved with near human-level ability, and all I hear is criticism. Sheesh! I
> hope your own work is not attacked like this.
>
> I would understand if the researchers had proposed something stupid like
> using the software in court to distinguish adult and child pornography.
> Please try to distinguish between the research and the commentary by the
> reporters. A legitimate application could be estimating the average age plus
> or minus 2 months of a group of 1000 shoppers in a marketing study.
>
> In any case, machine surveillance is here to stay. Get used to it.
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>
>
> --- On Thu, 10/2/08, Bob Mottram <[EMAIL PROTECTED]> wrote:
>
> > From: Bob Mottram <[EMAIL PROTECTED]>
> > Subject: Re: [agi] Let's face it, this is just dumb.
> > To: agi@v2.listbox.com
> > Date: Thursday, October 2, 2008, 6:21 AM
> > 2008/10/2 Brad Paulsen <[EMAIL PROTECTED]>:
> > > It "boasts" a 50% recognition accuracy rate
> > +/-5 years and an 80%
> > > recognition accuracy rate +/-10 years.  Unless, of
> > course, the subject is
> > > wearing a big floppy hat, makeup or has had Botox
> > treatment recently.  Or
> > > found his dad's Ronald Reagan mask.  'Nuf
> > said.
> >
> >
> > Yes.  This kind of accuracy would not be good enough to
> > enforce age
> > related rules surrounding the buying of certain products,
> > nor does it
> > seem likely to me that refinements of the technique will
> > give the
> > needed accuracy.  As you point out people have been trying
> > to fool
> > others about their age for millennia, and this trend is only
> > going to
> > complicate matters further.  In future if De Grey gets his
> > way this
> > kind of recognition will be useless anyway.
> >
> >
> > > P.S. Oh, yeah, and the guy responsible for this
> > project claims it doesn't
> > > violate anyone's privacy because it can't be
> > used to identify individuals.
> > >  Right.  They don't say who sponsored this
> > research, but I sincerely doubt
> > > it was the vending machine companies or purveyors of
> > Internet porn.
> >
> >
> > It's good to question the true motives behind something
> > like this, and
> > where the funding comes from.  I do a lot of stuff with
> > computer
> > vision, and if someone came to me saying they wanted
> > something to
> > visually recognise the age of a person I'd tell them
> > that they're
> > probably wasting their time, and that indicators other than
> > visual
> > ones would be more likely to give a reliable result.
>
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson





Re: [agi] Let's face it, this is just dumb.

2008-10-02 Thread Ben Goertzel
On Thu, Oct 2, 2008 at 2:02 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> --- On Thu, 10/2/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> >I hope not to sound like a broken record here ... but ... not every
> >narrow AI advance is actually a step toward AGI ...
>
> It is if AGI is billions of narrow experts and a distributed index to get
> your messages to the right ones.
>
> I understand your objection that it is way too expensive ($1 quadrillion),
> even if it does pay for itself. I would like to be proved wrong...


IMO, that would be a very interesting AGI, yet not the **most** interesting
kind due to its primarily heterarchical nature ... the human mind has this
sort of self-organized, widely-distributed aspect, but also a more
centralized, coordinated control aspect.  I think an AGI which similarly
combines these two aspects will be much  more interesting and powerful.  For
instance, your proposed AGI would have no explicit self-model, and no
capacity to coordinate a large percentage of its resources into a single
deliberative process.   It's much like what Francis Heylighen envisions
as the "Global Brain."  Very interesting, yet IMO not the way to get the
maximum intelligence out of a given amount of computational substrate...


ben g





Re: [agi] Let's face it, this is just dumb.

2008-10-02 Thread Ben Goertzel
More powerful, more interesting, and if done badly quite dangerous,
indeed...

OTOH a global brain coordinating humans and narrow-AI's can **also** be
quite dangerous ... and arguably more so, because it's **definitely** very
unpredictable in almost every aspect ... whereas a system with a dual
hierarchical/heterarchical structure and a well-defined goal system, may
perhaps be predictable in certain important aspects, if it is designed with
this sort of predictability in mind...

ben

On Thu, Oct 2, 2008 at 2:48 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> > For instance, your proposed AGI would have no explicit self-model, and no
> capacity to coordinate a large percentage of its resources into a single
> deliberative process.
>
> That's a feature, not a bug. If an AGI could do this, I would regard it as
> dangerous. Who decides what it should do? In my proposal, resources are
> owned by humans who can trade them on a market. Either a large number of
> people or a smaller group with a lot of money would have to be convinced
> that the problem was important. However, the AGI would also make it easy to
> form complex organizations quickly.
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>
> --- On *Thu, 10/2/08, Ben Goertzel <[EMAIL PROTECTED]>* wrote:
>
> From: Ben Goertzel <[EMAIL PROTECTED]>
> Subject: Re: [agi] Let's face it, this is just dumb.
> To: agi@v2.listbox.com
> Date: Thursday, October 2, 2008, 2:08 PM
>
>
>
> On Thu, Oct 2, 2008 at 2:02 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
>> --- On Thu, 10/2/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>>
>> >I hope not to sound like a broken record here ... but ... not every
>> >narrow AI advance is actually a step toward AGI ...
>>
>> It is if AGI is billions of narrow experts and a distributed index to get
>> your messages to the right ones.
>>
>> I understand your objection that it is way too expensive ($1 quadrillion),
>> even if it does pay for itself. I would like to be proved wrong...
>
>
> IMO, that would be a very interesting AGI, yet not the **most** interesting
> kind due to its primarily heterarchical nature ... the human mind has this
> sort of self-organized, widely-distributed aspect, but also a more
> centralized, coordinated control aspect.  I think an AGI which similarly
> combines these two aspects will be much  more interesting and powerful.  For
> instance, your proposed AGI would have no explicit self-model, and no
> capacity to coordinate a large percentage of its resources into a single
> deliberative process.   It's much like what Francis Heylighen envisions
> as the "Global Brain."  Very interesting, yet IMO not the way to get the
> maximum intelligence out of a given amount of computational substrate...
>
>
> ben g
>
>
>  --
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson





Re: Risks of competitive message routing (was Re: [agi] Let's face it, this is just dumb.)

2008-10-03 Thread Ben Goertzel
Hi,


> CMR (my proposal) has no centralized control (global brain). It is a
> competitive market in which information has negative value. The environment
> is a peer-to-peer network where peers receive messages in natural language,
> cache a copy, and route them to appropriate experts based on content.
>

You seem to misunderstand the notion of a Global Brain, see

http://pespmc1.vub.ac.be/GBRAIFAQ.html

http://en.wikipedia.org/wiki/Global_brain

It does not require centralized control, but is in fact more focused on
emergent dynamical "control" mechanisms.


>
> I believe that CMR is initially friendly in the sense that a market is
> friendly.



Which is to say: dangerous, volatile, hard to predict ... and often not
friendly at all!!!


> A market is the most efficient way to satisfy the collective goals of its
> participants. It is fair, but not benevolent.


I believe this is an extremely oversimplistic and dangerous view of
economics ;-)

Traditional economic theory, which argues that free markets are optimally
efficient, is based on a patently false assumption of infinitely rational
economic actors.  This assumption is **particularly** poor when the
economic actors are largely **humans**, who are highly nonrational.

As a single isolated example, note that in the US right now, many people are
withdrawing their $$ from banks even if they have less than $100K in their
accounts ... even though the government insures bank accounts up to $100K.
What are they doing?  Insuring themselves against a total collapse of the US
economic system?  If so they should be buying gold with their $$, but only a
few of them are doing that.  People are in large part emotional not rational
actors, and for this reason pure free-markets involving humans are far from
the most efficient way to satisfy the collective goals of a set of humans.

Anyway a deep discussion of economics would likely be too big of a
digression, though it may be pertinent insofar as it's a metaphor for the
internal dynamics of an AGI ... (for instance Eric Baum, who is a fairly
hardcore libertarian politically, is in favor of free markets as a model for
credit assignment in AI systems ... and OpenCog/NCE contains an "economic
attention allocation" component...)
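
To make "economic attention allocation" concrete, here is a drastically
simplified toy sketch in Python -- loosely in the spirit of, but not
equivalent to, the actual OpenCog component: items earn an importance
currency when they prove useful, pay rent every cycle, and the attentional
focus is just the currently richest items.

    # Drastically simplified toy of market-style attention allocation
    # (in the spirit of, not a copy of, OpenCog's actual component).
    RENT = 1.0
    WAGE = 5.0

    importance = {"cat": 10.0, "dog": 10.0, "taxes": 10.0}

    def stimulate(item):
        """Pay an item that just proved useful (e.g. it matched a percept)."""
        importance[item] += WAGE

    def tick():
        """Charge rent so unused items gradually fall out of focus."""
        for item in importance:
            importance[item] = max(0.0, importance[item] - RENT)

    def attentional_focus(k=2):
        return sorted(importance, key=importance.get, reverse=True)[:k]

    for _ in range(5):
        stimulate("cat")            # "cat" keeps proving relevant
        tick()
    print(attentional_focus())      # 'cat' dominates; 'dog' and 'taxes' decay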

ben g





Re: [agi] Testing, and a question....

2008-10-03 Thread Ben Goertzel
he effect
> then operates in the syncytium as a regulatory (synchrony) bias operating in
> quadrature with (and roughly independent of) the normal synaptic adaptation.
>
> I prefer 4) because of the funding but also because I'd much rather reveal
> it to the AGI community - because that is my future...but I will defer to
> preferences of the groupI can always cover 1,2,3 informally when I am
> there if there's any interestso...which of these (any) is of
> interest?...I'm not sure of the kinds of things you folk want to hear about.
> All comments are appreciated.
>
> regards to all,
>
> Colin Hales
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson





Re: [agi] I Can't Be In Two Places At Once.

2008-10-03 Thread Ben Goertzel
yah, I discuss this in chapter 2 of "The Hidden Pattern" ;-) ...

the short of it is: the self-model of such a mind will be radically
different than that of a current human, because we create our self-models
largely by analogy to our physical organisms ...

intelligences w/o fixed physical embodiment will still have self-models but
they will be less grounded in body metaphors ... hence radically different


we can explore this different analytically, but it's hard for us to grok
empathically...

a hint of this is seen in the statement my son Zeb (who plays too many
videogames) made: "i don't like the real world as much as videogames because
in the real world I always have first person view and can never switch to
third person"

one would suspect that minds w/o fixed embodiment would have more explicitly
contextualized inference, rather than so often positioning all their
inferences/ideas within one "default context" ... for starters...

ben

On Fri, Oct 3, 2008 at 8:43 PM, Mike Tintner <[EMAIL PROTECTED]>wrote:

> The foundation of the human mind and system is that we can only be in one
> place at once, and can only be directly, fully conscious of that place. Our
> world picture,  which we and, I think, AI/AGI tend to take for granted, is
> an extraordinary triumph over that limitation   - our ability to conceive of
> the earth and universe around us, and of societies around us, projecting
> ourselves outward in space, and forward and backward in time. All animals
> are similarly based in the here and now.
>
> But,if only in principle, networked computers [or robots] offer the
> possibility for a conscious entity to be distributed and in several places
> at once, seeing and interacting with the world simultaneously from many
> POV's.
>
> Has anyone thought about how this would change the nature of identity and
> intelligence?
>
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson





Re: Risks of competitive message routing (was Re: [agi] Let's face it, this is just dumb.)

2008-10-04 Thread Ben Goertzel
On Fri, Oct 3, 2008 at 9:57 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> --- On Fri, 10/3/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> > You seem to misunderstand the notion of a Global Brain, see
> >
> > http://pespmc1.vub.ac.be/GBRAIFAQ.html
> >
> > http://en.wikipedia.org/wiki/Global_brain
>
> You are right. That is exactly what I am proposing.
>
>

It's too bad you missed the Global Brain 0 workshop that Francis Heylighen
and I organized in Brussels in 2001 ...

Some larger follow-up Global Brain conferences were planned, but Francis and
I both got distracted by other things

It would be an exaggeration to say that any real collective conclusions were
arrived at, during the workshop, but it was certainly
interesting...



>
>
> I am open to alternative suggestions.



Well, what I suggested in my 2002 book "Creating Internet Intelligence" was
essentially a global brain based on a hybrid model:

-- a human-plus-computer-network global brain along the lines of what you
and Heylighen suggest

coupled with

-- a superhuman AI mind, that interacts with and is coupled with this global
brain

To use a simplistic metaphor,

-- the superhuman AI mind at the "center" of the hybrid global brain would
provide an overall goal system and attentional-focus, and

-- the human-plus-computer-network portion of the hybrid global brain would
serve as a sort of "unconscious" for the hybrid global brain...

This is one way that humans may come to, en masse, interact with superhuman
non-human AI

Anyway this was a fun line of thinking but since that point I diverted
myself more towards the creation of the superhuman-AI component

At the time I had a lot of ideas about how to modify Internet infrastructure
so as to make it more copacetic to the emergence of a
human-plus-computer-network, collective-intelligence type global brain.   I
think many of those ideas could have worked, but that is not the direction
in which the Net actually developed, and obviously I (like you) lack the
influence to nudge the Net-masters in that direction.  Keeping a
build-a-superhuman-AI project moving is not easy either, but it's a more
tractable task...

-- Ben G





Re: [agi] COMP = false

2008-10-04 Thread Ben Goertzel
e the COMP substrate is implementing the science of an encounter
> with a model, not an encounter with the actual distal natural world.
>
> No amount of computation can make up for that loss, because you are in a
> circumstance of an intrinsically unknown distal natural world, (the novelty
> of an act of scientific observation).
> .
> => COMP is false.
> ==
> OK.  There are subtleties here.
> The refutation is, in effect, a result of saying you can't do it (replace a
> scientist with a computer) because you can't simulate inputs. It is just that
> the nature of 'inputs' has been traditionally impoverished by assumptions
> born merely of cross-disciplinary blindness. Not enough quantum mechanics
> or electrodynamics is done by those exposed to 'COMP' principles.
>
> This result, at first appearance, says "you can't simulate a scientist".
> But you can! If you already know what is out there in the natural world then
> you can simulate a scientific act. But you don't - by definition  - you are
> doing science to find out! So it's not that you can't simulate a scientist,
> it is just that in order to do it you already have to know everything, so
> you don't want to ... it's useless. So the words 'refutation of COMP by an
> attempted  COMP implementation of a scientist' have to be carefully
> contrasted with the words "you can't simulate a scientist".
>
> The self referential use of scientific behaviour as scientific evidence has
> cut logical swathes through all sorts of issues. COMP is only one of them.
> My AGI benchmark and design aim is "the artificial scientist".  Note also
> that this result does not imply that real AGI can only be organic like us.
> It means that real AGI must have new chips that fully capture all the inputs
> and make use of them to acquire knowledge the way humans do. A separate
> matter altogether. COMP, as an AGI designer's option, is out of the picture.
>
> I think this just about covers the basics. The papers are dozens of pages.
> I can't condense it any more than this..I have debated this so much it's way
> past its use-by date. Most of the arguments go like this: "But you CAN!...".
> I am unable to defend such 'arguments from under-informed-authority' ... I
> defer to the empirical reality of the situation and would prefer that it be
> left to justify itself. I did not make any of it up. I merely observed. .
> ...and so if you don't mind I'd rather leave the issue there.  ..
>
> regards,
>
> Colin Hales
>
>
>
> Mike Tintner wrote:
>
>> Colin:
>>
>> 1) Empirical refutation of computationalism...
>>
>> .. interesting because the implication is that if anyone
>> doing AGI lifts their finger over a keyboard thinking they can be
>> directly involved in programming anything to do with the eventual
>> knowledge of the creature...they have already failed. I don't know
>> whether the community has internalised this yet.
>>
>> Colin,
>>
>> I'm sure Ben is right, but I'd be interested to hear the essence of your
>> empirical refutation. Please externalise it so we can internalise it :)
>>
>>
>>
>>
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson





Re: [agi] COMP = false

2008-10-04 Thread Ben Goertzel
On Sat, Oct 4, 2008 at 8:37 PM, Mike Tintner <[EMAIL PROTECTED]>wrote:

> Matt:The problem you describe is to reconstruct this image given the highly
> filtered and compressed signals that make it through your visual perceptual
> system, like when an artist paints a scene from memory. Are you saying that
> this process requires a consciousness because it is otherwise not
> computable? If so, then I can describe a simple algorithm that proves you
> are wrong: try all combinations of pixels until you find one that looks the
> same.
>
> Matt,
>
> Simple? Well, you're good at maths. Can we formalise what you're arguing? A
> computer screen, for argument's sake.  800 x 600, or whatever. Now what is
> the total number of (diverse) objects that can be captured on that screen,
> and how long would it take your algorithm to enumerate them?
>
> (It's an interesting question, because my intuition says to me that there
> is an infinity of objects that can be depicted on any screen (or drawn on a
> page). Are you saying that there aren't? -



There is a finite number of possible screen-images, at least from the point
of view of the process sending digital signals to the screen.

If the monitor refreshes each pixel N times per second, then over an
interval of T seconds, if each pixel can show C colors, then there are

C^(N*T*800*600)

possible different scenes showable on the screen during that time period

A big number but finite!

Drawing on a page is a different story, as it gets into physics questions,
but it seems rather likely there is a finite number of pictures on the page
that are distinguishable by a human eye.

So, whether or not an infinite number of objects exist in the universe, only
a finite number of distinctions can be drawn on a monitor (for certain), or
by an eye (almost surely)
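
To put rough numbers on this (assuming, for illustration only, 24-bit color
and a 60 Hz refresh over a 10-second window, neither of which is specified in
the thread):

    # Rough size of the (finite) space of screen images; the refresh rate
    # and time window are illustrative assumptions, not from the thread.
    from math import log10

    width, height, bits = 800, 600, 24

    # Distinct static frames: (2**bits) ** (width * height)
    static_digits = int(width * height * bits * log10(2)) + 1
    print(f"static frames: a number with about {static_digits:,} decimal digits")

    # Distinct scenes over T seconds at N refreshes/second: C^(N*T*width*height)
    N, T = 60, 10
    scene_digits = int(N * T * width * height * bits * log10(2)) + 1
    print(f"10-second scenes: a number with about {scene_digits:,} decimal digits")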

ben g





Re: [agi] COMP = false

2008-10-04 Thread Ben Goertzel
Ok, at a single point in time on an 800x600 screen, if one is using 24-bit
color (usually called "true color") then the number of possible images is

2^(800x600x24)

which is, roughly, 10 with a few million zeros after it ... way bigger
than a googol, way way smaller than a googolplex ;-)

This is a large number, but so what?

Of course, the human eye would not be able to tell the difference between
all these different images; that's a whole different story...

I don't see why these middle-school calculations are of interest?? ... this
has nothing to do with any of the philosophical issues under discussion,
does it?

ben

On Sat, Oct 4, 2008 at 9:22 PM, Mike Tintner <[EMAIL PROTECTED]>wrote:

>  Ben,
>
> Thanks for reply. I'm a bit lost though. How does this formula take into
> account the different pixel configurations of different objects? (I would
> have thought we can forget about the time of display and just concentrate on
> the configurations of points/colours, but no doubt I may be wrong).
>
> Roughly how large a figure do you come up with, BTW?
>
> I guess a related question is the old one - given a keyboard of letters,
> what are the total number of works possible with say 500,000 key presses,
> and how many 500,000-press attempts will it (or could it) take the
> proverbial monkey to type out, say, a 50,000 word play called Hamlet?
>
> In either case, I would imagine, the numbers involved are too large to be
> practically manageable in, say, this universe, (which seems to be a common
> yardstick). Comments?   The maths here does seem important, because it seems
> to me to be the maths of creativity - and creative possibilities - in a
> given medium. A somewhat formalised maths, since creators usually find ways
> to transcend and change their medium - but useful nevertheless. Is such a
> maths being pursued?
>
>
> On Sat, Oct 4, 2008 at 8:37 PM, Mike Tintner <[EMAIL PROTECTED]>wrote:
>
>> Matt:The problem you describe is to reconstruct this image given the
>> highly filtered and compressed signals that make it through your visual
>> perceptual system, like when an artist paints a scene from memory. Are you
>> saying that this process requires a consciousness because it is otherwise
>> not computable? If so, then I can describe a simple algorithm that proves
>> you are wrong: try all combinations of pixels until you find one that looks
>> the same.
>>
>> Matt,
>>
>> Simple? Well, you're good at maths. Can we formalise what you're arguing?
>> A computer screen, for argument's sake.  800 x 600, or whatever. Now what is
>> the total number of (diverse) objects that can be captured on that screen,
>> and how long would it take your algorithm to enumerate them?
>>
>> (It's an interesting question, because my intuition says to me that there
>> is an infinity of objects that can be depicted on any screen (or drawn on a
>> page). Are you saying that there aren't? -
>
>
>
> There is a finite number of possible screen-images, at least from the point
> of view of the process sending digital signals to the screen.
>
> If the monitor refreshes each pixel N times per second, then over an
> interval of T seconds, if each pixel can show C colors, then there are
>
> C^(N*T*800*600)
>
> possible different scenes showable on the screen during that time
> period
>
> A big number but finite!
>
> Drawing on a page is a different story, as it gets into physics questions,
> but it seems rather likely there is a finite number of pictures on the page
> that are distinguishable by a human eye.
>
> So, whether or not an infinite number of objects exist in the universe,
> only a finite number of distinctions can be drawn on a monitor (for
> certain), or by an eye (almost surely)
>
> ben g
>  --
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson





Re: [agi] OpenCogPrime for Dummies [NOT]

2008-10-05 Thread Ben Goertzel
Hmmm ... I doubt that a quick and dirty nontechnical wiki page on
opencogprime would help anyone much ... a decent nontechnical exposition of
the ideas would require a lot of legwork in terms of presenting background
material and clarifying preliminary terms and ideas ... without that,
discussion just bogs down on preliminary points and never even gets to the
main unique aspects of the design (which is what usually happens in
discussions of the design on this list ;-)

ben

On Sun, Oct 5, 2008 at 3:45 AM, Steve Richfield
<[EMAIL PROTECTED]>wrote:

> Ben,
>
> Perhaps if you just threw together a quick and dirty Wiki page and asked
> others to clean it up, then it might evolve into what the world needs here.
>
> Also, it would organize discussions and disputes, e.g. by allowing people
> to put questions into the explanation, to then be answered through editing
> rather than the back and forth banter on this forum.
>
> With luck, it would help wring your ideas out and disarm your detractors,
> and provide more than a mere writeup - a piece to help sell your concept on
> a wider scale.
>
> Steve Richfield
> ===
> On 10/1/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
>>
>>
>> On Wed, Oct 1, 2008 at 2:07 PM, Steve Richfield <
>> [EMAIL PROTECTED]> wrote:
>>
>>> Ben,
>>>
>>> I have been eagerly awaiting such a document. However, the Grand
>>> Technical Guru (i.e. you) is usually NOT the person to write such a thing.
>>> Usually, an associate, user, author, or some such person who is on the user
>>> side rather than on the implementing side. Separated from the lock washers
>>> and solder, these people usually paint the picture and portray the dream in
>>> clearer language.
>>>
>>
>> Interesting.  That is what Michael Rae helped Aubrey de Grey do in their
>> coauthored book "Ending Aging", I would say...
>>
>> As I have some experience doing science journalism myself (I wrote
>> newspaper articles for a German paper for a while), I think I'm well
>> qualified to write such a thing ... but I just don't have the time right
>> now   And, of course, it can be worthwhile for the purposes of writing
>> such a thing, to have a little more mental distance from the details than I
>> would ever achieve...
>>
>>
>>> Is there such a person for OpenCogPrime? If not, then I guess I'll just
>>> have to go on waiting for your creation.
>>>
>>>
>>
>> There are several people with the *ability* to write such a book
>> associated with OpenCog Prime ... but none at the moment with both the
>> ability and the free time ... due to the relevant folks having the need to
>> work for a living ... but who knows, maybe a person with the requisite
>> combination of ability/time will emerge ... time will tell...
>>
>> ben
>>
>>
>>
>>
>>
>>  --
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson





Re: [agi] I Can't Be In Two Places At Once.

2008-10-05 Thread Ben Goertzel
>
> 3. I think it is extremely important, that we give an AGI no bias about
> space and time as we seem to have. Our intuitive understanding of space and
> time is useful for our life on earth but it is completely wrong as we know
> from theory of relativity and quantum physics.
>
> -Matthias Heger



Well, for the purpose of creating the first human-level AGI, it seems
important **to** wire in humanlike bias about space and time ... this will
greatly ease the task of teaching the system to use our language and
communicate with us effectively...

But I agree that not **all** AGIs should have this inbuilt biasing ... for
instance an AGI hooked directly to quantum microworld sensors could become a
kind of "quantum mind" with a totally different intuition for the physical
world than we have...

ben g





Re: [agi] OpenCogPrime for Dummies [NOT]

2008-10-05 Thread Ben Goertzel
>
>
> On 10/5/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
>>
>> Hmmm ... I doubt that a quick and dirty nontechnical
>>
>
> I would think that it should be technical, e.g. targeted for someone with a
> CS degree, but written as though the reader had never heard of OpenCog.
>


There are already several academic conference papers that overview the
Novamente Cognition Engine, which are at novamente.net/papers ... e.g.
"Patterns, Hypergraphs and General Intelligence"

At the level of abstraction dealt with in such a paper, the NCE is about the
same as OpenCog, so I suppose those serve for exposition at that level

These papers are referenced at the start of the OpenCogPrime wikibook, which
is on the OpenCog wiki site


>
>
>
>> wiki page on opencogprime would help anyone much ... a decent nontechnical
>> exposition of the ideas would require a lot of legwork in terms of
>> presenting background material
>>
>
> Hasn't someone else done this on a web page somewhere that you could
> provide a hyperlink to?
>

No, there are no nice semi-technical expositions of such topics as
probabilistic logic or estimation-of-distribution algorithms out there, for
example.
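
For what it's worth, the core idea of an estimation-of-distribution algorithm
does fit in a few lines; the Python toy below is a univariate, UMDA-style EDA
maximizing the number of 1-bits, which is of course far simpler than anything
used in a real AGI design:

    # Univariate marginal distribution algorithm (UMDA) on OneMax: keep a
    # probability vector, sample a population, keep the better half, and
    # re-estimate the per-bit probabilities from the survivors.
    import random

    def umda_onemax(n_bits=20, pop_size=50, generations=30, seed=0):
        rng = random.Random(seed)
        p = [0.5] * n_bits
        for _ in range(generations):
            pop = [[1 if rng.random() < p[i] else 0 for i in range(n_bits)]
                   for _ in range(pop_size)]
            pop.sort(key=sum, reverse=True)             # fitness = number of 1s
            elite = pop[:pop_size // 2]
            p = [sum(ind[i] for ind in elite) / len(elite) for i in range(n_bits)]
            p = [min(max(pi, 0.02), 0.98) for pi in p]  # keep some exploration
        return p

    print([round(pi, 2) for pi in umda_onemax()])       # probabilities near 1.0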



>
>
>
>> and clarifying preliminary terms and ideas ...
>>
>
> You just need a BIG glossary.
>


yes, that is definitely true and has been on the todo list for a while...


>
>
>
>> without that, discussion just bogs down on preliminary points and never
>> even gets to the main unique aspects of the design (which is what usually
>> happens in discussions of the design on this list ;-)
>>
>
> Exactly my point. Just provide some good references, write a comprehensive
> glossary (note that it is OK to include phrases as well as words in your
> glossary), and add to it as necessary, and just refer "preliminary point"
> queries to the appropriate reference or glossary entry. In addition to doing
> a better job of working together, I suspect that this would actually be less
> work for you than the current process is.
>
>


All this is already done except the glossary



> BTW, my own thought is that nearly every part of AI is initially oversold,
> falls short of its goals, but eventually finds a useful niche.
>


Yes, that has happened in the past, but that doesn't imply it is
intrinsically the fate of AGI projects...

I began my journey into AGI by trying to arrive at a correct, comprehensive
holistic overview of how the mind works.  See my 2006 book "The Hidden
Pattern."  Most attempts at AGI have not begun this way, but have rather
begun with some particular technical trick that the inventor believed to
have more power than it really did...

-- Ben G





Re: [agi] COMP = false

2008-10-05 Thread Ben Goertzel
 the models are metaphors and cannot
> claim any further veridicality. Indeed it regards the problem as extreme and
> unresolved. How is it that anyone can assume that vision, an even harder and
> structurally identical inverse problem, is somehow possible with only the
> retinal impact? Please read Nunez for the appropriate background:
> Nunez, P. L. and Srinivasan, R., Electric fields of the brain : the
> neurophysics of EEG, 2nd ed., Oxford University Press, Oxford, New York,
> 2006
>
> I am realising that I may have a contribution to make to AGI by helping
> strengthen its science base. I've run out of Sunday, so I'd like to leave
> the discussion there... to be continued sometime.
>
> Meanwhile I'd encourage everyone to get used to the idea that to be
> involved in AGI is to _not_ be involved in purely COMP principles. Purely
> COMP = traditional domain-bound AI. It will produce very good results in
> specific problem areas and will be fragile and inflexible when encountering
> novelty. AI will remain a perfectly valid target for very exciting COMP
> based solutions. However those solutions will never be AGI. Continuation
> with purely COMP approach is a strategically fatal flaw which will result in
> a club, not a scientific discipline. This is of great concern to me. Please
> sit back and let this realisation wash over you. It's what I had to do. I
> used to think in COMP terms too. And have fun! This is supposed to be fun!
>
> cheers
> Colin Hales
>
> Ben Goertzel wrote:
>
>
> The argument seems wrong to me intuitively, but I'm hard-put to argue
> against it because the terms are so unclearly defined ... for instance I
> don't really know what you mean by a "visual scene" ...
>
> I can understand that to create a form of this argument worthy of being
> carefully debated, would be a lot more work than writing this summary email
> you've given.
>
> So, I agree with your judgment not to try to extensively debate the
> argument in its current sketchily presented form.
>
> If you do choose to present it carefully at some point, I encourage you to
> begin by carefully defining all the terms involved ... otherwise it's really
> not possible to counter-argue in a useful way ...
>
> thx
> ben g
>
> On Sat, Oct 4, 2008 at 12:31 AM, Colin Hales <[EMAIL PROTECTED]
> > wrote:
>
>> Hi Mike,
>> I can give the highly abridged flow of the argument:
>>
>> !) It refutes COMP , where COMP = Turing machine-style abstract symbol
>> manipulation. In particular the 'digital computer' as we know it.
>> 2) The refutation happens in one highly specific circumstance. In being
>> false in that circumstance it is false as a general claim.
>> 3) The circumstances:  If COMP is true then it should be able to implement
>> an artificial scientist with the following faculties:
>>   (a) scientific behaviour (goal-delivery of a 'law of nature', an
>> abstraction BEHIND the appearances of the distal natural world, not merely
>> the report of what is there),
>>   (b) scientific observation based on the visual scene,
>>   (c) scientific behaviour in an encounter with radical novelty. (This is
>> what humans do)
>>
>> The argument's empirical knowledge is:
>> 1) The visual scene is visual phenomenal consciousness. A highly specified
>> occipital lobe deliverable.
>> 2) In the context of a scientific act, scientific evidence is 'contents of
>> phenomenal consciousness'. You can't do science without it. In the context
>> of this scientific act, visual P-consciousness and scientific evidence are
>> identities. P-consciousness is necessary but on its own is not sufficient.
>> Extra behaviours are needed, but these are a secondary consideration here.
>>
>> NOTE: Do not confuse "scientific observation"  with the "scientific
>> measurement", which is a collection of causality located in the distal
>> external natural world. (Scientific measurement is not the same thing as
>> scientific evidence, in this context). The necessary feature of a visual
>> scene is that it operate whilst faithfully inheriting the actual causality
>> of the distal natural world. You cannot acquire a law of nature without this
>> basic need being met.
>>
>> 3) Basic physics says that it is impossible for a brain to create a visual
>> scene using only the inputs acquired by the peripheral stimulus received at
>> the retina. This is due to fundamentals of quantum degeneracy. Basically
>> there are an infinite number of distal external worlds that can deliver the
>> exact same photon impact. The t

Re: [agi] COMP = false

2008-10-05 Thread Ben Goertzel
Abram,

thx for restating his argument


>
> Your argument appears to assume computationalism. Here is a numbered
> restatement:
>
> 1. We have a visual experience of the world.
> 2. Science says that the information from the retina is insufficient
> to compute one.


I do not understand his argument for point 2


>
> 3. Therefore, we must get more information.
> 4. The only possible sources are material and spatial.
> 5. Material is already known to be insufficient, therefore we must
> also get spatial info.


ben





Re: [agi] COMP = false

2008-10-05 Thread Ben Goertzel
On Sun, Oct 5, 2008 at 7:41 PM, Abram Demski <[EMAIL PROTECTED]> wrote:

> Ben,
>
> I have heard the argument for point 2 before, in the book by Pinker,
> "How the Mind Works". It is the inverse-optics problem: physics can
> predict what image will be formed on the retina from material
> arrangements, but if we want to go backwards and find the arrangements
> from the retinal image, we do not have enough data at all. Pinker
> concludes that we do it using cognitive bias.



I understood Pinker's argument, but not Colin Hales's ...

Also, note cognitive bias can be learned rather than inborn (though in this
case I imagine it's both).

Probably we would be very bad at seeing environments different from those we
evolved in, until after we'd gotten a lot of experience in them...
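
A trivially small numerical illustration of that underdetermination, in
Python (a made-up Lambertian-style toy, not a model of real vision): a
measured pixel intensity is the product of surface reflectance and
illumination, so many scene hypotheses reproduce it exactly, and some bias is
needed to pick one.

    # Toy illustration of the inverse-optics underdetermination: observed
    # intensity = reflectance * illumination, so many different "scenes"
    # produce exactly the same pixel value.  (A made-up toy, not a model
    # of real vision.)
    observed = 0.24

    hypotheses = [(r, observed / r) for r in (0.3, 0.4, 0.6, 0.8)]
    for reflectance, illumination in hypotheses:
        rendered = reflectance * illumination      # forward rendering is easy
        assert abs(rendered - observed) < 1e-12
        print(f"reflectance={reflectance:.2f}, "
              f"illumination={illumination:.3f} -> pixel={rendered:.2f}")
    # Inverting needs extra constraints (priors / "bias"); the data alone
    # do not decide among these hypotheses.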

ben





Re: [agi] COMP = false

2008-10-05 Thread Ben Goertzel
On Sun, Oct 5, 2008 at 7:59 PM, Abram Demski <[EMAIL PROTECTED]> wrote:

> Agreed. Colin would need to show the inadequacy of both inborn and
> learned bias to show the need for extra input. But I think the more
> essential objection is that extra input is still consistent with
> computationalism.


Yes.

Also, the visual input to an AGI need not be particularly similar to that of
the human eye ... so if the human brain were somehow getting extra "visual
scene relevant" stimuli from some currently non-suspected source [which I
doubt], this doesn't imply that an AGI would need to use a comparable
source.

Arguably, for instance, camera+lidar gives enough data for reconstruction of
the visual scene ... note that lidar gives more accurate 3D depth data
than stereopsis...
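
As a generic illustration of how two such depth estimates are often combined
(inverse-variance weighting in Python; illustrative only, not the method of
any particular system mentioned here), sparse but precise lidar returns can
anchor dense but noisier stereo estimates:

    # Generic sketch of fusing a noisy stereo depth estimate with a tighter
    # lidar return by inverse-variance weighting.  (Illustrative only.)
    def fuse(depth_a, var_a, depth_b, var_b):
        """Minimum-variance combination of two independent depth estimates."""
        w_a, w_b = 1.0 / var_a, 1.0 / var_b
        fused = (w_a * depth_a + w_b * depth_b) / (w_a + w_b)
        return fused, 1.0 / (w_a + w_b)

    stereo = (10.4, 0.25)    # meters, variance (noisier at range)
    lidar = (10.02, 0.01)    # meters, variance (much tighter)
    print(fuse(*stereo, *lidar))   # result is dominated by the lidar reading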

ben g





Re: [agi] COMP = false

2008-10-05 Thread Ben Goertzel
cool ... if so, I'd be curious for the references... I'm not totally up on
that area...

ben

On Sun, Oct 5, 2008 at 8:20 PM, Trent Waddington <[EMAIL PROTECTED]
> wrote:

> On Mon, Oct 6, 2008 at 10:03 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> > Arguably, for instance, camera+lidar gives enough data for reconstruction
> of
> > the visual scene ... note that lidar gives more more accurate 3D depth
> ata
> > than stereopsis...
>
> Is that even true anymore?  I thought the big revolution in the last
> 12 months was that machine learning algorithms are finally producing
> better-than-lidar depth estimates (just not in realtime).
>
> Trent
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson





Re: [agi] COMP = false

2008-10-05 Thread Ben Goertzel
I suppose what you mean is something like:

***
The information from the retina is inadequate to construct a representation
of the world around the human organism that is as accurate as could be
constructed by an ideal perceiving-system receiving the same light beams
that the human eye receives.
***

But, so what?

The human mind plainly does NOT have a world-representation of this level of
accuracy.  Our subjectively-perceived visual world is largely made up, as is
convincingly demonstrated by our filled-in blind spots, the raft of optical
illusions, and so forth.  We fill in the gaps left by our flawed visual
systems using all sorts of inherited and learned inductive bias...

-- Ben G


On Sun, Oct 5, 2008 at 9:05 PM, Colin Hales <[EMAIL PROTECTED]>wrote:

> OK. Last one!
> Please replace 2) with:
>
> 2. Science says that the information from the retina is insufficient
> to construct a visual scene.
>
> Whether or not that 'construct' arises from computation is a matter of
> semantics. I would say that it could be considered computation - natural
> computation by electrodynamic manipulation of natural symbols. Not
> abstractions of the kind we manipulate in the COMP circumstance. That is why
> I use the term COMP...
>
> It's rather funny: you could redefine computation to include natural
> computation (through the natural causality that is electrodynamics as it
> happens in brain material). Then you could claim computationalism to be
> true. But you'd still behave the same: you'd be unable to get AGI from a
> Turing machine. So you'd flush all traditional computers and make new
> technology Computationalism would then be true but 100% useless as a
> design decision mechanism. Frankly I'd rather make AGI that works than be
> right according to a definition!  The lesson is that there's no pracitcal
> use in being right according to a definition! What you need to be able to do
> is make successful choices.
>
>
> OK. Enough. A very enjoyable but sticky thread...I gotta work!
>
> cheers all for now.
>
> regards
>
> Colin
>
>
> Abram Demski wrote:
>
>> Colin,
>>
>> I believe you did not reply to my points? Based on your definition of
>> computationalism, it appears that my criticism of your argument does
>> apply after all. To restate:
>>
>> Your argument appears to assume computationalism. Here is a numbered
>> restatement:
>>
>> 1. We have a visual experience of the world.
>> 2. Science says that the information from the retina is insufficient
>> to compute one.
>> 3. Therefore, we must get more information.
>> 4. The only possible sources are material and spatial.
>> 5. Material is already known to be insufficient, therefore we must
>> also get spatial info.
>>
>> Computationalism is assumed to get from #2 to #3. If we do not assume
>> computationalism, then the argument would look more like this:
>>
>> 1. We have a visual experience of the world.
>> 2. Science says that the information from the retina is insufficient
>> to compute one.
>> 3. Therefore, our visual experience is not computed.
>>
>> This is obviously unsatisfying because it doesn't say where the visual
>> scene comes from; answers range from prescience to quantum
>> hypercomputation, but that does not seem important to the current
>> issue.
>>
>> --Abram
>>
>>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson




Re: [agi] COMP = false

2008-10-05 Thread Ben Goertzel
On Sun, Oct 5, 2008 at 11:16 PM, Abram Demski <[EMAIL PROTECTED]> wrote:

> Ben,
>
> I think the entanglement possibility is precisely what Colin believes.
> That is speculation on my part of course. But it is something like
> that. Also, it is possible that quantum computers can do more than
> normal computers-- just not under the current theories. Colin hinted
> at some physics experiments.



Well, it is possible that some physical systems can do more than quantum
computers as currently conceived.

However, I think it's better to reserve the term "quantum computers" to
refer to "computers as deemed possible according to quantum mechanics" ...

If Penrose and other radicals are right, then "quantum gravity computers"
may be able to do stuff that quantum computers can't do ... which is
physically sensible, since quantum mechanics is known not to be a complete
theory of the physical universe.



>
>
> As for your views on quantum probability... I must humbly disagree
> with them. I have not read up on this literature, but for now I'm
> going to stick with what Wikipedia has told me:
>
> "The more common view regarding quantum logic, however, is that it
> provides a formalism for relating observables, system preparation
> filters and states. In this view, the quantum logic approach resembles
> more closely the C*-algebraic approach to quantum mechanics; in fact
> with some minor technical assumptions it can be subsumed by it. The
> similarities of the quantum logic formalism to a system of deductive
> logic may then be regarded more as a curiosity than as a fact of
> fundamental philosophical importance."


Yes, I know that view of course, and I think it's wrong ... but, arguing
that stuff on this list would be too much of a digression, as well as
something I don't have time for tonight ;-)
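
For reference, though, and without arguing the interpretation here: the
concrete non-classical feature at stake is the failure of the distributive
law in the lattice of quantum propositions. The standard spin-1/2 example,
in LaTeX sketch form:

% Take p = [S_x = +1/2], q = [S_z = +1/2], r = [S_z = -1/2].
% q v r spans the whole two-dimensional state space, while the subspaces for
% p^q and p^r intersect only at the origin, so:
\[
  p \wedge (q \vee r) \;=\; p \;\neq\; \mathbf{0}
  \;=\; (p \wedge q) \vee (p \wedge r) .
\]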

Will be fun to debate w/you someday though!!

ben g




