Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-12-01 Thread Ben Goertzel
 We cannot
 ask Feynman, but I did ask Deutsch. He not only thinks QM describes our
 most basic physical reality (he thinks math and computer science are
 grounded in quantum mechanics), but he even takes his theory of parallel
 universes quite seriously! And he is not alone. Speaking for myself, I
 would agree with you, but I think we would need to relativize the concept
 of agreement. I don't think QM is just another model of merely
 mathematical value for making finite predictions. I think physical models
 say something about our physical reality. If you deny QM as part of our
 physical reality, then I guess you deny any other physical model, and I
 wonder what is left to you. You would perhaps embrace total skepticism,
 perhaps even solipsism. Current trends have moved from there to more
 relativized positions, where models are considered just that, models, but
 still with some value as descriptions of our actual physical reality
 (just as Newtonian physics is not simply wrong after General Relativity,
 since it still describes a huge part of our physical reality).


Well, I don't embrace solipsism, but that is really a philosophical and
personal rather than scientific matter ...

And I'm not going to talk here about what is, which IMO is not a matter
for science ... but merely about what science can tell us.

And, science cannot tell us whether QM or some empirically-equivalent,
wholly randomness-free theory is the right one...

ben g




Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-12-01 Thread Philip Hunt
2008/12/1 Ben Goertzel [EMAIL PROTECTED]:

 And, science cannot tell us whether QM or some empirically-equivalent,
 wholly randomness-free theory is the right one...

If two theories give identical predictions under all circumstances
about how the real world behaves, then they are not two separate
theories; they are merely rewordings of the same theory. And choosing
between them is arbitrary; you may prefer one to the other because
human minds can visualise it more easily, or because it's easier to
calculate with, or because you have an aesthetic preference for it.

-- 
Philip Hunt, [EMAIL PROTECTED]
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-12-01 Thread Ben Goertzel
 If two theories give identical predictions under all circumstances
 about how the real world behaves, then they are not two separate
 theories; they are merely rewordings of the same theory. And choosing
 between them is arbitrary; you may prefer one to the other because
 human minds can visualise it more easily, or because it's easier to
 calculate with, or because you have an aesthetic preference for it.

 --
 Philip Hunt, [EMAIL PROTECTED]



However, the two theories may still have very different consequences
**within the minds of the community of scientists** ...

Even though T1 and T2 are empirically equivalent in their predictions,
T1 might have a tendency to lead a certain community of scientists
in better directions, in terms of creating new theories later on

However, empirically validating this property of T1 is another question ...
which leads one to the topic of scientific theories about the sociological
consequences of scientific theories ;-)

ben g




Re: [agi] AIXI

2008-12-01 Thread Matt Mahoney
--- On Sun, 11/30/08, Philip Hunt [EMAIL PROTECTED] wrote:

 Can someone explain AIXI to me?

AIXI models an intelligent agent interacting with an environment as a pair of 
interacting Turing machines. At each step, the agent outputs a symbol to the 
environment, and the environment outputs a symbol and a numeric reward signal 
to the agent. The goal of the agent is to maximize the accumulated reward.

Hutter proved that the optimal solution is for the agent to guess, at each 
step, that the environment is simulated by the shortest program that is 
consistent with the interaction observed so far.

Hutter also proved that the optimal solution is not computable, because the 
agent can't know which of its guesses are halting Turing machines. The best it 
can do is pick numbers L and T, try all 2^L programs up to length L for T steps 
each in order of increasing length, and guess the first one that is consistent. 
If there are no matches, then it needs to choose larger L and T and try again. 
That solution is called AIXI^TL. Its time complexity is O(T 2^L). In general, 
it may require L up to the length of the observed interaction (because there is 
always a fast program that simply outputs the agent's observations from a 
stored list of that length).
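
A minimal sketch of that search loop, in Python. This is illustrative only: 
enumerate_programs and run_with_limit are hypothetical helpers standing in for 
"all programs of length <= L, shortest first" and "run a program for at most T 
steps and return the interaction history it generates (or None if it doesn't 
finish)"; they are not part of Hutter's formalism.

    def aixi_tl_guess(observations, L, T, enumerate_programs, run_with_limit):
        """Return the shortest program consistent with the interaction
        observed so far, or None if none is found within the L, T bounds."""
        for program in enumerate_programs(L):       # on the order of 2^L candidates
            prediction = run_with_limit(program, T)
            if prediction == observations:          # consistent with the history so far
                return program
        return None                                 # caller retries with larger L and T

If this returns None, L and T are increased and the search repeats, which is 
where the O(T 2^L) cost comes from.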

In a separate paper ( http://www.vetta.org/documents/ui_benelearn.pdf ), Legg 
and Hutter propose defining universal intelligence as the expected reward of an 
AIXI agent in random environments.
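
(For reference, the measure they define for an arbitrary agent pi is, roughly 
and in their notation, a complexity-weighted sum of expected rewards over 
computable environments:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

where K(mu) is the Kolmogorov complexity of environment mu and V^pi_mu is the 
agent's expected total reward in mu; AIXI is, roughly, the agent that maximizes 
this quantity.)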

The value of AIXI is not that it solves the general intelligence problem, but 
rather that it explains why the problem is so hard. It also justifies a general 
principle that is already used in science and in practical machine learning 
algorithms: choose the simplest hypothesis that fits the data. It formally 
defines "simple" as the length of the shortest program that outputs a 
description of the hypothesis.

For example, to avoid overfitting in neural networks, you should use the 
smallest number of connections and the least amount of training needed to fit 
the training data, then stop. In this case, the complexity of your neural 
network is the length of the shortest program that outputs the configuration of 
your network and its weights. Even if you don't know what that program is, and 
haven't chosen a programming language, you may reasonably expect that fewer 
connections, smaller weights, and coarser weight quantization will result in a 
shorter program.
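
To make that last point concrete, here is a toy calculation (my own 
illustration, not part of AIXI) that treats "program length" as simply the 
bits needed to list the quantized weights, a crude stand-in for the true, 
uncomputable Kolmogorov complexity:

    import math

    def description_bits(num_connections, weight_range, quantization_step):
        # Bits needed to write down each connection's weight at the given
        # quantization, times the number of connections: a rough upper bound
        # on the length of a program that prints the network.
        levels = max(2, int(weight_range / quantization_step))
        return num_connections * math.ceil(math.log2(levels))

    print(description_bits(10000, 2.0, 0.001))  # ~110000 bits
    print(description_bits(1000, 2.0, 0.1))     # ~5000 bits: fewer, coarser weights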

-- Matt Mahoney, [EMAIL PROTECTED]






Re: [agi] AIXI

2008-12-01 Thread Jim Bromer
I really appreciate Matt's comments about this, even though I am wary
of the field.  It is important to have some idea of why the AI
problem is so hard, and that insight is best conveyed with some
descriptive explanation like Matt's message.  Of course, if no one is
asking why, then the poster has to wonder whether he should explain it.

However, I do not believe that the proposition that the shortest
program reproducing the trial results constitutes a solution to an AI
problem is a sound philosophical basis for AGI.  We need to be able to
show that the program can learn about new things.  Since this
requirement has to be expressed as an open-ended statement in some
vague general form, it is impossible, or at least very hard, to define a
definitive test that could be used to establish the shortest
program that can achieve the goal.  Instead we use techniques that
seem to be adaptable and then try to figure out how to
systematically deal with all of the errors that these methods tend to
produce.

Jim Bromer






Re: [agi] AIXI

2008-12-01 Thread Philip Hunt
That was helpful. Thanks.


-- 
Philip Hunt, [EMAIL PROTECTED]
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] AIXI

2008-12-01 Thread Vladimir Nesov
On Mon, Dec 1, 2008 at 8:04 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

 The value of AIXI is not that it solves the general intelligence problem, but 
 rather that it explains why the problem is so hard.

It doesn't explain why it's hard (is "impossible" the same as "hard"?).
The fact that you can't solve a problem exactly doesn't mean that there
is no simple satisfactory solution.


 It also justifies a general principle that is
 already used in science and in practical machine learning algorithms:
 to choose the simplest hypothesis that fits the data. It formally defines
 simple as the length of the shortest program that outputs a description
 of the hypothesis.

That is Solomonoff's universal induction, a much earlier result. Hutter
generalized Solomonoff induction to decision-making and proved some
new results, but the idea of a simplicity prior over hypotheses, and the
proof that it performs well at learning, are Solomonoff's.
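
(For reference, Solomonoff's prior assigns the data sequence x a probability
dominated by its shortest descriptions on a universal prefix machine U, roughly

    M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}

where the sum is over programs p whose output begins with x; shorter programs,
i.e. simpler hypotheses, dominate the sum.)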

See ( http://www.scholarpedia.org/article/Algorithmic_probability )
for introduction.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Seeking CYC critiques

2008-12-01 Thread Steve Richfield
Steve,

The KRAKEN paper was quite interesting, and has a LOT in common with my own
Dr. Eliza. However, I saw no mention of Dr. Eliza's secret sauce, which
boosts it from answering questions to solving problems given symptoms. The
secret sauce has two primary ingredients:
1.  The syntax of differential symptom statements - how people state a
symptom in a way that separates it from similar symptoms of other conditions.
2.  Questions whose answers will probably carry recognizable differential
symptom statements as in #1 above.
Both of the above seem to require domain-*experienced* people to code, as
book learning doesn't seem to convey what people typically say, or what you
have to say to them to get them to state their symptom in a differential
way. Also, I suspect that knowledge coded today wouldn't work well in 50
years, when common speech has shifted.

I finally gave up on having Dr. Eliza answer questions, because the round
trip error rate seemed to be inescapably high. This is the product of:

1.  The user's flaws in their world model.
2.  The user's flaws in formulating their question.
3.  The computer's errors in parsing the question.
4.  The computer's errors in formulating an answer.
5.  The user's errors in understanding the answer.
6.  The user's errors from filing the answer into a flawed world model.

Between each adjacent pair of these is:

x.5  English's shortcomings as a medium for accurately stating the
knowledge, question, or answer.

While each of these could be kept to perhaps 5%, it seemed completely hopeless
to reduce the overall error rate to a level low enough to actually make it good
for anything useful. Of course, everyone on this forum concentrates on #3 above,
when in the real world this is often/usually swamped by the others. Hence,
I am VERY curious: has KRAKEN found a worthwhile/paying niche in the world
with its question answering, where people actually use it to their benefit?
If so, then how did they deal with the round-trip error rate?
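
A quick back-of-envelope illustration of that compounding (assuming, purely
for illustration, six error sources plus five language interfaces, each
independently held to 5%):

    stages = 11                    # 6 numbered sources + 5 "x.5" interfaces between them
    per_stage_success = 0.95       # i.e., each stage held to a 5% error rate
    round_trip_success = per_stage_success ** stages
    print(round(1 - round_trip_success, 2))   # ~0.43: roughly 43% of round trips compromised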

KRAKEN contains lots of good ideas, several of which were already on my wish
list for Dr. Eliza sometime in the future. I suspect that a merger of
technologies might be a world-beater.

I wonder if the folks at Cycorp would be interested in such an effort?

BTW, http://www.DrEliza.com is up and down these days, with plans for a new
and more reliable version to be installed next weekend.

Any thoughts?

Steve Richfield
==
On 11/29/08, Stephen Reed [EMAIL PROTECTED] wrote:

  Hi Robin,
 There are no Cyc critiques that I know of in the last few years.  I was
 employed seven years at Cycorp until August 2006 and my non-compete
 agreement expired a year later.

 An interesting competition was held by Project Halo
 (http://www.projecthalo.com/halotempl.asp?cid=30), in which Cycorp
 participated along with two other research groups to demonstrate
 human-level competency in answering chemistry questions.  Results are here:
 http://www.projecthalo.com/content/docs/ontologies_in_chemistry_ISWC2.pdf
 Although Cycorp performed principled deductive inference giving detailed
 justifications, it was judged to have performed inferior due to the
 complexity of its justifications and due to its long running times.  The
 other competitors used special purpose problem solving modules whereas
 Cycorp used its general purpose inference engine, extended for chemistry
 equations as needed.

 My own interest is in natural language dialog systems for rapid knowledge
 formation.  I was Cycorp's first project manager for its participation in
 the DARPA Rapid Knowledge Formation project, where it performed to
 DARPA's satisfaction, but subsequently its RKF tools never lived up to
 Cycorp's expectations that subject matter experts could rapidly extend the
 Cyc KB without Cycorp ontological engineers having to intervene.  A Cycorp
 paper describing its KRAKEN system is here:
 http://www.cyc.com/doc/white_papers/iaai.pdf

 I would be glad to answer questions about Cycorp and Cyc technology to the
 best of my knowledge, which is growing somewhat stale at this point.

 Cheers.
 -Steve


 Stephen L. Reed

 Artificial Intelligence Researcher
 http://texai.org/blog
 http://texai.org
 3008 Oak Crest Ave.
 Austin, Texas, USA 78704
 512.791.7860


 From: Robin Hanson [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Saturday, November 29, 2008 9:46:09 PM
 Subject: [agi] Seeking CYC critiques

 What are the best available critiques of CYC as it exists now (vs. soon
 after project started)?

 Robin Hanson  [EMAIL PROTECTED]  http://hanson.gmu.edu
 Research Associate, Future of Humanity Institute at Oxford University
 Associate Professor of Economics, George Mason University
 MSN 1D3, Carow Hall, Fairfax VA 22030-
 703-993-2326  FAX: 703-993-2323


Re: [agi] Seeking CYC critiques

2008-12-01 Thread Steve Richfield
Mike,

On 12/1/08, Mike Tintner [EMAIL PROTECTED] wrote:

  I wonder whether you'd like to outline an additional list of
 English/language's shortcomings here. I've just been reading Gary Marcus'
 Kluge - he has a whole chapter on language's shortcomings, and it would be
 v. interesting to compare and analyse.


The real world is a wonderful limitless-dimensioned continuum of
interrelated happenings. We have but a limited window to this, and have an
even more limited assortment of words that have very specific meanings.
Languages like Arabic vary pronunciation or spelling to convey additional
shades of meaning, and languages like Chinese convey meaning via joined
concepts. These may help, but they do not remove the underlying problem.
This is like throwing pebbles onto a map and ONLY being able to communicate
which pebble is closest to the intended location. Further, many words have
multiple meanings, which is like only being able to specify certain disjoint
multiples of pebbles, leaving it to AI to take a WAG (Wild Ass Guess) which
one was intended.

This becomes glaringly obvious in language translation. I learned this stuff
from people on the Russian national language translator project. Words in
these two languages have very different shades of meaning, so that in
general, a sentence in one language can NOT be translated to the other
language with perfect accuracy, simply because the other language lacks
words with the same shading. This is complicated by the fact that the
original author may NOT have intended all of the shades of meaning, but was
stuck with the words in the dictionary.

For example, a man saying "sit down" in Russian to a woman is conveying
something like an order (and not a request) to sit down, shut up, and don't
move. To remove that overloading, he might say "please sit down" in
Russian. Then, it all comes down to just how he pronounces the "please" as
to what he REALLY means, but of course, this is all lost in print. So, just
how do you translate "please sit down" so as not to miss the entire meaning?

One of my favorite pronunciation examples is "excuse me".

In Russian, it is approximately "eezveneetsya minya" and is typically spoken
with flourish to emphasize apology.

In Arabic, it is approximately "afwan", without emphasis on either syllable,
and is typically spoken curtly, as if to say "yeah, I know I'm an idiot". It
is really hard to pronounce these two syllables without emphasis, but with
flourish.

There is much societal casting of meaning to common concepts.

The underlying issue here is the very concept of translation, be it into a
human language or into a table form in an AI engine. Really good translations
have more footnotes than translated text, where these shades of meaning are
explained, yet modern translation programs produce no footnotes, which
pretty much consigns them to the trash-translation pile, even with perfect
disambiguation, which of course is impossible. Even the AI engines that can
carry these subtle overloadings are unable to determine which nearby meaning
the author actually intended.

Hence, no finite language can convey specific meanings from within a
limitlessly-dimensional continuum of potential meanings. English does better
than most other languages, but it is still apparently not good enough even
for automated question answering, which was my original point. Everywhere
semantic meaning is touched upon, both within the wetware and within
software, additional errors are introduced. This makes many answers
worthless and all answers suspect, even before they are formed in the mind
of the machine.

Have I answered your question?

Steve Richfield





Re: [agi] Seeking CYC critiques

2008-12-01 Thread Mike Tintner
Steve,

Thanks. I was just looking for a systematic, v basic analysis of the problems 
language poses for any program, which I guess mainly come down to multiplicity -

multiple
-word meanings
-word pronunciations
-word spellings
-word endings
-word fonts
-word/letter layout/design
-languages [mixed discourse]
-accents
-dialects
-sentence constructions

to include new and novel
-words
-pronunciations
-spellings
-endings
-layout/design
-languages
-accents
-dialects
-sentence constructions

-all of which are *advantages* for a GI as opposed to a narrow AI.  The latter 
wants the one right meaning; the former wants many meanings, which enables 
flexibility and creativity of explanation and association.

Have I left anything out?





Re: [agi] Seeking CYC critiques

2008-12-01 Thread Steve Richfield
Mike,

More important than multiplicity is the issue of discrete-point semantics vs.
continuous real-world possibilities. Multiplicity could potentially be
addressed by requiring users to put (clarifications) after unclear words
(e.g. in response to diagnostic messages asking them to clarify their input).
Dr. Eliza already does some of this, e.g. when it encounters "If ... then ..."
it complains that it just wants to know the facts, and NOT how you think the
world works. However, such approaches are unable to address the discrete vs.
continuous issue, because every clarifying word has its own fuzziness, you
don't know what the user's world model (and hence its discrete points) is,
etc.

Being somewhat of an Islamic scholar (needed for my escape after being sold
into servitude in 1994), I am sometimes asked to clarify really simple-sounding
concepts like "agent of Satan". The problem is that many people from our
culture simply have no place in their mental filing system for this
information, without which it is simply not possible to understand things
like the present Middle East situation. Here, the discrete points that are
addressable by their world model are VERY far apart.

For those of you who do understand "agent of Satan", this very mental
incapacity MAKES them agents of Satan. This is related to a passage in the
Qur'an that states that most of the evil done in the world is done by people
who think that they are doing good. Sounds like George Bush, doesn't it? In
short, not only the definition but also the reality is circular. Here
is one of those rare cases where common shortcomings in world models
actually have common expressions referring to them. Too bad that these
expressions come from other cultures, as we could sure use a few of them.

Anyway, I would dismiss the multiplicity viewpoint, not because it is
wrong, but because it guides people into disambiguation, which is ultimately
unworkable. Once you understand that the world is a continuous domain, but
that language is NOT continuous, you will realize the hopelessness of such
efforts, as every question and every answer is in ERROR, unless by some
wild stroke of luck, it is possible to say EXACTLY what is meant.

As an interesting aside Bayesian programs tend (89%) to state their
confidence, which overcomes some (13%) of such problems.

Steve Richfield

Re: FW: [agi] A paper that actually does solve the problem of consciousness

2008-12-01 Thread Eric Burton
Ed, they used to combine Ritalin with LSD for psychotherapy. It
assists in absorbing insights achieved from psycholytic doses, which
is a term for doses that are not fully psychedelic. Those are edifying
on their own but are less organized. I don't know if you can get this
in a clinical setting today. But these molecules are gradually being
apprehended as tools.

On 11/30/08, Ben Goertzel [EMAIL PROTECTED] wrote:
 Ed,

 Unfortunately to reply to your message in detail would absorb a lot of
 time, because there are two issues mixed up

 1) you don't know much about computability theory, and educating you
 on it would take a lot of time (and is not best done on an email list)

 2) I may not have expressed some of my weird philosophical ideas about
 computability and mind and reality clearly ... though Abram, at least,
 seemed to get them ;)  [but he has a lot of background in the area]

 Just to clarify some simple things, though: pi is a computable number,
 because there's a program that would generate it if allowed to run
 long enough.  Also, pi has been proved irrational; and quantum
 theory really has nothing directly to do with uncomputability...
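
 (To make that concrete: one such program, sketched in Python below using
 Gibbons' unbounded spigot algorithm, streams decimal digits of pi for as long
 as it is allowed to run; the particular algorithm is incidental, the point is
 only that a finite program generating pi exists.)

 def pi_digits():
     # Gibbons' unbounded spigot: yields 3, 1, 4, 1, 5, 9, ... forever,
     # using only exact integer arithmetic, no fixed precision.
     q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
     while True:
         if 4 * q + r - t < n * t:
             yield n
             q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
         else:
             q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                 (q * (7 * k + 2) + r * l) // (t * l), l + 2)

 gen = pi_digits()
 print([next(gen) for _ in range(10)])   # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]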

 About

How can several pounds of matter that is the human brain model
 the true complexity of an infinity of infinitely complex things?

 it is certainly thinkable that the brain is infinite not finite in its
 information content, or that it's a sort of antenna that receives
 information from some infinite-information-content source.  I'm not
 saying I believe this, just saying it's a logical possibility, and not
 really ruled out by available data...

 Your reply seems to assume that the brain is a finite computational
 system and that other alternatives don't make sense.  I think this is
 an OK working assumption for AGI engineers but it's not proved by any
 means.

 My main point in that post was, simply, that science and language seem
 intrinsically unable to distinguish computable from uncomputable
 realities.  That doesn't necessarily mean the latter don't exist but
 it means they're not really scientifically useful entities.  But, my
 detailed argument in favor of this point requires some basic
 understanding of computability math to appreciate, and I can't review
 those basics in an email, it's too much...

 ben g

 On Sun, Nov 30, 2008 at 4:20 PM, Ed Porter [EMAIL PROTECTED] wrote:
 Ben,



 On November 19, 2008 5:39 you wrote the following under the above titled
 thread:



 --

 Ed,



 I'd be curious for your reaction to



 http://multiverseaccordingtoben.blogspot.com/2008/10/are-uncomputable-entities-useless-for.html



 which explores the limits of scientific and linguistic explanation, in

 a different but possibly related way to Richard's argument.



 --



 In the below email I asked you some questions about your article, which
 capture my major problem in understanding it, and I don't think I ever
 received a reply.



 The questions were at the bottom of such a long post that you may well never
 have even seen them.  I know you are busy, but if you have time I would be
 interested in hearing your answers to the following questions about the
 following five quoted parts (shown in red if you are seeing this in rich
 text) from your article.  If you are too busy to respond just say so,
 either on or off list.



 -



 (1) In the simplest case, A2 may represent U directly in the language,
 using a single expression



 How can U be directly represented in the language if it is
 uncomputable?



 I assume you consider any irrational number, such as pi, to be uncomputable
 (although at least pi has a formula that with enough computation can
 approach it as a limit; I assume that for most real numbers, if there is such
 a formula, we do not know it). (By the way, do we know for a fact that pi is
 irrational, and if so, how do we know, other than that we have calculated it
 to millions of places and not yet found an exact solution?)



 Merely communicating the symbol pi only represents the number if the agent
 receiving the communication has a more detailed definition, but any
 definition, such as a formula for iteratively approaching pi (which
 presumably is what you mean by R_U), would only be an approximation.



 So U could never be fully represented unless one had infinite time --- and I
 generally consider it a waste of time to think about infinite time unless
 there is something valuable about such considerations that has a use in much
 more human-sized chunks of time.



 In fact, it seems the major message of quantum mechanics is that even
 physical reality doesn't have the time or machinery to compute uncomputable
 things, like a space constructed of dimensions each corresponding to all the
 real numbers within some astronomical range.  So the real number line is
 not really real.  It is at best a construct of the human mind that can
 only be approximated in part.