On 11/9/06, Eric Baum [EMAIL PROTECTED] wrote:
It is true that much modern encryption is based on simple algorithms.
However, some crypto-experts would advise more primitive approaches.
RSA is not known to be hard: even if P != NP, someone may find a
number-theoretic trick tomorrow that factors.
Ben Goertzel wrote:
It's just that problem X is NP-hard means roughly Any problem Y in
NP is polynomial-time reducible to problem X, and your example did
not seem to exemplify this...
All your example seemed to exemplify was a problem that was solvable
in polynomial time (class P, not class
Eric Baum wrote:
The argument, in very brief, is the following. Evolution found a
very compact program that does the right thing. (This is my
hypothesis, not claimed proved but lots of reasons to believe it
given in WIT?.) Finding such programs is NP-hard.
Richard Hold it right there. As far
Hi Richard,
I don't really want to get too sidetracked, but even if Immerman's
analysis were correct, would this make a difference to the way that Eric
was using NP-Hard, though?
No, Immerman's perspective on complexity classes doesn't really affect
your objections...
Firstly, the
Richard Eric Baum wrote:
I don't think the proofs depend on any special assumptions about
the nature of learning.
I beg to differ. IIRC the sense of learning they require is
induction over example sentences. They exclude the use of real
world knowledge, in spite of the fact that such
Richard,
I know it's peripheral to your main argument, but in this example ...
Suppose that the computational effort that evolution needs to build
different sized language understanding mechanisms scales as:
2.5 * (N/7 + 1)^^6 planet-years
... where different sized is captured by the value
Ben Goertzel wrote:
Richard,
I know it's peripheral to your main argument, but in this example ...
Suppose that the computational effort that evolution needs to build
different sized language understanding mechanisms scales as:
2.5 * (N/7 + 1)^^6 planet-years
... where different sized is
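For concreteness, that hypothetical scaling law can be evaluated directly (a sketch: it reads "^^6" as ordinary exponentiation, and the formula itself is Ben's illustrative invention, not a measured quantity):

```python
# Hypothetical scaling law from the example above:
# effort(N) = 2.5 * (N/7 + 1)^6 planet-years, reading "^^" as "^".
def evolution_effort_planet_years(n):
    return 2.5 * (n / 7 + 1) ** 6

# The sixth-power growth dominates quickly:
for n in (7, 14, 70):
    print(n, evolution_effort_planet_years(n))
```

At N = 7 this gives 160 planet-years; at N = 70 it is already over four million, which is the point of choosing a sixth power in the example.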
The primitive terms aren't random, just some of the structure of it.
English standard does Subj VB Obj, while others do
VB Subj Obj
or another manner; as long as they are known and roughly consistently used, the
actual choice could well be random there and not matter,
but a 'concept' of a dog in
Sorry for my delay in responding... too busy to keep up with most
of this, just got some downtime and scanning various messages:
I don't know what you mean by incrementally updateable, but if
you look up the literature on language learning, you will find
that learning various sorts of
Eric Baum wrote:
Sorry for my delay in responding... too busy to keep up with most
of this, just got some downtime and scanning various messages:
I don't know what you mean by incrementally updateable, but if
you look up the literature on language learning, you will find
that learning
, 2006 9:29:13 AM
Subject: Re: [agi] Natural versus formal AI interface languages
Matt wrote:
Anyway, my point is that decoding the human genome or natural language is
not as hard as breaking encryption. It cannot be because these systems are
incrementally updatable, unlike ciphers. This allows
Eric Baum wrote:
Matt wrote:
Anyway, my point is that decoding the human genome or natural language is
not as hard as breaking encryption. It cannot be because these systems are
incrementally updatable, unlike ciphers. This allows you to use search
strategies that run in polynomial time.
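Matt's "incrementally updatable" point can be sketched concretely: when a system gives per-change feedback (a Hamming-distance oracle here, an illustrative stand-in assumption), greedy search recovers an n-bit target in polynomially many steps, whereas a good cipher gives no such signal and forces O(2^n) guessing:

```python
import random

# If each single-bit change yields measurable feedback (a Hamming-
# distance oracle here -- an illustrative assumption), greedy hill
# climbing recovers an n-bit secret in ~2n oracle calls instead of
# the O(2^n) guesses a feedback-free cipher would force.
def hill_climb(secret, distance):
    n = len(secret)
    guess = [0] * n
    calls = 0
    for i in range(n):
        flipped = guess[:i] + [1 - guess[i]] + guess[i + 1:]
        calls += 2  # one oracle call per candidate compared
        if distance(flipped) < distance(guess):
            guess = flipped
    return guess, calls

random.seed(0)
secret = [random.randint(0, 1) for _ in range(64)]
oracle = lambda g: sum(a != b for a, b in zip(g, secret))
guess, calls = hill_climb(secret, oracle)
print(guess == secret, calls)
```

The design point is the oracle: remove it (as a cipher does) and the same loop learns nothing, since every candidate looks equally wrong until the whole key matches.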
I don't know what you mean by incrementally updateable,
but if you look up the literature on language learning, you will find
that learning various sorts of relatively simple grammars from
examples, or even if memory serves examples and queries, is NP-hard.
Try looking for Dana Angluin's
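For a feel of why grammar induction from examples is hard, here is a naive consistency search for a k-state DFA (a sketch, not Angluin's construction: her hardness results concern identification of minimal automata from examples and queries; the point here is only that the candidate space is exponential in the machine size):

```python
from itertools import product

def find_consistent_dfa(examples, k, alphabet="ab"):
    """Return (transitions, accepting) for some k-state DFA over
    `alphabet` agreeing with every (string, label) example, or None."""
    states = range(k)
    syms = list(alphabet)
    # Every transition table: k^(k * |alphabet|) candidates.
    for table in product(states, repeat=k * len(syms)):
        delta = {(q, s): table[q * len(syms) + i]
                 for q in states for i, s in enumerate(syms)}
        # Every accepting-state subset: 2^k candidates.
        for accepting in product((False, True), repeat=k):
            if all(accepting[run(delta, w)] == label
                   for w, label in examples):
                return delta, accepting
    return None

def run(delta, word, start=0):
    """Run the DFA transition table on `word` from state `start`."""
    q = start
    for ch in word:
        q = delta[(q, ch)]
    return q

# Example: strings over {a, b} with an even number of 'a's.
examples = [("", True), ("a", False), ("aa", True),
            ("ab", False), ("aab", True)]
print(find_consistent_dfa(examples, 2) is not None)
```

Even this toy enumerates k^(2k) * 2^k machines; the NP-hardness results say no clever shortcut is expected in the worst case.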
Ben Goertzel wrote:
I don't know what you mean by incrementally updateable,
but if you look up the literature on language learning, you will find
that learning various sorts of relatively simple grammars from
examples, or even if memory serves examples and queries, is NP-hard.
Try looking
I don't think the proofs depend on any special assumptions about the
nature of learning.
I beg to differ. IIRC the sense of learning they require is induction
over example sentences. They exclude the use of real world knowledge,
in spite of the fact that such knowledge (or at least
with n = 10^9 is much faster than brute force
cryptanalysis in O(2^n) time with n = 128.
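The arithmetic behind that comparison, assuming the elided polynomial figure is on the order of n log n steps (the quoted message truncates before naming the polynomial):

```python
import math

# Back-of-envelope: polynomial-time search at n = 10^9 versus
# brute-force cryptanalysis at O(2^n) with n = 128. The n*log2(n)
# cost on the polynomial side is an assumption.
n_poly = 10 ** 9
poly_steps = n_poly * math.log2(n_poly)  # ~3.0e10
brute_steps = 2 ** 128                   # ~3.4e38

print(f"polynomial search : {poly_steps:.2e} steps")
print(f"brute-force search: {brute_steps:.2e} steps")
print(f"ratio             : {brute_steps / poly_steps:.1e}")
```

The ratio is around 10^28, so even a billion-element learning problem is incomparably cheaper than a 128-bit key search.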
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Eric Baum [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, November 9, 2006 12:18:34 PM
Subject: Re: [agi] Natural versus formal AI
John Fully decoding the human genome is almost impossible. Not only
John is there the problem of protein folding, which I think even
John supercomputers can't fully solve, but the purpose for the
John structure of each protein depends on interaction with the
John incredibly complex molecular
Eric Baum [EMAIL PROTECTED] wrote:
Matt wrote:
Changing one bit of the key or plaintext affects every bit of the ciphertext.
That is simply not true of most encryptions. For example, Enigma.
Matt:
Enigma is laughably weak compared to modern encryption, such as AES, RSA,
SHA-256, ECC, etc.
Ben Jef wrote:
As I see it, the present key challenge of artificial intelligence
is to develop a fast and frugal method of finding fast and frugal
methods,
Ben However, this in itself is not possible. There can be a fast
Ben method of finding fast and frugal methods, or a frugal method of
Eric wrote:
The challenge is to find a methodology
for producing fast enough and frugal enough code, where that
methodology is practicable. For example, as a rough upper bound,
it would be practicable if it required 10,000 programmer-years and
1,000,000 PC-years (i.e. a $3Bn budget).
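The budget arithmetic checks out under round, hypothetical unit costs (the per-unit prices are assumptions, not stated in the original message):

```python
# Round, hypothetical unit costs: ~$250k per programmer-year,
# ~$500 per PC-year of compute.
programmer_years = 10_000
pc_years = 1_000_000
cost = programmer_years * 250_000 + pc_years * 500
print(f"${cost / 1e9:.1f}Bn")  # the quoted $3Bn figure
```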
(Why should
Eric Baum wrote:
As I and Jef and you appear to agree, extant Intelligence
works because it exploits structure *of our world*; there is
and can be (unless P=NP or some such radical and unlikely
possibility) no such thing as as General Intelligence that
works in all worlds.
I'm going to
Eric Baum wrote:
(Why should producing a human-level AI be cheaper than decoding the
genome?)
Because the genome is encrypted even worse than natural language.
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial
Eliezer Eric Baum wrote:
(Why should producing a human-level AI be cheaper than decoding the
genome?)
Eliezer Because the genome is encrypted even worse than natural
Eliezer language.
(a) By decoding the genome, I meant merely finding the sequence
(should have been clear in context), which
Ben Goertzel [EMAIL PROTECTED] wrote:
I am afraid that it may not be possible to find an initial project that is both
* small
* clearly a meaningfully large step along the path to AGI
* of significant practical benefit
I'm afraid you're right. It is especially difficult because there is a long
- Original Message
From: Eliezer S. Yudkowsky [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, November 8, 2006 3:23:10 PM
Subject: Re: [agi] Natural versus formal AI interface languages
Eric Baum wrote:
(Why should producing a human-level AI be cheaper than decoding the
genome
Eric Baum wrote:
Eliezer Eric Baum wrote:
(Why should producing a human-level AI be cheaper than decoding the
genome?)
Eliezer Because the genome is encrypted even worse than natural
Eliezer language.
(a) By decoding the genome, I meant merely finding the sequence
(should have been clear in
Fully decoding the human genome is almost impossible. Not only is there the
problem of protein folding, which I think even supercomputers can't fully
solve, but the purpose for the structure of each protein depends on
interaction with the incredibly complex molecular structures inside cells.
I actually just stumbled on something, from a totally different work I was doing, but possibly interesting: http://simple.wikipedia.org/wiki/Main_Page
An entire wikipedia, using simple English, that should be much much easier to parse than its more complex brother.
James
BillK [EMAIL PROTECTED] wrote:
James Jef Allbright [EMAIL PROTECTED] wrote: Russell Wallace
James wrote:
Syntactic ambiguity isn't the problem. The reason computers don't
understand English is nothing to do with syntax, it's because they
don't understand the world.
It's easy to parse The cat sat on the mat into
sit
James Below should be Jef, but I will respond as well.
Orig Quotes: But the computer still doesn't understand the sentence, because it doesn't know what cats, mats and the act of sitting _are_. (The best test of such understanding is not language - it's having the computer draw an animation of
Eric Baum wrote:
James Jef Allbright [EMAIL PROTECTED] wrote: Russell Wallace
James wrote:
Syntactic ambiguity isn't the problem. The reason computers don't
understand English is nothing to do with syntax, it's because they
don't understand the world.
snip
But the computer still
Jef wrote:
Each of these examples is of a physical system responding
with some degree of effectiveness based on an internal model
that represents with some degree of fidelity its local
environment. Its an unnecessary complication, and leads to
endless discussions of qualia,
James and Jef, my apologies for misattributing the question.
There is a phenomenon colloquially called understanding that is
displayed by people and at best rarely displayed within limited
domains by extant computer programs. If you want to have any hope of
constructing an AGI, you are going
methods of natural evolution.
- Jef
-Original Message-
From: Eric Baum [mailto:[EMAIL PROTECTED]
Sent: Tuesday, November 07, 2006 1:44 PM
To: agi@v2.listbox.com
Subject: RE: [agi] Natural versus formal AI interface languages
James and Jef, my apologies for misattributing
Jef wrote:
As I see it, the present key challenge of artificial intelligence is to
develop a fast and frugal method of finding fast and frugal methods,
However, this in itself is not possible. There can be a fast method
of finding fast and frugal methods, or a frugal method of finding fast
Richard, The Blocks World (http://hci.stanford.edu/~winograd/shrdlu/) was over 36 years ago, and was a GREAT demonstration of what can be done with natural language. It handled a wide variety of items, albeit with a very limited environment. Currently MIT is doing work with robotics that uses the
Ben, I think it would be beneficial, at least to me, to see a list of tasks. Not as a "defining" measure in any way. But as a list of work items that a general AGI should be able to complete effectively. I started on a list, and pulled some information off the net before, but never completed one.
On 11/6/06, James Ratcliff wrote:
In some form or another we are going to HAVE to have a natural language
interface, either a translation program that can convert our english to the
machine understandable form, or a simplified form of english that is
trivial for a person to quickly understand
Hi,
On 11/6/06, James Ratcliff [EMAIL PROTECTED] wrote:
Ben,
I think it would be beneficial, at least to me, to see a list of tasks.
Not as a defining measure in any way. But as a list of work items that a
general AGI should be able to complete effectively.
I agree, and I think that this
I don't believe that was the goal or lesson of the http://en.wikipedia.org/wiki/SHRDLU project. It was mainly centered around a small test environment (the block world) and being able to create an interface that would allow the user to speak and be answered in a natural language. And in that goal it
Ben Goertzel [EMAIL PROTECTED] wrote: Hi, On 11/6/06, James Ratcliff <[EMAIL PROTECTED]> wrote: Ben, I think it would be beneficial, at least to me, to see a list of tasks. Not as a "defining" measure in any way. But as a list of work items that a general AGI should be able to complete
How much of the Novamente system is meant to be autonomous, and how much
will be responding only from external stimulus such as a question or a task
given externally.
Is it intended after awhile to run on its own where it would be up 24
hours a day, exploring potentially some by itself, or more
- Original Message
From: BillK [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, November 6, 2006 10:08:09 AM
Subject: Re: [agi] Natural versus formal AI interface languages
Ogden said that it would take seven years to learn English, seven
months for Esperanto, and seven weeks
Richard Loosemore wrote:
...
This is a question directed at this whole thread, about simplifying
language to communicate with an AI system, so we can at least get
something working, and then go from there
This rationale is the very same rationale that drove researchers into
Blocks World
- Original Message
From: Charles D Hixson [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, November 5, 2006 4:46:12 PM
Subject: Re: [agi] Natural versus formal AI interface languages
Richard Loosemore wrote:
...
This is a question directed at this whole thread, about
I'll keep this short, just to weigh in a vote - I completely
agree with this. AGI will be measured by what we recognize
as intelligent behavior and the usefulness of that intelligence for
tasks beyond the capabilities of ordinary software. Normal metrics
don't apply.
Russell Wallace wrote:
On 11/4/06, Russell Wallace [EMAIL PROTECTED] wrote:
On 11/4/06, Ben Goertzel [EMAIL PROTECTED] wrote:
I of course don't think that SHRDLU vs. AGISim is a fair comparison.
Agreed. SHRDLU didn't even try to solve the real problems - for the simple
and sufficient reason that it was impossible to
of the tests.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, November 3, 2006 10:51:16 PM
Subject: Re: Re: Re: Re: [agi] Natural versus formal AI interface languages
I am happy enough with the long-term goal
Eliezer S. Yudkowsky wrote:
Pei Wang wrote:
On 11/2/06, Eric Baum [EMAIL PROTECTED] wrote:
Moreover, I argue that language is built on top of a heavy inductive
bias to develop a certain conceptual structure, which then renders the
names of concepts highly salient so that they can be readily
Jef, Even given a hand-created, checked and correct small but comprehensive Knowledge Representation of the sample world, it is STILL not a trivial effort to get the sentences from the complicated form of English into some computer-processable format. The cat example you gave is unfortunately not the
Not necessarily children's language, as they have their own problems and often use the wrong words and rules of grammar, but a simplified English, a reduced rule set. Something like no compound sentences for a start. I believe most everything can be written without compound sentences, and that would
James Ratcliff wrote:
Not necessarily children's language, as they have their own problems and
often use the wrong words and rules of grammar, but a simplified
English, a reduced rule set.
Something like no compound sentences for a start. I believe most
everything can be written without
It does not help that words in SHRDLU are grounded in an artificial world. Its
failure to scale hints that approaches such as AGI-Sim will have similar
problems. You cannot simulate complexity.
I of course don't think that SHRDLU vs. AGISim is a fair comparison.
Among other
- Original Message
From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, November 3, 2006 9:28:24 PM
Subject: Re: Re: [agi] Natural versus formal AI interface languages
I do not agree that having precise quantitative measures of system
intelligence is critical
Another reason for measurements is that it makes your goals concrete. How do you define general
intelligence? Turing gave us a well defined goal, but there are some shortcomings. The Turing test is
subjective, time consuming, isn't appropriate for robotics, and really isn't a good goal if it
I am happy enough with the long-term goal of independent scientific
and mathematical discovery...
And, in the short term, I am happy enough with the goals of carrying
out the (AGISim versions of) the standard tasks used by development
psychologists to study childrens' cognitive behavior...
I
On 11/4/06, Ben Goertzel [EMAIL PROTECTED] wrote:
I of course don't think that SHRDLU vs. AGISim is a fair comparison.
Agreed. SHRDLU didn't even try to solve the real problems - for the simple and sufficient reason that it was impossible to make a credible attempt at such on the hardware of the
Pei (2) A true AGI should have the potential to learn any natural
Pei language (though not necessarily to the level of native
Pei speakers).
This embodies an implicit assumption about language which is worth
noting.
It is possible that the nature of natural language is such that humans
could
That's a totally different problem, and considering the massive knowledge hole currently about how the human brain works, we would have some major problems in that area, though it is interesting. One other problem there, what about two-way communications? You are proposing to have the brain talk
On 10/31/06, John Scanlon [EMAIL PROTECTED] wrote:
One of the major obstacles to real AI is the belief
that knowledge of a natural language is necessary for
intelligence. A human-level intelligent system should be expected to
have the ability to learn a natural language, but it is not
Russell Wallace wrote:
Syntactic ambiguity isn't the problem. The reason computers don't
understand English is nothing to do with syntax, it's because they
don't understand the world.
It's easy to parse The cat sat on the mat into
<sentence>
  <verb>sit</verb>
  <subject>cat</subject>
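For what it's worth, a minimal machine-readable form of that parse (the tag names follow the quoted fragment; the "object" entry is a hypothetical continuation, since the message truncates there):

```python
# Toy parse of "The cat sat on the mat", mirroring the
# sentence/verb/subject tags in the quoted fragment; the "object"
# entry is hypothetical (the original message is cut off).
parse = {
    "sentence": {
        "verb": "sit",
        "subject": "cat",
        "object": "mat",  # hypothetical
    }
}
print(parse["sentence"]["verb"], parse["sentence"]["subject"])
```

Which is Russell's point: producing this structure is easy; knowing what cats and mats are is the hard part.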
- Original Message
From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, October 31, 2006 9:26:15 PM
Subject: Re: Re: [agi] Natural versus formal AI interface languages
Here is how I intend to use Lojban++ in teaching Novamente. When
Novamente is controlling
From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, October 31, 2006 9:26:15 PM
Subject: Re: Re: [agi] Natural versus formal AI interface languages
Here is how I intend to use Lojban++ in teaching Novamente. When
Novamente is controlling a humanoid agent in the AGISim
On 11/2/06, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
Pei Wang wrote:
On 11/2/06, Eric Baum [EMAIL PROTECTED] wrote:
Moreover, I argue that language is built on top of a heavy inductive
bias to develop a certain conceptual structure, which then renders the
names of concepts highly
Eliezer unless P != NP and the concepts are genuinely encrypted. And
I am of course assuming P != NP, which seems to me a safe assumption.
If P = NP, and mind exploits that fact (which I don't believe) then
we are at a serious handicap in producing an AGI till we understand
why P = NP, but it
Mahoney [EMAIL PROTECTED] wrote:
- Original Message
From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, October 31, 2006 9:26:15 PM
Subject: Re: Re: [agi] Natural versus formal AI interface languages
Here is how I intend to use Lojban++ in teaching Novamente. When
Hi,
I think an interesting goal would be to teach an AGI to write software. If I
understand your explanation, this is the same problem.
Yeah, it's the same problem.
It's a very small step from Lojban to a programming language, and in
fact Luke Kaiser and I have talked about making a
On 11/2/06, Eric Baum [EMAIL PROTECTED] wrote:
So Pei's comments are in some sense wishes. To be charitable--
maybe I should say beliefs supported by his experience.
But they are not established facts. It remains a possibility,
supported by reasonable evidence,
that language learning may be an
Hi.
It's a very small step from Lojban to a programming language, and in
fact Luke Kaiser and I have talked about making a programming language
syntax based on Lojban, using his Speagram program interpreter
framework.
The nice thing about Lojban is that it does have the flexibility to be
used
Luke wrote:
It seems to be like this: when you start programming, even though the
syntax is still natural, the language gets really awkward and does not
resemble the way you would express the same thing naturally. For me it
just shows that the real problem is somewhere deeper, in the semantic
To: agi@v2.listbox.com
Sent: Tuesday, October 31, 2006 9:03 PM
Subject: Re: [agi] Natural versus formal AI interface languages
Artificial languages that remove ambiguity like Lojban do not bring us any
closer to solving the AI problem. It is straightforward to convert between
artificial
John Scanlon wrote:
Ben,
I did read your stuff on Lojban++, and it's the sort of language
I'm talking about. This kind of language lets the computer and the
user meet halfway. The computer can parse the language like any other
computer language, but the terms and constructions are
John,
One of the major obstacles to real AI is the belief
that knowledge of a natural language is necessary for
intelligence.
I agree. And it's IMO nearly impossible for AGI to learn/understand NL when its only info source is NL. We get some extra [meta] data from our senses when learning NL (which
On 11/1/06, Charles D Hixson wrote:
So. Lojban++ might be a good language for humans to communicate to an
AI with, but it would be a lousy language in which to implement that
same AI. But even for this purpose the language needs a verifier to
insure that the correct forms are being followed.
The AGI really does need to be able to read and write English or another natural language to be decently useful; people are just NOT going to learn or be impressed with a machine that spurts out something incoherent (which they already can do). It is surprising how little actual semantic ambiguity
Forgot to add there is a large amount of syntactic and word-sense ambiguity, but there are some programs out there that handle that to a remarkable extent as well, and I believe can be improved upon. And for many tasks, I don't see any reason not to have some back-and-forth feedback in the loop
BillK wrote:
On 11/1/06, Charles D Hixson wrote:
So. Lojban++ might be a good language for humans to communicate to an
AI with, but it would be a lousy language in which to implement that
same AI. But even for this purpose the language needs a verifier to
insure that the correct forms are
Perhaps there is a shortcut to all of this.
Provide the AGI with the hardware and software to jack into one or more human
brains and let the bio-software of the human brain be the language interface development tool.
I think we are creating some of this, the hardware.
This also puts AGI in a
- Original Message -
From: Gregory
Johnson
Provide the AGI with the hardware and software to jack into one or more
humanbrains and let the bio-software of the human brain be the language
interface development tool.
Jacking into the human brain? That is hardly
a shortcut to AGI,
intelligence, and
this will not happen right away. Dumb, to less dumb, to somewhat smart, to
smart is a necessary progression.
- Original Message -
From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, October 31, 2006 9:08 AM
Subject: Re: [agi] Natural versus formal AI
John Scanlon wrote:
One of the major obstacles to real AI is the belief that knowledge of a
natural language is necessary for intelligence. A human-level
intelligent system should be expected to have the ability to learn a
natural language, but it is not necessary. It is better to start with
Let's don't confuse two statements:
(1) To be able to use a natural language (so as to passing Turing
Test) is not a necessary condition for a system to be intelligent.
(2) A true AGI should have the potential to learn any natural language
(though not necessarily to the level of native
On 10/31/06, Matt Mahoney [EMAIL PROTECTED] wrote:
I guess the AI problem is solved, then. I can already communicate with my
computer using formal, unambiguous languages. It already does a lot of
things better than most humans, like arithmetic, chess, memorizing long
lists and recalling them
I guess the AI problem is solved, then. I can already communicate with my computer using formal, unambiguous languages. It already does a lot of things better than most humans, like arithmetic, chess, memorizing long lists and recalling them perfectly... If a machine can't pass the Turing test,
John --
See
lojban.org
and
http://www.goertzel.org/papers/lojbanplusplus.pdf
-- Ben G
On 10/31/06, John Scanlon [EMAIL PROTECTED] wrote:
One of the major obstacles to real AI is the belief that knowledge of a
natural language is necessary for intelligence. A human-level intelligent
: [agi] Natural versus formal AI interface languages
On 10/31/06, Matt Mahoney [EMAIL PROTECTED] wrote:
I guess the AI problem is solved, then. I can already communicate with
my
computer using formal, unambiguous languages. It already does a lot of
things better than most humans, like
: Tuesday, October 31, 2006 12:24 PM
Subject: Re: [agi] Natural versus formal AI interface languages
John --
See
lojban.org
and
http://www.goertzel.org/papers/lojbanplusplus.pdf
-- Ben G
On 10/31/06, John Scanlon [EMAIL PROTECTED] wrote:
One of the major obstacles to real AI
For comparison, here are some versions of
I saw the man with the telescope
in Lojban++ ...
[ http://www.goertzel.org/papers/lojbanplusplus.pdf ]
1)
mi pu see le man sepi'o le telescope
I saw the man, using the telescope as a tool
2)
mi pu see le man pe le telescope
I saw the man who was with
Hi,
Which brings up a question -- is it better to use a language based on
term or predicate logic, or one that imitates (is isomorphic to) natural
languages? A formal language imitating a natural language would have the
same kinds of structures that almost all natural languages have:
Pei Wang wrote:
Let's don't confuse two statements:
(1) To be able to use a natural language (so as to passing Turing
Test) is not a necessary condition for a system to be intelligent.
(2) A true AGI should have the potential to learn any natural language
(though not necessarily to the level
Eliezer wrote:
Natural language isn't. Humans have one specific idiosyncratic
built-in grammar, and we might have serious trouble learning to
communicate in anything else - especially if the language was being used
by a mind quite unlike our own.
Well, some humans have learned to communicate
Artificial languages that remove ambiguity like Lojban do not bring us any
closer to solving the AI problem. It is straightforward to convert between
artificial languages and structured knowledge (e.g first order logic), but it
is still a hard (AI complete) problem to convert between natural
I know people can learn Lojban, just like they can learn CycL or LISP. Let's
not repeat these mistakes. This is not training, it is programming a knowledge
base. This is narrow AI.
-- Matt Mahoney, [EMAIL PROTECTED]
You seem not to understand the purpose of using Lojban to help teach an