On 11/9/06, Eric Baum <[EMAIL PROTECTED]> wrote:
It is true that much modern encryption is based on simple algorithms.
However, some crypto-experts would advise more primitive approaches.
RSA is not known to be hard: even if P != NP, someone may find a
number-theoretic trick tomorrow that factors efficiently.
Hi Richard,
I don't really want to get too sidetracked, but even if Immerman's
analysis were correct, would this make a difference to the way that Eric
was using NP-Hard, though?
No, Immerman's perspective on complexity classes doesn't really affect
your objections...
Firstly, the descriptive
Eric Baum wrote:
The argument, in very brief, is the following. Evolution found a
very compact program that does the right thing. (This is my
hypothesis, not claimed proved but lots of reasons to believe it
given in WIT?.) Finding such programs is NP-hard.
Richard> Hold it right there. As far
Ben Goertzel wrote:
It's just that "problem X is NP-hard" means roughly "Any problem Y in
NP is polynomial-time reducible to problem X", and your example did
not seem to exemplify this...
All your example seemed to exemplify was a problem that was solvable
in polynomial time (class P, not class NP-hard)
On 11/24/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:
I have seen this kind of computational complexity talk so often, and it
is just (if you'll forgive an expression of frustration here)
driving me nuts. It is ludicrous: these concepts are being bandied
about as if they make the argu
Ben Goertzel wrote:
Richard,
I know it's peripheral to your main argument, but in this example ...
Suppose that the computational effort that evolution needs to build
"different sized" language understanding mechanisms scales as:
2.5 * (N/7 + 1)^6 planet-years
... where "different sized" is captured by the value of N
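Ben's hypothetical scaling law is polynomial, which quick arithmetic makes concrete (the formula is from his post; the script and the sample values of N are mine):

```python
# Evaluate Ben's hypothetical scaling law 2.5 * (N/7 + 1)^6 planet-years
# for a few sizes N, showing polynomial (not exponential) growth.
def planet_years(n: int) -> float:
    return 2.5 * (n / 7 + 1) ** 6

for n in (7, 14, 70):
    print(n, planet_years(n))  # grows fast, but polynomially in N
```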
[snip]...
Richard> This is precisely where I think the false assumption is
Richard> buried. When I say that grammar learning can be dependent on
Richard> real world knowledge, I mean specifically that there are
Richard> certain conceptual primitives involved in the basic design of
Richard> a con
Richard> Eric Baum wrote:
> I don't think the proofs depend on any special assumptions about
> the nature of learning.
I beg to differ. IIRC the sense of "learning" they require is
induction over example sentences. They exclude the use of real
world knowledge, in spite of the fact that such knowledge
The primitive terms aren't random, just some of the structure is.
Standard English uses Subject-Verb-Object order, while others use
Verb-Subject-Object
or another arrangement; as long as they are known and roughly consistently
used, the actual choice could well be random there and not matter,
but a 'concept' of a dog in
Eric Baum wrote:
Sorry for my delay in responding... too busy to keep up with most
of this, just got some downtime and scanning various messages:
I don't know what you mean by incrementally updateable, but if
you look up the literature on language learning, you will find
that learning vari
Ben Goertzel wrote:
> I don't know what you mean by incrementally updateable,
> but if you look up the literature on language learning, you will find
> that learning various sorts of relatively simple grammars from
> examples, or even if memory serves examples and queries, is NP-hard.
> Try looking for Dana Angluin's
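Eric's point about the hardness of grammar induction can be made concrete with a toy brute-force search for the smallest DFA consistent with labeled example strings (a sketch of mine, not Dana Angluin's actual algorithm; finding the minimum consistent DFA from examples alone is the classic NP-hard formulation):

```python
# Toy illustration of grammar induction from examples: find the smallest
# DFA over {'0','1'} consistent with labeled strings. The candidate space
# grows as k^(2k) * 2^k in the state count k, which is why this kind of
# search blows up without extra structure such as membership queries.
from itertools import product

def accepts(delta, accepting, s):
    """Run the DFA (transition dict, accepting-state set) on s from state 0."""
    state = 0
    for ch in s:
        state = delta[(state, ch)]
    return state in accepting

def smallest_consistent_dfa(examples, max_states=4):
    for k in range(1, max_states + 1):
        keys = [(q, a) for q in range(k) for a in '01']
        for targets in product(range(k), repeat=len(keys)):  # all transition maps
            delta = dict(zip(keys, targets))
            for mask in range(2 ** k):                       # all accepting sets
                accepting = {q for q in range(k) if mask >> q & 1}
                if all(accepts(delta, accepting, s) == label
                       for s, label in examples):
                    return k, delta, accepting
    return None

# Strings labeled by "contains an even number of 1s".
examples = [('', True), ('1', False), ('11', True), ('10', False), ('101', True)]
k, delta, accepting = smallest_consistent_dfa(examples)
print(k)  # 2: a two-state parity machine suffices
```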
Eric Baum wrote:
Matt wrote:
Anyway, my point is that decoding the human genome or natural language is
not as hard as breaking encryption. It cannot be because these systems are
incrementally updatable, unlike ciphers. This allows you to use search
strategies that run in polynomial time.
Search in O(n^2) time with n = 10^9 is much faster than brute-force
cryptanalysis in O(2^n) time with n = 128.
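The arithmetic behind that comparison (a sketch; only n = 10^9 and n = 128 come from the post, the rest is bookkeeping):

```python
# Rough step counts behind Matt's comparison of incremental search
# versus brute-force key search.
incremental_search = (10**9) ** 2   # O(n^2) with n = 10^9 -> 10^18 steps
key_search = 2 ** 128               # O(2^n) with n = 128  -> ~3.4 * 10^38 steps

print(incremental_search)
print(key_search > incremental_search * 10**20)  # over 20 orders of magnitude apart
```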
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Eric Baum <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Thursday, November 9, 2006 12:18:34 PM
Subject: Re: [agi] Natural versus formal AI interface languages
Matt Mahoney wrote:
Protein folding is hard. We can't even plug in a simple formula like H2O and
compute physical properties like density or melting point.
This seems to be a rapidly improving area:
http://tech.groups.yahoo.com/group/transhumantech/message/36865
--
Brian Atkins
Singularity
Eric Baum <[EMAIL PROTECTED]> wrote:
>Matt wrote:
>Changing one bit of the key or plaintext affects every bit of the ciphertext.
>That is simply not true of most encryptions. For example, Enigma.
Matt:
Enigma is laughably weak compared to modern encryption, such as AES, RSA,
SHA-256, ECC, etc.
Fully decoding the human genome is almost impossible. Not only is there the
problem of protein folding, which I think even supercomputers can't fully
solve, but the purpose for the structure of each protein depends on
interaction with the incredibly complex molecular structures inside cells.
Eric Baum wrote:
Eliezer> Eric Baum wrote:
(Why should producing a human-level AI be cheaper than decoding the
genome?)
Eliezer> Because the genome is encrypted even worse than natural
Eliezer> language.
(a) By decoding the genome, I meant merely finding the sequence
(should have been clear in context)
Ben Goertzel <[EMAIL PROTECTED]> wrote:
>I am afraid that it may not be possible to find an initial project that is both
>
>* small
>* clearly a meaningfully large step along the path to AGI
>* of significant practical benefit
I'm afraid you're right. It is especially difficult because there is a
Eric Baum wrote:
(Why should producing a human-level AI be cheaper than decoding the
genome?)
Because the genome is encrypted even worse than natural language.
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
Eric Baum wrote:
> As I and Jef and you appear to agree, extant Intelligence
> works because it exploits structure *of our world*; there is
> and can be (unless P=NP or some such radical and unlikely
> possibility) no such thing as a "General" Intelligence that
> works in all worlds.
I'm go
Eric wrote:
The challenge is to find a methodology
for producing fast enough and frugal enough code, where that
methodology is practicable. For example, as a rough upper bound,
it would be practicable if it required 10,000 programmer years and
1,000,000 PC-years (i.e. a $3Bn budget).
(Why should producing a human-level AI be cheaper than decoding the genome?)
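Eric's budget bound can be sanity-checked with back-of-envelope arithmetic (the per-unit costs here are my assumptions, not from the thread):

```python
# Back-of-envelope check of Eric's "$3Bn budget" upper bound, assuming
# roughly $200k per programmer-year and $1k per PC-year (my figures).
programmer_years = 10_000
pc_years = 1_000_000
budget = programmer_years * 200_000 + pc_years * 1_000
print(budget)  # 3000000000, i.e. $3Bn
```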
Ben> Jef wrote:
>> As I see it, the present key challenge of artificial intelligence
>> is to develop a fast and frugal method of finding fast and frugal
>> methods,
Ben> However, this in itself is not possible. There can be a fast
Ben> method of finding fast and frugal methods, or a frugal method of
Ben> finding fast and
m for recognizing
and compressing those regularities in "the world" faster than the
original blind methods of natural evolution.
- Jef
James and Jef, my apologies for misattributing the question.
There is a phenomenon colloquially called "understanding" that is
displayed by people and at best rarely displayed within limited
domains by extant computer programs. If you want to have any hope of
constructing an AGI, you are goin
Jef wrote:
> Each of these examples is of a physical system responding
> with some degree of effectiveness based on an internal model
> that represents with some degree of fidelity its local
> environment. It's an unnecessary complication, and leads to
> endless discussions of qualia, conscio
Eric Baum wrote:
> James> Jef Allbright <[EMAIL PROTECTED]> wrote: Russell Wallace
> James> wrote:
>
> >> Syntactic ambiguity isn't the problem. The reason computers don't
> >> understand English is nothing to do with syntax, it's because they
> >> don't understand the world.
> >> But the
James: Below should be Jef, but I will respond as well. Orig quotes:
> But the computer still doesn't understand the sentence, because it
> doesn't know what cats, mats and the act of sitting _are_. (The best
> test of such understanding is not language - it's having the
> computer draw an animation o
I actually just stumbled on something, from a totally different work I
was doing, but possibly interesting:
http://simple.wikipedia.org/wiki/Main_Page
An entire Wikipedia, using Simple English, that should be much much
easier to parse than its more complex brother.
James

BillK <[EMAIL PROTECTED]> wrote
>Ogden said that it would take seven years to learn English, seven
>months for Esperanto, and
How much of the Novamente system is meant to be autonomous, and how much
will be responding only from external stimulus such as a question or a task
given externally.
Is it intended after awhile to run "on its own" where it would be up 24
hours a day, exploring potentially some by itself, or more
I don't believe that was the goal or lesson of the
http://en.wikipedia.org/wiki/SHRDLU project. It was mainly centered
around a small test environment (the block world) and being able to
create an interface that would allow the user to speak and be answered
in a natural language. And in that goal it se
Hi,
On 11/6/06, James Ratcliff <[EMAIL PROTECTED]> wrote:
Ben,
I think it would be beneficial, at least to me, to see a list of tasks.
Not as a "defining" measure in any way. But as a list of work items that a
general AGI should be able to complete effectively.
I agree, and I think that thi
On 11/6/06, James Ratcliff wrote:
In some form or another we are going to HAVE to have a natural language
interface, either a translation program that can convert our English to the
machine understandable form, or a simplified form of English that is
trivial for a person to quickly understand
Ben,
I think it would be beneficial, at least to me, to see a list of tasks.
Not as a "defining" measure in any way. But as a list of work items
that a general AGI should be able to complete effectively. I started
on a list, and pulled some information off the net before, but never
completed on
Richard,
The Blocks World (http://hci.stanford.edu/~winograd/shrdlu/) was over 36
years ago, and was a GREAT demonstration of what can be done with
natural language. It handled a wide variety of items, albeit with a very
limited environment. Currently MIT is doing work with robotics that uses th
Richard Loosemore wrote:
> ...
> This is a question directed at th
Richard Loosemore wrote:
...
This is a question directed at this whole thread, about simplifying
language to communicate with an AI system, so we can at least get
something working, and then go from there
This rationale is the very same rationale that drove researchers into
Blocks World
Ben Goertzel wrote:
> I am happy enough with the long-
On 11/4/06, Russell Wallace <[EMAIL PROTECTED]> wrote:
On 11/4/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> I of course don't think that SHRDLU vs. AGISim is a fair comparison.
Agreed. SHRDLU didn't even try to solve the real problems - for the simple
and sufficient reason that it was impossible to make a credible attempt
at such on the hardware of the day
I'll keep this short, just to weigh in a vote - I
completely agree with this. AGI will be measured by what we recognize
as intelligent behavior and the usefulness of that intelligence for
tasks beyond the capabilities of ordinary software. Normal metrics
don't apply.
Russell Wallace wr
I am happy enough with the long-term goal of independent scientific
and mathematical discovery...
And, in the short term, I am happy enough with the goals of carrying
out the (AGISim versions of) the standard tasks used by development
psychologists to study childrens' cognitive behavior...
I don
Another reason for measurements is that it makes your goals concrete. How do you define "general
intelligence"? Turing gave us a well defined goal, but there are some shortcomings. The Turing test is
subjective, time consuming, isn't appropriate for robotics, and really isn't a good goal if i
Ben Goertzel wrote:
>I do not agree that having precise quantitative measures of system
>intelligence is
It does not help that words in SHRDLU are grounded in an artificial world. Its
failure to scale hints that approaches such as AGI-Sim will have similar
problems. You cannot simulate complexity.
I of course don't think that SHRDLU vs. AGISim is a fair comparison.
Among other counterarguments
I think SHRDLU (Blocks World) would have been more interesting if the language
model was learned rather than programmed. There is an important lesson here,
and Winograd knew it: this route is a dead end. Adult English has a complexity
of about 10^9 bits (my estimate). SHRDLU has a complexity
James Ratcliff wrote:
Not necessarily children's language, as they have their own problems and
often use the wrong words and rules of grammar, but a simplified
English, a reduced rule set.
Something like no compound sentences for a start. I believe most
everything can be written without compou
>Here is how I intend to use Lojban++ in teaching Novamente. When
>Novamente is controlling a humanoid agent in the AGISim simulation
>world, the human teacher talks to it about what it is doing. I would
>l
Jef,
Even given a hand created, checked and correct small but comprehensive
Knowledge Representation of the sample world, it is STILL not a trivial
effort to get the sentences from the complicated form of English into
some computer processable format. The cat example you gave is
unfortunately not th
Eliezer S. Yudkowsky wrote:
Pei Wang wrote:
On 11/2/06, Eric Baum <[EMAIL PROTECTED]> wrote:
Moreover, I argue that language is built on top of a heavy inductive
bias to develop a certain conceptual structure, which then renders the
names of concepts highly salient so that they can be readily
learned. (This explains how w
Luke wrote:
It seems to be like this: when you start programming, even though the
syntax is still natural, the language gets really awkward and does not
resemble the way you would express the same thing naturally. For me it
just shows that the real problem is somewhere deeper, in the semantic
rep
Hi.
It's a very small step from Lojban to a programming language, and in
fact Luke Kaiser and I have talked about making a programming language
syntax based on Lojban, using his Speagram program interpreter
framework.
The nice thing about Lojban is that it does have the flexibility to be
used a
On 11/2/06, Eric Baum <[EMAIL PROTECTED]> wrote:
So Pei's comments are in some sense wishes. To be charitable--
maybe I should say beliefs supported by his experience.
But they are not established facts. It remains a possibility,
supported by reasonable evidence,
that language learning may be an
Hi,
I think an interesting goal would be to teach an AGI to write software. If I
understand your explanation, this is the same problem.
Yeah, it's the same problem.
It's a very small step from Lojban to a programming language, and in
fact Luke Kaiser and I have talked about making a program
Yes, teaching an AI in Esperanto would make more sense than teaching
it in English ... but, would not serve the same purpose as teaching it
in Lojban++ and a natural language in paralle
Eliezer> unless P != NP and the concepts are genuinely encrypted. And
I am of course assuming P != NP, which seems to me a safe assumption.
If P = NP, and mind exploits that fact (which I don't believe) then
we are at a serious handicap in producing an AGI till we understand
why P = NP, but it
Hi.
What about using "baby English" instead of an artificial language?
That seems to be good for experiments, but unluckily it does not seem to
have the benefits of real natural language, as there is neither a big body
of text written in baby English nor many people wanting to talk it to a machine
On 11/2/06, Eric Baum <[EMAIL PROTECTED]> wrote:
Pei> (2) A true AGI should have the potential to learn any natural
Pei> language (though not necessarily to the level of native
Pei> speakers).
This embodies an implicit assumption about language which is worth
noting.
It is possible that the nature of natural language is such that humans
could
Russell Wallace wrote:
> Syntactic ambiguity isn't the problem. The reason computers don't
> understand English is nothing to do with syntax, it's because they
> don't understand the world.
> It's easy to parse "The cat sat on the mat" into
>
> sit
> cat
> on
>
On 10/31/06, John Scanlon <[EMAIL PROTECTED]> wrote:
One of the major obstacles to real AI is the belief
that knowledge of a natural language is necessary for
intelligence. A human-level intelligent system should be expected to
have the ability to learn a natural language, but it is not
That's a totally different problem, and considering the massive
knowledge hole currently about how the human brain works, we would have
some major problems in that area, though it is interesting. One other
problem there: what about two way communications? You are proposing to
have the brain talk
Gregory Johnson wrote:
>Provide the AGI with the hardware and software to jack into one or more
>human brains and let the bio-software of the human brain be the language
>interface development tool.
Jacking into the human brain? That is hardly
a shortcut to
Perhaps there is a shortcut to all of this.
Provide the AGI with the hardware and software to jack into one or more human
brains and let the bio-software of the human brain be the language interface development tool.
I think we are already creating some of this hardware.
This also puts AGI in a pos
BillK wrote:
On 11/1/06, Charles D Hixson wrote:
So. Lojban++ might be a good language for humans to communicate to an
AI with, but it would be a lousy language in which to implement that
same AI. But even for this purpose the language needs a "verifier" to
insure that the correct forms are being followed.
Forgot to add there is a large amount of syntactic and word-sense
ambiguity, but there are some programs out there that handle that to a
remarkable extent as well, and I believe can be improved upon. And for
many tasks, I don't see any reason not to have some back and forth
feedback in the loop fo
The AGI really does need to be able to read and write English or
another natural language to be decently useful; people are just NOT
going to learn or be impressed with a machine that spurts out something
incoherent (which they already can do). It is surprising how little
actual semantic ambiguity th
John,
>One of the major obstacles to real AI is the belief
>that knowledge of a natural language is necessary for
>intelligence.
I agree. And it's IMO nearly impossible for AGI to learn/understand NL
when its only info source is NL. We get some extra [meta] data from our
senses when learning NL (whic
John Scanlon wrote:
Ben,
I did read your stuff on Lojban++, and it's the sort of language
I'm talking about. This kind of language lets the computer and the
user meet halfway. The computer can parse the language like any other
computer language, but the terms and constructions are design
Matt Mahoney wrote:
Artificial languages that remove ambiguity like Lojban do not bring us any
closer to solving the AI problem. It is straightforward to convert between
artificial languages and structured knowledge (e.g. first order logic), but it
is still a hard (AI complete) problem to convert between natural an
I know people can learn Lojban, just like they can learn Cycl or LISP. Lets
not repeat these mistakes. This is not training, it is programming a knowledge
base. This is narrow AI.
You seem not to understand the purpose of using Lojban to help teach an AI.
Eliezer wrote:
"Natural" language isn't. Humans have one specific idiosyncratic
built-in grammar, and we might have serious trouble learning to
communicate in anything else - especially if the language was being used
by a mind quite unlike our own.
Well, some humans have learned to communicate
Pei Wang wrote:
Let's don't confuse two statements:
(1) To be able to use a natural language (so as to passing Turing
Test) is not a necessary condition for a system to be intelligent.
(2) A true AGI should have the potential to learn any natural language
(though not necessarily to the level of
Hi,
Which brings up a question -- is it better to use a language based on
term or predicate logic, or one that imitates (is isomorphic to) natural
languages? A formal language imitating a natural language would have the
same kinds of structures that almost all natural languages have: nouns
For comparison, here are some versions of
"I saw the man with the telescope"
in Lojban++ ...
[ http://www.goertzel.org/papers/lojbanplusplus.pdf ]
1)
mi pu see le man sepi'o le telescope
"I saw the man, using the telescope as a tool"
2)
mi pu see le man pe le telescope
"I saw the man who was
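The attachment ambiguity that these two Lojban++ renderings separate explicitly can be shown as two distinct parse structures (a hypothetical toy notation of mine, just to make the ambiguity concrete):

```python
# Two readings of "I saw the man with the telescope", as nested tuples:
# in one the prepositional phrase modifies the verb (instrument), in the
# other it modifies "the man". English leaves the choice ambiguous;
# Lojban++ forces one or the other.
instrument_reading = ('saw', 'I', 'the man', ('with', 'the telescope'))
modifier_reading = ('saw', 'I', ('the man', ('with', 'the telescope')))

print(instrument_reading != modifier_reading)  # True: genuinely distinct parses
```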