Pei Wang's uncertain logic is **not** probabilistic, though it uses
frequency calculations
We have our own probabilistic logic theory called Probabilistic Logic
Networks (PLN), which will be described in a book to be released
toward the end of this year or the start of 2008.
The
Hi,
Well, Jaynes showed that the PI can be derived from another
assumption, right? Namely, that equivalent states of information yield
equivalent probabilities.
This seems to also be dealt with at the end of Cox's book The
Algebra of Probable Inference where he derives the standard entropy
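(For reference, and in my wording rather than Cox's: the "standard entropy" in question is presumably the familiar expression

$$H = -\sum_i p_i \log p_i ,$$

the same functional form that Shannon's axioms single out, up to a constant factor.)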
There is not a clear reason why reasoning and learning must be
unified. Can you elaborate on the advantages of such an approach?
To answer that question I would have to know how you are defining
those terms.
The learning problem in AGI is difficult partly because GOFAI
Yes, you can reduce nearly all commonsense inference to a few rules,
but only if your rules and your knowledge base are not fully
formalized...
Fully formalizing things, as is necessary for software
implementation, makes things substantially more complicated.
Give it a try and see!
into an AGI's mind -- I just think the knowledge in this
latter category is not **sufficient** in itself. So, we can take
a hybrid approach in Novamente.
-- Ben G
On Jan 25, 2007, at 6:58 PM, YKY (Yan King Yin) wrote:
On 1/25/07, Ben Goertzel [EMAIL PROTECTED] wrote:
If there is a major
Begin forwarded message:
From: Damien Broderick [EMAIL PROTECTED]
Date: January 23, 2007 3:37:32 PM EST
To: 'ExI chat list' [EMAIL PROTECTED],
[EMAIL PROTECTED]
Subject: [extropy-chat] 10 Questions for György Buzsáki
Reply-To: ExI chat list [EMAIL PROTECTED]
I do suspect that superhumanly intelligent AIs are intrinsically
uncontrollable by humans...
Ben G
On 12/25/06, Philip Goetz [EMAIL PROTECTED] wrote:
On 12/22/06, Ben Goertzel [EMAIL PROTECTED] wrote:
I don't consider there to be any correct language for stuff like this,
but I believe my use
, Ben Goertzel [EMAIL PROTECTED] wrote:
erased along with it. So, e.g. even though you give up your supergoal
of drinking yourself to death, you may involuntarily retain your
subgoal of drinking (even though you started doing it only out of a
desire to drink yourself to death).
I don't think
model that, and if we do, how close
in any way is it to humanity?
What are the intrinsic motivating factors of a fully-autonomous AGI?
Or is that just too 'alien' for us?
James Ratcliff
Ben Goertzel [EMAIL PROTECTED] wrote:
Another aspect I have had to handle is the different temporal aspects
, and believe the motivational systems (though dang hard) are very
important to a truly autonomous AGI, and the controlling factor in its
behaviour and goal creating ability.
James Ratcliff
Ben Goertzel [EMAIL PROTECTED] wrote:
Initially, the Novamente system's motivations will be
-- please its
I intend to start at a bit higher age level of teen / reduced knowledge
adult,
That is not possible in an approach that, like Novamente, is primarily
experiential-learning-based...
-- Ben
Ratcliff
Ben Goertzel [EMAIL PROTECTED] wrote:
I intend to start at a bit higher age level of teen / reduced knowledge
adult,
That is not possible in an approach that, like Novamente, is primarily
experiential-learning-based...
-- Ben
to something a bit more workable... and I'm young,
so have a bit of time for mistakes.
James Ratcliff
Ben Goertzel [EMAIL PROTECTED] wrote:
On 12/8/06, James Ratcliff wrote:
What are the meta-goal properties defined there?
For example:
-- have as few distinct supergoals as possible
-- keep
The topic of the relation between rationality and goals came up on the
extropy-chat list recently, and I wrote a long post about it, which I
think is also relevant to some recent discussions on this list...
-- Ben
***
SUPERGOALS VERSUS SUBGOALS
Another aspect I have had to handle is the different temporal aspects of
goals/states, like immediate gains vs. short-term and long-term goals, and
how they can coexist. This is difficult to grasp as well.
In Novamente, this is dealt with by having goals explicitly refer to time-scope.
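To make the time-scope idea concrete, here is a hypothetical sketch (illustrative Python only, not actual Novamente code; the Goal fields and the numbers are made up) of goals that carry an explicit time-scope, so that immediate, short-term and long-term goals can coexist and be selected by the window they apply to:

```python
from dataclasses import dataclass

@dataclass
class Goal:
    description: str
    importance: float
    scope_start: float   # seconds from now at which the goal becomes relevant
    scope_end: float     # seconds from now at which the goal stops mattering

def active_goals(goals, t):
    """Goals whose explicit time-scope covers the moment t (seconds from now)."""
    return [g for g in goals if g.scope_start <= t <= g.scope_end]

goals = [
    Goal("answer the teacher's current question", 0.6, 0, 60),
    Goal("finish today's teaching session",       0.8, 0, 8 * 3600),
    Goal("acquire fluent language skills",        1.0, 0, 365 * 24 * 3600),
]

print([g.description for g in active_goals(goals, 30)])         # all three in scope
print([g.description for g in active_goals(goals, 10 * 3600)])  # only the long-range goal
```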
Hi,
It seems to me that discussing AI or human thought in terms of goals and
subgoals is a very narrow-AI approach and destined to fail in general
application.
I think it captures a certain portion of what occurs in the human
mind. Not a large portion, perhaps, but an important portion.
Hi Richard,
Once again, I have to say that this characterization ignores the
distinctions I have been making between goal-stack (GS) systems and
diffuse motivational constraint (DMC) systems. As such, it only
addresses one set of possibilities for how to drive the behavior of an AGI.
And
I believe that the human mind incorporates **both** a set of goal
stacks (mainly useful in deliberative thought), *and* a major role for
diffuse motivational constraints (guiding most mainly-unconscious
thought). I suggest that functional AGI systems will have to do so,
also.
Also, I believe
Pei,
As usual, comparing my views to yours reveals subtle differences in terminology!
I can see now that my language of implicit versus explicit goals is
confusing in a non-Novamente context, and actually even in a Novamente
context. Let me try to rephrase the distinction
IMPLICIT GOAL: a
John,
On 12/5/06, John Scanlon [EMAIL PROTECTED] wrote:
I don't believe that the singularity is near, or that it will even occur. I
am working very hard at developing real artificial general intelligence, but
from what I know, it will not come quickly. It will be slow and
incremental. The
If, on the other hand, all we have is "the present approach to AI", then I
tend to agree with you, John: ludicrous.
Richard Loosemore
IMO it is not sensible to speak of "the present approach to AI".
There are a lot of approaches out there... not an orthodoxy by any means...
-- Ben G
I see a singularity, if it occurs at all, to be at least a hundred years
out.
To use Kurzweil's language, you're not thinking in exponential time ;-)
The artificial intelligence problem is much more difficult
than most people imagine it to be.
Most people have close to zero basis to even
On 12/4/06, Mark Waser [EMAIL PROTECTED] wrote:
Philip Goetz gave an example of an intrusion detection system that learned
information that was not comprehensible to humans. You argued that he
could
have understood it if he tried harder.
No, I gave five separate alternatives most of
Hi,
The only real case where a human couldn't understand the machine's reasoning
in a case like this is where there are so many entangled variables that the
human can't hold them in comprehension -- and I'll continue my contention
that this case is rare enough that it isn't going to be a
We're reaching the point of agreeing to disagree except . . . .
Are you really saying that nearly all of your decisions can't be explained
(by you)?
Well, of course they can be explained by me -- but the acronym for
that sort of explanation is BS
One of Nietzsche's many nice quotes is
Well, of course they can be explained by me -- but the acronym for
that sort of explanation is BS
I take your point with important caveats (that you allude to). Yes, nearly
all decisions are made as reflexes or pattern-matchings on what is
effectively compiled knowledge; however, it is the
And how hard-wired are these goals, and how (simply) do we really hard-wire
them at all?
Our goal of staying alive appears to be biologically preferred or
something like that, but can definitely be overridden by depression / saving
a person in a burning building.
James Ratcliff
Ben Goertzel [EMAIL
But I'm not at all sure how important that difference is . . . . With the
brain being a massively parallel system, there isn't necessarily a huge
advantage in compiling knowledge (I can come up with both advantages and
disadvantages) and I suspect that there are more than enough surprises that
The statement "You cannot turn off hunger or pain" is sensible.
In fact, it's one of the few statements in the English language that
is LITERALLY so. Philosophically, it's more certain than
"I think, therefore I am."
If you maintain your assertion, I'll put you in my killfile, because
we cannot
in such a way that they coexist with the internally created
goals.
I have worked on the rudiments of an AGI system, but am having trouble
defining its internal goal systems.
James Ratcliff
Ben Goertzel [EMAIL PROTECTED] wrote:
Regarding the definition of goals and supergoals, I have made attempts
IMO, humans **can** reprogram their top-level goals, but only with
difficulty. And this is correct: a mind needs to have a certain level
of maturity to really reflect on its own top-level goals, so that it
would be architecturally foolish to build a mind that involved
revision of supergoals at
Matt Mahoney wrote:
My point is that when AGI is built, you will have to trust its answers based
on the correctness of the learning algorithms, and not by examining the
internal data or tracing the reasoning.
Agreed...
I believe this is the fundamental
flaw of all AI systems based on
I think that our propensity for music is pretty damn simple: it's a
side-effect of the general skill-learning machinery that makes us memetic
substrates. Tunes are trajectories in n-space as are the series of motor
signals involved in walking, throwing, hitting, cracking nuts, chipping
stones,
stream analysis in the context of understanding
tonal patterns, but that doesn't mean we can't apply it elsewhere
Indeed, one thing that Mithen argues is precisely that we DO apply it
elsewhere, e.g. in music...
-- Ben
On 12/2/06, William Pearson [EMAIL PROTECTED] wrote:
On 02/12/06, Ben
Would you argue that any of your examples produce good results that are
not comprehensible by humans? I know that you sometimes will argue that the
systems can find patterns that are both the real-world simplest explanation
and still too complex for a human to understand -- but I don't
On 11/29/06, Philip Goetz [EMAIL PROTECTED] wrote:
On 11/29/06, Mark Waser [EMAIL PROTECTED] wrote:
I defy you to show me *any* black-box method that has predictive power
outside the bounds of its training set. All that the black-box methods are
doing is curve-fitting. If you give them
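For what it's worth, the curve-fitting point is easy to illustrate with a toy sketch (Python/NumPy, my own made-up example, not anyone's actual system): a flexible fit can track its training range closely and still go badly wrong just outside it.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 3, 30)
y_train = np.sin(x_train) + rng.normal(0, 0.05, x_train.size)

# a flexible "black box": a degree-7 polynomial fit to the noisy samples
coeffs = np.polyfit(x_train, y_train, deg=7)

inside, outside = 1.5, 6.0
print(np.polyval(coeffs, inside),  np.sin(inside))   # close inside the training range
print(np.polyval(coeffs, outside), np.sin(outside))  # far off outside it
```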
Richard,
This is certainly true, and is why in Novamente we use a goal stack
only as one aspect of cognitive control...
ben
On 11/29/06, Philip Goetz [EMAIL PROTECTED] wrote:
On 11/19/06, Richard Loosemore [EMAIL PROTECTED] wrote:
The goal-stack AI might very well turn out simply not to be
On 11/28/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:
On Monday 27 November 2006 10:35, Ben Goertzel wrote:
Amusingly, one of my projects at the moment is to show that
Novamente's economic attention allocation module can display
Hopfield net type content-addressable-memory behavior
My approach,
admittedly unusual, is to assume I have all the processing power and memory I
need, up to a generous estimate of what the brain provides (a petaword and
100 petaMACs), and then see if I can come up with operations that do what it
does. If not, it would be silly to try and do the
Amusingly, one of my projects at the moment is to show that
Novamente's economic attention allocation module can display
Hopfield net type content-addressable-memory behavior on simple
examples. As a preliminary step to integrating it with other aspects
of Novamente cognition (reasoning,
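For readers who want to play with the Hopfield-net comparison, a minimal content-addressable memory is only a few lines of NumPy. This is a generic textbook-style sketch of the standard Hebbian construction, entirely separate from Novamente's actual attention-allocation code:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product rule; patterns is a (k, n) array of +/-1 vectors."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / patterns.shape[0]

def recall(W, probe, steps=10):
    """Iterate synchronous sign updates; the state settles on a stored pattern."""
    s = probe.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s

patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1,  1, 1,  1, -1, -1, -1, -1]], dtype=float)
W = train_hopfield(patterns)

noisy = patterns[0].copy()
noisy[0] *= -1                 # corrupt one bit of the first stored pattern
print(recall(W, noisy))        # converges back to patterns[0]
```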
Hi,
Therefore, the problem of using an n-space representation for AGI is
not its theoretical possibility (it is possible), but its practical
feasibility. I have no doubt that for many limited application,
n-space representation is the most natural and efficient choice.
However, for a general
I constructed a while ago (mathematically) a detailed mapping from
Novamente Atoms (nodes/links) into n-dimensional vectors. You can
certainly view the state of a Novamente system at a given point in
time as a collection of n-vectors, and the various cognition methods
in Novamente as mappings
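Just to illustrate the flavor of such a view (a hypothetical toy sketch; the Atom names, truth-value fields and embedding scheme here are made up for illustration and are not the actual construction): give each Atom its own direction in n-space, scaled by its truth value, and the system state becomes a collection of n-vectors.

```python
import numpy as np

# a tiny Atom set: two concept nodes and one inheritance link, each with a
# (strength, confidence) truth value -- names and fields are illustrative only
atoms = {
    "cat":                      (0.90, 0.80),
    "animal":                   (0.95, 0.90),
    "Inheritance(cat, animal)": (0.98, 0.70),
}

def atom_vector(index, strength, confidence, dim=16):
    # each Atom gets its own pseudo-random direction, scaled by its truth value
    rng = np.random.default_rng(index)
    direction = rng.standard_normal(dim)
    return strength * confidence * direction / np.linalg.norm(direction)

state = np.array([atom_vector(i, s, c) for i, (s, c) in enumerate(atoms.values())])
print(state.shape)   # (3, 16): the system state viewed as a collection of n-vectors
```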
Hi Richard,
I don't really want to get too sidetracked, but even if Immerman's
analysis were correct, would this make a difference to the way that Eric
was using NP-Hard, though?
No, Immerman's perspective on complexity classes doesn't really affect
your objections...
Firstly, the
The point of using Lojban for proto-AGIs is to enable productive,
interactive conversations with AGIs at a fairly early stage in their
development ...
Of course, mining masses of online English text is a better way for
the system to gain general knowledge about science, politics, human
Oh, I think the representation is quite important. In particular, logic lets
you in for gazillions of inferences that are totally inapposite, with no good
way to say which is better. Logic also has the enormous disadvantage that you
tend to have frozen the terms and levels of abstraction. Actual
Richard,
I know it's peripheral to your main argument, but in this example ...
Suppose that the computational effort that evolution needs to build
different-sized language-understanding mechanisms scales as:
2.5 * (N/7 + 1)^6 planet-years
... where "different-sized" is captured by the value
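Just to put numbers on that (my own arithmetic, taking the formula at face value):

$$2.5\left(\tfrac{7}{7}+1\right)^{6} = 2.5 \cdot 2^{6} = 160, \qquad 2.5\left(\tfrac{14}{7}+1\right)^{6} = 2.5 \cdot 3^{6} \approx 1823$$

planet-years at N = 7 and N = 14 respectively, so doubling N multiplies the evolutionary effort by roughly a factor of eleven.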
Well, in the language I normally use to discuss AI planning, this
would mean that
1) keeping charged is a supergoal
2)
The system knows (via hard-coding or learning) that
finding the recharging socket == keeping charged
(i.e. that the former may be considered a subgoal of the latter)
3)
The
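A minimal sketch of that planning vocabulary (illustrative Python only; the goal names and the 0.95 implication strength are made up, not anyone's actual system):

```python
# supergoals with their intrinsic importance
supergoals = {"keep_charged": 1.0}

# subgoal -> (supergoal, strength of the believed implication), whether the
# link was hard-coded or learned from experience
subgoal_of = {"find_recharging_socket": ("keep_charged", 0.95)}

def derived_importance(subgoal):
    """A subgoal inherits importance from the supergoal it is believed to serve."""
    parent, strength = subgoal_of[subgoal]
    return strength * supergoals[parent]

print(derived_importance("find_recharging_socket"))  # 0.95
```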
According to your classification,
structure (e.g., to build brain models)
behavior (e.g., to simulate human mind)
capability (e.g., to solve hard problems)
function (e.g., to have cognitive facilities)
principle (e.g., to be adaptive and rational)
Novamente is based on the final 3 categories,
Agree, too --- that is why I said you want almost everything. However,
whenever a design decision is made, you usually consider more about
the system's problem-solving ability, and less about the consistency
of its theoretical foundation --- of course, you may argue that it
doesn't conflict with
Still though, you are right that I remain something of a pragmatic
opportunist as an AGI designer, even though I'm a purist re philosophy
of mind. Engineering is an opportunistic pursuit, IMO ;=)
ben
On 11/18/06, Ben Goertzel [EMAIL PROTECTED] wrote:
Agree, too --- that is why I said you want
Rings and Models are appropriated terms, but the mathematicians
involved would never be so stupid as to confuse them with the real
things. Marcus Hutter and yourself are doing precisely that.
I rest my case.
Richard Loosemore
Please, let us avoid explicitly insulting one another, on this
Rings and Models are appropriated terms, but the mathematicians
involved would never be so stupid as to confuse them with the real
things. Marcus Hutter and yourself are doing precisely that.
I rest my case.
Richard Loosemore
IMO these analogies are not fair.
The mathematical notion of a
3. If translating natural language to a structured representation is not
hard, then do it. People have been working on this for 50 years without
success. Doing logical inference is the easy part.
Actually, a more accurate statement would be: doing individual logical
inference steps is the easy
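To make the distinction vivid, a single forward-chaining step really is trivial to write down (a generic toy sketch, not any particular system's rule engine); the hard part is deciding which of the combinatorially many applicable steps is worth taking.

```python
facts = {"raining"}
rules = [("raining", "street_wet"),      # premise -> conclusion
         ("street_wet", "slippery")]

def forward_step(facts, rules):
    """Apply every rule whose premise is already known; one such step is trivial."""
    new = {concl for prem, concl in rules if prem in facts}
    return facts | new

step1 = forward_step(facts, rules)
print(step1)                              # {'raining', 'street_wet'}
print(forward_step(step1, rules))         # adds 'slippery'
```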
Hi,
I would also argue that relying on a large number of weak pieces of evidence
means that Novamente does not *understand* the domain in which it is making a
judgment. It is merely totaling up the weight of evidence.
I would say that intuition often consists, internally, in large part,
of summing
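One simple way to picture summing a large number of weak pieces of evidence (a generic log-odds sketch of my own, not Novamente's actual inference rules):

```python
import math

def combine_evidence(prior, likelihood_ratios):
    """Sum log-odds contributions from many weak, independent cues."""
    log_odds = math.log(prior / (1 - prior))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)     # each weak cue nudges the running total
    odds = math.exp(log_odds)
    return odds / (1 + odds)

# twenty cues, each only mildly favouring the hypothesis (likelihood ratio 1.2)
print(combine_evidence(0.5, [1.2] * 20))   # ~0.97: the weak evidence adds up
```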
I don't know what you mean by incrementally updateable,
but if you look up the literature on language learning, you will find
that learning various sorts of relatively simple grammars from
examples, or (if memory serves) even from examples and queries, is NP-hard.
Try looking for Dana Angluin's
I don't think the proofs depend on any special assumptions about the
nature of learning.
I beg to differ. IIRC the sense of learning they require is induction
over example sentences. They exclude the use of real world knowledge,
in spite of the fact that such knowledge (or at least
My question is: am I wrong that there are still people out there that
buy the symbol-system hypothesis? including the idea that a system based on
the mechanical manipulation of statements in logic, without a foundation of
primary intelligence to support it, can produce thought?
The
So, in the way that you've described this, I totally agree with you. I
guess I was attacking a paper tiger that any real thinking person involved
in AI doesn't bother with anymore.
I'm not sure about that ... Cyc seems to be based on the idea that
logical manipulation of symbols denoting
YKY says:
The Novamente design is modular, in two senses:
1) there is a high-level architecture consisting of a network of
functionally specialized lobes -- a lobe for language processing, a
lobe for visual perception, a lobe for general cognition etc.
2) each lobe contains a set of
Richard,
So it is with redefinitions of the term "understanding" to be synonymous
with a variety of compression. This is an egregious distortion of the
real meaning of the term, and *everything* that follows from that
distortion is just nonsense.
Richard Loosemore.
This discussion of word
2. Ben raised the issue of learning. I think we should divide learning
into 3 parts:
(1) linguistic, e.g. grammar
(2) semantic / concepts
(3) generic / factual.
This leaves out a lot, for instance procedure learning and
metalearning... and also perceptual learning (e.g. object
In Novamente, the synthesis of probabilistic logical inference and
probabilistic evolutionary learning is to be used to carry out all of
the above kinds of learning you mention, and more
Well, then your architecture would be monolithic and not modular. I think
it's a good choice to
Hi,
About
But a simple example is
ate a pepperoni pizza
ate a tuna pizza
ate a VEGAN SUPREME pizza
ate a Mexican pizza
ate a pineapple pizza
I feel this discussion of sentence parsing and interpretation is
taking a somewhat misleading direction, by focusing on examples that
are in fact very
Eric wrote:
The challenge is to find a methodology
for producing fast enough and frugal enough code, where that
methodology is practicable. For example, as a rough upper bound,
it would be practicable if it required 10,000 programmer years and
1,000,000 PC-years (i.e. a $3Bn budget).
(Why should
About
http://www.physorg.com/news82190531.html
Rabinovich and his colleague at the Institute for Nonlinear Science at the
University of California, San Diego, Ramon Huerta, along with Valentin
Afraimovich at the Institute for the Investigation of Optical Communication
at the University of
Richard wrote:
What Rabinovich et al appear to do is to buy some mathematical
tractability by applying their idea to a trivially simple neural model.
That means they know a lot of detail about a model that, if used for
anything realistic (like building an intelligence) would *then* beg so
many
Jef wrote:
As I see it, the present key challenge of artificial intelligence is to
develop a fast and frugal method of finding fast and frugal methods,
However, this in itself is not possible. There can be a fast method
of finding fast and frugal methods, or a frugal method of finding fast
Hi,
On 11/6/06, James Ratcliff [EMAIL PROTECTED] wrote:
Ben,
I think it would be beneficial, at least to me, to see a list of tasks.
Not as a defining measure in any way. But as a list of work items that a
general AGI should be able to complete effectively.
I agree, and I think that this
How much of the Novamente system is meant to be autonomous, and how much
will respond only to external stimulus such as a question or a task
given externally?
Is it intended, after a while, to run on its own, where it would be up 24
hours a day, potentially exploring some things by itself, or more
On 11/4/06, Russell Wallace [EMAIL PROTECTED] wrote:
On 11/4/06, Ben Goertzel [EMAIL PROTECTED] wrote:
I of course don't think that SHRDLU vs. AGISim is a fair comparison.
Agreed. SHRDLU didn't even try to solve the real problems - for the simple
and sufficient reason that it was impossible
It does not help that words in SHRDLU are grounded in an artificial world. Its
failure to scale hints that approaches such as AGI-Sim will have similar
problems. You cannot simulate complexity.
I of course don't think that SHRDLU vs. AGISim is a fair comparison.
Among other
Another reason for measurements is that they make your goals concrete. How do you define general
intelligence? Turing gave us a well-defined goal, but there are some shortcomings. The Turing test is
subjective, time-consuming, isn't appropriate for robotics, and really isn't a good goal if it
I am happy enough with the long-term goal of independent scientific
and mathematical discovery...
And, in the short term, I am happy enough with the goals of carrying
out (AGISim versions of) the standard tasks used by developmental
psychologists to study children's cognitive behavior...
I
From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, October 31, 2006 9:26:15 PM
Subject: Re: Re: [agi] Natural versus formal AI interface languages
Here is how I intend to use Lojban++ in teaching Novamente. When
Novamente is controlling a humanoid agent in the AGISim
Hi,
I think an interesting goal would be to teach an AGI to write software. If I
understand your explanation, this is the same problem.
Yeah, it's the same problem.
It's a very small step from Lojban to a programming language, and in
fact Luke Kaiser and I have talked about making a
Luke wrote:
It seems to be like this: when you start programming, even though the
syntax is still natural, the language gets really awkward and does not
resemble the way you would express the same thing naturally. For me it
just shows that the real problem is somewhere deeper, in the semantic
John --
See
lojban.org
and
http://www.goertzel.org/papers/lojbanplusplus.pdf
-- Ben G
On 10/31/06, John Scanlon [EMAIL PROTECTED] wrote:
One of the major obstacles to real AI is the belief that knowledge of a
natural language is necessary for intelligence. A human-level intelligent
For comparison, here are some versions of
I saw the man with the telescope
in Lojban++ ...
[ http://www.goertzel.org/papers/lojbanplusplus.pdf ]
1)
mi pu see le man sepi'o le telescope
I saw the man, using the telescope as a tool
2)
mi pu see le man pe le telescope
I saw the man who was with
Hi,
Which brings up a question -- is it better to use a language based on
term or predicate logic, or one that imitates (is isomorphic to) natural
languages? A formal language imitating a natural language would have the
same kinds of structures that almost all natural languages have:
This looks exciting...
http://www.pcper.com/article.php?aid=302&type=expert&pid=1
A system Intel is envisioning, with 100 tightly connected cores on a
chip, each with 32MB of local SRAM ...
This kind of hardware, it seems, would enable the implementation of a
powerful Novamente AGI system on a
For anyone in the DC area, the following event may be interesting...
Not directly AGI-relevant, but interesting in that one day virtual
worlds like Second Life may be valuable for AGIs in terms of giving
them a place to play around and interact with humans, without the need for
advanced robotics...
Eliezer wrote:
Natural language isn't. Humans have one specific idiosyncratic
built-in grammar, and we might have serious trouble learning to
communicate in anything else - especially if the language was being used
by a mind quite unlike our own.
Well, some humans have learned to communicate
I know people can learn Lojban, just like they can learn CycL or LISP. Let's
not repeat these mistakes. This is not training; it is programming a knowledge
base. This is narrow AI.
-- Matt Mahoney, [EMAIL PROTECTED]
You seem not to understand the purpose of using Lojban to help teach an
Me, interviewed by R.U. Sirius, on AGI, the Singularity, philosophy of
mind/emotion/immortality and so forth:
http://mondoglobo.net/neofiles/?p=78
Audio only...
-- Ben
Hi,
There is something about the gist of your response that seemed strange
to me, but I think I have put my finger on it: I am proposing a general
*class* of architectures for an AI-with-motivational-system. I am not
saying that this is a specific instance (with all the details nailed
down)
...
-- Ben G
On 10/25/06, Richard Loosemore [EMAIL PROTECTED] wrote:
Ben Goertzel wrote:
Loosemore wrote:
The motivational system of some types of AI (the types you would
classify as tainted by complexity) can be made so reliable that the
likelihood of them becoming unfriendly would
Hi,
On 10/24/06, Pei Wang [EMAIL PROTECTED] wrote:
Hi Ben,
As you know, though I think AGISim is interesting, I'd rather directly
try the real thing. ;-)
I felt that way too once, and so (in 1996) I did directly try the real
thing. Building a mobile robot and experimenting with it was fun,
I used to be of the opinion that doing robotics in simulation was a waste of
time. The simulations were too perfect. Too simplistic compared to the
nitty-gritty of real-world environments. Algorithms developed and optimised
for simulated environments would not translate well (or at all) into
Hi Matt,
Regarding logic-based knowledge representation and language/perceptual/action learning -- I understand the nature of your confusion, because the point you are confused on is exactly the biggest point of confusion for new members of the Novamente AI team.
A very careful distinction needs to
Hi,
For instance, this means that the "cat" concept may well not be
expressed by a single "cat" term, but perhaps by a complex learned (probabilistic) logical predicate.
I don't think it's really useful to discuss representing word meanings without a sufficiently powerful notion of context (which is
the vast majority of critical patterns for really understanding something like love...
-- Ben
On 10/23/06, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
On 10/23/06, Ben Goertzel [EMAIL PROTECTED] wrote: 2) the distinction between
2a) using ungrounded formal symbols to pretend to represent knowledge,
e.g
So my question is: what is needed to extend language models to the level of
compound sentences? More training data? Different training data? A new
theory of language acquisition? More hardware? How much?
What is needed is:
A better training approach, involving presentation of compound
That's right... I need to update the AGIRI site to reflect the replacement of Loglish with Lojban++, which is a better-thought-out proposal along the same conceptual lines. See the document
http://www.goertzel.org/papers/lojbanplusplus.pdf
for information on the Lojban++ project, which I think is a
Hi,
...a glossary of terms pertinent to our discussions, including Ben's suggestion of the terms:
-- perception
-- emergence
-- symbol grounding
-- logic
Of course, those were just four terms selected at random and not intended as terms having any special role in the ontology of mind... I remain psyched
Hi YKY,
I agree with you that we (the human race) are theoretically close to AGI, in the sense that 5 years of concerted effort by 10 of the right people, implementing, testing and teaching the right software code, could bring us to a human-level AGI.
And, I agree that there is no one true path to
YKY made some points about the existence of conflict issues between different AGI theorists ...
So, the way I see it, the question is how to reconcile different ways of doing things so that we can work together and achieve our common goal more effectively.
Since there is no unique solution to the
Brian,
Definitely, the idea is that the Mind Ontology should be completely free to copy and re-use. Perhaps it would be best to put it under a separate URL just for clarity in this regard; I'll think about this...
thx
Ben
On 10/15/06, Brian Atkins [EMAIL PROTECTED] wrote:
I think it sounds like a
Ben Goertzel wrote:
There's a special section in this week's Science called "Modeling the Mind"
that should be of interest to many denizens of this list. Here are the
titles:
Of Bytes and Brains
Peter Stern and John Travis
Science 6 October 2006: 75.
http://www.sciencemag.org/cgi/content
Hi,
My concern about G0 is that the problem of integrating first order logic or
structured symbolic knowledge with language and sensory/motor data is
unsolved, even when augmented with weighted connections to represent
probability and/or confidence (e.g. fuzzy logic, Bayesian systems,
Well, this boils down to unanswered questions of theoretical physics.
According to quantum theory, any finite physical system can be
approximated arbitrarily closely by a quantum Turing machine (see some
old papers of David Deutsch, which prove this). And, a quantum
Turing machine can provably