Matthias,
You seem - correct me - to be going a long way round saying that words are
different from concepts - they're just sound-and-letter labels for concepts,
which have a very different form. And the processing of words/language is
distinct from and relatively simple compared to the
for internal calculations.
These details will not be visible from the linguistic point of view. Just
think about communicating computers and you will know what I mean.
- Matthias
Trent: Oh you just hit my other annoyance.
How does that work?
Mirror neurons
IT TELLS US NOTHING.
Trent,
How do they work? By observing the shape of humans and animals (what
shape they're in), our brain and body automatically *shape our bodies to
mirror their shape*, (put
Trent,
I should have added that our brain and body, by observing the mere
shape/outline of others bodies as in Matisse's Dancers, can tell not only
how to *shape* our own outline, but how to dispose of our *whole body* -
we transpose/translate (or flesh out) a static two-dimensional body
David: Mike, these statements are an *enormous* leap from the actual study of
mirror neurons. It's my hunch that the hypothesis paraphrased above is
generally true, but it is *far* from being fully supported by, or understood
via, the empirical evidence.
[snip] these are all original
Matthias:
I do not agree that body mapping is necessary for general intelligence. But
this would be one of the easiest problems today.
In the area of mapping the body onto another (artificial) body, computers
are already very smart:
See the video on this page:
http://www.image-metrics.com/
Matthias: I think here you can see that automated mapping between different
faces is
possible and the computer can smoothly morph between them. I think the
performance is much better than what human imagination can manage.
http://de.youtube.com/watch?v=nice6NYb_WA
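As a technical aside, the simplest ingredient of such a morph is a per-pixel cross-dissolve between two registered images. The sketch below is mine, not from the linked demo (real systems such as the Image Metrics one also warp matched facial feature points); it shows only the intensity-blending step:

```python
def cross_dissolve(img_a, img_b, t):
    """Blend two equal-sized greyscale images (lists of rows of
    intensities) with weight t in [0, 1]: 0 gives img_a, 1 gives img_b.
    A real face morph would also geometrically warp matched features;
    this hedged sketch covers only the cross-dissolve."""
    return [[(1 - t) * a + t * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

# Halfway between a black and a white 2x2 image is mid-grey.
face_a = [[0, 0], [0, 0]]
face_b = [[100, 100], [100, 100]]
mid = cross_dissolve(face_a, face_b, 0.5)
```

Sweeping t from 0 to 1 produces the smooth transition seen in such videos.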
Matthias,
Perhaps we're
position of two faces had to be
adjusted manually.
- Matthias Heger
Ben: I defy you to give me any neuroscience or cog sci result that cannot be
clearly explained using computable physics.
Ben,
As discussed before, no current computational approach can replicate the
brain's ability to produce a memory in what we can be v. confident are only a
few
Ben: I don't have time to summarize all that stuff I already wrote in emails
either ;-p
Ben,
I asked you to at least *label* what your explanation of scientific
creativity is.. Just a label, Ben. Books that are properly organized and
constructed (and sell), usually do have clearly labelled
Trent: If you disagree with my paraphrasing of your opinion Colin, please
feel free to rebut it *in plain English* so we can better figure out
what the hell you're on about.
Well, I agree that Colin hasn't made clear what he stands for
[neo-]computationally. But perhaps he is doing us a
why don't you start AGI-tech on the forum? enough people have expressed an
interest - simply reconfirm - and start posting there
- Original Message -
From: Derek Zahn
To: agi@v2.listbox.com
Sent: Wednesday, October 15, 2008 9:09 PM
Subject: RE: [agi] META: A possible
Colin:
others such as Hynna and Boahen at Stanford, who have an unusual hardware
neural architecture...(Hynna, K. M. and Boahen, K. 'Thermodynamically
equivalent silicon models of voltage-dependent ion channels', Neural
Computation vol. 19, no. 2, 2007. 327-350.) ...and others ... then things
Will: There is a reason why lots of the planet's biomass has stayed as
bacteria. It does perfectly well like that. It survives.
Too much processing power is a bad thing, it means less for
self-preservation and affecting the world. Balancing them is a tricky
proposition indeed
Interesting thought.
Colin,
Yes you and Rescher are going in a good direction, but you can make it all
simpler still, by being more specific..
We can take it for granted that we're talking here mainly about whether
*incomplete* creative works should be criticised.
If we're talking about scientific theories, then
ends in reply to a message from a few days back ...
Mike Tintner wrote:
***
Be honest - when and where have you ever addressed creative problems?
(Just count how many problems I have raised.)
***
In my 1997 book FROM COMPLEXITY TO CREATIVITY
***
Just
As I understand the way you guys and AI generally work, you create
well-organized spaces which your programs can systematically search for
options. Let's call them nets - which have systematic, well-defined and
orderly-laid-out connections between nodes.
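For concreteness, the kind of "net" search described above can be sketched as a breadth-first search over a dictionary of nodes and their connections (the names and the toy net are illustrative, not taken from any poster's actual system):

```python
from collections import deque

def search_net(net, start, is_goal):
    """Systematically search a 'net' (dict: node -> list of connected
    nodes) breadth-first, returning the first path found to a node
    satisfying is_goal, or None if the option space is exhausted."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if is_goal(node):
            return path
        for neighbour in net.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append(path + [neighbour])
    return None

# A toy net with well-defined, orderly-laid-out connections.
net = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
```

The point of the sketch is the systematicity: every option reachable from the start is enumerated in a fixed order.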
But it seems clear that natural
like my first attempt at defining
programs a long time ago, which failed to distinguish between sequences and
structures of instructions - and was then pounced on by AI-ers.
I guess the obvious follow up question is when your systems search among
options for a response to a situation, they don't search in a systematic way
through spaces of options? They can just start anywhere and end up anywhere in
the system's web of knowledge - as you can in searching the Web
Pei: The NARS solution fits people's intuition
You guys keep talking - perfectly reasonably - about how your logics do or
don't fit your intuition. The logical question is - how - on what
principles - does your intuition work? What ideas do you have about this?
What I should have added is that presumably your intuition must work on
radically different principles to your logics - otherwise you could
incorporate it/them
---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
containing appropriate
nodes/links and importing them) or one can start from a blank slate and let the
whole structure emerge as it will...
Ben G
On Sat, Oct 11, 2008 at 9:38 AM, Mike Tintner [EMAIL PROTECTED] wrote:
Ben,
Some questions then.
You don't have any spaces
Ben,
I think that's all been extremely clear -and I think you've been very good in
all your different roles :). Your efforts have produced a v. good group -and a
great many thanks for them.
And, just to clarify: the fact that I set up this list and pay $12/month for
its hosting, and deal
Terren: autopoiesis. I wonder what your thoughts are about it?
Does anyone have any idea how to translate that biological principle into
building a machine, or software? Do you or anyone else have any idea what it
might entail? The only thing I can think of that comes anywhere close is the
intelligence or beyond.
Terren
--- On Fri, 10/10/08, Mike Tintner [EMAIL PROTECTED] wrote:
From: Mike Tintner [EMAIL PROTECTED]
Subject: Re: [agi] open or closed source for AGI project?
To: agi@v2.listbox.com
Date: Friday, October 10, 2008, 11
Russell : Whoever said you
need to protect ideas is just shilly-shallying you. Ideas have no
market value; anyone capable of taking them up, already has more ideas
of his own than time to implement them.
In AGI, that certainly seems to be true - ideas are crucial, but require
such a massive
Ben,
V. interesting and helpful to get this pretty clearly stated general position.
However:
To put it simply, once an AGI can understand human language we can teach it
stuff.
you don't give any prognostic view about the acquisition of language. Mine is -
in your dreams. Arguably, most
This is fine and interesting, but hasn't anybody yet read Kauffman's
Reinventing the Sacred (published this year)? The entire book is devoted to this
theme and treats it globally, ranging from this kind of emergence in
physics, to emergence/evolution of natural species, to emergence/deliberate
Ben: I didn't read that book but I've read dozens of his papers ... it's cool
stuff but does not convince me that engineering AGI is impossible ... however
when I debated this with Stu F2F I'd say neither of us convinced each other ;-)
...
Ben,
His argument (like mine), is that AGI is
problem for the development of AGI because in my opinion the difference
between a human and a monkey is only fine tuning. And nature needed
millions of years for this fine tuning.
I think there is no way to avoid this problem but this problem is no show
stopper.
- Matthias
Mike Tintner wrote
Matthias (cont),
Alternatively, if you'd like *the* creative ( somewhat mathematical)
problem de nos jours - how about designing a bail-out fund/ mechanism for
either the US or the world, that will actually work? No show-stopper for
your AGI? [How would you apply logic here, Abram?]
Ben,
I am frankly flabbergasted by your response. I have given concrete example
after example of creative, domain-crossing problems, where obviously there is
no domain or frame that can be applied to solving the problem (as does
Kauffman) - and at no point do you engage with any of them - or
Brad: Unfortunately,
as long as the mainstream AGI community continue to hang on to what
should, by now, be a thoroughly-discredited strategy, we will never (or
too late) achieve human-beneficial AGI.
Brad,
Perhaps you could give a single example of what you mean by non-human
intelligence.
John,
Sorry if I missed something, but I can't see any attempt by you to
schematise/ classify emotions as such, e.g.
melancholy, sorrow, bleakness...
joy, exhilaration, euphoria..
(I'd be esp. interested in any attempt to establish a gradation of emotional
terms).
Do you have anything
rather leave the issue there. ..
regards,
Colin Hales
Matthias: I think it is extremely important, that we give an AGI no bias
about
space and time as we seem to have.
Well, I (possibly Ben) have been talking about an entity that is in many
places at once - not in NO place. I have no idea how you would swing that -
other than what we already
mentioned this point because your question has relations to the more
fundamental question whether and which bias we should give AGI for the
representation of space and time.
- Original Message -
From: Mike Tintner [mailto:[EMAIL PROTECTED]
Sent: Saturday, 4 October 2008 14:13
To: agi@v2
Matt: The problem you describe is to reconstruct this image given the highly
filtered and compressed signals that make it through your visual perceptual
system, like when an artist paints a scene from memory. Are you saying that
this process requires a consciousness because it is otherwise not
of creativity - and creative possibilities - in a given
medium. A somewhat formalised maths, since creators usually find ways to
transcend and change their medium - but useful nevertheless. Is such a maths
being pursued?
On Sat, Oct 4, 2008 at 8:37 PM, Mike Tintner [EMAIL PROTECTED] wrote
Colin:
1) Empirical refutation of computationalism...
.. interesting because the implication is that if anyone
doing AGI lifts their finger over a keyboard thinking they can be
directly involved in programming anything to do with the eventual
knowledge of the creature...they have already
their
inferences/ideas within one default context ... for starters...
ben
On Fri, Oct 3, 2008 at 8:43 PM, Mike Tintner [EMAIL PROTECTED] wrote:
The foundation of the human mind and system is that we can only be in one
place at once, and can only be directly, fully conscious of that place. Our
Ben: the reason AGI is so hard has to do with Santa Fe Institute style
complexity ...
Intelligence is not fundamentally grounded in any particular mechanism but
rather in emergent structures
and dynamics that arise in certain complex systems coupled with their
environments
Characterizing what
Ben: analogy is mathematically a matter of finding mappings that match certain
constraints. The traditional AI approach to this would be to search the
constrained space of mappings using some search heuristic. A complex systems
approach is to embed the constraints into a dynamical system and
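The "traditional AI approach" Ben describes - searching a constrained space of mappings - can be sketched in a few lines. Here the search is plain brute-force enumeration rather than a heuristic, and the item names and relations are invented for illustration:

```python
from itertools import permutations

def analogy_mappings(src_items, tgt_items, src_rels, tgt_rels):
    """Enumerate injective mappings from source items to target items,
    keeping those under which every source relation (a, b) maps onto
    some target relation - analogy as constrained mapping search."""
    found = []
    for perm in permutations(tgt_items, len(src_items)):
        mapping = dict(zip(src_items, perm))
        if all((mapping[a], mapping[b]) in tgt_rels for a, b in src_rels):
            found.append(mapping)
    return found

# Toy solar-system/atom analogy, constrained by the 'orbits' relation:
# planet orbits sun, electron orbits nucleus.
maps = analogy_mappings(["planet", "sun"],
                        ["electron", "nucleus", "photon"],
                        {("planet", "sun")},
                        {("electron", "nucleus")})
```

The complex-systems alternative mentioned above would instead let a dynamical system settle into a state satisfying the same constraints.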
Can't resist, Ben..
it is provable that complex systems methods can solve **any** analogy problem,
given appropriate data
Please indicate how your proof applies to the problem of developing an AGI
machine. (I'll allow you to specify as much appropriate data as you like -
any data, of
of us who get the joke ;-)
ben
Ben,
I must assume you are being genuine here - and don't perceive that you have not
at any point illustrated how complexity might lead to the solution of any
given general (domain-crossing) problem of AGI.
Your OpenCog design also does not illustrate how it is to solve problems - how
it
Ben and Stephen,
AFAIK your focus - and the universal focus - in this debate on how and whether
language can be symbolically/logically interpreted - is on *individual words
and sentences.* A natural place to start. But you can't stop there - because
the problems, I suggest, (hard as they
David,
Thanks for reply. Like so many other things, though, working out how we
understand texts is central to understanding GI - and something to be done
*now*. I've just started looking at it, but immediately I can see that what the
mind does - how it jumps around in time and space and POV
it by a surface approach, simply analysing how words
are used in however many million verbally related sentences in texts on the
net.
http://video.google.ca/videoplay?docid=-7933698775159827395&ei=Z1rhSJz7CIvw-QHQyNkC&q=nltk&vt=lf
NLTK video ;O
Ben,
Er, you seem to be confirming my point. Tomasello from Wiki is an early child
development psychologist. I want a model that keeps going to show the stages of
language acquisition from say 7-13, on through the teens, and into the twenties -
that shows at what stages we understand
is for.
Anyway, the point is, understanding passages is not a new field, just
a neglected one.
--Abram
[Comment: Aren't logic and common sense *opposed*?]
Discursive [logical, propositional] Knowledge vs Practical [tacit] Knowledge
http://www.polis.leeds.ac.uk/assets/files/research/working-papers/wp24mcanulla.pdf
a) Knowledge: practical and discursive
Most, if not all understandings of
Thanks, Ben, Dmitri for replies.
Piecing through the notice below with my renowned ignorance, it occurs to me
to ask: does the brain/ cerebellum demonstrate as much general intelligence
and flexibility in its movements as in its consciously directed thinking?
... In its ability to vary muscle coordination patterns (
So can *you* understand credit default swaps?
Here's the scary part of today's testimony everyone seems to have missed:
SEC chairman Chris Cox's statement that the Credit Default Swap (CDS) market
is completely unregulated. It's size? Somewhere in the $50 TRILLION
range.
Ben,
Are CDS significantly complicated then - as an awful lot of professional,
highly intelligent people are claiming?
So can *you* understand credit default swaps?
Yes I can, having a PhD in math and having studied a moderate amount of
mathematical finance ...
Steve:
If I were selling a technique like Buzan then I would agree. However, someone
selling a tool to merge ALL techniques is in a different situation, with a
knowledge engine to sell.
The difference AFAICT is that Buzan had an *idea* - don't organize your
thoughts about a subject in random
Pei: In a broad sense, formal logic is nothing but
domain-independent and justifiable data manipulation schemes. I
haven't seen any argument for why AI cannot be achieved by
implementing that
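Pei's phrase "domain-independent and justifiable data manipulation" can be made concrete with a minimal forward-chaining sketch: the rule application (modus ponens) is identical whatever the symbols happen to denote. The function name and toy rule set are mine:

```python
def forward_chain(facts, rules):
    """Repeatedly apply modus ponens (given A and A -> B, conclude B)
    until no new facts appear. The manipulation scheme knows nothing
    about what 'rain' or 'wet' mean - it is domain-independent."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"rain"}, [("rain", "wet"), ("wet", "slippery")])
```

Whether such schemes suffice for AI is exactly the point under dispute; the sketch only shows what the claim refers to.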
Have you provided a single argument as to how logic *can* achieve AI - or
to be more precise,
Ben: Mike:
(And can you provide an example of a single surprising metaphor or analogy
that have ever been derived logically? Jiri said he could - but didn't.)
It's a bad question -- one could derive surprising metaphors or analogies by
random search, and that wouldn't prove anything
.
MegaHAL is kinda creative and poetic, and he does generate some funky and
surprising metaphors ... but alas he is not an AGI...
-- Ben
Ben, Just to be clear, when I said no argument re how logic will produce
AGI.. I meant, of course, as per the previous posts, ..how logic will
[surprisingly] cross domains etc. That, for me, is the defining characteristic
of AGI. All the rest is narrow AI.
Steve:question: Why bother writing a book, when a program is a comparable
effort that is worth MUCH more?
Well,because when you do just state basic principles - as you constructively
started to do - I think you'll find that people can't even agree about those -
any more than they can agree
[You'll note that arguably the single greatest influence on people's thoughts
about AGI here is Google - basically Google search - and that still means to
most text search. However, video search other kinds of image search [along
with online video broadcasting] are already starting to
Mike, Google has had basically no impact on the AGI thinking of myself or 95%
of the other serious AGI researchers I know..
When did you start thinking about creating an online virtual AGI?.
Mike, Google has had basically no impact on the AGI thinking of myself or 95%
of the other serious AGI researchers I know...
Ben,
Come again. Your thinking about a superAGI, and AGI takeoff, is not TOTALLY
dependent on Google? You would still argue that a superAGI is possible WITHOUT
Steve:
Thanks for wringing my thoughts out. Can you twist a little tighter?!
Steve,
A v. loose practical analogy is mindmaps - it was obviously better for Buzan to
develop a sub-discipline/technique 1st, and a program later.
What you don't understand, I think, in all your reasoning about
Ben: I would not even know about AI had I never encountered paper, yet the
properties of paper have really not been inspirational in my AGI design
efforts...
Your unconscious keeps talking to you. It is precisely paper that mainly shapes
your thinking about AI. Paper has been the defining
Steve: View #2 (mine, stated from your approximate viewpoint) is that simple
programs (like Dr. Eliza) have in the past and will in the future do things
that people aren't good at. This includes tasks that encroach on
intelligence, e.g. modeling complex phenomena and refining designs.
Steve,
In
TITLE: Case-by-case Problem Solving (draft)
AUTHOR: Pei Wang
ABSTRACT: Case-by-case Problem Solving is an approach in which the
system solves the current occurrence of a problem instance by taking
the available knowledge into consideration, under the restriction of
available resources. It is
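As a loose illustration only (this is a generic anytime-search sketch of my own, not Pei Wang's actual CPS or NARS mechanism), solving "under the restriction of available resources" might look like: consider knowledge items one at a time, spend budget per item, and return the best answer found when resources run out.

```python
def solve_case_by_case(problem, knowledge, budget):
    """Hedged sketch of resource-bounded solving: score each available
    knowledge item against the current case, spend one unit of budget
    per item considered, and keep the best item seen so far."""
    best, best_score = None, float("-inf")
    for item in knowledge:
        if budget <= 0:          # restriction of available resources
            break
        budget -= 1
        score = item(problem)    # how well this item fits the case
        if score > best_score:
            best, best_score = item, score
    return best, best_score

# Three toy 'knowledge items'; with budget 2, the third is never tried.
rules = [lambda p: len(p), lambda p: p.count("a"), lambda p: 42]
best, score = solve_case_by_case("banana", rules, budget=2)
```

The answer depends on what knowledge happened to be reachable in time, which is the flavor of "case-by-case" the abstract describes.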
Ben,
I'm only saying that CPS seems to be loosely equivalent to wicked,
ill-structured problem-solving, (the reference to convergent/divergent (or
crystallised vs fluid) etc is merely to point out a common distinction in
psychology between two kinds of intelligence that Pei wasn't aware of in
Ben,
It's hard to resist my interpretation here - that Pei does sound as if he is
being truly non-algorithmic. Just look at the opening abstract sentences.
(However, I have no wish to be pedantic - I'll accept whatever you guys say you
mean).
Case-by-case Problem Solving is an approach in
Also, could you give an example of a computer program, that can be run on a
digital computer, that does not embody an algorithm according to your
definition?
thx
ben
On Thu, Sep 18, 2008 at 9:15 PM, Mike Tintner [EMAIL PROTECTED] wrote:
Ben,
Ah well, then I'm confused
Matt,
Thanks for reference. But it's still somewhat ambiguous. I could somewhat
similarly outline a non-procedure procedure which might include steps like
"Think about the problem", then "Do something, anything - whatever first
comes to mind", and "If that doesn't work, try something else".
But as
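Mike's point can be made sharper by writing the "non-procedure procedure" down: once coded, even "do whatever first comes to mind" is an algorithm. A seeded random generate-and-test loop (all names are mine) does exactly what the three steps describe:

```python
import random

def non_procedure(problem, actions, works, tries=100, seed=0):
    """'Do something, anything - whatever first comes to mind, and if
    that doesn't work, try something else', rendered as code. The irony
    is the point: written down, the non-procedure is still a procedure."""
    rng = random.Random(seed)          # seeded so runs are repeatable
    for _ in range(tries):
        action = rng.choice(actions)   # whatever comes to mind
        result = action(problem)
        if works(result):              # did it work?
            return result
    return None                        # give up after `tries` attempts

result = non_procedure(6, [lambda x: x + 1, lambda x: x * 2],
                       works=lambda r: r == 12)
```

Nondeterminism of choice does not stop the whole thing from being a well-defined program, which is the crux of the ambiguity Mike raises.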
Terren: I send this along because it's a great example of how systems that
self-organize can result in structures and dynamics that are more complex
and efficient than anything we can purposefully design. The applicability
to
the realm of designed intelligence is obvious.
Vlad: . Even if
Jiri and Matt et al,
I'm getting v. confident about the approach I've just barely begun to
outline. Let's call it realistics - the title for a new, foundational
branch of metacognition, that will oversee all forms of information, incl.
esp. language, logic, and maths, and also all image
Matt,
What are you being so tetchy about? The issue is what it takes for any
agent, human or machine, to understand information.
You give me an extremely complicated and ultimately weird test/paper, which
presupposes that machines, humans and everyone else can only exhibit, and be
tested
Matt: How are you going to understand the issues behind programming a
computer for human intelligence if you have never programmed a computer?
Matt,
We simply have a big difference of opinion. I'm saying there is no way a
computer [or agent, period] can understand language if it can't
Jiri,
Quick answer because in rush. Notice your if ... Which programs actually
do understand any *general* concepts of orientation? SHRDLU I will gladly
bet, didn't...and neither do any others.
The v. word orientation indicates the reality that every picture has a
point of view, and refers
to understand 3D without
having a body.
Jiri
a computer understands something and when it just reacts as
if it understands. What is the test? Otherwise, you could always claim
that a machine doesn't understand anything because only humans can do
that.
-- Matt Mahoney, [EMAIL PROTECTED]
Mike Tintner [EMAIL PROTECTED] wrote:
To understand is to REALISE what [on earth, or
in the [real] world] is being talked about.
Matt: Nice dodge. How do you distinguish between when a computer realizes
something and when it just reacts as if it realizes it?
Yeah, I know. Turing dodged
Matt,
To understand/realise is to be distinguished from (I would argue) to
comprehend statements.
The one is to be able to point to the real objects referred to. The other is
merely to be able to offer or find an alternative or dictionary definition
of the statements. A translation. Like the
Matt: Humor detection obviously requires a sophisticated language model and
knowledge of popular culture, current events, and what jokes have been told
before. Since entertainment is a big sector of the economy, an AGI needs all
human knowledge, not just knowledge that is work related.
In
Obviously you have no plans for endowing your computer with a self and a
body that has emotions and can shake with laughter. Or tears.
Actually, many of us do. And this is why your posts are so problematical.
You invent what *we* believe and what we intend to do. And then you
criticize
There is no computer or robot that keeps getting physically excited or
depressed by its computations. (But it would be a good idea).
you don't even realize that laptops (and many other computers -- not to
mention appliances) currently do precisely what you claim that no computer
or robot
Emotional laptops? On 2nd thoughts it's like Thomas the Tank Engine... If
s.o. hasn't done it already, there is big money here. Even bigger than you
earn, if that's humanly possible. Lenny the Laptop...? A really personal
computer. Whatddya think? Ideas? [Shh, darling, Lenny's thinking...]
[n.b. my posts are arriving in a weird order]
Jiri: MT: Without a body, you couldn't understand the joke.
False. Would you also say that without a body, you couldn't understand
3D space ?
Jiri,
You have to offer a reason why something is False :). You're saying it's
that 3D space *can* be
You're saying it's that 3D space *can* be understood without a body?
Er, false.
http://en.wikipedia.org/wiki/SHRDLU
And SHRDLU can generally recognize whether any object is in another
object - whether a doll is in a box or lying between two walls, whether a
box is in another box,
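The kind of geometric test behind such judgements is simple to state for axis-aligned boxes. SHRDLU's blocks world did this in 3-D; a 2-D sketch (coordinates and names invented for illustration) shows the idea:

```python
def inside(inner, outer):
    """Is axis-aligned box `inner` = (x0, y0, x1, y1) wholly contained
    in box `outer`? True only if every edge of `inner` lies within the
    corresponding edge of `outer`."""
    ix0, iy0, ix1, iy1 = inner
    ox0, oy0, ox1, oy1 = outer
    return ox0 <= ix0 and oy0 <= iy0 and ix1 <= ox1 and iy1 <= oy1

# A doll inside a box, and a table elsewhere in the scene.
doll = (2, 2, 3, 3)
box = (1, 1, 5, 5)
table = (4, 0, 9, 1)
```

Whether passing such tests amounts to *understanding* containment is, of course, exactly what this thread disputes.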
Narrow AI : Stereotypical/ Patterned/ Rational
Matt: Suppose you write a program that inputs jokes or cartoons and outputs
whether or not they are funny
AGI : Stereotype-/Pattern-breaking/Creative
What you rebellin' against?
Whatcha got?
Marlon Brando. The Wild One (1953) On screen,
Matt,
Humor is dependent not on inductive reasoning by association, reversed or
otherwise, but on the crossing of whole matrices/ spaces/ scripts .. and
that good old AGI standby, domains. See Koestler esp. for how it's one
version of all creativity -
, let me try the same. All of
my experience about 'Mike Tintner' is symbolic, nothing visual, but it
still makes you real enough to me...
I'm sorry if it sounds rude
Pei,
You attribute to symbols far too broad powers that they simply don't have -
and demonstrably, scientifically, don't have
Jiri: Mike,
If you think your AGI know-how is superior to the know-how of those
who already built testable thinking machines then why don't you try to
build one yourself?
Jiri,
I don't think I know much at all about machines or software and never claim
to. I think I know certain, only certain,
technical subjects - exactly what you're asking
others to do.
Otherwise, you have to admit the folly of trying to compel any such folks
to move from their hard-earned perspectives, if you're not willing to do
that yourself.
Terren
Will,
Yes, humans are manifestly a RADICALLY different machine paradigm - if you
care to stand back and look at the big picture.
Employ a machine of any kind and in general, you know what you're getting -
some glitches (esp. with complex programs) etc sure - but basically, in
general, it
Sorry - the para "Our unreliability .." should have continued:
Our unreliability is the negative flip-side of our positive ability to stop
an activity at any point, incl. the beginning and completely change tack/
course or whole approach, incl. the task itself, and even completely
contradict
DZ: AGI researchers do not think of intelligence as what you think of as a
computer program -- some rigid sequence of logical operations programmed by a
designer to mimic intelligent behavior.
1. Sequence/Structure. The concept I've been using is not that a program is a
sequence of operations
Er sorry - my question is answered in the interesting Slashdot thread
(thanks again):
Past studies have shown how many neurons are involved in a single, simple
memory. Researchers might be able to isolate a few single neurons in the
process of summoning a memory, but that is like saying that
OK, I'll bite: what's nondeterministic programming if not a contradiction?
Again - v. briefly - it's a reality - nondeterministic programming is a
reality, so there's no material, mechanistic, software problem in getting a
machine to decide either way. The only problem is a logical one of