On Thu, Dec 18, 2008 at 9:52 AM, YKY (Yan King Yin)
wrote:
> How about funding from academia -- would that be significant? I mean,
> can I expect to get research grants right after I get a PhD?
Depends how much time your thesis supervisor has gotten you writing
grant applications during your thi
Is there some reason why so many people on this list can't quote?
I guess I should just be thankful that you didn't top post.
Trent
---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
On Mon, Dec 1, 2008 at 11:19 AM, Ed Porter <[EMAIL PROTECTED]> wrote:
> You said "QUANTUM THEORY REALLY HAS NOTHING DIRECTLY TO DO WITH
> UNCOMPUTABILITY."
Please don't quote people using this style, it hurts my eyes.
> But quantum theory does appear to be directly related to limits of the
> comp
On Thu, Nov 27, 2008 at 1:43 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> "Intelligence" that is rationality without imagination, symbol manipulation
> without image manipulation, basically paper-based rather than screen-based
> (or "consciousness"-based), isn't intelligence at all.
Although thi
On Wed, Nov 26, 2008 at 8:51 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> I am not aware of any published papers proposing pure RSI (without input) as
> a path to AGI. But in 2002 there were experiments with AI boxing to test the
> feasibility of detecting and containing unfriendly AI, discusse
On Wed, Nov 26, 2008 at 12:30 AM, Russell Wallace
<[EMAIL PROTECTED]> wrote:
> It certainly wasn't a strawman as of a couple of years ago; I've had
> arguments with people who seemed to seriously believe in the
> possibility of creating AI in a sealed box in someone's basement
> without any feedbac
On Tue, Nov 25, 2008 at 11:31 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> One of the problems in defining RSI in a mathematically rigorous way is
> coming up with a definition that is also useful. If a system has input, then
> there is really no definition that distinguishes self improvement fr
I read the paper.
Although I see what you're trying to achieve in this paper, I think
your conclusions are far from being, well, conclusive. You've taken a
couple of terms that are thrown around the AI/Singularity community,
assigned an arbitrary mathematical definition of your own devising,
th
On Sat, Nov 22, 2008 at 9:18 AM, Ed Porter <[EMAIL PROTECTED]> wrote:
> It may well have great potential for the early stages of the transhumanist
> transformation.
Yup. Long way to go before Neuromancer.
Trent
On Fri, Nov 21, 2008 at 11:02 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Since such luminaries as Jerry Fodor have said much the same thing, I think
> I stand in fairly solid company.
Wow, you said Fodor without being critical of his work. Is that legal?
Trent
On Wed, Nov 19, 2008 at 6:20 PM, Jiri Jelinek <[EMAIL PROTECTED]> wrote:
>>Trent Waddington wrote:
>>Apparently, it was Einstein who said that if you can't explain it to
>>your grandmother then you don't understand it.
>
> That was Richard Feynman
When
On Wed, Nov 19, 2008 at 9:29 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> Clearly, this can be done, and has largely been done already ... though
> cutting and pasting or summarizing the relevant literature in emails would
> not be a productive use of time
Apparently, it was Einstein who said that i
On Tue, Nov 18, 2008 at 8:38 PM, Bob Mottram <[EMAIL PROTECTED]> wrote:
> I think most people have at least a few beliefs which cannot be strictly
> justified rationally
You would think that. :)
Trent
On Tue, Nov 18, 2008 at 4:07 PM, Colin Hales
<[EMAIL PROTECTED]> wrote:
> I'd like to dispel all such delusion in this place so that neurally inspired
> AGI gets discussed accurately, even if your intent is to "explain
> P-consciousness away"... know exactly what you are explaining away and
> exact
On Tue, Nov 18, 2008 at 2:50 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> Intelligence was
> clearly at first *distributed* through a proto-nervous system throughout the
> body. Watch a sea anemone wait and then grab, and then devour a fish that
> approaches it and you will be convinced of that. T
On Tue, Nov 18, 2008 at 10:21 AM, Ed Porter <[EMAIL PROTECTED]> wrote:
> I am talking about the type of awareness that we humans have when we say we
> are "conscious" of something.
You must talk to different humans to me. I've not had anyone use the
word "conscious" around me in decades.. and usu
On Tue, Nov 18, 2008 at 9:03 AM, Ed Porter <[EMAIL PROTECTED]> wrote:
> I think a good enough definition to get started with is that which we humans
> feel our minds are directly aware of, including awareness of senses,
> emotions, perceptions, and thoughts. (This would include much of what
> Rich
On Tue, Nov 18, 2008 at 7:44 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> I mean that people are free to decide if others feel pain. For example, a
> scientist may decide that a mouse does not feel pain when it is stuck in the
> eye with a needle (the standard way to draw blood) even though it s
Richard,
After reading your paper and contemplating the implications, I
believe you have done a good job at describing the intuitive notion of
"consciousness" that many lay-people use the word to refer to. I
don't think your explanation is fleshed out enough for those
lay-people, but its certai
On Mon, Nov 17, 2008 at 10:47 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> I will not be replying to any further messages from you because you are
> wasting my time.
Welcome to the Internet.
Trent
On Sat, Nov 15, 2008 at 6:42 PM, Charles Hixson
<[EMAIL PROTECTED]> wrote:
> To an extent I agree with you. I have in the past argued that a thermostat
> is minimally conscious. But please note the *minimally*.
I invite you then to consider the horrors being inflicted upon my CPU
by Microsoft so
As I believe the "is that consciousness?" debate could go on forever, I
think I should make an effort here to save this thread.
Setting aside the objections of vegetarians and animal lovers, many
hard-nosed scientists decided long ago that jamming things into the
brains of monkeys and the like is j
On Wed, Nov 12, 2008 at 8:58 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> As I explained, animals that have no concept of death have nevertheless
> evolved to fear most of the things that can kill them. Humans have learned to
> associate these things with death, and invented the concept of consc
On Wed, Nov 5, 2008 at 9:31 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> As a second example, the video game Grand Theft Auto allows you to have
> simulated sex with prostitutes and then beat them to death to get your money
> back. While playing, I declined to do so, even though it was irrationa
On Mon, Nov 3, 2008 at 1:56 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> In terms of MMOs, I suppose you could think of Selmer's approach as allowing
> "scripting in a highly customized variant of Prolog" ... which might not be
> a bad
> thing, but is different from creating learning systems..
On Mon, Nov 3, 2008 at 4:50 PM, Steve Richfield
<[EMAIL PROTECTED]> wrote:
> Taking off my AGI hat and putting on my Simulated Christian hat for a
> moment...
Must you?
Trent
On Mon, Nov 3, 2008 at 1:22 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> So, yes, his stuff is not ELIZA-like, it's based on a fairly sophisticated
> crisp-logic-theorem-prover back end, and a well-thought-out cognitive
> architecture.
>
From what I saw in the presentation, it looks like this is
On Mon, Nov 3, 2008 at 7:50 AM, Bob Mottram <[EMAIL PROTECTED]> wrote:
> http://kryten.mm.rpi.edu/PRES/SYNCHARIBM0807/sb_ka_etal_cogrobustsynchar_082107v1.mov
Is it just me or is that mov broken?
The slides don't update, the audio is clipping, etc.
Interesting that they're using Piaget tasks in
On Mon, Nov 3, 2008 at 6:56 AM, Mark Waser <[EMAIL PROTECTED]> wrote:
> Is this sarcasm, irony, or are you that unaware of current popular culture
> (i.e. Terminator Chronicles on TV, a new Terminator movie in the works, "I,
> Robot", etc.)?
The quote is from the early 80s.. pre-Terminator hysteri
On Mon, Nov 3, 2008 at 7:17 AM, Nathan Cook <[EMAIL PROTECTED]> wrote:
> This article (http://www.sciam.com/article.cfm?id=defining-evil) about a
> chatbot programmed to have an 'evil' intentionality, from Scientific
> American, may be of some interest to this list. Reading the researcher's
> perso
On Wed, Oct 29, 2008 at 11:29 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> Trent,
>
> A comment in my role as list administrator:
> Let's keep the discussion on the level of ideas not people, please.
>
> No ad hominem attacks such as "You're a gas bag", etc.
If he's free to talk about virtual c
On Wed, Oct 29, 2008 at 11:11 PM, Benjamin Johnston
<[EMAIL PROTECTED]> wrote:
> Your last two emails to YKY were rude and unhelpful. If you felt a burning
> desire to express yourself rudely, you could have done so by emailing him
> privately.
I'm publicly telling him to piss off. I *could* have
On Wed, Oct 29, 2008 at 10:13 PM, YKY (Yan King Yin)
<[EMAIL PROTECTED]> wrote:
> I don't recall hearing an argument from you. All your replies to me
> are rather rude one liners.
As opposed to everyone else, who either doesn't reply to you or humors you.
Get over yourself.
Trent
On Wed, Oct 29, 2008 at 4:04 PM, YKY (Yan King Yin)
<[EMAIL PROTECTED]> wrote:
> Last time Ben's argument was that the virtual credit method confuses
> for-profit and charity emotions in people. At that time it sounded
> convincing, but after some thinking I realized that it is actually
> complete
On Fri, Oct 24, 2008 at 1:04 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> We've been over this one several times in the past (perhaps you haven't been
> here). Blind people can "see" - they can draw the shapes of objects. They
> create their visual shapes out of touch. Touch comes prior to vision
On Fri, Oct 24, 2008 at 10:38 AM, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:
> I think humans represent chess by a huge number of *visual* patterns.
http://www.eyeway.org/inform/sp-chess.htm
Trent
On Fri, Oct 24, 2008 at 8:48 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> I suspect that's a half-truth...
Well as a somewhat good chess instructor myself, I have to say I
completely agree with it. People who play well against computers
rarely rank above first time players.. in fact, most of the
On Fri, Oct 24, 2008 at 8:41 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> Yes ... at the moment the styles of human and computer chess players are
> different enough that doing well against computer players does not imply
> doing nearly equally well against human players ... though it certainly
>
On Thu, Oct 23, 2008 at 6:11 PM, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:
> I am sure that everyone who learns chess by playing against chess computers
> and is able to learn good chess playing (which is not sure as also not
> everyone can learn to be a good mathematician) will be able to be a
On Thu, Oct 23, 2008 at 3:19 PM, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:
> I do not think that it is essential for the quality of my chess who had
> taught me to play chess.
> I could have learned the rules from a book alone.
> Of course these rules are written in a language. But this is not
On Thu, Oct 23, 2008 at 11:23 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> So how does yet another formal language processing system help us understand
> natural language? This route has been a dead end for 50 years, in spite of
> the ability to always make some initial progress before getting stu
On Thu, Oct 23, 2008 at 8:39 AM, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> If you consider programming an AI social activity, you very
> unnaturally generalized this term, confusing other people. Chess
> programs do learn (certainly some of them, and I guess most of them),
> not everything is har
On Wed, Oct 22, 2008 at 8:24 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> Current AIs learn chess without engaging in social activities ;-).
> And chess might be a good drosophila for AI, if it's treated as such (
> http://www-formal.stanford.edu/jmc/chess.html ).
> This was uncalled for.
No, t
On Wed, Oct 22, 2008 at 6:23 PM, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:
> I see no argument in your text against my main argumentation, that an AGI
> should be able to learn chess from playing chess alone. This I call straw
> man replies.
No-one can learn chess from playing chess alone.
Ch
On Wed, Oct 22, 2008 at 3:20 PM, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:
> It seems to me that many people think that embodiment is very important for
> AGI.
I'm not one of these people, but I at least learn what their
arguments are. You seem to have made up an argument which you've then
knocke
On Wed, Oct 22, 2008 at 11:21 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> Personally my view is as follows. Science does not need to intuitively
> explain all
> aspects of our experience: what it has to do is make predictions about
> finite sets of finite-precision observations, based on previou
On Sat, Oct 18, 2008 at 2:39 PM, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:
> a certain degree (mirror neurons).
Oh you just hit my other annoyance.
"How does that work?"
"Mirror neurons"
IT TELLS US NOTHING.
Trent
On Fri, Oct 17, 2008 at 12:32 PM, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:
> In my opinion language itself is no real domain for intelligence at all.
> Language is just a communication protocol. You have patterns of a certain
> domain in your brain you have to translate your internal pattern
>
On Fri, Oct 17, 2008 at 11:00 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> I don't claim that Ben's OpenCog design is flawed or that it could not
> produce a "smarter than human" artificial scientist. I do claim that this
> step would not launch a singularity. You cannot produce a seed AI.
It's
On Fri, Oct 17, 2008 at 8:17 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> Likewise, writing software has to be understood in terms of natural language
> learning and modeling. A programming language is a compromise between what
> humans can understand and what machines can understand. Humans lea
On Thu, Oct 16, 2008 at 2:05 PM, charles griffiths
<[EMAIL PROTECTED]> wrote:
> You have a point, but how would you propose giving specifications to the
> AGI-programmer? Teach it English? Always have it modify working programs with
> the same objectives (e.g., reduce runtime/memory, avoid crashi
On Thu, Oct 16, 2008 at 12:50 PM, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:
> The reasons:
> 1. The domain is well understood.
> 2. The domain has regularities. Therefore a highly intelligent algorithm has a
> chance to outperform less intelligent algorithms
> 3. The domain can be modeled easily
On Wed, Oct 15, 2008 at 9:59 PM, <[EMAIL PROTECTED]> wrote:
> Also, from what I've seen, it's not a position that I think
> I've ever seen defended in any convincing way, and I kind of suspect it
> can't be. Indeed, it sets off my crank-alert.
Yes, thank you.
If I can summarize Colin's opinion,
On Wed, Oct 15, 2008 at 4:48 PM, Colin Hales
<[EMAIL PROTECTED]> wrote:
> you have to be exposed directly to all the actual novelty in the natural
> world, not the novelty
> recognised by a model of what novelty is. Consciousness (P-consciousness and
> specifically and importantly visual P-conscio
http://www.cs.utoronto.ca/~ilya/pubs/2007/inf_deep_net_utml.pdf
Enjoy.
Trent
On Mon, Oct 6, 2008 at 10:03 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> Arguably, for instance, camera+lidar gives enough data for reconstruction of
> the visual scene ... note that lidar gives more accurate 3D depth data
> than stereopsis...
Is that even true anymore? I thought the big re
On Wed, Oct 1, 2008 at 8:03 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> Your OpenCog design also does not illustrate how it is to solve problems -
> how it is, for example, to solve the problems of concept, especially
> speculative concept, formation.
http://www.opencog.org/wiki/OpenCogPrime:Wi
On Sun, Sep 28, 2008 at 3:22 PM, Eric Burton <[EMAIL PROTECTED]> wrote:
> http://www.jargon.net/jargonfile/h/HelenKellermode.html
>
> Thought that was funny, goodbye :)
Is there an entry for Anne Frank?
Trent
On Wed, Sep 24, 2008 at 1:55 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> The financial world has not yet gone through the process of
> agreeing on how to value these financial instruments in a global economic
> regime like this one
Agreeing on prices eh? That sounds just great :)
Trent
On Tue, Sep 23, 2008 at 7:57 AM, Eric Burton <[EMAIL PROTECTED]> wrote:
> Are Geoffrey Hinton's neural nets available as a library somewhere?
> I'd like to try them myself if possible. What I'm doing now closely
> approximates character recognition... also I wonder if they would
> evolve richer beh
On Mon, Sep 22, 2008 at 8:29 AM, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> How do you estimate your confidence in this assertion that developing
> AGI (singularity capable) requires this insane effort (odds of the bet
> you'd take for it)? This is an easily falsifiable statement, if a
> small gro
On Sat, Sep 20, 2008 at 4:37 PM, Brad Paulsen <[EMAIL PROTECTED]> wrote:
> Oh, OK, so I added the stuff in the parentheses. Sue me.
Hehe, indeed. Although I'm sure Powerset has some nice little
relationship links between words, I'm a little skeptical about the
claim to "meaning". I don't mean t
On Sat, Sep 20, 2008 at 8:46 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> But if you can learn these types of patterns then with no additional effort
> you can learn patterns that directly solve the problem...
This kind of reminds me of the "people think in their natural
language" theory that St
On Fri, Sep 19, 2008 at 11:34 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> So perhaps you could name some applications of AGI that don't fall into the
> categories of (1) doing work or (2) augmenting your brain?
Perhaps you could list some uses of a computer that don't fall into
the category of
On Fri, Sep 19, 2008 at 7:54 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> Perhaps there are some applications I haven't thought of?
Bahahaha.. Gee, ya think?
Trent
On Fri, Sep 19, 2008 at 6:57 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> general intelligence at the human level
I hear you say these words a lot. I think, by using the word "level",
you're trying to say something different to "general intelligence just
like humans have" but I'm not sure everyo
On Fri, Sep 19, 2008 at 7:30 AM, David Hart <[EMAIL PROTECTED]> wrote:
> Take the hypothetical case of R. Marketroid, whose hardware is on the books
> as an asset at ACME Marketing LLC and whose programming has been tailored by
> ACME to suit their needs. Unbeknownst to ACME, RM has decided to writ
On Fri, Sep 19, 2008 at 3:36 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> Lets distinguish between the two major goals of AGI. The first is to automate
> the economy. The second is to become immortal through uploading.
Umm, whose goals are these? Who said they are "the [..] goals of
AGI"? I'm
On Thu, Sep 18, 2008 at 8:08 PM, David Hart <[EMAIL PROTECTED]> wrote:
> Original works produced by software as a tool where a human operator is
> involved at some stage is a different case from original works produced by
> software exclusively and entirely under its own direction. The latter has n