[agi] Artificial humor

2008-09-09 Thread Matt Mahoney
A model of artificial humor: a machine that tells jokes, or at least one that inputs jokes and outputs whether or not they are funny. Identify associations of the form (A ~ B) and (B ~ C) in the audience's language model where (A ~ C) is believed to be false or unlikely through other associations. Test whe
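Matt's proposal can be sketched as a toy program. Everything below -- the association table, the thresholds, the `is_funny` rule -- is a hypothetical illustration of the (A ~ B), (B ~ C), weak (A ~ C) pattern he describes, not code from the thread.

```python
# Toy sketch of the incongruity test: a "joke" links A to C through
# strong intermediate associations even though A ~ C itself is unlikely.

# Hypothetical association strengths in the audience's language model.
ASSOC = {
    ("lawyer", "shark"): 0.8,                  # A ~ B: common metaphor
    ("shark", "professional courtesy"): 0.7,   # B ~ C: punchline link
    ("lawyer", "professional courtesy"): 0.1,  # A ~ C: believed unlikely
}

def strength(a, b):
    """Look up an association in either direction; 0.0 if unknown."""
    return max(ASSOC.get((a, b), 0.0), ASSOC.get((b, a), 0.0))

def is_funny(a, b, c, link=0.5, direct=0.2):
    """Funny if A~B and B~C are strong but the direct A~C link is weak."""
    return (strength(a, b) >= link and
            strength(b, c) >= link and
            strength(a, c) < direct)

print(is_funny("lawyer", "shark", "professional courtesy"))  # True
```

The thresholds are arbitrary; a real model would derive them from a learned language model rather than a hand-built table.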

Re: [agi] Artificial humor

2008-09-09 Thread Mike Tintner
Matt, Humor is dependent not on inductive reasoning by association, reversed or otherwise, but on the crossing of whole matrices/spaces/scripts ... and that good old AGI standby, domains. See Koestler esp. for how it's one version of all creativity - http://www.casbs.org/~turner/art/deacon_

Re: [agi] Artificial humor

2008-09-09 Thread Mike Tintner
Here you go - should be dead simple to analyse the formula - and produce a program :) http://www.energyquest.ca.gov/games/jokes/light_bulb.html How many software engineers does it take to change a light bulb? Two. One always leaves in the middle of the project. -

Re: [agi] Artificial humor

2008-09-10 Thread Mike Tintner
Matt: Humor detection obviously requires a sophisticated language model and knowledge of popular culture, current events, and what jokes have been told before. Since entertainment is a big sector of the economy, an AGI needs all human knowledge, not just knowledge that is work related. In many

Re: [agi] Artificial humor

2008-09-10 Thread Mark Waser
your total fabrications (a.k.a. mental masturbation). - Original Message - From: "Mike Tintner" <[EMAIL PROTECTED]> To: Sent: Wednesday, September 10, 2008 7:18 AM Subject: Re: [agi] Artificial humor Matt: Humor detection obviously requires a sophisticated language mode

Re: [agi] Artificial humor

2008-09-10 Thread Mike Tintner
Obviously you have no plans for endowing your computer with a self and a body, that has emotions and can shake with laughter. Or tears. Actually, many of us do. And this is why your posts are so problematical. You invent what *we* believe and what we intend to do. And then you criticize your

Re: [agi] Artificial humor

2008-09-10 Thread Mark Waser
Tomcat and arguing that the wings don't move right for a bird and, besides, it's too unstable for a human to fly (unassisted :-). Read the papers in the first link and *maybe* we can have a useful conversation . . . . - Original Message - From: "Mike Tintner" <[EMAIL P

Re: [agi] Artificial humor

2008-09-10 Thread Mike Tintner
t recognize it. You're looking at the blueprints of F-14 Tomcat and arguing that the wings don't move right for a bird and, besides, it's too unstable for a human to fly (unassisted :-). Read the papers in the first link and *maybe* we can have a useful conversation . . . . -

Re: [agi] Artificial humor

2008-09-10 Thread Mark Waser
l Message - From: "Mike Tintner" <[EMAIL PROTECTED]> To: Sent: Wednesday, September 10, 2008 10:18 AM Subject: Re: [agi] Artificial humor 1.Autonomic [disembodied] computing" is obviously radically different from having a body with a sympathetically controlled engine are

Re: [agi] Artificial humor

2008-09-10 Thread Mike Tintner
There is no computer or robot that keeps getting physically excited or depressed by its computations. (But it would be a good idea). you don't even realize that laptops (and many other computers -- not to mention appliances) currently do precisely what you claim that no computer or robot does.

Re: [agi] Artificial humor

2008-09-10 Thread Mark Waser
MAIL PROTECTED]> To: Sent: Wednesday, September 10, 2008 12:31 PM Subject: Re: [agi] Artificial humor There is no computer or robot that keeps getting physically excited or depressed by its computations. (But it would be a good idea). you don't even realize that laptops (and many

Re: [agi] Artificial humor

2008-09-10 Thread Matt Mahoney
--- On Wed, 9/10/08, Mike Tintner <[EMAIL PROTECTED]> wrote: > 4.To have a sense of humour, as I more or less indicated, > you have to be > able to identify with the "funny guy" making the > error - and that is an > *embodied* identification. The humour that gets the > biggest, most physical >

Re: [agi] Artificial humor

2008-09-10 Thread Eric Burton
f the incompetent). > > You seem more intent on winning an argument than learning or even honestly > addressing the points that you yourself raised. > > I'll let you go back to your fantasies of being smarter than the rest of us > now. > > - Original Message - &

Re: [agi] Artificial humor

2008-09-10 Thread Eric Burton
I've seen humour modelled as a form of mental dissonance, when an expectation is defied, especially a grave one. It may arise, then, as a higher-order recognition of bizarreness in the overall state of the mind at that point. Humour seems to me to be somehow fundamental to intelligence, rather than

Re: [agi] Artificial humor

2008-09-10 Thread Eric Burton
Here is an example I recall. A vine crosses your path and you think there is a snake on your foot. Then you realize the nature of the vine but the systemic effects of snake fear do not immediately subside. The result is calming laughter. Perhaps, then, it's an evolved compensation mechanism for bio

Re: [agi] Artificial humor

2008-09-10 Thread Mike Tintner
Matt, Yes embodiment is essential to the unconscious brain's recognizing the joke in the first place. If the joke is, say, about a guy getting his cock caught in a zipper, it's your embodied identification with him, that has you doubling up and clutching your guts with laughter. Without a bod

Re: [agi] Artificial humor

2008-09-10 Thread Jiri Jelinek
On Wed, Sep 10, 2008 at 2:39 PM, Mike Tintner <[EMAIL PROTECTED]> wrote: >Without a body, you couldn't understand the joke. False. Would you also say that without a body, you couldn't understand 3D space ? BTW it's kind of sad that people find it funny when others get hurt. I wonder what are the

Re: [agi] Artificial humor

2008-09-10 Thread Matt Mahoney
I think artificial humor has gotten little attention because humor (along with art and emotion) is mostly a right-brain activity, while science, math, and language are mostly left-brained. It should be no surprise that since most AI researchers are left-brained, their interest is in studying prob

Re: [agi] Artificial humor

2008-09-10 Thread John LaMuth
gnored John LaMuth www.ethicalvalues.com - Original Message - From: "Matt Mahoney" <[EMAIL PROTECTED]> To: Sent: Wednesday, September 10, 2008 1:53 PM Subject: Re: [agi] Artificial humor I think artificial humor has gotten little attention because humor (along with

Re: [agi] Artificial humor

2008-09-10 Thread Russell Wallace
The most plausible explanation I've heard is that humor evolved as a social weapon for use by a group of low status individuals against a high status individual. This explains why laughter is involuntarily contagious, why it mostly occurs in conversation, why children like watching Tom and Jerry an

Re: [agi] Artificial humor

2008-09-11 Thread Samantha Atkins
On Sep 10, 2008, at 12:29 PM, Jiri Jelinek wrote: On Wed, Sep 10, 2008 at 2:39 PM, Mike Tintner <[EMAIL PROTECTED] > wrote: Without a body, you couldn't understand the joke. False. Would you also say that without a body, you couldn't understand 3D space ? It depends on what is meant by, an

Re: [agi] Artificial humor

2008-09-11 Thread Valentina Poletti
I think it's the surprise that makes you laugh actually, not physical pain in other people. I find myself laughing at my own mistakes often - not because they hurt (in fact if they did hurt they wouldn't be funny) but because I get surprised by them. Valentina On 9/10/08, Jiri Jelinek <[EMAIL PRO

Re: [agi] Artificial humor

2008-09-11 Thread Jiri Jelinek
Samantha, & Mike, >> Would you also say that without a body, you couldn't understand >> 3D space ? > > It depends on what is meant by, and the value of, "understand 3D space". > If the intelligence needs to navigate or work with 3D space or even > understand intelligence whose very concepts are fi

Re: [agi] Artificial humor

2008-09-11 Thread BillK
On Thu, Sep 11, 2008 at 2:28 PM, Jiri Jelinek wrote: > If you talk to a program about changing 3D scene and the program then > correctly answers questions about [basic] spatial relationships > between the objects then I would say it understands 3D. Of course the > program needs to work with a queri

Re: [agi] Artificial humor

2008-09-11 Thread Mike Tintner
Jiri, Quick answer because in rush. Notice your "if" ... Which programs actually do understand any *general* concepts of orientation? SHRDLU I will gladly bet, didn't...and neither do any others. The v. word "orientation" indicates the reality that every picture has a point of view, and refe

RE: [agi] Artificial humor

2008-09-11 Thread John G. Rose
> From: John LaMuth [mailto:[EMAIL PROTECTED] > > As I have previously written, this issue boils down as "one is serious or one is not to be taken this way" (a meta-order perspective)... the key feature in humor and comedy -- the meta-message being "don't take me seriously" > > That is why

Re: [agi] Artificial humor

2008-09-11 Thread Mark Waser
llenge vehicles? - Original Message - From: "Mike Tintner" <[EMAIL PROTECTED]> To: Sent: Thursday, September 11, 2008 11:24 AM Subject: Re: [agi] Artificial humor Jiri, Quick answer because in rush. Notice your "if" ... Which programs actually do understand any *genera

Re: [agi] Artificial humor

2008-09-11 Thread Jiri Jelinek
Mike, Imagine a simple 3D scene with 2 different-size spheres. A simple program allows you to change the positions of the spheres and it can answer the question "Is the smaller sphere inside the bigger sphere?" [Yes|Partly|No]. I can write such a program in no time. Sure, it's extremely simple, but it deals
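Jiri's throwaway example really is a few lines. A minimal sketch -- the class name, method names, and Yes/Partly/No boundary conventions are my own choices, not his design:

```python
import math

# Two movable spheres plus one spatial query:
# "Is the smaller sphere inside the bigger sphere?" -> Yes | Partly | No
class Sphere:
    def __init__(self, x, y, z, r):
        self.center = (x, y, z)
        self.r = r

def inside(small, big):
    """Compare center distance against the radii to classify containment."""
    d = math.dist(small.center, big.center)
    if d + small.r <= big.r:
        return "Yes"      # small sphere entirely contained
    if d < small.r + big.r:
        return "Partly"   # surfaces overlap
    return "No"           # disjoint

print(inside(Sphere(0, 0, 0, 1), Sphere(0, 0, 0, 3)))    # Yes
print(inside(Sphere(2.5, 0, 0, 1), Sphere(0, 0, 0, 3)))  # Partly
print(inside(Sphere(9, 0, 0, 1), Sphere(0, 0, 0, 3)))    # No
```

This illustrates Jiri's point, not Mike's objection: the program answers the question correctly for every configuration, yet only for this one hand-coded relation.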

Re: [agi] Artificial humor

2008-09-11 Thread Mike Tintner
Jiri, Clearly a limited 3D functionality is possible for a program such as you describe - as for SHRDLU. But what we're surely concerned with here is generality. So fine, start with a restricted world of, say, different kinds of kids' blocks and similar. But then the program must be able to tell

Re: [agi] Artificial humor

2008-09-11 Thread Mark Waser
r do any others" when talking about being able to "understand any *general* concepts of orientation" - Original Message - From: "Mike Tintner" <[EMAIL PROTECTED]> To: Sent: Thursday, September 11, 2008 1:31 PM Subject: Re: [agi] Artificial humor

Re: [agi] Artificial humor

2008-09-11 Thread Matt Mahoney
ans can do that. -- Matt Mahoney, [EMAIL PROTECTED] --- On Thu, 9/11/08, Mike Tintner <[EMAIL PROTECTED]> wrote: > From: Mike Tintner <[EMAIL PROTECTED]> > Subject: Re: [agi] Artificial humor > To: agi@v2.listbox.com > Date: Thursday, September 11, 2008, 1:31 PM > Jiri,

Re: [agi] Artificial humor

2008-09-11 Thread Mike Tintner
Otherwise, you could always claim that a machine doesn't understand anything because only humans can do that. -- Matt Mahoney, [EMAIL PROTECTED] --- On Thu, 9/11/08, Mike Tintner <[EMAIL PROTECTED]> wrote: From: Mike Tintner <[EMAIL PROTECTED]> Subject: Re: [agi] Artificial

Re: [agi] Artificial humor

2008-09-11 Thread Matt Mahoney
Mike Tintner <[EMAIL PROTECTED]> wrote: >To "understand" is to "REALISE" what [on earth, or >in the [real] world] is being talked about. Nice dodge. How do you distinguish between when a computer realizes something and when it just reacts as if it realizes it? Yeah, I know. Turing dodged the qu

Re: [agi] Artificial humor

2008-09-11 Thread Mike Tintner
Mike Tintner <[EMAIL PROTECTED]> wrote: To "understand" is to "REALISE" what [on earth, or in the [real] world] is being talked about. Matt: Nice dodge. How do you distinguish between when a computer realizes something and when it just reacts as if it realizes it? Yeah, I know. Turing do

Re: [agi] Artificial humor

2008-09-11 Thread John LaMuth
- Original Message - From: "John G. Rose" <[EMAIL PROTECTED]> To: Sent: Thursday, September 11, 2008 8:28 AM Subject: RE: [agi] Artificial humor >> From: John LaMuth [mailto:[EMAIL PROTECTED] >> >> As I have previously written, this issue boils down

Re: [agi] Artificial humor

2008-09-11 Thread Jiri Jelinek
Mike, >The plane flew over the hill >The play is over Using a formal language can help to avoid many of these issues. >But then the program must be able to tell what is "in" what or outside, what >is behind/over etc. The communication module in my experimental AGI design includes several speci
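The kind of restricted spatial predicates Jiri describes can be sketched for the prepositions Mike lists. The coordinate conventions (y is up, larger z is farther from the viewer), the `Box` representation, and all function names are assumptions for illustration, not details of his experimental design.

```python
# Toy grounded predicates for "in", "over", "behind" on axis-aligned boxes.
# Assumed convention: y is up, larger z is farther from the viewer.
from dataclasses import dataclass

@dataclass
class Box:
    x0: float; y0: float; z0: float   # min corner
    x1: float; y1: float; z1: float   # max corner

def inside(a, b):
    """a is in b: every extent of a lies within b."""
    return (b.x0 <= a.x0 and a.x1 <= b.x1 and
            b.y0 <= a.y0 and a.y1 <= b.y1 and
            b.z0 <= a.z0 and a.z1 <= b.z1)

def over(a, b):
    """a is over b: a sits at or above b's top and overlaps it in plan view."""
    return a.y0 >= b.y1 and a.x0 < b.x1 and b.x0 < a.x1

def behind(a, b):
    """a is behind b: a is entirely farther from the viewer than b."""
    return a.z0 >= b.z1

cup = Box(1, 2, 1, 2, 3, 2)
table = Box(0, 0, 0, 4, 2, 4)
print(over(cup, table), inside(cup, table), behind(cup, table))
```

Note how each predicate silently picks one reading of its preposition ("over" as resting above, not flying over), which is exactly the ambiguity the plane/play examples are about.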

Re: [agi] Artificial humor... P.S

2008-09-11 Thread Mike Tintner
Matt, "To understand/realise" is to be distinguished from (I would argue) "to comprehend" statements. The one is to be able to point to the real objects referred to. The other is merely to be able to offer or find an alternative or dictionary definition of the statements. A translation. Like the

Re: [agi] Artificial humor... P.S

2008-09-12 Thread Matt Mahoney
--- On Thu, 9/11/08, Mike Tintner <[EMAIL PROTECTED]> wrote: > "To understand/realise" is to be distinguished > from (I would argue) "to comprehend" statements. How long are we going to go round and round with this? How do you know if a machine "comprehends" something? Turing explained why he d

Re: [agi] Artificial humor... P.S

2008-09-12 Thread Mike Tintner
Matt, What are you being so tetchy about? The issue is what it takes for any agent, human or machine, to understand information. You give me an extremely complicated and ultimately weird test/paper, which presupposes that machines, humans and everyone else can only exhibit, and be tested o

Re: [agi] Artificial humor... P.S

2008-09-12 Thread Matt Mahoney
--- On Fri, 9/12/08, Mike Tintner <[EMAIL PROTECTED]> wrote: > Matt, > > What are you being so tetchy about? The issue is what it > takes for any > agent, human or machine.to understand information . How are you going to understand the issues behind programming a computer for human intellige

Re: [agi] Artificial humor... P.S

2008-09-12 Thread Mike Tintner
Matt: How are you going to understand the issues behind programming a computer for human intelligence if you have never programmed a computer? Matt, We simply have a big difference of opinion. I'm saying there is no way a computer [or agent, period] can understand language if it can't basic

Re: [agi] Artificial humor... P.S

2008-09-13 Thread Matt Mahoney
llowing program understand? main(){printf("Ah, now I understand!");} You need a precise test. That is what Turing did. -- Matt Mahoney, [EMAIL PROTECTED] --- On Sat, 9/13/08, Mike Tintner <[EMAIL PROTECTED]> wrote: > From: Mike Tintner <[EMAIL PROTECTED]> > Subject: Re:

Re: [agi] Artificial [Humor ] vs Real Approaches to Information

2008-09-12 Thread Mike Tintner
Jiri and Matt et al, I'm getting v. confident about the approach I've just barely begun to outline. Let's call it "realistics" - the title for a new, foundational branch of metacognition, that will oversee all forms of information, incl. esp. language, logic, and maths, and also all image for

Re: [agi] Artificial [Humor ] vs Real Approaches to Information

2008-09-12 Thread Bryan Bishop
On Friday 12 September 2008, Mike Tintner wrote: > "to understand a piece of information and its "information objects", (eg words), is to "realise" (or know) how they refer to "real objects" in the real world, (and, ideally, and often necessarily, to be able to point to and engage with thos

Re: [agi] Artificial [Humor ] vs Real Approaches to Information

2008-09-12 Thread Jiri Jelinek
Mike, > How will you understand, and recognize when information objects/ e.g > language/words are "unreal" ? e.g. Turn yourself inside out. >... unreal/untrue/metaphorical in different and sometimes multiple >simultaneous ways It's like teaching a baby. You don't want to use confusing language/m