Re: [agi] Nao Nao

2010-08-13 Thread Ian Parker
There is one further point which is absolutely fundamental
in operating system/compiler theory: the user should be unaware of how the
work is divided up. A robot may have nothing more than a WiFi router on
board, or it may have considerable on-board processing, and the user should
not be able to tell the difference.


  - Ian Parker





Re: [agi] Nao Nao

2010-08-12 Thread Ian Parker
We are getting down to some of the nitty gritty. To a considerable extent
what is holding robotics back is the lack of common standards. We can think
about what we might need. One would instinctively start with a CAD/CAM
package like ProEngineer. We can thus describe a robot in terms of assemblies
and parts. A single joint is a part; a human finger, with its 3 joints, is an
assembly. A hand is an assembly. We get this by using CAD.

A robotic language has to be composed as follows.

class Part{
    // a single joint and its current angle
}

class Assembly{
    // a collection of parts and sub-assemblies, e.g. a finger or a hand
}

An assembly/part will have a position. The simplest command is to move from
one position to another. Note that a position is a multidimensional quantity
and describes the positions of each part.

"*Pick up ball*" is a complex command. We first have to localise the ball,
determine the position required to grasp the ball any then put the parts
into a position so that the ball moves into a new position.

Sounds complicated? Yes it is, but a lot of the basic work has already been
done. The first time a task is performed the system would have to compute
from first principles. The second time it would have some stored positions.
The system could "*learn*".
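A minimal sketch of that idea (the class layout, the position() method and the
stored-position cache below are illustrative assumptions, not an existing
robot standard):

class Part:
    """A single joint, described by one angle."""
    def __init__(self, angle=0.0):
        self.angle = angle

class Assembly:
    """A collection of parts, e.g. a finger = 3 joints, a hand = several fingers."""
    def __init__(self, parts):
        self.parts = parts

    def position(self):
        # the "position" is a multidimensional vector: one number per joint
        return [p.angle for p in self.parts]

    def move_to(self, target):
        # the simplest command: move from one position vector to another
        for part, angle in zip(self.parts, target):
            part.angle = angle

# "Learning": the first time a task is performed it is planned from first
# principles; the second time the stored position is simply reused.
learned_positions = {}

def perform(task, assembly, plan_from_first_principles):
    if task not in learned_positions:
        learned_positions[task] = plan_from_first_principles(assembly)
    assembly.move_to(learned_positions[task])

finger = Assembly([Part(), Part(), Part()])
perform("pick up ball", finger, lambda a: [0.3, 0.6, 0.9])  # planner stubbed out
print(finger.position())  # [0.3, 0.6, 0.9]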

A position is a multidimensional vector; 2 robots will have twice the
dimensions of a single robot.

"*Move bed upstairs*" is a twin robot problem, but no different in principle
from a single robot problem. Above all I think we must start off
mathematically and construct a language of maximum generality. It should be
pointed out too that there programs which will evaluate forces in a
multi-limb environment. In fact matrix theory was devised in the 19th
century.
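To illustrate the point about dimensions (a toy example assuming nothing
beyond NumPy; the joint values are made up):

import numpy as np

q_robot1 = np.array([0.1, 0.4, 0.2])           # a 3-joint robot
q_robot2 = np.array([0.0, 0.7, 0.5])           # the second robot of the pair
q_pair = np.concatenate([q_robot1, q_robot2])  # twin-robot "position"
print(q_pair.shape)  # (6,) -- twice the dimension, same planning machinery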


  - Ian Parker

On 12 August 2010 15:17, John G. Rose  wrote:

> Typically the demo is some of the best that it can do. It looks like the
> robot is a mass produced model that has some really basic handling
> capabilities, not that it is made to perform work. It could still have
> relatively advanced microprocessor and networking system, IOW parts of the
> brain could run on centralized servers. I don't think they did that BUT it
> could.
>
>
>
> But it looks like one Nao can talk to another Nao. What's needed here is a
> standardized robot communication protocol. So a Nao could talk to a vacuum
> cleaner or a video cam or any other device that supports the protocol.
> Companies may resist this at first as they want to grab market share and
> don't understand the benefit.
>
>
>
> John
>
>
>
> *From:* Mike Tintner [mailto:tint...@blueyonder.co.uk]
> *Sent:* Thursday, August 12, 2010 4:56 AM
> *To:* agi
> *Subject:* Re: [agi] Nao Nao
>
>
>
> John,
>
>
>
> Any more detailed thoughts about its precise handling capabilities? Did it,
> first, not pick up the duck independently,  (without human assistance)? If
> it did,  what do you think would be the range of its object handling?  (I
> had an immediate question about all this - have asked the site for further
> clarification - but nothing yet).
>
>
>
> *From:* John G. Rose 
>
> *Sent:* Thursday, August 12, 2010 5:46 AM
>
> *To:* agi 
>
> *Subject:* RE: [agi] Nao Nao
>
>
>
> I wasn't meaning to portray pessimism.
>
>
>
> And that little sucker probably couldn't pick up a knife yet.
>
>
>
> But this is a paradigm change happening where we will have many networked
> mechanical entities. This opens up a whole new world of security and privacy
> issues...
>
>
>
> John
>
>
>
> *From:* David Jones [mailto:davidher...@gmail.com]
>
> Way too pessimistic in my opinion.
>
> On Mon, Aug 9, 2010 at 7:06 PM, John G. Rose 
> wrote:
>
> Aww, so cute.
>
>
>
> I wonder if it has a Wi-Fi connection, DHCP's an IP address, and relays
> sensory information back to the main servers with all the other Nao's all
> collecting personal data in a massive multi-agent geo-distributed
> robo-network.
>
>
>
> So cuddly!
>
>
>
> And I wonder if it receives and executes commands, commands that come in
> over the network from whatever interested corporation or government pays the
> most for access.
>
>
>
> Such a sweet little friendly Nao. Everyone should get one :)
>
>
>
> John
>

[agi] Re: [agi] P≠NP

2010-08-12 Thread Ian Parker
This is a very powerful argument, but it is not quite a rigorous
proof. The thermodynamic argument is like saying that because all the zeros
below 10^20 have real part 0.5, there are no non-trivial zeros for which that
is not the case. What I am saying is pedantic, very pedantic, but it will
still affect Clay's view of the matter.

You will *not* be able to decode BlackBerry traffic with it, of course.


  - Ian Parker

2010/8/12 John G. Rose 

> BTW here is the latest one:
>
>
>
> http://www.win.tue.nl/~gwoegi/P-versus-NP/Deolalikar.pdf





Re: [agi] Re: Compressed Cross-Indexed Concepts

2010-08-12 Thread Ian Parker
Someone who really believes that P=NP should go to Saudi Arabia or the
Emirates and crack the Blackberry code.


  - Ian Parker

On 12 August 2010 06:10, John G. Rose  wrote:

> > -Original Message-
> > From: Jim Bromer [mailto:jimbro...@gmail.com]
> Re: [agi] Re: Compressed Cross-Indexed Concepts
> >
> > David,
> > I am not a mathematician although I do a lot of computer-
> > related mathematical work of course.  My remark was directed toward John
> > who had suggested that he thought that there is some sophisticated
> > mathematical sub system that would (using my words here) provide such a
> > substantial benefit to AGI that its lack may be at the core of the
> > contemporary problem.  I was saying that unless this required mathemagic
> > then a scalable AGI system demonstrating how effective this kind of
> > mathematical advancement could probably be simulated using contemporary
> > mathematics.  This is not the same as saying that AGI is solvable by
> sanitized
> > formal representations any more than saying that your message is a
> sanitized
> > formal statement because it was dependent on a lot of computer
> > mathematics in order to send it.  In other words I was challenging John
> at
> that
> > point to provide some kind of evidence for his view.
> >
>
> I don't know if we need to create some new mathemagics, a breakthrough, or
> whatever. I just think using existing math to engineer it, using the math
> like if was software is what should be done. But you may be right perhaps
> proof of P=NP something similar is needed. I don't think so though.
>
> The main goal would be to leverage existing math to compensate for
> unnecessary and/or impossible computation. We don't need to re-evolve the
> wheel as we already figured that out. And computers are v. slow compared to
> other physical computations that are performed in the natural physical
> world.
>
> Maybe not - developing a system from scratch that discovers all of the
> discoveries over the millennia of science and civilization? Would that be
> possible?
>
> > I then went on to say, that for example, I think that fast SAT solutions
> would
> > make scalable AGI possible (that is, scalable up to a point that is way
> beyond
> > where we are now), and therefore I believe that I could create a
> simulation
> > of an AGI program to demonstrate what I am talking about.  (A simulation
> is
> > not the same as the actual thing.)
> >
> > I didn't say, nor did I imply, that the mathematics would be all there is
> to it.  I
> > have spent a long time thinking about the problems of applying formal and
> > informal systems to 'real world' (or other world) problems and the
> > application of methods is a major part of my AGI theories.  I don't
> expect
> you
> > to know all of my views on the subject but I hope you will keep this in
> mind
> > for future discussions.
>
> Using available skills and tools the best we can use them. And, inventing
> new tools by engineering utilitarian and efficient mathematical structure.
> Math is just like software in all this but way more powerful. And using the
> right math, the most general where it is called for and specific/narrow
> when
> needed. I don't see a problem with the specific most of the time but I
> don't
> know if many people get the general. Though it may be an error or lack of
> understanding on my part...
>
> John
>
>
>
>





Re: [agi] Nao Nao

2010-08-12 Thread Ian Parker
Just two quick comments. CCTV is already networked; the police can track
smoothly from one camera to another. The second comment is that if you are
(say) taking a heavy load upstairs you need 2 robots, one holding each end. A
single PC can control them both. In fact a robot workshop will be a kind of
"cloud", in the sense of "cloud" computing.


  - Ian Parker

On 12 August 2010 05:46, John G. Rose  wrote:

> I wasn't meaning to portray pessimism.
>
>
>
> And that little sucker probably couldn't pick up a knife yet.
>
>
>
> But this is a paradigm change happening where we will have many networked
> mechanical entities. This opens up a whole new world of security and privacy
> issues...
>
>
>
> John
>
>
>
> *From:* David Jones [mailto:davidher...@gmail.com]
>
> Way too pessimistic in my opinion.
>
> On Mon, Aug 9, 2010 at 7:06 PM, John G. Rose 
> wrote:
>
> Aww, so cute.
>
>
>
> I wonder if it has a Wi-Fi connection, DHCP's an IP address, and relays
> sensory information back to the main servers with all the other Nao's all
> collecting personal data in a massive multi-agent geo-distributed
> robo-network.
>
>
>
> So cuddly!
>
>
>
> And I wonder if it receives and executes commands, commands that come in
> over the network from whatever interested corporation or government pays the
> most for access.
>
>
>
> Such a sweet little friendly Nao. Everyone should get one :)
>
>
>
> John
>





Re: [agi] How To Create General AI Draft2

2010-08-09 Thread Ian Parker
A point about DESTIN: it has no preconceived assumptions. Some of
the entities might be chairs, but it will not have been specifically told
about a chair.


  - Ian Parker

On 9 August 2010 12:50, Jim Bromer  wrote:

> The mind cannot determine whether or not -every- instance of a kind
> of object is that kind of object.  I believe that the problem must be a
> problem of complexity and it is just that the mind is much better at dealing
> with complicated systems of possibilities than any computer program.  A
> young child first learns that certain objects are called chairs, and that
> the furniture objects that he sits on are mostly chairs.  In a few cases,
> after seeing an odd object that is used as a chair for the first time (like
> seeing an odd outdoor chair that is fashioned from twisted pieces of wood)
> he might not know that it is a chair, or upon reflection wonder if it is or
> not.  And think of odd furniture that appears and comes into fashion for a
> while and then disappears (like the bean bag chair).  The question for me is
> not what the smallest pieces of visual information necessary to represent
> the range and diversity of kinds of objects are, but how would these diverse
> examples be woven into highly compressed and heavily cross-indexed pieces of
> knowledge that could be accessed quickly and reliably, especially for the
> most common examples that the person is familiar with.
> Jim Bromer
>
> On Mon, Aug 9, 2010 at 2:16 AM, John G. Rose wrote:
>
>>  Actually this is quite critical.
>>
>>
>>
>> Defining a chair - which would agree with each instance of a chair in the
>> supplied image - is the way a chair should be defined and is the way the
>> mind processes it.
>>
>>
>>
>> John
>>
>





Re: [agi] How To Create General AI Draft2

2010-08-09 Thread Ian Parker
What about DESTIN? Jim has talked about video. Could DESTIN be generalized
to 3 dimensions, or even n dimensions?


  - Ian Parker

On 9 August 2010 07:16, John G. Rose  wrote:

> Actually this is quite critical.
>
>
>
> Defining a chair - which would agree with each instance of a chair in the
> supplied image - is the way a chair should be defined and is the way the
> mind processes it.
>
>
>
> It can be defined mathematically in many ways. There is a particular one I
> would go for though...
>
>
>
> John
>
>
>
> *From:* Mike Tintner [mailto:tint...@blueyonder.co.uk]
> *Sent:* Sunday, August 08, 2010 7:28 AM
> *To:* agi
> *Subject:* Re: [agi] How To Create General AI Draft2
>
>
>
> You're waffling.
>
>
>
> You say there's a pattern for chair - DRAW IT. Attached should help you.
>
>
>
> Analyse the chairs given in terms of basic visual units. Or show how any
> basic units can be applied to them. Draw one or two.
>
>
>
> You haven't identified any basic visual units  - you don't have any. Do
> you? Yes/no.
>
>
>
> No. That's not "funny", that's a waste.. And woolly and imprecise through
> and through.
>
>
>
>
>
>
>
> *From:* David Jones 
>
> *Sent:* Sunday, August 08, 2010 1:59 PM
>
> *To:* agi 
>
> *Subject:* Re: [agi] How To Create General AI Draft2
>
>
>
> Mike,
>
> We've argued about this over and over and over. I don't want to repeat
> previous arguments to you.
>
> You have no proof that the world cannot be broken down into simpler
> concepts and components. The only proof you attempt to propose are your
> example problems that *you* don't understand how to solve. Just because
> *you* cannot solve them, doesn't mean they cannot be solved at all using a
> certain methodology. So, who is really making wild assumptions?
>
> The mere fact that you can refer to a "chair" means that it is a
> recognizable pattern. LOL. That fact that you don't realize this is quite
> funny.
>
> Dave
>
> On Sun, Aug 8, 2010 at 8:23 AM, Mike Tintner 
> wrote:
>
> Dave:No... it is equivalent to saying that the whole world can be modeled
> as if everything was made up of matter
>
>
>
> And "matter" is... ?  Huh?
>
>
>
> You clearly don't realise that your thinking is seriously woolly - and you
> will pay a heavy price in lost time.
>
>
>
> What are your "basic world/visual-world analytic units"  wh. you are
> claiming to exist?
>
>
>
> You thought - perhaps think still - that *concepts* wh. are pretty
> fundamental intellectual units of analysis at a certain level, could be
> expressed as, or indeed, were patterns. IOW there's a fundamental pattern
> for "chair" or "table." Absolute nonsense. And a radical failure to
> understand the basic nature of concepts which is that they are *freeform*
> schemas, incapable of being expressed either as patterns or programs.
>
>
>
> You had merely assumed that concepts could be expressed as patterns,but had
> never seriously, visually analysed it. Similarly you are merely assuming
> that the world can be analysed into some kind of visual units - but you
> haven't actually done the analysis, have you? You don't have any of these
> basic units to hand, do you? If you do, I suggest, reply instantly, naming a
> few. You won't be able to do it. They don't exist.
>
>
>
> Your whole approach to AGI is based on variations of what we can call
> "fundamental analysis" - and it's wrong. God/Evolution hasn't built the
> world with any kind of geometric, or other consistent, bricks. He/It is a
> freeform designer. You have to start thinking outside the
> box/brick/"fundamental unit".
>
>
>
> *From:* David Jones 
>
> *Sent:* Sunday, August 08, 2010 5:12 AM
>
> *To:* agi 
>
> *Subject:* Re: [agi] How To Create General AI Draft2
>
>
>
> Mike,
>
> I took your comments into consideration and have been updating my paper to
> make sure these problems are addressed.
>
> See more comments below.
>
> On Fri, Aug 6, 2010 at 8:15 PM, Mike Tintner 
> wrote:
>
> 1) You don't define the difference between narrow AI and AGI - or make
> clear why your approach is one and not the other
>
>
> I removed this because my audience is for AI researchers... this is AGI
> 101. I think it's clear that my design defines general as being able to
> handle the vast majority of things we want the AI to handle without
> requiring a change in design.
>
>
>
>
> 2) "Lear

Re: Stocks; was Re: [agi] Help requested: Making a list of (non-robotic) AGI low hanging fruit apps

2010-08-08 Thread Ian Parker
OK, Ben is then one step ahead of Forex. The point is that time series
analysis, although it is narrow AI, can be extremely powerful. The situation
with "*sentiment*" is different from that of Poker, where there is a single
adversary bluffing. A time series analysis encompasses the *ensemble* of
different opinions. Statistical programs can model this accurately.
Ben presumably has techniques for mining the data about companies. The
difficulty, as I see it, of translating this into a stock exchange
prediction is the weighting of different factors. What in fact you will need
to complete the task is something like conjoint analysis.

We need, for example, to get an index for innovation. We can see
how important this is and how important other factors are by doing something
like conjoint analysis. Management will affect long term stock values. Forex
is concerned with day to day fluctuations where management performance
(except in terms of the manipulation of shares) is not important.
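One crude stand-in for that kind of conjoint analysis is an ordinary
least-squares weighting of candidate factors against long-run returns (the
numbers below are made up; this is a sketch of the weighting problem, not a
trading method):

import numpy as np

factors = np.array([          # columns: innovation index, management score
    [0.9, 0.6],
    [0.4, 0.8],
    [0.7, 0.5],
    [0.2, 0.3],
    [0.8, 0.9],
])
returns = np.array([0.12, 0.07, 0.09, 0.02, 0.14])   # long-run return per company

weights, *_ = np.linalg.lstsq(factors, returns, rcond=None)
print(dict(zip(["innovation", "management"], weights.round(3))))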

Conjoint analysis has been used by managements to indicate how they should
be managing. Ben should be able to tell managements how they can optimise
the value of their company based on historical data.

This is real AGI and there is a close tie up between prediction and how a
company should be managed. We know as a matter of historical record, for
example, that where you have to reduce a budget deficit you do it with 2
parts reduction in public expenditure and 1 part rise in taxation. The
Con/Lib Dem coalition is going for a 3:1 ratio. There will no doubt be
other things that will come out of data-mining.

Sorry no disrespect intended.

On 8 August 2010 18:09, Abram Demski  wrote:

> Ian,
>
> Be courteous-- Ben asked specifically that any arguments about which things
> are narrow-ai should start a separate topic.
>
> Yea, I did not intend to rule out any possible sources of information for
> the stock market prediction task. Ben has worked on a system which looked on
> the web for chatter about specific companies, for example.
>
> Even if it was just stock data being used, it wouldn't be just time-series
> analysis. It would at least be planning as well. Really, though, it includes
> acting with the behavior of potential adversaries in mind (like
> game-playing).
>
> Even if it *were* just time-series analysis, though, I think it would be a
> decent AGI application. That is because I think AGI technology should be
> good at time-series analysis! In my opinion, a good AGI learning algorithm
> should be useful for such tasks.
>
> So, yes, many of my examples could be attacked via narrow AI; but I think
> they would be handled *better* by AGI. That's why they are low-hanging
> fruit-- they are (hopefully) on the border.
>
> --Abram
>
> On Sun, Aug 8, 2010 at 11:58 AM, Ian Parker  wrote:
>
>> Just one point about Forex, your first entry. This is purely a time series
>> analysis as I understand it. It is narrow AI in fact. With AGI you would
>> expect interviews with the executives of listed companies, just as the big
>> investment houses do.
>>
>> AGI would be data mining of everything about a company as well as time
>> series analysis.
>>
>>
>>   - Ian Parker
>>
>> On 8 August 2010 02:35, Abram Demski  wrote:
>>
>>> Ben,
>>>
>>> -The oft-mentioned stock-market prediction;
>>> -data mining, especially for corporate data such as customer behavior,
>>> sales prediction, etc;
>>> -decision support systems;
>>> -personal assistants;
>>> -chatbots (think, an ipod that talks to you when you are lonely);
>>> -educational uses including human-like artificial teachers, but also
>>> including smart presentation-of-material software which decides what
>>> practice problem to ask you next, when to give tips, etc;
>>> -industrial design (engineering);
>>> ...
>>>
>>> Good luck to him!
>>>
>>> --Abram
>>>
>>> On Sat, Aug 7, 2010 at 9:10 PM, Ben Goertzel  wrote:
>>>
>>>> Hi,
>>>>
>>>> A fellow AGI researcher sent me this request, so I figured I'd throw it
>>>> out to you guys
>>>>
>>>> 
>>>> I'm putting together an AGI pitch for investors and thinking of low
>>>> hanging fruit applications to argue for. I'm intentionally not
>>>> involving any mechanics (robots, moving parts, etc.). I'm focusing on
>>>> voice (i.e. conversational agents) and perhaps vision-based systems.
>>>> Helen Keller AGI, if you will :)
>>>>
>>>> Along those lines, I'd like any ideas you may have that would fall
>>>> under this desc

Re: [agi] Help requested: Making a list of (non-robotic) AGI low hanging fruit apps

2010-08-08 Thread Ian Parker
Just one point about Forex, your first entry. This is purely a time series
analysis as I understand it. It is narrow AI in fact. With AGI you would
expect interviews with the executives of listed companies, just as the big
investment houses do.

AGI would be data mining of everything about a company as well as time
series analysis.


  - Ian Parker

On 8 August 2010 02:35, Abram Demski  wrote:

> Ben,
>
> -The oft-mentioned stock-market prediction;
> -data mining, especially for corporate data such as customer behavior,
> sales prediction, etc;
> -decision support systems;
> -personal assistants;
> -chatbots (think, an ipod that talks to you when you are lonely);
> -educational uses including human-like artificial teachers, but also
> including smart presentation-of-material software which decides what
> practice problem to ask you next, when to give tips, etc;
> -industrial design (engineering);
> ...
>
> Good luck to him!
>
> --Abram
>
> On Sat, Aug 7, 2010 at 9:10 PM, Ben Goertzel  wrote:
>
>> Hi,
>>
>> A fellow AGI researcher sent me this request, so I figured I'd throw it
>> out to you guys
>>
>> 
>> I'm putting together an AGI pitch for investors and thinking of low
>> hanging fruit applications to argue for. I'm intentionally not
>> involving any mechanics (robots, moving parts, etc.). I'm focusing on
>> voice (i.e. conversational agents) and perhaps vision-based systems.
>> Helen Keller AGI, if you will :)
>>
>> Along those lines, I'd like any ideas you may have that would fall
>> under this description. I need to substantiate the case for such AGI
>> technology by making an argument for high-value apps. All ideas are
>> welcome.
>> 
>>
>> All serious responses will be appreciated!!
>>
>> Also, I would be grateful if we
>> could keep this thread closely focused on direct answers to this
>> question, rather than
>> digressive discussions on Helen Keller, the nature of AGI, the definition
>> of AGI
>> versus narrow AI, the achievability or unachievability of AGI, etc.
>> etc.  If you think
>> the question is bad or meaningless or unclear or whatever, that's
>> fine, but please
>> start a new thread with a different subject line to make your point.
>>
>> If the discussion is useful, my intention is to mine the answers into a
>> compact
>> list to convey to him
>>
>> Thanks!
>> Ben G
>>
>>
>>
>
>
>
> --
> Abram Demski
> http://lo-tho.blogspot.com/
> http://groups.google.com/group/one-logic
>





Re: [agi] Epiphany - Statements of Stupidity

2010-08-08 Thread Ian Parker
If you have a *physical* address an avatar needs to *physically* be there. -
Roxxy lives here with her friend Miss Al-Fasaq the belly dancer.

Chat lines as Steve describes are not too difficult. In fact the girls
(real) on a chat site have a sheet in front of them that gives the
appropriate response to a variety of questions. The WI (Women's Institute)
did an investigation of the sex industry, and one volunteer actually became
a "*chatterbox*".

Do such entities exist? Probably not in the sex industry, at least not yet.
Why do I believe this? Basically because if the sex industry were moving in
this direction it would without a doubt be looking at some metric of brain
activity to give the customer the best erotic experience. You don't ask "Are
you gay?" You have men making love to men, women to men and women to women. Find
out what gives the customer the biggest kick. You set the story of the porn
video accordingly.

In terms of security I am impressed by the fact that large numbers of bombs
have been constructed that don't work and could not work. Hydrogen
Peroxide<http://en.wikipedia.org/wiki/Hydrogen_peroxide> can
only be prepared in the pure state by chemical reactions. It is unlikely
(see notes on vapour pressure at 50C) that anything viable could be produced
by distillation on a kitchen stove.

Is this due to deliberately misleading information? Have I given the game
away? Certainly misleading information is being sent out. However it
is probably not being sent out by robotic entities. After all nothing has
yet achieved Turing status.

In the case of sex it may not be necessary for the client to believe that he
is confronted by a "*real woman*". A top of the range masturbator/sex aid
may not have to pretend to be anything else.


  - Ian Parker

On 8 August 2010 07:30, John G. Rose  wrote:

> Well, these artificial identities need to complete a loop. Say the
> artificial identity acquires an email address, phone#, a physical address, a
> bank account, logs onto Amazon and purchases stuff automatically it needs to
> be able to put money into its bank account. So let's say it has a low profit
> scheme to scalp day trading profits with its stock trading account. That's
> the loop, it has to be able to make money to make purchases. And then
> automatically file its taxes with the IRS. Then it's really starting to look
> like a full legally functioning identity. It could persist in this fashion
> for years.
>
>
>
> I would bet that these identities already exist. What happens when there
> are many, many of them? Would we even know?
>
>
>
> John
>
>
>
> *From:* Steve Richfield [mailto:steve.richfi...@gmail.com]
> *Sent:* Saturday, August 07, 2010 8:17 PM
> *To:* agi
> *Subject:* Re: [agi] Epiphany - Statements of Stupidity
>
>
>
> Ian,
>
> I recall several years ago that a group in Britain was operating just such
> a chatterbox as you explained, but did so on numerous sex-related sites, all
> running simultaneously. The chatterbox emulated young girls looking for sex.
> The program just sat there doing its thing on numerous sites, and whenever a
> meeting was set up, it would issue a message to its human owners to alert
> the police to go and arrest the pedophiles at the arranged time and place.
> No human interaction was needed between arrests.
>
> I can imagine an adaptation, wherein a program claims to be manufacturing
> explosives, and is looking for other people to "deliver" those explosives.
> With such a story line, there should be no problem arranging deliveries, at
> which time you would arrest the would-be bombers.
>
> I wish I could tell you more about the British project, but they were VERY
> secretive. I suspect that some serious Googling would yield much more.
>
> Hopefully you will find this helpful.
>
> Steve
> =
>
> On Sat, Aug 7, 2010 at 1:16 PM, Ian Parker  wrote:
>
> I wanted to see what other people's views were.My own view of the risks is
> as follows. If the Turing Machine is built to be as isomorphic with humans
> as possible, it would be incredibly dangerous. Indeed I feel that the
> biological model is far more dangerous than the mathematical.
>
>
>
> If on the other hand the TM was *not* isomorphic and made no attempt to
> be, the dangers would be a lot less. Most Turing/Loebner entries are
> chatterboxes that work on databases. The database being filled as you chat.
> Clearly the system cannot go outside its database and is safe.
>
>
>
> There is in fact some use for such a chatterbox. Clearly a Turing machine
> would be able to infiltrate militant groups however it was constructed. As
> for it pretending to be stupid, it would have to know in what direction it
> had to be stupid. Hence it would have to be a go

Re: [agi] Epiphany - Statements of Stupidity

2010-08-07 Thread Ian Parker
I wanted to see what other people's views were.My own view of the risks is
as follows. If the Turing Machine is built to be as isomorphic with humans
as possible, it would be incredibly dangerous. Indeed I feel that the
biological model is far more dangerous than the mathematical.

If on the other hand the TM was *not* isomorphic and made no attempt to be,
the dangers would be a lot less. Most Turing/Loebner entries are chatterboxes
that work on databases. The database being filled as you chat. Clearly the
system cannot go outside its database and is safe.

There is in fact some use for such a chatterbox. Clearly a Turing machine
would be able to infiltrate militant groups however it was constructed. As
for it pretending to be stupid, it would have to know in what direction it
had to be stupid. Hence it would have to be a good psychologist.

Suppose it logged onto a jihadist website; as well as being able to pass
itself off as a true adherent, it could also look at the other members and
assess their level of commitment and knowledge. I think that the
true Turing/Loebner test is not working in a laboratory environment: they
should log onto jihadist sites and see how well they can pass themselves
off. If it could do that it really would have arrived. Eventually it could
pass itself off as a "*pentito*", to use the Mafia term, and produce arguments
from the Qur'an against the militant position.

There would be quite a lot of contracts to be had if there were a realistic
prospect of doing this.


  - Ian Parker

On 7 August 2010 06:50, John G. Rose  wrote:

> > Philosophical question 2 - Would passing the TT assume human stupidity
> and
> > if so would a Turing machine be dangerous? Not necessarily, the Turing
> > machine could talk about things like jihad without
> ultimately identifying with
> > it.
> >
>
> Humans without augmentation are only so intelligent. A Turing machine would
> be potentially dangerous, a really well built one. At some point we'd need
> to see some DNA as ID of another "extended" TT.
>
> > Philosophical question 3 :- Would a TM be a psychologist? I think it
> would
> > have to be. Could a TM become part of a population simulation that would
> > give us political insights.
> >
>
> You can have a relatively stupid TM or a sophisticated one just like
> humans.
> It might be easier to pass the TT by not exposing too much intelligence.
>
> John
>
> > These 3 questions seem to me to be the really interesting ones.
> >
> >
> >   - Ian Parker
>
>
>
>
>





Re: [agi] AGI & Alife

2010-08-06 Thread Ian Parker
This is much more interesting in the context of Evolution than it is for
the creation of AGI. The point is that all the things that have been done here
could have been done (much more simply in fact) by straightforward narrow
programs. However it does demonstrate the early multicellular organisms of the
Precambrian and early Cambrian.

What AGI is interested in is how *language* evolves, that is to say the last
6 million years or so. We also need a process for creating AGI which is
rather more efficient than Evolution. We cannot wait that long for something
to happen.


  - Ian Parker

On 6 August 2010 19:23, rob levy  wrote:

> Interesting article:
> http://www.newscientist.com/article/mg20727723.700-artificial-life-forms-evolve-basic-intelligence.html?page=1
>
> On Sun, Aug 1, 2010 at 3:13 PM, Jan Klauck wrote:
>
>> Ian Parker wrote
>>
>> > I would like your
>> > opinion on *proofs* which involve an unproven hypothesis,
>>
>> I've no elaborated opinion on that.
>>
>>
>>
>
>





Re: [agi] Epiphany - Statements of Stupidity

2010-08-06 Thread Ian Parker
I think that some quite important philosophical questions are raised by
Steve's posting. I don't know, BTW, how you got it. I monitor all
correspondence to the group, and I did not see it.

The Turing test is not in fact a test of intelligence, it is a test of
similarity with the human. Hence for a machine to be truly Turing it would
have to make mistakes. Now any "*useful*" system will be made as intelligent
as we can make it. The TT will be seen to be an irrelevancy.

Philosophical question no 1 :- How useful is the TT.

As I said in my correspondence with Jan Klauck, the human being is stupid,
often dangerously stupid.

Philosophical question 2 - Would passing the TT assume human stupidity and
if so would a Turing machine be dangerous? Not necessarily, the Turing
machine could talk about things like jihad without
ultimately identifying with it.

Philosophical question 3 :- Would a TM be a psychologist? I think it would
have to be. Could a TM become part of a population simulation that would
give us political insights.

These 3 questions seem to me to be the really interesting ones.


  - Ian Parker

On 6 August 2010 18:09, John G. Rose  wrote:

> "statements of stupidity" - some of these are examples of cramming
> sophisticated thoughts into simplistic compressed text. Language is both
> intelligence enhancing and limiting. Human language is a protocol between
> agents. So there is minimalist data transfer, "I had no choice but to ..."
> is a compressed summary of potentially vastly complex issues. The mind gets
> hung-up sometimes on this language of ours. Better off at times to think
> less using English language and express oneself with a wider spectrum
> communiqué. Doing a dance and throwing paint in the air for example, as some
> **primitive** cultures actually do, conveys information also and is medium
> of expression rather than using a restrictive human chat protocol.
>
>
>
> BTW the rules of etiquette of the human language "protocol" are even more
> potentially restricting though necessary for efficient and standardized data
> transfer to occur. Like, TCP/IP for example. The "Etiquette" in TCP/IP is
> like an OSI layer, akin to human language etiquette.
>
>
>
> John
>
>
>
>
>
> *From:* Steve Richfield [mailto:steve.richfi...@gmail.com]
>
> To All,
>
> I have posted plenty about "statements of ignorance", our probable
> inability to comprehend what an advanced intelligence might be "thinking",
> heisenbugs, etc. I am now wrestling with a new (to me) concept that
> hopefully others here can shed some light on.
>
> People often say things that indicate their limited mental capacity, or at
> least their inability to comprehend specific situations.
>
> 1)  One of my favorites are people who say "I had no choice but to ...",
> which of course indicates that they are clearly intellectually challenged
> because there are ALWAYS other choices, though it may be difficult to find
> one that is in all respects superior. While theoretically this statement
> could possibly be correct, in practice I have never found this to be the
> case.
>
> 2)  Another one recently from this very forum was "If it sounds too good to
> be true, it probably is". This may be theoretically true, but in fact was,
> as usual, made as a statement as to why the author was summarily dismissing
> an apparent opportunity of GREAT value. This dismissal of something BECAUSE
> of its great value would seem to severely limit the authors prospects for
> success in life, which probably explains why he spends so much time here
> challenging others who ARE doing something with their lives.
>
> 3)  I used to evaluate inventions for some venture capitalists. Sometimes I
> would find that some basic law of physics, e.g. conservation of energy,
> would have to be violated for the thing to work. When I explained this to
> the inventors, their inevitable reply was "Yea, and they also said that the
> Wright Brothers' plane would never fly". To this, I explained that the
> Wright Brothers had invested ~200 hours of effort working with their crude
> homemade wind tunnel, and ask what the inventors have done to prove that
> their own invention would work.
>
> 4)  One old stupid standby, spoken when you have made a clear point that
> shows that their argument is full of holes "That is just your opinion". No,
> it is a proven fact for you to accept or refute.
>
> 5)  Perhaps you have your own pet "statements of stupidity"? I suspect that
> there may be enough of these to dismiss some significant fraction of
> prospective users of beyond-human-capability (I just hate the word
> "intelligence") programs.
&

Re: [agi] AGI & Int'l Relations

2010-08-02 Thread Ian Parker
On 1 August 2010 21:18, Jan Klauck  wrote:

> Ian Parker wrote
>
> > McNamara's dictum seems on
> > the face of it to contradict the validity of Psychology as a
> > science.
>
> I don't think so. That in unforeseen events people switch to
> improvisation isn't surprising. Even an AGI, confronted with a novel
> situation and lacking data and models and rules for that, has to
> switch to ad-hoc heuristics.
>
> > Psychology, if is is a valid science can be used for modelling.
>
> True. And it's used for that purpose. In fact some models of
> psychology are so good that the simulation's results are consistent
> with what is empirically found in the real world.
>
> > Some of what McNamara has to say seems to me to be a little bit
> > contradictory. On the one hand he espouses "*gut feeling*". On the other
> > he says you should be prepared to change your mind.
>
> I don't see the contradiction. Changing one's mind refers to one's
> assumption and conceptual framings. You always operate under uncertainty
> and should be open for re-evaluation of what you believe.
>
> And the lower the probability of an event, the lesser are you prepared
> for it and you switch to gut feelings since you lack empirical experience.
> Likely that one's gut feelings operate within one's frame of mind.
>
> So these are two different levels.
>

This seems to link in with the very long running set of postings on
Solomonoff (or should it be -ov, -ов in Cyrillic). Laplace assigned a
probability of 50% to something we knew absolutely nothing about. I feel
that "*gut feelings*" are quite often wrong. Freeloading is very much
believed in by the "man in the street", but it is wrong and very much
oversimplified.

Could I tell you something of the background of John Prescott. He is very
much a bruiser. He has a Trade Union background and has not had much
education. Many such people have a sense of inverted snobbery. Alan Sugar
says that he got around the World speaking only English, yet a firm that
employs linguists can more than double its sales overseas. Of course as I
think we all agree one of the main characteristics of AGI is its ability to
understand NL. AGI will thus be polyglot. Indeed one of the main tests will
be translation. "What is the difference between laying concrete at 50C and
fighting Israel?". First Turing question!

>
> > John Prescott at the Chilcot Iraq inquiry said that the test of
> > politicians was not hindsight, but courage and leadership. What the 
> > does he mean.
>
> Rule of thumb is that it's better to do something than to do nothing.
> You act, others have to react. As long as you lead the game, you can
> correct your own errors. But when you hesitate, the other parties will
> move first and you eat what they hand out to you.
>
> And don't forget that the people still prefer alpha-males that lead,
> not those that deeply think. It's more important to unite the tribe
> with screams and jumps against the enemy than to reason about budgets
> or rule of law--gawd how boring... :)
>

Yes, but an AGI system will have to balance budgets. In fact narrow AI is
making a contribution in the shape of Forex. I have claimed that perhaps AGI
will consist of a library of narrow AI. Forex, or rather software of the
Forex type will be an integral part of AGI. Could Forex manage the European
Central Bank? With modifications I think yes.

AGI will have to think about the rule of law as well, otherwise it will be
intolerable and dangerous.

The alpha male syndrome is something we have to get away from, if we are
going to make progress of any kind.

>
> > It seems that "*getting things right*" is not a priority
> > for politicians.
>
> Keeping things running is the priority.
>

Things will run, sort of, even if bad decisions are taken.

>
> --- Now to the next posting ---
>
> > This is an interesting article.
>
> Indeed.
>
> > Google is certain to uncover the *real motivators.*
>
> Sex and power.
>

Are you in effect claiming that the leaders of (say) terrorist movements are
motivated by power and do not have any ideology? It has been said that war
is individual unselfishness combined with corporate selfishness (an interesting
quote to remember). I am not sure. What are the motivations of the
"unselfish" foot soldiers? How do leaders obtain their power? As Mr Cameron
rightly said, the ISI is exporting terror. British Pakistanis though are free
agents. They do not have to be "*exported*" by the ISI. Why do they allow
themselves to be? They are *not* conscripts.


  - Ian Parker

>
>
>

Re: [agi] AGI & Int'l Relations

2010-07-31 Thread Ian Parker
http://www.huffingtonpost.com/bob-jacobson/google-and-cia-invest-in_b_664525.html

This is an interesting article. Rather too alarmist though for my taste.
This in fact shows the type of social modelling I have in mind. The only
problem is that third world countries interactions are *not* on the Web.

There is, of course, absolutely no chance of a "Minority Report". The object
will be to find the factors that turn people to terrorism, or ordinary
crime for that matter. People will not be arrested; it will much more be a
question of counter groups being set up, and moderate (Islamic) leaders
encouraged. Still it is illustrative of the potential power in the present
version of Google.

In many ways a scientific approach will be better than John Prescott talking
utter drivel. Perhaps the people who will ultimately be most at risk will be
politicians as they exist now.

Let's take another example. In WW1 the British (and French) shot a large
number of soldiers for cowardice. The Germans shot only 2 and the Americans
did not shoot anybody. Haig was absolutely convinced of the
*freeloading* theory,
which was clearly at variance with the facts as the Germans and Americans
proved. People persist in working on completely the wrong theory.

Google is certain to uncover the *real motivators*.


  - Ian Parker





Re: [agi] AGI & Alife

2010-07-31 Thread Ian Parker
Adding is simple, proving is hard. This is a truism. I would like your
opinion on *proofs* which involve an unproven hypothesis, such as Riemann.
Hardy and Littlewood proved a form of Goldbach (every sufficiently large odd
number is a sum of three primes) with this assumption. Unfortunately the
converse does not apply: the truth of Goldbach does not imply the Riemann
hypothesis. Riemann would be proved if a converse were valid and the theorem
were proved another way.

I am not really arguing deep philosophy; what I am saying is that a
non-inscrutable system must be able to go back to its basic axioms.


  - Ian Parker

On 31 July 2010 00:25, Jan Klauck  wrote:

> Ian Parker wrote
>
> >> Then define your political objectives. No holes, no ambiguity, no
> >> forgotten cases. Or does the AGI ask for our feedback during mission?
> >> If yes, down to what detail?
> >
> > With Matt's ideas it does exactly that.
>
> How does it know when to ask? You give it rules, but those rules can
> be somehow imperfect. How are its actions monitored and sanctioned?
> And hopefully it's clear that we are now far from mathematical proof.
>
> > No we simply add to the axiom pool.
>
> Adding is simple, proving is not. Especially when the rules, goals,
> and constraints are not arithmetic but ontological and normative
> statements. Whether by NL or formal system, it's error-prone to
> specify our knowledge of the world (much of it is implicit) and
> teach it to the AGI. It's similar to law which is similar to math
> with referenced axioms and definitions and a substitution process.
> You often find flaws--most are harmless, some are not.
>
> Proofs give us islands of certainty in an explored sea within the
> ocean of the possible. We end up with heuristics. That's what this
> discussion is about, when I remember right. :)
>
> cu Jan
>
>
>





Re: [agi] AGI & Int'l Relations

2010-07-31 Thread Ian Parker
This echoes my feelings too. There is one other thing too. After my last
posting I realized that what I was talking about was general mathematics
rather than AI or AGI. Of course Polaris is AI, very much so, but Von
Neumann's nuclear war strategy was an evaluation of Minimax. Mind, once you
have a mathematical formulation you can quite easily transfer this to AI.

One should always approach problems rationally. McNamara's dictum seems on
the face of it to contradict the validity of Psychology as a
science. Psychology, if it is a valid science, can be used for modelling.
Some of what McNamara has to say seems to me to be a little bit
contradictory. On the one hand he espouses "*gut feeling*". On the other he
says you should be prepared to change your mind. *Probieren geht über
Studieren* (trying beats studying), yet the Vietnam war was lost.

John Prescott at the Chilcot Iraq inquiry said that the test of politicians
was not hindsight, but courage and leadership. What the  does he mean.
If an AGI system had taken such wrong decisions the programmers would be
sued massively. It seems that "*getting things right*" is not a priority for
politicians. Your Angela Merkel is the only scientist in high political
office. The only other person who springs to mind is Bashar Assad of Syria,
who was an eye surgeon at Moorfields Hospital. This is perhaps a theme that
can be developed.

I have already posted to the effect that AGI will spring from the Internet
and that there will be one AGI governing, or at any rate advising world
leaders. For reasons I have already gone into a "black box" AGI is not a
possibility. War will thus end. In fact war between developed countries has
already effectively ended. It has *not* ended terrorism or free enterprise
war. However if psychology is valid there are routes we could follow.


  - Ian Parker

On 31 July 2010 00:47, Jan Klauck  wrote:

> Ian Parker wrote
>
> > games theory
>
> It produced many studies, many strategies, but they weren't used that
> much in the daily business. It's used more as a general guide.
> And in times of crisis they preferred to rely on gut feelings. E.g.,
> see
> http://en.wikipedia.org/wiki/The_Fog_of_War
>
> > How do you cut
> > Jerusalem? Israel cuts and the Arabs then decide on the piece they want.
> > That is the simplest model.
>
> "For every complex problem there is an answer that is clear, simple,
> and wrong." (H. L. Mencken)
>
> SCNR. :)
>
> > This brings me to where I came in. How do you deal with irrational
> > decision
> > making. I was hoping that social simulation would be seeking to provide
> > answers. This does not seem to be the case.
>
> Models of limited rationality (like bounded rationality) are already
> used, e.g., in resource mangement & land use studies, peace and conflict
> studies and some more.
> The problem with those models is to say _how_much_ irrationality there
> is. We can assume (and model) perfect rationality and then measure the
> gap. Empirically most actors aren't fully irrational or behave random,
> so they approach the rational assumptions. What's often more missing is
> that actors lack information or the means to utilize them.
>
>
>





Re: [agi] AGI & Int'l Relations

2010-07-30 Thread Ian Parker
The only real attempt that I know of was that of Von Neumann and games
theory <http://en.wikipedia.org/wiki/John_von_Neumann>. It was in fact Von
Neumann who first suggested things like Prisoner's dilemma. This "*games*"
approach led to the
MAD<http://en.wikipedia.org/wiki/Mutual_assured_destruction> theory
of nuclear war. As we shall see the theory of nuclear deterrence has a
number of real holes in it.

In terms of Poker,
Polaris<http://www.google.co.uk/search?hl=en&rlz=1G1GGLQ_ENUK247&q=polaris+poker&aq=f&aqi=g3&aql=&oq=&gs_rfai=>,
initially at any rate assumes the Von Neumann zero sum strategy when
bluffing. Subsequently it observes the player's behaviour.
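As a toy illustration of the zero-sum idea (the payoffs below are invented,
and this is a brute-force sketch, not how Polaris actually works): the row
player picks the bluffing probability that maximises the worst case over the
opponent's responses.

import numpy as np

payoff = np.array([[ 2.0, -1.0],    # row 0 = bluff; columns = opponent calls / folds
                   [-1.0,  1.0]])   # row 1 = play honestly

best_p, best_value = 0.0, -np.inf
for p in np.linspace(0, 1, 1001):   # p = probability of bluffing
    value = min(np.array([p, 1 - p]) @ payoff)   # opponent picks the worst column for us
    if value > best_value:
        best_p, best_value = p, value
print(f"bluff with probability {best_p:.2f}, guaranteed value {best_value:.2f}")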

Another interesting piece of work is on cake
cutting<http://plus.maths.org/issue42/reviews/book1/index.html> by Ian
Stewart. The book is well worth reading, and Stewart has been consulted for
international negotiations. How do you cut
Jerusalem? Israel cuts and the Arabs then decide on the piece they want.
That is the simplest model.

MAD rests on some false assumptions. The essential assumption is that the
holders of nuclear weapons are all good poker players and make rational, if
amoral, decisions. For this reason it discounted the existence of rogue
states. No state is going to launch one or two nukes at Moscow or New York
if by so doing its annihilation is assured.

This brings me to where I came in. How do you deal with irrational decision
making. I was hoping that social simulation would be seeking to provide
answers. This does not seem to be the case.


  - Ian Parker

On 30 July 2010 18:54, Jan Klauck  wrote:

> (If you don't have time to read all this, scroll down to the
> questions.)
>
> I'm writing an article on the role of intelligent systems in the
> field of International Relations (IR). Why IR? Because in today's
> (and more so in tomorrow's) world the majority of national policies
> is influenced by foreign affairs--trade, migration, technology,
> global issues etc. (And because I got invited to write such an
> article for the IR community.)
>
> Link for a quick overview:
> http://en.wikipedia.org/wiki/International_relations
>
> The problem of foreign and domestic policy-making is to have
> appropriate data sources, models of the world and useful goals.
> Ideally both sides of the equation are brought into balance, which
> is difficult of course.
> Modern societies become more pluralistic, the world becomes more
> polycentric, technologies and social dynamics change faster and
> the overall scence becomes more complex. That's the trend.
> To make sense of that all policy/decision-makers have to handle
> this rising complexity.
>
> I know of several (academic) approaches to model IR, conflicts,
> macroeconomic and social processes. Only few are useful. And
> fewer are actually used (e.g., tax policy, economic policy).
> It's possible that some use even narrow AI for specific tasks.
> But I'm not aware of intelligent systems used by the IR community.
> From what I see do they rely more on studies done by analysts and
> news/intelligence reports.
>
> So my questions:
>
> (1) Do you know of intelligent systems for situational awareness,
> decision support, policy implementation and control that are used
> by the IR community (in whatever country)?
>
> (2) Or that are proposed to be used?
>
> (3) Do you know of any trends into this direction? Like extended
> C4ISR or ERP systems?
>
> (4) Do you know of intelligent systems used in the business world
> for strategic planning and operational control that could be used
> in IR?
>
> (5) Historical examples? Like
> http://en.wikipedia.org/wiki/Project_Cybersyn
> for the real-time control of the planned economy
>
> (6) Do you think the following statement is useful?
> Policy-making is a feedback loop which consists of awareness-
> decision-planning-action, where every part requires experience,
> trained cognitive abilities, high speed and precision of perception
> and assessment.
> (Background: ideal field for a supporting AGI to work in.)
>
> (7) Further comments?
>
> Thanks,
> Jan
>
>
> ---
> agi
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] AGI & Alife

2010-07-29 Thread Ian Parker
On 28 July 2010 23:09, Jan Klauck  wrote:

> Ian Parker wrote
>
> >> "If we program a machine for winning a war, we must think well what
> >> we mean by winning."
> >
> > I wasn't thinking about winning a war, I was much more thinking about
> > sexual morality and men kissing.
>
> If we program a machine for doing X, we must think well what we mean
> by X.
>
> Now clearer?
>
> > "Winning" a war is achieving your political objectives in the war. Simple
> > definition.
>
> Then define your political objectives. No holes, no ambiguity, no
> forgotten cases. Or does the AGI ask for our feedback during mission?
> If yes, down to what detail?
>

With Matt's ideas it does exactly that.

>
> > The axioms which we cannot prove
> > should be listed. You can't prove them. Let's list them and all the
> > assumptions.
>
> And then what? Cripple the AGI by applying just those theorems we can
> prove? That excludes of course all those we're uncertain about. And
> it's not so much a single theorem that's problematic but a system of
> axioms and inference rules that changes its properties when you
> modify it or that is incomplete from the beginning.
>

No, we simply add to the axiom pool. *All* I am saying is that we must always
have a lemma trail taking us back to the most fundamental axioms. Suppose I say

W = AσT^4 (the Stefan-Boltzmann law)

Now I ask the system to prove this. At the bottom of the lemma trail will be
Clifford algebra. This relates Bose-Einstein statistics to the spin, in this
case of the photon. It is quantum mechanics at a very fundamental level. A
fermion, by contrast, has half-integer spin.

I can introduce as many axioms as I want. I can say that i = √-1. I can call
this statement an axiom, as a counterexample to your natural numbers. In
constructing Clifford algebra I make a number of statements.

This thinking in terms of axioms, I repeat, does not limit the power of AGI.
If we have a database, you could almost say that a lemma trail is in essence
trivial.

What it does do is invalidate the biological model. *An absolute requirement
for AGI is openness.* In other words we must be able to examine the
arguments and their validity.
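
A minimal sketch of that openness requirement, with the statement names and
dependencies below invented purely for illustration: every statement in the
pool either is a listed axiom or carries a lemma trail back to the axioms,
and the trail itself can be inspected.

    axioms = {"set_theory", "peano_arithmetic", "i_squared_is_minus_one"}

    # statement -> the statements it was derived from
    depends_on = {
        "1+1=2": {"peano_arithmetic", "set_theory"},
        "clifford_algebra": {"set_theory"},
        "bose_einstein_statistics": {"clifford_algebra"},
        "stefan_boltzmann_law": {"clifford_algebra", "bose_einstein_statistics"},
    }

    def has_lemma_trail(statement, visiting=None):
        # True if the statement rests only on listed axioms
        if visiting is None:
            visiting = set()
        if statement in axioms:
            return True
        if statement in visiting or statement not in depends_on:
            return False      # circular or unsupported: the system is inscrutable here
        visiting.add(statement)
        ok = all(has_lemma_trail(p, visiting) for p in depends_on[statement])
        visiting.discard(statement)
        return ok

    print(has_lemma_trail("stefan_boltzmann_law"))   # True
    print(has_lemma_trail("riemann_hypothesis"))     # False: an unlisted assumption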

>
> Example (very plain just to make it clearer what I'm talking about):
>
> The natural numbers N are closed against addition. But N is not
> closed against subtraction, since n - m < 0 where m > n.
>
> You can prove the theorem that subtracting a positive number from
> another number decreases it:
>
> http://us2.metamath.org:88/mpegif/ltsubpos.html
>
> but you can still have a formal system that runs into problems.
> In the case of N it's missing closedness, i.e., undefined area.
> Now transfer this simple example to formal systems in general.
> You have to prove every formal system as it is, not just a single
> theorem. The behavior of an AGI isn't a single theorem but a system.
>
> > The heuristics could be tested in an off line system.
>
> Exactly. But by definition heuristics are incomplete, their solution
> space is smaller than the set of all solutions. No guarantee for the
> optimal solution, just probabilities < 1, elaborated hints.
>
> >>> Unselfishness going wrong is in fact a frightening thought. It would
> >>> in
> >>> AGI be a symptom of incompatible axioms.
> >>
> >> Which can happen in a complex system.
> >
> > Only if the definitions are vague.
>
> I bet against this.
>
> > Better to have a system based on "*democracy*" in some form or other.
>
> The rules you mention are goals and constraints. But they are heuristics
> you check during runtime.
>

That is true. Also see above. The system cannot be inscrutable.


  - Ian Parker

>
>
>
> ---
> agi
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Tweaking a few parameters

2010-07-29 Thread Ian Parker
Loebner/Turing tests start off with zero knowledge. Google works on the
Internet. What you need to start off with is some knowledge base. In fact
one possible Loebner contestant is Watson <http://www.research.ibm.com/deepqa/>.
Watson, unlike Google, runs on a single supercomputer. Watson is programmed
with the knowledge a Jeopardy contestant needs.

Watson's *knowledge* is essentially unlimited; the challenge is to process
NL.

I am going to ask you a question. Is chatting glorified Jeopardy? In
Jeopardy the database is constant throughout the competition. In
conversation we build a database. Let's put:

1) Some facts about you into Watson.
2) A list of questions about you that Watson has to discover the answers to.
In short it would have to do Jeopardy on you in reverse.

This would be the next logical step from the competition.
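
A toy sketch of that reverse-Jeopardy idea: the system starts with a list of
target questions about the user and fills in a growing fact base as the
conversation proceeds. The slot names and the crude keyword matching are
assumptions for illustration only; the matching step is exactly the hard NL
problem.

    open_questions = {
        "hometown": "Where do you live?",
        "profession": "What do you do for a living?",
    }
    fact_base = {}

    def ingest(utterance):
        # very crude slot filling stands in for real NL understanding
        text = utterance.lower()
        if "live in" in text and "hometown" not in fact_base:
            fact_base["hometown"] = text.split("live in", 1)[1].strip(" .")
        if "work as" in text and "profession" not in fact_base:
            fact_base["profession"] = text.split("work as", 1)[1].strip(" .")

    def next_question():
        for slot, question in open_questions.items():
            if slot not in fact_base:
                return question
        return None            # every target question has been answered

    ingest("I live in Manchester.")
    print(fact_base, next_question())
    ingest("I work as a translator.")
    print(fact_base, next_question())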

BTW - the name Watson comes from the Thomas J. Watson Research Center, where
the system lives, *not* from the companion of Sherlock Holmes.


  - Ian Parker

On 28 July 2010 16:10, Jan Klauck  wrote:

> A. T. Murray wrote
>
> > Robot:  I AM ANDRU
> > Robot:  I AM ANDRU
> > Robot:  ANDRU HELPS KIDS
> > Robot:  KIDS MAKE ROBOTS
> > Robot:  ROBOTS NEED ME
> > Robot:  I IS I
> > Robot:  I AM ANDRU
> > Robot:  ANDRU HELPS KIDS
> > Robot:  KIDS MAKE ROBOTS
>
> > For the first time in our dozen-plus years of
> > developing MindForth, the AI acts like an
> > intelligence struggling to express itself,
>
> An artificial retard?
>
> > We seem to be dealing
> > with a true artificial intelligence here.
>
> Definitely.
>
> > Now we
> > upload the AI Mind to the World Wide Awakening Web.
>
> Next stop Singularity Station.
>
> :)
>
>
> ---
> agi
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] AGI & Alife

2010-07-28 Thread Ian Parker
On 28 July 2010 19:56, Jan Klauck  wrote:

> Ian Parker wrote
>
> > What we would want
> > in a "*friendly"* system would be a set of utilitarian axioms.
>
> "If we program a machine for winning a war, we must think well what
> we mean by winning."
>

I wasn't thinking about winning a war; I was much more thinking about sexual
morality and men kissing.

"Winning" a war is achieving your political objectives in the war. Simple
definition.

>
> (Norbert Wiener, Cybernetics, 1948)
>
> > It is also important that AGI is fully axiomatic
> > and proves that 1+1=2 by set theory, as Russell did.
>
> Quoting the two important statements from
>
>
> http://en.wikipedia.org/wiki/Principia_Mathematica#Consistency_and_criticisms
>
> "Gödel's first incompleteness theorem showed that Principia could not
> be both consistent and complete."
>
> and
>
> "Gödel's second incompleteness theorem shows that no formal system
> extending basic arithmetic can be used to prove its own consistency."
>
> So in effect your AGI is either crippled but safe or powerful but
> potentially behaves different from your axiomatic intentions.
>

You have to state what your axioms are. Gödel's theorem does indeed imply
that: you therefore have to make some statements which are unprovable.
What I was in fact thinking of was something like Mizar.
Mathematics starts off with simple ideas. The axioms which we cannot prove
should be listed. You can't prove them, so let's list them and all the
assumptions.

If we have a Mizar proof we assume things, and argue the case for a theorem
from what we have assumed. What you should be able to do is get from the ideas
of Russell and Bourbaki to something really meaty like Fermat's Last
Theorem, or the Riemann hypothesis.

The organization of Mizar (and Alcor which is a front end) is very much a
part of AGI. Alcor has in fact to do a similar job to Google in terms of a
search for theorems. Mizar though is different from Google in that we have
lemmas. You prove something by linking the lemmas up.

Suppose I were to search for "Riemann Hypothesis". Alcor should give me all
the theorems that depend on it. It should tell me about the density of
primes. It should tell me about the weak Goldbach conjecture, which Hardy and
Littlewood showed follows (for sufficiently large odd numbers) from the
generalized Riemann hypothesis.

Google is a step towards AGI. An Alcor which could produce chains
of argument and find lemmas would be a big step to AGI.
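
A minimal sketch of that kind of query over a theorem database. The statement
names and dependencies are invented for illustration; the search simply walks
a dependency graph in reverse to find everything that rests on a given
conjecture.

    # statement -> the statements it was derived from
    depends_on = {
        "prime_density_estimate": {"riemann_hypothesis"},
        "weak_goldbach_hardy_littlewood": {"riemann_hypothesis"},
        "a_further_corollary": {"prime_density_estimate"},
        "some_unrelated_lemma": {"peano_arithmetic"},
    }

    def consequences(conjecture):
        # everything that directly or indirectly depends on the conjecture
        found, frontier = set(), {conjecture}
        while frontier:
            nxt = {thm for thm, deps in depends_on.items()
                   if deps & frontier and thm not in found}
            found |= nxt
            frontier = nxt
        return found

    print(consequences("riemann_hypothesis"))
    # {'prime_density_estimate', 'weak_goldbach_hardy_littlewood', 'a_further_corollary'}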

Could Mizar contain knowledge which was non-mathematical? In a sense it
already can. Mizar will contain Riemannian differential geometry. This is
simply a piece of pure maths. I am allowed to make a conjecture, an axiom if
you like, that Riemannian differential geometry, in the shape of general
relativity, is the way in which the Universe works. I have stated this as
an unproven assertion, one that has been constantly verified experimentally
but remains unproven in the mathematical universe.

>
> > We will need morality to be axiomatically defined.
>
> As constraints, possibly. But we can only check the AGI in runtime for
> certain behaviors (i.e., while it's active), but we can't prove in
> advance whether it will break the constraints or not.
>
> Get me right: We can do a lot with such formal specifications and we
> should do them where necessary or appropriate, but we have to understand
> that our set of guaranteed behavior is a proper subset of the set of
> all possible behaviors the AGI can execute. It's heuristics in the end.
>

The heuristics could be tested in an off-line system.

>
> > Unselfishness going wrong is in fact a frightening thought. It would in
> > AGI be a symptom of incompatible axioms.
>
> Which can happen in a complex system.
>

Only if the definitions are vague. The definition of happiness is vague.
Better to have a system based on "*democracy*" in some form or other. The
beauty of Matt's system is that we would remain ultimately in charge of the
system. We make rules such as no imprisonment without trial, a minimum of laws
restricting personal freedom (men kissing), separation of powers between the
judiciary and the executive, and the resolution of disputes without violence.
These are, I repeat, *not* fundamental philosophical principles but rules
which our civilization has devised and which have been found to work.

I have mentioned before that we could have more than 1 AGI system. All the "
*derived*" principles would be tested off line on another AGI system.

>
> > Suppose system A is monitoring system B. If system Bs
> > resources are being used up A can shut down processes in A. I talked
> > about computer gobledegook. I also have the feeling that with AGI we
> > should be able to get intelligible advice (in NL) a

Re: [agi] AGI & Alife

2010-07-28 Thread Ian Parker
Unselfishness gone wrong is a symptom. I think that this and all the other
examples should be cautionary for anyone who follows the biological model.
Do we want a system that thinks the way we do? Hell no! What we would want
in a "*friendly*" system would be a set of utilitarian axioms. That would
immediately make it think differently from us.

We certainly would not want a system which would arrest men kissing on a
park bench. In other words we would not want a system which was
axiomatically righteous. It is also important that AGI is fully axiomatic
and proves that 1+1=2 by set theory, as Russell did. This immediately takes
it out of the biological sphere.

We will need morality to be axiomatically defined.

Unselfishness going wrong is in fact a frightening thought. It would in AGI
be a symptom of incompatible axioms. In humans it is a real problem and it
should tell us that AGI cannot and should not be biologically based.

On 28 July 2010 15:59, Jan Klauck  wrote:

> Ian Parker wrote
>
> > There are the military costs,
>
> Do you realize that you often narrow a discussion down to military
> issues of the Iraq/Afghanistan theater?
>
> Freeloading in social simulation isn't about guys using a plane for
> free. When you analyse or design a system you look for holes in the
> system that allow people to exploit it. In complex systems that happens
> often. Most freeloading isn't much of a problem, just friction, but
> some have the power to damage the system too much. You have that in
> the health system, social welfare, subsidies and funding, the usual
> moral hazard issues in administration, services a.s.o.


> To come back to AGI: when you hope to design, say, a network of
> heterogenous neurons (taking Linas' example) you should be interested
> in excluding mechanisms that allow certain neurons to consume resources
> without delivering something in return because of the way resource
> allocation is organized. These freeloading neurons could go undetected
> for a while but when you scale the network up or confront it with novel
> inputs they could make it run slow or even break it.
>

In point of fact we can look at this another way. Let's dig a little bit
deeper <http://sites.google.com/site/aitranslationproject/computergobbledegook>.
If we have one AGI system we can have 2 (or 3 even; automatic landing in fog
is a triplex system). Suppose system A is monitoring system B. If system B's
resources are being used up, A can shut down processes in B. I talked about
computer gobbledegook. I also have the feeling that with AGI we should be
able to get intelligible advice (in NL) about what was going wrong. For this
reason it would not be possible to overload AGI.
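
A minimal sketch of the A-monitors-B idea, entirely schematic (the process
records and thresholds are assumptions for illustration, not a real operating
system interface): the monitor watches B's resource usage, shuts down runaway
processes, and reports *why* in plain terms rather than gobbledegook.

    from dataclasses import dataclass

    @dataclass
    class Process:
        name: str
        cpu_share: float      # fraction of system B's CPU this process uses
        running: bool = True

    def monitor(processes, budget=0.9):
        # system A's view of system B: if B is over budget, stop the worst offenders
        used = sum(p.cpu_share for p in processes if p.running)
        report = []
        for p in sorted(processes, key=lambda p: p.cpu_share, reverse=True):
            if used <= budget:
                break
            if p.running:
                p.running = False
                used -= p.cpu_share
                report.append(f"stopped {p.name}: it alone used {p.cpu_share:.0%} of CPU")
        return report

    system_b = [Process("indexer", 0.55), Process("translator", 0.30),
                Process("logger", 0.25)]
    for line in monitor(system_b):
        print(line)            # intelligible advice about what was going wrong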

I have the feeling that perhaps one aim in AGI should be user-friendly
systems. One product is in fact a form filler.

As far as society is concerned, I think this all depends on how
resource-limited we are. In a resource-limited society freeloading is the
biggest issue. In our society violence in all its forms is the big issue. One
need not go to Iraq or Afghanistan for examples. There are plenty in ordinary
crime: "happy" slapping, domestic violence, violence against children.

If the people who wrote computer viruses stole a large sum of money, what
they did would, to me at any rate, be more forgivable. People take a
delight in wrecking things for other people, while not stealing very much
themselves. Iraq, Afghanistan and suicide murder are really simply an extreme
example of this. The reason I come back to it is that the people feel they are
doing Allah's will. Happy slappers usually say they have nothing better to
do.

The fundamental fact about Western crime is that very little of it is to do
with personal gain or greed.

>
> > If someone were to come
> > along in the guise of social simulation and offer a reduction in
> > these costs the research would pay for itself many times over.
>
> SocSim research into "peace and conflict studies" isn't new. And
> some people in the community work on the Iraq/Afghanistan issue (for
> the US).
>
> > That is the way things should be done. I agree absolutely. We could in
> > fact
> > take steepest descent (Calculus) and GAs and combine them together in a
> > single composite program. This would in fact be quite a useful exercise.
>
> Just a note: Social simulation is not so much about GAs. You use
> agent systems and equation systems. Often you mix both in that you
> define the agent's behavior and the environment via equations, let
> the sim run and then describe the results in statistical terms or
> with curve fitting in equations again.
>
> > One last point. You say freeloading can cause o society to disintegrate.
> > One
> > society that has come pretty damn close to disintegration is I

Re: [agi] AGI & Alife

2010-07-28 Thread Ian Parker
One last point. You say freeloading can cause a society to disintegrate. One
society that has come pretty damn close to disintegration is Iraq.
The deaths in Iraq were very much due to sectarian bloodletting.
Unselfishness, if you like.

Would that the Iraqis (and Afghans) were more selfish.


  - Ian Parker



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] AGI & Alife

2010-07-28 Thread Ian Parker
On 27 July 2010 21:06, Jan Klauck  wrote:
>
>
> > Second observation about societal punishment eliminating free loaders.
> The
> > fact of the matter is that "*freeloading*" is less of a problem in
> > advanced societies than misplaced unselfishness.
>
> Fact of the matter, hm? Freeloading is an inherent problem in many
> social configurations. 9/11 brought down two towers, freeloading can
> bring down an entire country.
>
There are very considerable knock-on costs. There is the mushrooming cost
of security. This manifests itself in many ways. There is the cost of
disruption to air travel. If someone rides on a plane without a ticket no
one's life is put at risk. There are the military costs: it costs $1m per
year to keep a soldier in Afghanistan. I don't know how much a Taliban
fighter costs, but it must be a lot less.

Clearly any reduction in these costs would be welcomed. If someone were to
come along in the guise of social simulation and offer a reduction in these
costs, the research would pay for itself many times over. That is what *you*
are interested in.

This may be a somewhat unpopular thing to say, but money *is* important.
Matt Mahoney has costed his view of AGI. I say that costs must be
recoverable as we go along. Matt, don't frighten people with a high estimate
of cost. Frighten people instead with the bill they are paying now for dumb
systems.


> > simulations seem :-
> >
> > 1) To be better done by Calculus.
>
> You usually use both, equations and heuristics. It depends on the
> problem, your resources, your questions, the people working with it
> a.s.o.
>

That is the way things should be done. I agree absolutely. We could in fact
take steepest descent (Calculus) and GAs and combine them together in a
single composite program. This would in fact be quite a useful exercise. We
would also eliminate genes that simply dealt with Calculus and steepest
descent.
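
A minimal sketch of that composite (what the literature calls a memetic
algorithm): a GA population where each candidate is first polished by a few
steps of steepest descent, so the genes no longer have to encode what the
Calculus does better. The test landscape and all parameters are assumptions
chosen only for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    def f(x):                         # a non-linear landscape with many local minima
        return np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10)

    def grad(x, eps=1e-5):            # numerical gradient for the descent step
        g = np.zeros_like(x)
        for i in range(len(x)):
            d = np.zeros_like(x); d[i] = eps
            g[i] = (f(x + d) - f(x - d)) / (2 * eps)
        return g

    def local_descent(x, steps=20, lr=0.01):
        for _ in range(steps):        # the Calculus part: polish each candidate
            x = x - lr * grad(x)
        return x

    def memetic_ga(pop_size=30, dims=5, generations=50):
        pop = rng.uniform(-5, 5, size=(pop_size, dims))
        for _ in range(generations):
            pop = np.array([local_descent(x) for x in pop])         # descent
            scores = np.array([f(x) for x in pop])
            parents = pop[np.argsort(scores)[: pop_size // 2]]      # selection
            children = parents + rng.normal(0, 0.3, parents.shape)  # mutation
            pop = np.vstack([parents, children])
        best = min(pop, key=f)
        return best, f(best)

    print(memetic_ga())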

I don't know whether it is useful to think in topological terms.


  - Ian Parker


>
>
> ---
> agi
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


[agi] Datamining and AGI

2010-07-27 Thread Ian Parker
http://bits.blogs.nytimes.com/2010/07/26/bringing-data-mining-into-the-mainstream/

This is a very interesting article. It repeats some of my long-standing
assertions that AGI is a matter of pulling together loads of different
threads. A spreadsheet, as he puts it, is not AGI, but a spreadsheet is
essential for AGI.

The proposed software would bring the power of Google to the ordinary user.
I think too that the software is essential from the *political* viewpoint.
In capitalist societies like the US and EU we do not think monopolies are a
good idea. Google, unlike Microsoft, has kept within antitrust laws. Yet its
power is alarming. I do not think it is practical to have 2 search engines.
The best solution would be an open-source, open-interface "*spreadsheet*"
which other software could draw upon.

Example - NL translation. Google has the best Arabic-English dictionary.
It is flawless at translating single words (as a dictionary). It does not,
however, take account of grammar or morphology. It also has the best corpus
anywhere in the world. Would a spreadsheet give me access to it?

A spreadsheet too is a vital early stage in Matt's AGI.


  - Ian Parker



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] AGI & Alife

2010-07-27 Thread Ian Parker
I think I should say that for a problem to be suitable for GAs the space in
which it is embedded has to be non-linear. Otherwise we have an easy
Calculus solution. A fair number of such systems are described at

http://www.springerlink.com/content/h46r77k291rn/?p=bfaf36a87f704d5cbcb66429f9c8a808&pi=0


  - Ian Parker



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] AGI & Alife

2010-07-27 Thread Ian Parker
I did take a look at the journal. There is one question I have with regard
to the assumptions. Mathematically, the number of "prisoners" in the
Prisoner's Dilemma who cooperate or not reflects the prevalence of
cooperators or non-cooperators present. Evolution *should* tend to Von
Neumann's zero-sum condition. This is an example of Calculus solving a
problem far more neatly and elegantly than GAs, which should only be used
where there is no good or obvious Calculus solution. This is my first
observation.
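
A minimal sketch of the Calculus view: replicator dynamics for the fraction x
of cooperators in a Prisoner's Dilemma population. The payoff values are the
usual textbook assumptions (T > R > P > S); no GA is needed to see where the
population ends up.

    R, S, T, P = 3.0, 0.0, 5.0, 1.0

    def step(x, dt=0.01):
        fc = R * x + S * (1 - x)           # mean payoff of a cooperator
        fd = T * x + P * (1 - x)           # mean payoff of a defector
        mean = x * fc + (1 - x) * fd
        return x + dt * x * (fc - mean)    # replicator equation dx/dt = x(fc - mean)

    x = 0.9                                # start with 90% cooperators
    for _ in range(2000):
        x = step(x)
    print(round(x, 4))                     # drifts towards 0: defection takes over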

My second observation is about societal punishment eliminating freeloaders.
The fact of the matter is that "*freeloading*" is less of a problem in
advanced societies than misplaced unselfishness. The 9/11 hijackers performed
the most unselfish and unfreeloading of acts. I hope I am not accused
of glorifying terrorism! How fundamental an issue is this? It is fundamental
in that simulations seem:

1) To be better done by Calculus.
2) Not to be useful in providing simulations of things we are interested in.

Neither of these two points is necessarily the case. We could in fact
simulate opinion formation by social interaction. There would then be no
clear-cut Calculus outcome.

The third observation is that Google is itself a GA. It uses popular appeal
in its page ranking systems. This is relevant to Matt's ideas. You can, for
example, string programs or other entities together. Of course to do this
association one needs Natural Language. You will also need NL in setting up
and describing any process of opinion formation. This is the great unsolved
problem. In fact any system not based on NL, but based on an analogue
response, is Calculus describable.


  - Ian Parker

On 27 July 2010 14:00, Jan Klauck  wrote:

>
> > Seems like there could be many many interesting questions.
>
> Many of these are specialized issues that are researched in alife but
> more in social simulation. The Journal of Artificial Societies and
> Social Simulation
>
> http://jasss.soc.surrey.ac.uk/JASSS.html
>
> is a good starting point if anyone is interested.
>
> cu Jan
>
>
> ---
> agi
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] How do we hear music

2010-07-23 Thread Ian Parker
You have all missed one vital point. Music is repetitive and it has a
symmetry. In dancing (song and dance) moves are repeated in
a symmetrical pattern.

Question: why are we programmed to find symmetry? This question may be more
core to AGI than appears at first sight. Clearly an AGI system will have to
look for symmetry and do what Hardy described as "beautiful" maths.
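
One such symmetry bears directly on the question that started the thread: a
melody keeps its interval pattern when transposed to another key, so comparing
successive intervals rather than absolute pitches recognises it as the same
tune. A tiny sketch (the MIDI note numbers are illustrative):

    def intervals(notes):
        return [b - a for a, b in zip(notes, notes[1:])]

    tune_in_c = [60, 62, 64, 65, 67]    # C D E F G
    tune_in_g = [67, 69, 71, 72, 74]    # the same tune transposed up a fifth

    print(intervals(tune_in_c) == intervals(tune_in_g))   # True: same song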


  - Ian Parker

On 23 July 2010 04:02, Mike Archbold  wrote:

>
>
> On Thu, Jul 22, 2010 at 12:59 PM, deepakjnath wrote:
>
>> Why do we listen to a song sung in different scale and yet identify it as
>> the same song.?  Does it have something to do with the fundamental way in
>> which we store memory?
>>
>>
>
> Probably due to evolution?  Maybe at some point prior to words pitch was
> used in some variation.  You (an astrolopithicus etc, the spelling is f-ed
> up, I know) is not going to care what key you are singing "Watch out for
> that sabertooth tiger" in.  If you got messed up like that, can't hear the
> same song in a different key, you are cancelled out in evolution.  Just a
> guess.
>
> Mike Archbold
>
>
>
>> cheers,
>> Deepak
>>   *agi* | Archives <https://www.listbox.com/member/archive/303/=now>
>> <https://www.listbox.com/member/archive/rss/303/> | 
>> Modify<https://www.listbox.com/member/?&;>Your Subscription
>> <http://www.listbox.com/>
>>
>
>*agi* | Archives <https://www.listbox.com/member/archive/303/=now>
> <https://www.listbox.com/member/archive/rss/303/> | 
> Modify<https://www.listbox.com/member/?&;>Your Subscription
> <http://www.listbox.com>
>



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] My Boolean Satisfiability Solver

2010-07-21 Thread Ian Parker
If I can express Arithmetic in logical terms it must be.


  - Ian Parker

On 21 July 2010 21:38, Jim Bromer  wrote:

> Well, Boolean Logic may be a part of number theory but even then it is
> still not the same as number theory.
>
> On Wed, Jul 21, 2010 at 4:01 PM, Jim Bromer  wrote:
>
>> Because a logical system can be applied to a problem, that does not mean
>> that the logical system is the same as the problem.  Most notably, the
>> theory of numbers contains definitions that do not belong to logic per se.
>> Jim Bromer
>>
>> On Wed, Jul 21, 2010 at 3:45 PM, Ian Parker  wrote:
>>
>>> But surely a number is a group of binary combinations if we represent the
>>> number in binary form, as we always can. The real theorems are those which
>>> deal with *numbers*. What you are in essence discussing is no more or
>>> less than the "*Theory of Numbers".*
>>> *
>>> *
>>> *  - Ian Parker
>>> *
>>>   On 21 July 2010 20:17, Jim Bromer  wrote:
>>>
>>>>   I haven't made any noteworthy progress on my attempt to create a
>>>> polynomial time Boolean Satisfiability Solver.
>>>> I am going to try to explore some more modest means of compressing
>>>> formulas in a way so that the formula will reveal more about individual
>>>> combinations (of the Boolean states of the variables that are True or
>>>> False), through the use of "strands" which are groups of combinations.  So 
>>>> I
>>>> am not trying to find a polynomial time solution at this point, I am just
>>>> going through the stuff that I have been thinking of, either explicitly or
>>>> implicitly during the past few years to see if I can get some means of
>>>> representing more about a formula in an efficient manner.
>>>>
>>>> Jim Bromer
>>>>   *agi* | Archives <https://www.listbox.com/member/archive/303/=now>
>>>> <https://www.listbox.com/member/archive/rss/303/> | 
>>>> Modify<https://www.listbox.com/member/?&;>Your Subscription
>>>> <http://www.listbox.com/>
>>>>
>>>
>>>   *agi* | Archives <https://www.listbox.com/member/archive/303/=now>
>>> <https://www.listbox.com/member/archive/rss/303/> | 
>>> Modify<https://www.listbox.com/member/?&;>Your Subscription
>>> <http://www.listbox.com/>
>>>
>>
>>
>*agi* | Archives <https://www.listbox.com/member/archive/303/=now>
> <https://www.listbox.com/member/archive/rss/303/> | 
> Modify<https://www.listbox.com/member/?&;>Your Subscription
> <http://www.listbox.com>
>



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] My Boolean Satisfiability Solver

2010-07-21 Thread Ian Parker
The Theory of Numbers is, as its name implies, about numbers. Advanced number
theory is also about things like elliptic functions, modular functions,
polynomials, symmetry groups and the Riemann hypothesis.

What I am saying is that I can express *ANY* numerical problem in binary form.
I can use numbers, expressible in any base, to define the above. Logic is in
fact expressible if we take numbers of modulus 1, but that is another story.
You do not have to express all of logic in terms of the Theory of Numbers. I
am claiming that the Theory of Numbers, and all its advanced ramifications,
are expressible in terms of logic.
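
A tiny sketch of the direction being claimed here: arithmetic expressed purely
in Boolean terms. A ripple-carry adder built from AND/OR/XOR adds two numbers
given only as lists of bits; no numeric addition is used inside. (This shows
expressibility only, not anything about the difficulty of satisfiability.)

    def full_adder(a, b, carry):
        s = a ^ b ^ carry
        carry_out = (a & b) | (carry & (a ^ b))
        return s, carry_out

    def add_bits(xs, ys):                  # little-endian lists of 0/1 bits
        out, carry = [], 0
        for a, b in zip(xs, ys):
            s, carry = full_adder(a, b, carry)
            out.append(s)
        out.append(carry)
        return out

    print(add_bits([1, 1, 0], [1, 0, 1]))  # 3 + 5 -> [0, 0, 0, 1], i.e. 8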


  - Ian Parker

On 21 July 2010 21:01, Jim Bromer  wrote:

> Because a logical system can be applied to a problem, that does not mean
> that the logical system is the same as the problem.  Most notably, the
> theory of numbers contains definitions that do not belong to logic per se.
> Jim Bromer
>
> On Wed, Jul 21, 2010 at 3:45 PM, Ian Parker  wrote:
>
>> But surely a number is a group of binary combinations if we represent the
>> number in binary form, as we always can. The real theorems are those which
>> deal with *numbers*. What you are in essence discussing is no more or
>> less than the "*Theory of Numbers".*
>> *
>> *
>> *  - Ian Parker
>> *
>>   On 21 July 2010 20:17, Jim Bromer  wrote:
>>
>>>   I haven't made any noteworthy progress on my attempt to create a
>>> polynomial time Boolean Satisfiability Solver.
>>> I am going to try to explore some more modest means of compressing
>>> formulas in a way so that the formula will reveal more about individual
>>> combinations (of the Boolean states of the variables that are True or
>>> False), through the use of "strands" which are groups of combinations.  So I
>>> am not trying to find a polynomial time solution at this point, I am just
>>> going through the stuff that I have been thinking of, either explicitly or
>>> implicitly during the past few years to see if I can get some means of
>>> representing more about a formula in an efficient manner.
>>>
>>> Jim Bromer
>>>   *agi* | Archives <https://www.listbox.com/member/archive/303/=now>
>>> <https://www.listbox.com/member/archive/rss/303/> | 
>>> Modify<https://www.listbox.com/member/?&;>Your Subscription
>>> <http://www.listbox.com/>
>>>
>>
>>   *agi* | Archives <https://www.listbox.com/member/archive/303/=now>
>> <https://www.listbox.com/member/archive/rss/303/> | 
>> Modify<https://www.listbox.com/member/?&;>Your Subscription
>> <http://www.listbox.com/>
>>
>
>*agi* | Archives <https://www.listbox.com/member/archive/303/=now>
> <https://www.listbox.com/member/archive/rss/303/> | 
> Modify<https://www.listbox.com/member/?&;>Your Subscription
> <http://www.listbox.com>
>



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] My Boolean Satisfiability Solver

2010-07-21 Thread Ian Parker
But surely a number is a group of binary combinations if we represent the
number in binary form, as we always can. The real theorems are those which
deal with *numbers*. What you are in essence discussing is no more or less
than the "*Theory of Numbers".*
*
*
*  - Ian Parker
*
On 21 July 2010 20:17, Jim Bromer  wrote:

> I haven't made any noteworthy progress on my attempt to create a polynomial
> time Boolean Satisfiability Solver.
> I am going to try to explore some more modest means of compressing formulas
> in a way so that the formula will reveal more about individual combinations
> (of the Boolean states of the variables that are True or False), through the
> use of "strands" which are groups of combinations.  So I am not trying to
> find a polynomial time solution at this point, I am just going through the
> stuff that I have been thinking of, either explicitly or implicitly during
> the past few years to see if I can get some means of representing more about
> a formula in an efficient manner.
>
> Jim Bromer
>*agi* | Archives <https://www.listbox.com/member/archive/303/=now>
> <https://www.listbox.com/member/archive/rss/303/> | 
> Modify<https://www.listbox.com/member/?&;>Your Subscription
> <http://www.listbox.com>
>



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Is there any Contest or test to ensure that a System is AGI?

2010-07-19 Thread Ian Parker
"What is the difference between laying concrete at 50C and fighting
Israel?". That is my question my 2 pennyworth. Other people can elaborate.

If that question can be answered you can have an automated advisor in B&Q.
Suppose I want to know about the characteristics of concrete. Of course one
thing you could do is go to B&Q and ask them what they would be looking for
in an avatar.


  - Ian Parker

On 19 July 2010 02:43, Colin Hales  wrote:

>  Try this one ...
> http://www.bentham.org/open/toaij/openaccess2.htm
> If the test subject can be a scientist, it is an AGI.
> cheers
> colin
>
>
> Steve Richfield wrote:
>
> Deepak,
>
> An intermediate step is the "reverse Turing test" (RTT), wherein people or
> teams of people attempt to emulate an AGI. I suspect that from such a
> competition would come a better idea as to what to expect from an AGI.
>
> I have attempted in the past to drum up interest in a RTT, but so far, no
> one seems interested.
>
> Do you want to play a game?!
>
> Steve
> 
>
> On Sun, Jul 18, 2010 at 5:15 AM, deepakjnath wrote:
>
>> I wanted to know if there is any bench mark test that can really convince
>> majority of today's AGIers that a System is true AGI?
>>
>> Is there some real prize like the XPrize for AGI or AI in general?
>>
>> thanks,
>> Deepak
>>*agi* | Archives <https://www.listbox.com/member/archive/303/=now>
>> <https://www.listbox.com/member/archive/rss/303/> | 
>> Modify<https://www.listbox.com/member/?&;>Your Subscription
>> <http://www.listbox.com>
>>
>
>*agi* | Archives <https://www.listbox.com/member/archive/303/=now>
> <https://www.listbox.com/member/archive/rss/303/> | 
> Modify<https://www.listbox.com/member/?&;>Your Subscription
> <http://www.listbox.com>
>
>*agi* | Archives <https://www.listbox.com/member/archive/303/=now>
> <https://www.listbox.com/member/archive/rss/303/> | 
> Modify<https://www.listbox.com/member/?&;>Your Subscription
> <http://www.listbox.com>
>



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Is there any Contest or test to ensure that a System is AGI?

2010-07-18 Thread Ian Parker
In my view the main obstacle to AGI is the understanding of Natural
Language. If we have NL comprehension we have the basis for doing a whole
host of marvellous things.

There is the Turing test. A good question to ask is "What is the difference
between laying concrete at 50C and fighting Israel?" Google translated "wsT
jw AlmErkp", or وسط جو المعركة, as "central air battle". Correct is "the
climatic environmental battle", or a freer translation would be "the
battle against climate and environment". In Turing competitions no one ever
asks the questions that really would tell AGI apart from a brand X
chatterbox.

http://sites.google.com/site/aitranslationproject/Home/formalmethods

We can, I think, say that anything which can carry out the program of my blog
would be well on its way. AGI will also be the link between NL and
formal mathematics. Let me take yet another example.

http://sites.google.com/site/aitranslationproject/deepknowled

Google translated it as 4 times the temperature. Ponder this: you have in
fact 3 chances to get this right.

1)  درجة means degree. GT has not translated this word. In this context it
means "power".

2) If you search for "Stefan Boltzmann" or "Black Body" Google gives you the
correct law.

3) The translation is obviously mathematically incorrect from the
dimensional standpoint.

These 3 things in fact represent different aspects of knowledge. In AGI they
all have to be present.

The other interesting point is that there are programs in existence now that
will address the last two questions. A translator that produces OWL solves
"2".

If we match up AGI to Mizar we can put dimensions into the proof engine.
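
A minimal sketch of what "dimensions in the proof engine" could look like:
units represented as exponent vectors over (kg, m, s, K). A rendering such as
"4 times the temperature" fails the check, while σT^4 has the dimensions of
power per unit area, as the Stefan-Boltzmann law requires. The representation
is an assumption for illustration only.

    def mul(a, b):
        out = dict(a)
        for unit, exp in b.items():
            out[unit] = out.get(unit, 0) + exp
        return {u: e for u, e in out.items() if e != 0}

    def power(a, n):
        return {u: e * n for u, e in a.items()}

    sigma = {"kg": 1, "s": -3, "K": -4}     # Stefan-Boltzmann constant: W m^-2 K^-4
    temperature = {"K": 1}
    power_per_area = {"kg": 1, "s": -3}     # W m^-2

    print(mul(sigma, power(temperature, 4)) == power_per_area)   # True
    print(mul(sigma, power(temperature, 1)) == power_per_area)   # False: dimensionally wrong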

There are a great many things on the Web which will solve specific problems.
NL is *THE* problem since it will allow navigation between the different
programs on the Web.

MOLTO BTW does have its mathematical parts even though it is primarily
billed as a translator.


  - Ian Parker

On 18 July 2010 14:41, deepakjnath  wrote:

> Yes, but is there a competition like the XPrize or something that we can
> work towards. ?
>
> On Sun, Jul 18, 2010 at 6:40 PM, Panu Horsmalahti wrote:
>
>> 2010/7/18 deepakjnath 
>>
>>> I wanted to know if there is any bench mark test that can really convince
>>> majority of today's AGIers that a System is true AGI?
>>>
>>> Is there some real prize like the XPrize for AGI or AI in general?
>>>
>>> thanks,
>>> Deepak
>>>
>>
>> Have you heard about the Turing test?
>>
>> - Panu Horsmalahti
>>*agi* | Archives <https://www.listbox.com/member/archive/303/=now>
>> <https://www.listbox.com/member/archive/rss/303/> | 
>> Modify<https://www.listbox.com/member/?&;>Your Subscription
>> <http://www.listbox.com>
>>
>
>
>
> --
> cheers,
> Deepak
>*agi* | Archives <https://www.listbox.com/member/archive/303/=now>
> <https://www.listbox.com/member/archive/rss/303/> | 
> Modify<https://www.listbox.com/member/?&;>Your Subscription
> <http://www.listbox.com>
>



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] OFF-TOPIC: University of Hong Kong Library

2010-07-16 Thread Ian Parker
There is one other thing that I have been bellyaching to Google about for
some time. GT takes absolutely no cognisance of grammar. Genders are all
over the place.

*Le* magasin .. *Elle* a été fondu (note no "e" at end) par...

This is more fundamental than you might think. Kurzweil has spoken of GT as
being AI. It isn't: AI would deduce grammatical rules from bilingual text,
something GT cannot do.

This as I said may be more fundamental than we realise. People talk about
minimum sets. Quite clearly the Google algorithms cannot be minimal sets.


  - Ian Parker



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] OFF-TOPIC: University of Hong Kong Library

2010-07-16 Thread Ian Parker
On 16 July 2010 00:32, John G. Rose  wrote:

>
>
> I always wondered - do language translators map from one language to
> another or do they map to a "universal language" first. And if there is a
> "universal language" what is it or.. what are they?
>

It depends very much on the translator. In Google Translate each pair of
languages is a separate case. Sometimes if you want to (say) translate
Arabic into Russian you translate from Arabic to English and then from
English to Russian. This, as you can judge, gives very poor Russian.

MOLTO
<http://cordis.europa.eu/fp7/ict/language-technologies/project-molto_en.html>,
on the other hand, translates into a common base. In the EU you cannot
prefer one language to another. MOLTO should be of considerable interest as
pure AGI as it will generate
OWL <http://en.wikipedia.org/wiki/Web_Ontology_Language> from
NL documents.


  - Ian Parker

>
>
>
>
>
> ---
> agi
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] OFF-TOPIC: University of Hong Kong Library

2010-07-15 Thread Ian Parker
OK, off topic, but not as far as you might think. YKY has posted in "Creating
Artificial Intelligence" on a collaborative project. It is quite important
to know *exactly* where he is. You see, Taiwan uses the classical character
set; the People's Republic uses a simplified character set.

Hong Kong was handed back to China in I think 1997. It is still outside the
Great Firewall and (I presume) uses classical characters, although I don't
really know. If we are to discuss transliteration schemes, translation and
writing Chinese (PRC or Taiwan) on Western keyboards, it is important for us
to know.

I have just bashed up a Java program to write Arabic. You input Roman
Buckwalter and it has an internal conversion table. The same thing could in
principle be done for a load of character sets. In Chinese you would have to
input two Western keys simultaneously. That can be done.
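
A toy sketch of the Roman-Buckwalter-to-Arabic conversion-table idea. Only a
handful of letters are mapped, enough for one example phrase; a real table
covers the whole alphabet, and the mapping shown is an illustrative
assumption rather than my actual program.

    BUCKWALTER = {
        "w": "\u0648",  # waw
        "s": "\u0633",  # sin
        "T": "\u0637",  # emphatic ta
        "j": "\u062c",  # jim
        "A": "\u0627",  # alef
        "l": "\u0644",  # lam
        "m": "\u0645",  # mim
        "E": "\u0639",  # ain
        "r": "\u0631",  # ra
        "k": "\u0643",  # kaf
        "p": "\u0629",  # ta marbuta
        " ": " ",
    }

    def to_arabic(buckwalter_text):
        return "".join(BUCKWALTER.get(ch, ch) for ch in buckwalter_text)

    print(to_arabic("wsT jw AlmErkp"))   # وسط جو المعركة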

I know HK is outside the Firewall because that is where Google has its proxy
server. Is YKY there, do you know?


  - Ian Parker

2010/7/15 John G. Rose 

> Make sure you study that up YKY :)
>
>
>
> John
>
>
>
> *From:* YKY (Yan King Yin, 甄景贤) [mailto:generic.intellige...@gmail.com]
> *Sent:* Thursday, July 15, 2010 8:59 AM
> *To:* agi
> *Subject:* [agi] OFF-TOPIC: University of Hong Kong Library
>
>
>
>
>
> Today, I went to the HKU main library:
>
>
>
>
>
> =)
>
> KY
>
> *agi* | Archives <https://www.listbox.com/member/archive/303/=now>
> <https://www.listbox.com/member/archive/rss/303/> |
> Modify <https://www.listbox.com/member/?&;> Your Subscription
> <http://www.listbox.com/>
>
>
>   *agi* | Archives <https://www.listbox.com/member/archive/303/=now>
> <https://www.listbox.com/member/archive/rss/303/> | 
> Modify<https://www.listbox.com/member/?&;>Your Subscription
> <http://www.listbox.com>
>



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com

Re: [agi] Hutter - A fundamental misdirection?

2010-07-07 Thread Ian Parker
There is very little on this; someone should do the research. Here is a paper
on language fitness.

http://kybele.psych.cornell.edu/~edelman/elcfinal.pdf

LSA is *not* discussed, nor is any fitness concept within the language itself.
Similar-sounding (or similarly written) words must be capable of disambiguation
using LSA, otherwise the language would be unfit. Let us have a *gedanken*
language where "spring", the example I have taken with my Spanish, cannot be
disambiguated. Suppose "*spring*" meant "*step forward*" as well as its other
meanings. If I am learning to dance I do not think about "*primavera*",
"*resorte*" or "*manantial*", but I do think about "*salsa*". If I did not know
whether I was to jump or put my leg forward it would be extremely confusing.
To my knowledge fitness in this context has not been discussed.

In fact perhaps the only work that is relevant is my own which I posted here
some time ago. The reduction in entropy (compression) obtained with LSA was
disappointing. The different meanings (different words in Spanish & other
languages) are compressed more readily. Both Spanish and English have a
degree of fitness which (just possibly) is definable in LSA terms.


  - Ian Parker

On 7 July 2010 17:12, Gabriel Recchia  wrote:

> > In short, instead of a "pot of neurons", we might instead have a pot of
> dozens of types of
> > neurons that each have their own complex rules regarding what other types
> of neurons they
> > can connect to, and how they process information...
>
> > ...there is plenty of evidence (from the slowness of evolution, the large
> number (~200)
> > of neuron types, etc.), that it is many-layered and quite complex...
>
> The disconnect between the low-level neural hardware and the implementation
> of algorithms that build conceptual spaces via dimensionality
> reduction--which generally ignore facts such as the existence of different
> types of neurons, the apparently hierarchical organization of neocortex,
> etc.--seems significant. Have there been attempts to develop computational
> models capable of LSA-style feats (e.g., constructing a vector space in
> which words with similar meanings tend to be relatively close to each other)
> that take into account basic facts about how neurons actually operate
> (ideally in a more sophisticated way than the nodes of early connectionist
> networks which, as we now know, are not particularly neuron-like at all)? If
> so, I would love to know about them.
>
>
> On Tue, Jun 29, 2010 at 3:02 PM, Ian Parker  wrote:
>
>> The paper seems very similar in principle to LSA. What you need for a
>> concept vector  (or position) is the application of LSA followed by K-Means
>> which will give you your concept clusters.
>>
>> I would not knock Hutter too much. After all LSA reduces {primavera,
>> manantial, salsa, resorte} to one word giving 2 bits saving on Hutter.
>>
>>
>>   - Ian Parker
>>
>>
>> On 29 June 2010 07:32, rob levy  wrote:
>>
>>> Sorry, the link I included was invalid, this is what I meant:
>>>
>>>
>>> http://www.geog.ucsb.edu/~raubal/Publications/RefConferences/ICSC_2009_AdamsRaubal_Camera-FINAL.pdf<http://www.geog.ucsb.edu/%7Eraubal/Publications/RefConferences/ICSC_2009_AdamsRaubal_Camera-FINAL.pdf>
>>>
>>>
>>> On Tue, Jun 29, 2010 at 2:28 AM, rob levy  wrote:
>>>
>>>> On Mon, Jun 28, 2010 at 5:23 PM, Steve Richfield <
>>>> steve.richfi...@gmail.com> wrote:
>>>>
>>>>> Rob,
>>>>>
>>>>> I just LOVE opaque postings, because they identify people who see
>>>>> things differently than I do. I'm not sure what you are saying here, so 
>>>>> I'll
>>>>> make some "random" responses to exhibit my ignorance and elicit more
>>>>> explanation.
>>>>>
>>>>>
>>>> I think based on what you wrote, you understood (mostly) what I was
>>>> trying to get across.  So I'm glad it was at least quasi-intelligible. :)
>>>>
>>>>
>>>>>  It sounds like this is a finer measure than the "dimensionality" that
>>>>> I was referencing. However, I don't see how to reduce anything as 
>>>>> quantized
>>>>> as dimensionality into finer measures. Can you say some more about this?
>>>>>
>>>>>
>>>> I was just referencing Gardenfors' research program of "conceptual
>>>> spaces" (I was intentionally vague about committing to this fully though
>

Re: [agi] Reward function vs utility

2010-07-04 Thread Ian Parker
No, it would not. AI will "press its own buttons" only if those buttons are
defined. In one sense you can say that Gödel's theorem is a proof of
friendliness, as it means that there must always be one button that AI cannot
press.


  - Ian Parker

On 4 July 2010 16:43, Abram Demski  wrote:

> Joshua,
>
> But couldn't it game the external utility function by taking actions which
> modify it? For example, if the suggestion is taken literally and you have a
> person deciding the reward at each moment, an AI would want to focus on
> making that person *think* the reward should be high, rather than focusing
> on actually doing well at whatever task it's set...and the two would tend to
> diverge greatly for more and more complex/difficult tasks, since these tend
> to be harder to judge. Furthermore, the AI would be very pleased to knock
> the human out of the loop and push its own buttons. Similar comments would
> apply to automated reward calculations.
>
> --Abram
>
>
>
>   *agi* | Archives <https://www.listbox.com/member/archive/303/=now>
> <https://www.listbox.com/member/archive/rss/303/> | 
> Modify<https://www.listbox.com/member/?&;>Your Subscription
> <http://www.listbox.com>
>



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Hutter - A fundamental misdirection?

2010-06-29 Thread Ian Parker
The paper seems very similar in principle to LSA. What you need for a
concept vector (or position) is the application of LSA followed by K-means,
which will give you your concept clusters.

I would not knock Hutter too much. After all, LSA reduces {primavera,
manantial, salsa, resorte} to one word, giving a 2-bit saving on Hutter.
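
A minimal numpy-only sketch of that pipeline: LSA (a truncated SVD of a
term-document count matrix) followed by k-means on the resulting concept
vectors. The tiny corpus and the choice of two concept dimensions are
assumptions for illustration.

    import numpy as np

    docs = ["primavera flores sol", "resorte metal muelle",
            "flores sol calor", "metal muelle acero"]
    vocab = sorted({w for d in docs for w in d.split()})
    X = np.array([[d.split().count(w) for w in vocab] for d in docs], float)

    # LSA: keep the top-k singular directions as concept axes
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = 2
    concept_vectors = U[:, :k] * s[:k]       # one concept vector per document

    # plain k-means on the concept vectors, seeded with two dissimilar documents
    centres = concept_vectors[[0, 1]].copy()
    for _ in range(20):
        dists = ((concept_vectors[:, None, :] - centres) ** 2).sum(-1)
        labels = np.argmin(dists, axis=1)
        centres = np.array([concept_vectors[labels == j].mean(axis=0) for j in range(2)])

    print(labels)    # the two senses of "spring" end up in different clusters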


  - Ian Parker

On 29 June 2010 07:32, rob levy  wrote:

> Sorry, the link I included was invalid, this is what I meant:
>
>
> http://www.geog.ucsb.edu/~raubal/Publications/RefConferences/ICSC_2009_AdamsRaubal_Camera-FINAL.pdf
>
>
> On Tue, Jun 29, 2010 at 2:28 AM, rob levy  wrote:
>
>> On Mon, Jun 28, 2010 at 5:23 PM, Steve Richfield <
>> steve.richfi...@gmail.com> wrote:
>>
>>> Rob,
>>>
>>> I just LOVE opaque postings, because they identify people who see things
>>> differently than I do. I'm not sure what you are saying here, so I'll make
>>> some "random" responses to exhibit my ignorance and elicit more explanation.
>>>
>>>
>> I think based on what you wrote, you understood (mostly) what I was trying
>> to get across.  So I'm glad it was at least quasi-intelligible. :)
>>
>>
>>>  It sounds like this is a finer measure than the "dimensionality" that I
>>> was referencing. However, I don't see how to reduce anything as quantized as
>>> dimensionality into finer measures. Can you say some more about this?
>>>
>>>
>> I was just referencing Gardenfors' research program of "conceptual spaces"
>> (I was intentionally vague about committing to this fully though because I
>> don't necessarily think this is the whole answer).  Page 2 of this article
>>> summarizes it pretty succinctly:
>>> http://www.geog.ucsb.edu/.../ICSC_2009_AdamsRaubal_Camera-FINAL.pdf
>>
>>
>>
>>> However, different people's brains, even the brains of identical twins,
>>> have DIFFERENT mappings. This would seem to mandate experience-formed
>>> topology.
>>>
>>>
>>
>> Yes definitely.
>>
>>
>>>  Since these conceptual spaces that structure sensorimotor
>>>> expectation/prediction (including in higher order embodied exploration of
>>>> concepts I think) are multidimensional spaces, it seems likely that some
>>>> kind of neural computation over these spaces must occur,
>>>>
>>>
>>> I agree.
>>>
>>>
>>>> though I wonder what it actually would be in terms of neurons, (and if
>>>> that matters).
>>>>
>>>
>>> I don't see any route to the answer except via neurons.
>>>
>>
>> I agree this is true of natural intelligence, though maybe in modeling,
>> the neural level can be shortcut to the topo map level without recourse to
>> neural computation (use some more straightforward computation like matrix
>> algebra instead).
>>
>> Rob
>>
>
>*agi* | Archives <https://www.listbox.com/member/archive/303/=now>
> <https://www.listbox.com/member/archive/rss/303/> | 
> Modify<https://www.listbox.com/member/?&;>Your Subscription
> <http://www.listbox.com>
>



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] The problem with AGI per Sloman

2010-06-28 Thread Ian Parker
The space navigation system is a case in point. I listened to a talk by
Peter Norvig in which he talked about looking at the motion of the heavens in
an empirical way. Let's imagine angels and crystal spheres and see where we
get. He rather decried Newton and the apple. In fact the apple story isn't
true. Newton investigated gravity after discussion with Halley of comet
fame. Newton was not generous enough to admit this and came up with the
"*apple*" story. In fact there is a 7th parameter - the planet's mass. This is
needed when you think about the effect that a planet has on the others. Adams
and Leverrier discovered Neptune in 1846 by looking at irregularities in the
orbit of Uranus.

Copernicus came up with the heliocentric theory because Ptolemy's epicycles
were so cumbersome. In fact a planet's orbit can be characterised by 6
quantities: its major axis, its eccentricity, the plane of its orbit (two
angles), the angle of the major axis within that plane, and its position in
the orbit at a given time. These parameters are constant unless a third body
is present.

How do we get to Saturn? Cassini had a series of encounters. We view each
encounter by calculating first a solar orbit and then a
Venusian/Terrestrial/Jovian orbit. Deep in an encounter a planet is the main
influence. 3 body problems are tricky and we do a numerical computation with
time steps. We have to work out first of all the series of encounters and
then the effects of inaccuracies. We need to correct, for example *before* we
encounter a planet.
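
A minimal sketch of the "numerical computation with time steps" referred to
above: a leapfrog integrator for a small body moving under the inverse-square
pull of two primaries, which are held fixed here purely to keep the sketch
short. Units, masses and the starting state are made-up assumptions.

    import numpy as np

    GM = {"sun": 1.0, "planet": 1e-3}                  # gravitational parameters
    POS = {"sun": np.array([0.0, 0.0]), "planet": np.array([1.0, 0.0])}

    def acceleration(r):
        a = np.zeros(2)
        for body, gm in GM.items():
            d = POS[body] - r
            a += gm * d / np.linalg.norm(d) ** 3       # inverse-square law
        return a

    def integrate(r, v, dt=1e-3, steps=20000):
        a = acceleration(r)
        for _ in range(steps):                         # leapfrog: kick-drift-kick
            v = v + 0.5 * dt * a
            r = r + dt * v
            a = acceleration(r)
            v = v + 0.5 * dt * a
        return r, v

    r, v = integrate(np.array([0.5, 0.0]), np.array([0.0, 1.3]))
    print(r, v)

A real mission plan chains many such segments together, switching the dominant
body deep in each encounter, and then studies how small errors propagate so
that corrections can be made before the next encounter.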

How does this fit in with AGI? Well, Peter, I don't think your empirical
approach will get you to Saturn. There is the need for theoretical
knowledge. How is this knowledge represented in AGI? It is represented in an
abstract mathematical form, a form which describes a general planet and
incorporates the inverse square law.

What in fact we need to know about space navigation is whether our system
incorporates these abstract definitions. I would view AGI as something which
will understand an abstract definition. This is something a little meta
mathematical. It understands ellipses and planets in general and wants to
know how this knowledge is incorporated in our system. In fact I would view
AGI as something checking on the integrity of other programs.

*Abstract Language*

By this I mean an abstract grammatical description of a language, its genders
and morphology. This is vital. I don't think our Peter has ever learnt Spanish
even, at least not properly. We can find out what morphology a word has once
we have a few examples iff we have a morphological description built in.


  - Ian Parker


> A narrow "embedded" system, like say a DMV computer network is not an AGI.
> But that doesn't mean an AGI could not perform that function. In fact, AGI
> might arise out of these systems needing to become more intelligent. And an
> AGI system, that same AGI software may be used for a DMV, a space
> navigation
> system, IRS, NASDAQ, etc. it could adapt. .. efficiently. There are some
> systems that tout multi-use now but these are basically very narrow AI. AGI
> will be able to apply its intelligence across domains and should be able
> to
> put its feelers into all the particular subsystems. Although I foresee some
> types of standard interfaces perhaps into these narrow AI computer
> networks;
> some sort of intelligence standards maybe, or the AGI just hooks into the
> human interfaces...
>
> An AGI could become a God but also it could do some useful stuff like run
> everyday information systems just like people with brains have to perform
> menial labor.
>
> John
>
>
>
>
>
>





Re: [agi] Questions for an AGI

2010-06-28 Thread Ian Parker
On 27 June 2010 22:21, Travis Lenting  wrote:

> I don't like the idea of enhancing human intelligence before the
> singularity.


What do you class as enhancement? Suppose I am in the Middle East and I am
wearing glasses which can give a 3D data screen. Somebody speaks to me. Up
on my glasses are the possible translations. Neither I nor the computer
system understands Arabic, yet together we can achieve comprehension. (PS I
in fact did just that with
http://docs.google.com/Doc?docid=0AQIg8QuzTONQZGZxenF2NnNfNzY4ZDRxcnJ0aHI&hl=en_GB
)

I think crime has to be made impossible even for enhanced humans first.


If our enhancement was Internet based it could be turned off if we were
about to commit a crime. You really should have said "unenhanced" humans. If
my conversation (see above) was about jihad and terrorism, AI would provide a
route for the security services. I think you are muddled here.


> I think life is too apt to abuse opportunities if possible. I would
> like to see the singularity-enabling AI be as little like a reproduction
> machine as possible. Does it really need to be a general AI to cause a
> singularity?


The idea of the Singularity is that AGI enhances itself. Hence a singularity
*without* AGI is a contradiction in terms. I did not quite get your syntax
on reproduction, but it is perfectly true that you do not need a singularity
for a Von Neumann machine. The singularity is a long way off, yet Obama is
going to leave Afghanistan in 2014, leaving robots behind.


> Can it not just stick to scientific data and quantify human uncertainty?
>  It seems like it would be less likely to ever care about killing all humans
> so it can rule the galaxy or that it's an omnipotent servant.


AGI will not have evolved. It will have been created. It will not in any
case have the desires we might ascribe to it. Scientific data would be a
high priority, but you could *never* be exclusively scientific. If human
uncertainty were quantified, that would give it, or whoever wielded it,
immense power.

There is one other eventuality to consider - a virus. If an AGI system were
truly thinking and introspective, at least to the extent that it understood
what it was doing, a virus would be impossible. Software would in fact be
self-repairing.

GT makes a lot of very silly translations. Could I say that no one in
Mossad, nor any dictator, ever told me how to do my French homework. A
trivial and naive remark, yet GT is open to all kinds of hacking. True AGI
would not be, by definition. This does in fact serve to indicate how far off
we are.


  - Ian Parker

>
>
> On Sun, Jun 27, 2010 at 11:39 AM, The Wizard wrote:
>
>> This is wishful thinking. Wishful thinking is dangerous. How about instead
>> of hoping that AGI won't destroy the world, you study the problem and come
>> up with a safe design.
>>
>>
>> Agreed on this dangerous thought!
>>
>> On Sun, Jun 27, 2010 at 1:13 PM, Matt Mahoney wrote:
>>
>>> This is wishful thinking. Wishful thinking is dangerous. How about
>>> instead of hoping that AGI won't destroy the world, you study the problem
>>> and come up with a safe design.
>>>
>>>
>>> -- Matt Mahoney, matmaho...@yahoo.com
>>>
>>>
>>> --
>>> *From:* rob levy 
>>> *To:* agi 
>>> *Sent:* Sat, June 26, 2010 1:14:22 PM
>>> *Subject:* Re: [agi] Questions for an AGI
>>>
>>>  why should AGIs give a damn about us?
>>>
>>>
>>>> I like to think that they will give a damn because humans have a unique
>>> way of experiencing reality and there is no reason to not take advantage of
>>> that precious opportunity to create astonishment or bliss. If anything is
>>> important in the universe, it's ensuring positive experiences for all areas
>>> in which it is conscious, I think it will realize that. And with the
>>> resources available in the solar system alone, I don't think we will be much
>>> of a burden.
>>>
>>>
>>> I like that idea.  Another reason might be that we won't crack the
>>> problem of autonomous general intelligence, but the singularity will proceed
>>> regardless as a symbiotic relationship between life and AI.  That would be
>>> beneficial to us as a form of intelligence expansion, and beneficial to the
>>> artificial entity a way of being alive and having an experience of the
>>> world.

Re: [agi] Theory of Hardcoded Intelligence

2010-06-28 Thread Ian Parker
I can't see how this can be true. Let us look at a few things.

Theory of languages. Todos hablan espanol! Well let us analyse what you
learnt. You learnt about agreement. Spanish nouns are masculine or feminine,
and adjectives (usually) come after the noun. If the noun is feminine the
adjective ends in "a". There are tenses. Verbs have stems. Vend = sell (-er
in Spanish, -re in French for the infinitive).

I feel the people at Google Translate never did Spanish. The genders for one
thing are all over the place. In Arabic (and Russian) the verb "to be" is not
inserted in the translation of attributives. The attributive ending (in
Russian) is wrong to boot.

What am I coming round to? It is this. If you have heuristics and you can
reliably reconstruct the grammar of a language from bilingual text, then OK
your grammar is AI derived. Google Translate is not able to do this. It
would benefit no end by having some "hard wired" grammatical rules. In both
French and German gender is all over the place.

Let us now take a slightly more philosophical view. What do we mean by
"intelligence"? Useful intelligence involves memory and knowledge. These are
hard wired. Something like *Relativity* is not simply a matter of common
sense. To do science you not only need the ability to manipulate knowledge,
you also need certain general principles.

Let me return to language. If we are told that Spanish is fairly regular,
that infinitives end in -ar, -ir or -er, and we know how each class
conjugates, we can work out the morphology of any regular Spanish verb from
a few examples. Arabic is a little bit more complicated in that there are
prefixes and suffixes. There are morphology types, and again given a few
examples you can work out what they are.
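
As a toy illustration of what a built-in morphological description buys you,
here is a sketch for regular Spanish verbs only. The ending tables are the
standard present-tense ones; everything else (irregular verbs, other tenses)
would need further rules or stored exceptions:

# A "hard wired" morphological description for regular Spanish verbs:
# present-tense endings for each infinitive class. Given this much, the
# conjugation of an unseen regular verb follows from a single example,
# its infinitive.

PRESENT_ENDINGS = {
    "ar": ["o", "as", "a", "amos", "áis", "an"],
    "er": ["o", "es", "e", "emos", "éis", "en"],
    "ir": ["o", "es", "e", "imos", "ís", "en"],
}

def conjugate_present(infinitive):
    stem, cls = infinitive[:-2], infinitive[-2:]
    if cls not in PRESENT_ENDINGS:
        raise ValueError("not a regular Spanish infinitive: " + infinitive)
    return [stem + ending for ending in PRESENT_ENDINGS[cls]]

print(conjugate_present("vender"))  # vendo, vendes, vende, vendemos, vendéis, venden
print(conjugate_present("hablar"))  # hablo, hablas, habla, hablamos, habláis, hablan

A statistical system with this table built in only has to identify the stem;
one without it has to rediscover the whole pattern from bilingual text.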

Looking at this question biologically, we find that certain knowledge is
laid down at a particular age. Language is picked up in childhood. The Arab
child will understand morphology at the age of 5 or 6. He won't call it
that; it will just be that certain things sound wrong. This being so, there
are aspects of our behaviour that are laid down and "hardwired", and other
aspects which are not.

In fact we would view a system as being intelligent if it could bring a
large amount of knowledge to bear on the problem.


  - Ian Parker

On 27 June 2010 22:36, M E  wrote:

>  I sketched a graph the other day which represented my thoughts on the
> usefulness of hardcoding knowledge into an AI.  (Graph attached)
>
> Basically, the more hardcoded knowledge you include in an AI, or AGI, the
> lower the overall intelligence it will have, but the faster you will reach
> that value.  I would include any real AGI to be toward the left of the graph
> with systems like CYC to be toward the right.
>
> Matt
>
>





Re: [agi] The problem with AGI per Sloman

2010-06-27 Thread Ian Parker
On 27 June 2010 21:25, John G. Rose  wrote:

> It's just that something like world hunger is so complex AGI would have to
> master simpler problems.
>

I am not sure that that follows necessarily. Computing is full of situations
where a seemingly simple problem is not solved and a more complex one is. I
remember posting some time ago on Cassini.

> Also, there are many people and institutions that have solutions to world
> hunger already and they get ignored.
>
Indeed. AGI in the shape of a search engine would find these solutions.
World Hunger might well be soluble *simply because so much work has already
been done.* AGI might well start off as search and develop into feasibility
analysis and solutions.

> So an AGI would have to get established over a period of time for anyone to
> really care what it has to say about these types of issues. It could
> simulate things and come up with solutions but they would not get
> implemented unless it had power to influence. So in addition AGI would need
> to know how to make people listen... and maybe obey.
>

This is CRESS. CRESS would be an accessible option.

>
>
> IMO I think AGI will take the embedded route - like other types of computer
> systems - IRS, weather, military, Google, etc. and we become dependent
> intergenerationally so that it is impossible to survive without. At that
> point AGI's will have power to influence.
>
>
>
Look! The point is this:-

1) An embedded system is AI not AGI.

2) AGI will arise simply because all embedded systems are themselves
searchable.


  - Ian Parker

>
>
> *From:* Ian Parker [mailto:ianpark...@gmail.com]
> *Sent:* Saturday, June 26, 2010 2:19 PM
> *To:* agi
> *Subject:* Re: [agi] The problem with AGI per Sloman
>
>
>
> Actually if you are serious about solving a political or social question
> then what you really need is CRESS<http://cress.soc.surrey.ac.uk/web/home>.
> The solution of World Hunger is BTW a political question not a technical
> one. Hunger is largely due to bad governance in the Third World. How do you
> get good governance. One way to look at the problem is via CRESS and
> run simulations in second life.
>
>
>
> One thing which has in fact struck me in my linguistic researches is this.
> Google Translate is based on having Gigabytes of bilingual text. The fact
> that GT is so bad at technical Arabic indicates the absence of such
> bilingual text. Indeed Israel publishes more papers than the whole of the
> Islamic world. This is of profound importance for understanding the Middle
> East. I am sure CRESS would confirm this.
>
>
>
> AGI would without a doubt approach political questions by examining all the
> data about the various countries before making a conclusion. AGI would
> probably be what you would consult for long term solutions. It might not be
> so good at dealing with something (say) like the Gaza flotilla. In coming to
> this conclusion I have the University of Surrey and CRESS in mind.
>
>
>
>
>
>   - Ian Parker
>
> On 26 June 2010 14:36, John G. Rose  wrote:
>
> > -Original Message-
> > From: Ian Parker [mailto:ianpark...@gmail.com]
> >
> >
> > How do you solve World Hunger? Does AGI have to. I think if it is truly
> "G" it
> > has to. One way would be to find out what other people had written on the
> > subject and analyse the feasibility of their solutions.
> >
> >
>
> Yes, that would show the generality of their AGI theory. Maybe a particular
> AGI might be able to work with some problems but plateau out on its
> intelligence for whatever reason and not be able to work on more
> sophisticated issues. An AGI could be "hardcoded" perhaps and not improve
> much, whereas another AGI might improve to where it could tackle vast
> unknowns at increasing efficiency. There are common components in tackling
> unknowns, complexity classes for example, but some AGI systems may operate
> significantly more efficiently and improve. Human brains at some point may
> plateau without further augmentation though I'm not sure we have come close
> to what the brain is capable of.
>
> John
>
>
>
>
>
>

Re: [agi] The problem with AGI per Sloman

2010-06-26 Thread Ian Parker
Actually if you are serious about solving a political or social question
then what you really need is CRESS <http://cress.soc.surrey.ac.uk/web/home>.
The solution of World Hunger is BTW a political question, not a technical
one. Hunger is largely due to bad governance in the Third World. How do you
get good governance? One way to look at the problem is via CRESS and running
simulations in Second Life.

One thing which has in fact struck me in my linguistic researches is this.
Google Translate is based on having Gigabytes of bilingual text. The fact
that GT is so bad at technical Arabic indicates the absence of such
bilingual text. Indeed Israel publishes more papers than the whole of the
Islamic world. This is of profound importance for understanding the Middle
East. I am sure CRESS would confirm this.

AGI would without a doubt approach political questions by examining all the
data about the various countries before reaching a conclusion. AGI would
probably be what you would consult for long-term solutions. It might not be
so good at dealing with something (say) like the Gaza flotilla. In coming to
this conclusion I have the University of Surrey and CRESS in mind.


  - Ian Parker

On 26 June 2010 14:36, John G. Rose  wrote:

> > -Original Message-
> > From: Ian Parker [mailto:ianpark...@gmail.com]
> >
> >
> > How do you solve World Hunger? Does AGI have to. I think if it is truly
> "G" it
> > has to. One way would be to find out what other people had written on the
> > subject and analyse the feasibility of their solutions.
> >
> >
>
> Yes, that would show the generality of their AGI theory. Maybe a particular
> AGI might be able to work with some problems but plateau out on its
> intelligence for whatever reason and not be able to work on more
> sophisticated issues. An AGI could be "hardcoded" perhaps and not improve
> much, whereas another AGI might improve to where it could tackle vast
> unknowns at increasing efficiency. There are common components in tackling
> unknowns, complexity classes for example, but some AGI systems may operate
> significantly more efficiently and improve. Human brains at some point may
> plateau without further augmentation though I'm not sure we have come close
> to what the brain is capable of.
>
> John
>
>
>
>





Re: [agi] Questions for an AGI

2010-06-25 Thread Ian Parker
One of the first things in AGI is to produce software which is
self-monitoring and which will correct itself when it is not working. For
over a day now I have been unable to access Google Groups. The Internet
access simply loops and does not get anywhere. If Google had any true AGI it
would:-

a) Spot that it was looping.
b) Failing that, it would provide the user with an interface which would
enable the fault to be corrected online.

This may seem an absolutely trivial point, but I feel it is absolutely
fundamental. First of all, you do not pass the Turing test by being
absolutely dumb. I suppose you might say that conversing with Google was
rather like Tony Hayward answering questions in Congress: "Sorry we cannot
process your request at this time" (or any other time for that matter). You
don't pass it either (this is Google Translate for you) by saying that US
forces have committed atrocities in Burma when they have been out of SE Asia
since the end of the Vietnam war.

Another instance. Google denied access to my site, saying that I had
breached the terms and conditions. I hadn't, and they said they did not know
why. You do not pass the TT either by walking up to someone and saying they
have a paedophile website when they haven't.

I would say that the first task of AGI (this is actually a definition) would
be to provide software that is fault tolerant and self-correcting. After
all, if we have 2 copies of AGI we will have (by definition) a
fault-tolerant system. If a request cannot be processed, an AGI system
should know why not and hopefully be able to do something about it.

The lack of any real fault tolerance in our systems to me underlines just
how far off we really are.
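
A rough sketch of the minimal self-monitoring I mean is below. The step()
function is a hypothetical stand-in for one round of whatever the real
system does (following a redirect, retrying a query); the point is simply to
spot the loop, per (a) above, and report the fault rather than spinning
forever:

import time

class LoopDetected(Exception):
    """Raised when the same intermediate state keeps coming back."""

def monitored_request(step, max_steps=20, delay=0.5):
    """Drive a multi-step request while watching for loops.

    step(state) is a stand-in for one round of the real operation: it
    returns ("done", result) when finished or ("redirect", next_state)
    when another round is needed.
    """
    seen = set()
    state = None
    for _ in range(max_steps):
        kind, value = step(state)
        if kind == "done":
            return value
        if value in seen:
            # (a) above: spot that we are looping instead of spinning forever.
            raise LoopDetected("request loops through state %r" % (value,))
        seen.add(value)
        state = value
        time.sleep(delay)
    raise TimeoutError("request made no progress after %d steps" % max_steps)

Reporting *why* it failed is the part that matters; (b) above, letting the
user correct the fault, starts from that report.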


  - Ian Parker

On 24 June 2010 07:10, Dana Ream  wrote:

>  How do you work?
>
>  --
> *From:* The Wizard [mailto:key.unive...@gmail.com]
> *Sent:* Wednesday, June 23, 2010 11:05 PM
> *To:* agi
> *Subject:* [agi] Questions for an AGI
>
>
> If you could ask an AGI anything, what would you ask it?
> --
> Carlos A Mejia
>
> Taking life one singularity at a time.
> www.Transalchemy.com
>





Re: [agi] The problem with AGI per Sloman

2010-06-24 Thread Ian Parker
I think there is a great deal of confusion between these two objectives.
When I wrote about what would happen if you had a car accident due to a
fault in AI/AGI, and Matt wrote back talking about downloads, this was a
case in point. I was assuming that you had a system which was intelligent
but was *not* a download in any shape or form.

Watson <http://learning.blogs.nytimes.com/2010/06/23/waxing-philosophical-on-watson-and-artificial-intelligence/>
is intelligent. I would be interested to know other people's answers to the
5 questions.

1) Turing test - Quite possibly, with modifications. Watson needs to be
turned into a chatterbox. This can be done fairly trivially by allowing
Watson to store conversation in his database (a toy sketch follows the list
below).

2) Meaningless question. Watson could produce results of thought and feed
these back in. Watson could design a program by referencing other programs
and their comment data. Similarly for engineering.

3, 4, 5) Absolutely not.
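
The sketch below is nothing like Watson's real architecture; it is just the
minimal shape of "store the conversation and reuse it", with crude word
overlap standing in for proper retrieval:

import re

def words(text):
    return set(re.findall(r"[a-z']+", text.lower()))

class Chatterbox:
    """Toy conversational memory: remember (prompt, reply) pairs and answer
    a new prompt with the reply to the most similar stored prompt."""

    def __init__(self):
        self.exchanges = []          # the stored "database" of conversation

    def remember(self, prompt, reply):
        self.exchanges.append((prompt, reply))

    def respond(self, prompt):
        if not self.exchanges:
            return "Tell me more."
        best = max(self.exchanges,
                   key=lambda pr: len(words(pr[0]) & words(prompt)))
        return best[1]

bot = Chatterbox()
bot.remember("what is the capital of france", "Paris.")
print(bot.respond("capital of france?"))   # -> Paris.

Everything the bot hears goes into the same store, so the longer the
conversation runs, the more it has to draw on; that is all I mean by storing
conversation in the database.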

How do you solve World Hunger? Does AGI have to? I think if it is truly "G"
it has to. One way would be to find out what other people had written on the
subject and analyse the feasibility of their solutions.


  - Ian Parker

On 24 June 2010 18:20, John G. Rose  wrote:

> I think some confusion occurs where AGI researchers want to build an
> artificial person verses artificial general intelligence. An AGI might be
> just a computational model running in software that can solve problems
> across domains.  An artificial person would be much else in addition to AGI.
>
>
>
> With intelligence engineering and other engineering that artificial person
> could be built, or some interface where it appears to be a person. And a
> huge benefit is in having artificial people to do things that real people
> do. But pursuing AGI need not have to be pursuit of building artificial
> people.
>
>
>
> Also, an AGI need not have to be able to solve ALL problems initially.
> Coming out and asking why some AGI theory wouldn't be able to figure out how
> to solve some problem like say, world hunger, I mean WTF is that?
>
>
>
> John
>
>
>
> *From:* Mike Tintner [mailto:tint...@blueyonder.co.uk]
> *Sent:* Thursday, June 24, 2010 5:33 AM
> *To:* agi
> *Subject:* [agi] The problem with AGI per Sloman
>
>
>
> "One of the problems of AI researchers is that too often they start off
> with an inadequate
> understanding of the *problems* and believe that solutions are only a few
> years away. We need an educational system that not only teaches techniques
> and solutions, but also an understanding of problems and their difficulty —
> which can come from a broader multi-disciplinary education. That could speed
> up progress."
>
> A. Sloman
>
>
>
> (& who else keeps saying that?)
>
>





Re: [agi] High Frame Rates Reduce Uncertainty

2010-06-21 Thread Ian Parker
My comment is this. The brain in fact takes whatever speed it needs. For
simple processing it takes the full frame rate. More complex processing does
not require the same rate and so is carried out more slowly. This is really
a temporal extension of what DESTIN does spatially.


  - Ian Parker

On 21 June 2010 15:30, deepakjnath  wrote:

> The brain does not get the high frame rate signals as the eye itself
> only gives brain images at 24 frames per second. Else u wouldn't be
> able to watch a movie.
> Any comments?
>
>
>
>





Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Ian Parker
Isn't this the argument for GAs running on multicore processors? Each
organism gets one core, or a fraction of a core. The "brain" will then
evaluate "*fitness*" against a fitness criterion.

The fact that they can be run efficiently in parallel is one of the
advantages of GAs.
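
A minimal sketch of the idea, with a toy bit-counting fitness function
standing in for any real criterion; the only point is that fitness
evaluation, the expensive part, farms out across cores:

import random
from multiprocessing import Pool

def fitness(genome):
    return sum(genome)                      # stand-in fitness criterion

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=32, length=64, generations=50, workers=4):
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    with Pool(workers) as pool:             # one worker per core (or fraction)
        for _ in range(generations):
            scores = pool.map(fitness, population)      # parallel evaluation
            ranked = [g for _, g in sorted(zip(scores, population),
                                           key=lambda sg: sg[0],
                                           reverse=True)]
            parents = ranked[:pop_size // 2]
            population = parents + [mutate(random.choice(parents))
                                    for _ in range(pop_size - len(parents))]
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print(sum(best), "of", len(best), "bits set")

The selection and mutation here are deliberately crude; the parallel
pool.map over the population is the part that corresponds to "one organism
per core".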

Let us look at this another way: when an intelligent person thinks about a
problem, they will think about it in terms of a set of alternatives. This
could be said to be the start of genetic reasoning. So it does in fact take
place now.

A GA is the simplest parallel system which you can think of for purposes of
illustration. However, when we answer "*Jeopardy*"-type questions,
parallelism is involved. This becomes clear when we look at how Watson
actually works <http://www.nytimes.com/2010/06/20/magazine/20Computer-t.html>.
It works in parallel and then finds the most probable answer.


  - Ian Parker

On 21 June 2010 16:38, Abram Demski  wrote:

> Steve,
>
> You didn't mention this, so I guess I will: larger animals do generally
> have larger brains, coming close to a fixed brain/body ratio. Smarter
> animals appear to be the ones with a higher brain/body ratio rather than
> simply a larger brain. This to me suggests that the amount of sensory
> information and muscle coordination necessary is the most important
> determiner of the amount of processing power needed. There could be other
> interpretations, however.
>
> It's also pretty important to say that brains are expensive to fuel. It's
> probably the case that other animals didn't get as smart as us because the
> additional food they could get per ounce brain was less than the additional
> food needed to support an ounce of brain. Humans were in a situation in
> which it was more. So, I don't think your argument from other animals
> supports your hypothesis terribly well.
>
> One way around your instability if it exists would be (similar to your
> hemisphere suggestion) split the network into a number of individuals which
> cooperate through very low-bandwidth connections. This would be like an
> organization of humans working together. Hence, multiagent systems would
> have a higher stability limit. However, it is still the case that we hit a
> serious diminishing-returns scenario once we needed to start doing this
> (since the low-bandwidth connections convey so much less info, we need waaay
> more processing power for every IQ point or whatever). And, once these
> organizations got really big, it's quite plausible that they'd have their
> own stability issues.
>
> --Abram
>
> On Mon, Jun 21, 2010 at 11:19 AM, Steve Richfield <
> steve.richfi...@gmail.com> wrote:
>
>> There has been an ongoing presumption that more "brain" (or computer)
>> means more intelligence. I would like to question that underlying
>> presumption.
>>
>> That being the case, why don't elephants and other large creatures have
>> really gigantic brains? This seems to be SUCH an obvious evolutionary step.
>>
>> There are all sorts of network-destroying phenomena that rise from complex
>> networks, e.g. phase shift oscillators where circular analysis paths reinforce
>> themselves, computational noise is endlessly analyzed, etc. We know that our
>> own brains are just barely stable, as flashing lights throw some people into
>> epileptic attacks, etc. Perhaps network stability is the intelligence
>> limiter? If so, then we aren't going to get anywhere without first fully
>> understanding it.
>>
>> Suppose for a moment that theoretically perfect neurons could work in a
>> brain of limitless size, but their imperfections accumulate (or multiply) to
>> destroy network operation when you get enough of them together. Brains have
>> grown larger because neurons have evolved to become more nearly perfect,
>> without having yet (or ever) reaching perfection. Hence, evolution may have
>> struck a "balance", where less intelligence directly impairs survivability,
>> and greater intelligence impairs network stability, and hence indirectly
>> impairs survivability.
>>
>> If the above is indeed the case, then AGI and related efforts don't stand
>> a snowball's chance in hell of ever outperforming humans, UNTIL the
>> underlying network stability theory is well enough understood to perform
>> perfectly to digital precision. This wouldn't necessarily have to address
>> all aspects of intelligence, but would at minimum have to address
>> large-scale network stability.
>>
>> One possibility is chopping large networks into pieces, e.g. the
>> hemispheres of our own brains. However, like multi-core CPUs,