Re: [agi] Help requested: Making a list of (non-robotic) AGI low hanging fruit apps

2010-08-07 Thread Russell Wallace
If you can do better voice recognition, that's a significant
application in its own right, as well as having uses in other
applications e.g. automated first layer for call centers.

If you can do better image/video recognition, there are a great many
uses for that -- look at all the things people are trying to use image
recognition for at the moment.

If you can do both at the same time, that's going to have plenty of
uses for filtering, classifying and searching video. (Imagine being
able to search the Youtube archives like you can search the Web today.
I would guess Google would pay a few bob for technology that could do
that.)

On Sun, Aug 8, 2010 at 2:10 AM, Ben Goertzel b...@goertzel.org wrote:
 Hi,

 A fellow AGI researcher sent me this request, so I figured I'd throw it
 out to you guys

 
 I'm putting together an AGI pitch for investors and thinking of low
 hanging fruit applications to argue for. I'm intentionally not
 involving any mechanics (robots, moving parts, etc.). I'm focusing on
 voice (i.e. conversational agents) and perhaps vision-based systems.
 Helen Keller AGI, if you will :)

 Along those lines, I'd like any ideas you may have that would fall
 under this description. I need to substantiate the case for such AGI
 technology by making an argument for high-value apps. All ideas are
 welcome.
 

 All serious responses will be appreciated!!

 Also, I would be grateful if we could keep this thread closely focused
 on direct answers to this question, rather than digressive discussions
 on Helen Keller, the nature of AGI, the definition of AGI versus narrow
 AI, the achievability or unachievability of AGI, etc. etc.  If you
 think the question is bad or meaningless or unclear or whatever, that's
 fine, but please start a new thread with a different subject line to
 make your point.

 If the discussion is useful, my intention is to mine the answers into
 a compact list to convey to him.

 Thanks!
 Ben G




Re: [agi] Walker Lake

2010-08-02 Thread Russell Wallace
I don't often request list moderation, but if this kind of off-topic spam
and clueless trolling doesn't call for it, nothing does, so: I hereby
request that a moderator take appropriate action.

On Mon, Aug 2, 2010 at 3:40 PM, Steve Richfield
steve.richfi...@gmail.com wrote:

 Sometime when you are flying between the northwest US and Las Vegas,
 look out your window as you fly over Walker Lake in eastern Nevada. At the
 south end you will see a system of roads leading to tiny buildings, all
 surrounded by military security. From what I have been able to figure out,
 you will find the U.S. arsenal of chemical and biological weapons housed
 there. No, we are not now making these weapons, but neither are we disposing
 of them.

 Similarly, there has been discussion of developing advanced military
 technology using AGI and other computer-related methods. I believe that
 these efforts are fundamentally anti-democratic, as they allow a small
 number of people to control a large number of people. Gone are the days when
 people voted with their swords. We now have the best government that money
 can buy monitoring our every email, including this one, to identify anyone
 resisting such efforts. 1984 has truly arrived. This can only lead to a
 horrible end to freedom, with AGIs doing their part and more.

 Like chemical and biological weapons, unmanned and automated weapons should
 be BANNED. Unfortunately, doing so would provide a window of opportunity for
 others to deploy them. However, if we make these and stick them in yet
 another building at the south end of Walker Lake, we would be ready in case
 other nations deploy such weapons.

 How about an international ban on the deployment of all unmanned and
 automated weapons? The U.S. won't now even agree to ban land mines. At least
 this would restore SOME relationship between popular support and military
 might. Doesn't it sound ethical to insist that a human being decide when
 to end another human being's life? Doesn't it sound fair to require the
 decision maker to be in harm's way, especially when the person being killed
 is in or around their own home? Doesn't it sound unethical to add to the
 present situation? When deployed on a large scale, aren't these WMDs?

 Steve



Re: [agi] AGI Alife

2010-07-27 Thread Russell Wallace
I spent a while back in the 90s trying to make AGI and alife converge,
before establishing to my satisfaction that the approach is a dead end:
we will never have anywhere near enough computing power to make alife
evolve significant intelligence (the only known success took 4 billion
years on a planet-sized nanocomputer network, after all), even if we
could set up just the right selection pressures, which we can't.

On Tue, Jul 27, 2010 at 4:23 AM, Linas Vepstas linasveps...@gmail.com wrote:
 I saw the following post from Antonio Alberti, on the linked-in
 discussion group:

ALife and AGI

Dear group participants.

The relation between AGI and ALife greatly interests me. However, too few 
recent works try to relate them. For example, many papers presented at AGI-09 
(http://agi-conf.org/2009/) are about program learning algorithms (combining 
evolutionary learning and analytical learning). At AGI 2010, virtual pets 
were presented by Ben Goertzel and are also another topic of this forum. 
There are other approaches in AGI that use a digital evolutionary 
approach. To me this is a clear clue that the two are related in some 
way.


By ALife I mean the life-as-it-could-be approach: not simulating life, but 
using a digital environment to evolve digital organisms through digital 
evolution, faster than the natural kind (see 
http://www.hplusmagazine.com/articles/science/stephen-hawking-%E2%80%9Chumans-have-entered-new-stage-evolution%E2%80%9D).

So, I would like to propose some discussion topics regarding ALife and AGI:

1) What is the role of Digital Evolution (and ALife) in the AGI context?

2) Is it possible that some aspects of AGI could self-emerge from the digital 
evolution of intelligent autonomous agents?

3) Is there any research group trying to converge both approaches?

Best Regards,

  and my reply was below:

 For your question 3), I have no idea. For question 1) I can't say I've
 ever heard of anyone talk about this. For question 2), I imagine the
 answer is yes, although the boundaries between what's Alife and
 what's program learning (for example) may be blurry.

 So, imagine, for example, a population of many different species of
 neurons (or should I call them automata? or maybe virtual ants?).
 Most of the individuals have only a few friends (a narrow social
 circle); the friendship relationship can be viewed as an
 axon-dendrite connection. These friendships are semi-stable: they
 evolve over time, and the type and quality of information exchanged
 in a friendship also varies. Is a social network of friends able to
 solve complex problems? The answer is seemingly yes, if the
 individuals are digital models of neurons. (To carry the analogy
 further: different species of individuals would be analogous to
 different types of neurons, e.g. Purkinje cells vs. pyramidal cells
 vs. granule cells vs. motor neurons. Individuals from one species may
 tend to be very gregarious, while those of other species might be
 generally xenophobic, etc.)
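
 A toy Common Lisp sketch of the structure being described (the species
 names and exchange rates here are invented for illustration):

 (defstruct individual
   species             ; e.g. GREGARIOUS or XENOPHOBIC
   (friends nil)       ; semi-stable, axon-dendrite-like connections
   (state 0.0))        ; the information exchanged across friendships

 (defun exchange-step (a)
   "One round of information exchange: move A's state toward the mean
 state of its friends, at a rate set by its species' sociability."
   (let ((friends (individual-friends a)))
     (when friends
       (let ((mean (/ (reduce #'+ friends :key #'individual-state)
                      (length friends)))
             (rate (ecase (individual-species a)
                     (gregarious 0.5)
                     (xenophobic 0.05))))
         (incf (individual-state a)
               (* rate (- mean (individual-state a))))))))

 Selection and culling experiments would then operate on populations of
 such individuals, preserving or shredding the friendship graph.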

 I have no clue if anyone has ever explored genetic algorithms or
 related alife algos, factored together with the individuals being
 involved in a social network (with actual information exchange between
 friends). No clue as to how natural/artificial selection should work.
 Do anti-social individuals have a possibly redeeming role w.r.t. the
 organism as a whole? Do selection pressures on individuals (weak
 individuals are culled) destroy social networks? Do such networks
 automatically evolve altruism, because a working social network with
 weak, altruistically-supported individuals is better than a shredded,
 dysfunctional social network consisting of only strong individuals?
 Dunno. Seems like there could be many many interesting questions.

 I'd be curious about the answers to Antonio's questions ...

 --linas




Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread Russell Wallace
On Mon, Jun 28, 2010 at 4:54 PM, David Jones davidher...@gmail.com wrote:
 But, that's why it is important to force oneself to solve them in such a way 
 that it IS applicable to AGI. It doesn't mean that you have to choose a 
 problem that is so hard you can't cheat. It's unnecessary to do that unless 
 you can't control your desire to cheat. I can.

That would be relevant if it were entirely a problem of willpower and
self-discipline, but it isn't. It's also a problem of guidance. A real
problem gives you feedback at every step of the way; it keeps blowing
your ideas out of the water until you come up with one that will
actually work, one you would never have thought of in a vacuum. A toy
problem leaves you guessing, and most of your guesses will be wrong in
ways you won't discover until you come to try a real problem and
realize you have to throw all your work away.

Conversely, a toy problem doesn't make your initial job that much
easier. It means you have to write less code, sure, but what of it?
That was only ever the lesser difficulty. The main reason toy problems
are easier is that you can use lower grade methods that could never
scale up to real problems -- in other words, precisely that you can
'cheat'. But if you aren't going to cheat, you're sacrificing most of
the ease of a toy problem, while also sacrificing the priceless
feedback from a real problem -- the worst of both worlds.




Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread Russell Wallace
*nods* So you have tried the full problem, and caught up with the current
state-of-the-art in techniques for it? In that case...

... well, honestly, I still don't think your approach with black squares and
screenshots is going to produce any useful results. But given the above, I
no longer think you are being irrational in pursuing it. I think, as you
said, you have looked at the alternatives, all of which are very tough, and
your judgment disagrees with mine about which is the least bad.

On Mon, Jun 28, 2010 at 9:15 PM, David Jones davidher...@gmail.com wrote:

 Yes I have. But what I found is that real vision is so complex, involving
 so many problems that must be solved and studied, that any attempt at
 general vision is beyond my current abilities. It would be like expecting a
 single person, such as myself, to figure out how to build the h-bomb all by
 themselves back before it had ever been done. It is the same scenario
 because it involves many engineering and scientific problems that must all
 be solved and studied.

 You see in real vision you have a 3D world, camera optics, lighting issues,
 noise, blurring, rotation, distance, projection, reflection, shadows,
 occlusion, etc, etc, etc.

 It is many magnitudes more difficult than the problems I'm studying. Yet,
 really consider the two black squares problem. It's hard! It's so simple, yet
 so hard. I still haven't fully defined how to do it algorithmically... I
 will get to that in the coming weeks.

 So, to work on the full problem is practically impossible for me. Seeing
 as there isn't a lot of support for AGI research such as this, I am much
 better served by proving the principle rather than implementing the full
 solution to the real problem. If I can even prove how vision works on simple
 black squares, I might be able to get help in my research... without a proof
 of concept, no one will help. If I can prove it on screenshots, even better.
 It would be a very significant achievement, if done in a truly general
 fashion (keeping in mind that truly general is not really possible).

 A great example of what happens when you work with real images is this:
 look at the current solutions. They use features, such as SIFT. Using SIFT
 features, you might be able to say that an object exists with 70% certainty,
 or something like that. But they won't be able to tell you what the object
 looks like, what's behind it, what it is occluding, what's next to it, what
 color it is, which pixels in the image belong to it, or how those parts are
 attached. Now do you see why it makes little sense to tackle
 the full problem? Even the state of the art in computer vision sucks. It is
 great at certain narrow applications, but nowhere near where it needs to be
 for AGI.

 Dave

 On Mon, Jun 28, 2010 at 4:00 PM, Russell Wallace 
 russell.wall...@gmail.com wrote:

 On Mon, Jun 28, 2010 at 8:56 PM, David Jones davidher...@gmail.com
 wrote:
  Having experience with the full problem is important, but forcing
 yourself to solve every sub problem at once is not a better strategy at all.

 Certainly going back to a toy problem _after_ gaining some experience
 with the full problem would have a much better chance of being a
 viable strategy. Have you tried that with what you're doing, i.e.
 having a go at writing a program to understand real video before going
 back to black squares and screen shots to improve the fundamentals?




Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Russell Wallace
On Mon, Jun 21, 2010 at 4:19 PM, Steve Richfield
steve.richfi...@gmail.com wrote:
 That being the case, why don't elephants and other large creatures have 
 really gigantic brains? This seems to be SUCH an obvious evolutionary step.

Personally I've always wondered how elephants managed to evolve brains
as large as they currently have. How much intelligence does it take to
sneak up on a leaf? (Granted, intraspecies social interactions seem to
provide at least part of the answer.)

 There are all sorts of network-destroying phenomena that arise from complex 
 networks, e.g. phase shift oscillators where circular analysis paths enforce 
 themselves, computational noise is endlessly analyzed, etc. We know that our 
 own brains are just barely stable, as flashing lights throw some people into 
 epileptic attacks, etc. Perhaps network stability is the intelligence limiter?

Empirically, it isn't.

 Suppose for a moment that theoretically perfect neurons could work in a brain 
 of limitless size, but their imperfections accumulate (or multiply) to 
 destroy network operation when you get enough of them together. Brains have 
 grown larger because neurons have evolved to become more nearly perfect

Actually it's the other way around. Brains compensate for
imperfections (both transient error and permanent failure) in neurons
by using more of them.  Note that, as the number of transistors on a
silicon chip increases, the extent to which our chip designs do the
same thing also increases.

 There are some medium-scale network analogues in the world, e.g. the power 
 grid. However, there they have high-level central control and lots of crashes.

The power in my neighborhood fails once every few years (and that's
from all causes, including 'the cable guys working up the street put a
JCB through the line', not just network crashes). If you're getting
lots of power failures in your neighborhood, your electricity supply
company is doing something wrong.

 I wonder, does the very-large-scale network problem even have a prospective 
 solution? Is there any sort of existence proof of this?

Yes, our repeated successes in simultaneously improving both the size
and stability of very large scale networks (trade, postage, telegraph,
electricity, road, telephone, Internet) serve as very nice existence
proofs.




Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Russell Wallace
On Mon, Jun 21, 2010 at 11:05 PM, Steve Richfield
steve.richfi...@gmail.com wrote:
 Another pet peeve of mine. They could/should have MUCH more fault tolerance 
 than they now do. Present puny efforts are completely ignorant of past 
 developments, e.g. Tandem NonStop computers.

Or perhaps they just figure once the mean time between failure is on
the order of, say, a year, customers aren't willing to pay much for
further improvement. (Note that things like financial databases which
still have difficulty scaling horizontally, do get more fault
tolerance than an ordinary PC. Note also that they pay a hefty premium
for this, more than you or I would be willing or able to pay.)

 The power in my neighborhood fails once every few years (and that's
 from all causes, including 'the cable guys working up the street put a
 JCB through the line', not just network crashes). If you're getting
 lots of power failures in your neighborhood, your electricity supply
 company is doing something wrong.

 If you look at the failures/bandwidth, it is pretty high.

So what? Nobody except you cares about that metric. Anyway, the
phone system is in the same league, and the Internet is a lot closer
to it than it was in the past, and those have vastly higher bandwidth.

 Yes, our repeated successes in simultaneously improving both the size
 and stability of very large scale networks (trade,

 NOT stable at all. Just look at the condition of the world's economy.

Better than it was in the 1930s, despite a lot greater complexity.

 postage, telegraph,
 electricity, road, telephone, Internet)

 None of these involve feedback, the fundamental requirement to be a network 
 rather than a simple tree structure. This despite common misuse of the term 
 network to cover everything with lots of interconnections.

All of them involve massive amounts of feedback. Unless you're
adopting a private definition of the word feedback, in which case by
your private definition, if it is to be at all consistent, neither
brains nor computers running AI programs will involve feedback either,
so it's immaterial.




Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Russell Wallace
Melting and boiling at least should be doable: assign every bead a
temperature, and let solid interbead bonds turn liquid above a certain
temperature and disappear completely above some higher temperature.




Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Russell Wallace
And it occurs to me you could even have fire. Let fire be an element,
whose beads have negative gravitational mass. Beads of fuel elements
like wood have a threshold temperature above which they will turn into
fire beads, with release of additional heat.
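
A minimal Common Lisp sketch combining the rules from these two posts
(the thresholds, element names and heat-release figure are placeholders,
not from the original proposal):

(defstruct bead element temperature)

(defun bond-state (a b)
  "State of the bond between two adjacent beads: solid below the
melting point, liquid between melting and boiling, gone above boiling."
  (let ((temp (max (bead-temperature a) (bead-temperature b))))
    (cond ((> temp 100) nil)      ; boiled: the bond disappears entirely
          ((> temp 0)   :liquid)  ; melted: bond present but fluid
          (t            :solid))))

(defun maybe-ignite (b)
  "Fuel beads above an ignition threshold turn into fire beads with a
release of additional heat; the physics engine would give fire beads
negative gravitational mass so that flames rise."
  (when (and (eq (bead-element b) 'wood)
             (> (bead-temperature b) 300))
    (setf (bead-element b) 'fire)
    (incf (bead-temperature b) 500)))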




Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Russell Wallace
Yeah :-) though boiling an egg by putting it in a pot of boiling
water, that much I think should be doable.

On Tue, Jan 13, 2009 at 3:41 PM, Ben Goertzel b...@goertzel.org wrote:
 Indeed...  but cake-baking just won't have the same nuances ;-)

 On Tue, Jan 13, 2009 at 10:08 AM, Russell Wallace
 russell.wall...@gmail.com wrote:
 Melting and boiling at least should be doable: assign every bead a
 temperature, and let solid interbead bonds turn liquid above a certain
 temperature and disappear completely above some higher temperature.






 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 b...@goertzel.org

 This is no place to stop -- half way between ape and angel
 -- Benjamin Disraeli




Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-12 Thread Russell Wallace
I think this sort of virtual world is an excellent idea.

I agree with Benjamin Johnston's idea of a unified object model where
everything consists of beads.

I notice you mentioned distributing the computation. This would
certainly be valuable in the long run, but for the first version I
would suggest having each simulation instance run on a single machine
with the fastest physics-capable GPU on the market, and accepting that
it will still run slower than real time. Let an experiment be an
overnight run, and use multiple machines by running multiple
experiments at the same time. That would make the programming for the
first version more tractable.




Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-12 Thread Russell Wallace
On Tue, Jan 13, 2009 at 1:22 AM, Benjamin Johnston
johns...@it.uts.edu.au wrote:
 Actually, I think it would be easier, more useful and more portable to
 distribute the computation rather than trying to make it to run on a GPU.

If it would be easier, fair enough; I've never programmed a GPU, I
don't really know how difficult that is.




Re: [agi] Introducing Steve's Theory of Everything in cognition.

2008-12-26 Thread Russell Wallace
On Fri, Dec 26, 2008 at 11:56 PM, Abram Demski abramdem...@gmail.com wrote:
 That's not to say that I don't think some representations are
 fundamentally more useful than others-- for example, I know that some
 proofs are astronomically larger in 1st-order logic as compared to
 2nd-order logic, even in domains where 1st-order logic is
 representationally sufficient.

Do you have any online references handy for these? One of the things
I'm still trying to figure out is to just what extent it is necessary
to go to higher-order logic to make interesting statements about
program code, and this sounds like useful data.




Re: [agi] Should I get a PhD?

2008-12-17 Thread Russell Wallace
On Wed, Dec 17, 2008 at 3:54 PM, Paul Cray pmc...@gmail.com wrote:
 In the UK, it is certainly possible to proceed directly to a PhD without
 doing an MSc or much in the way of coursework, provided you have a good
 enough Bachelor's degree. As a self-funded student, it would just be a
 matter of finding a sympathetic supervisor (and the UK speaks English!).

As does Ireland ^.^




Re: [agi] Machine Knowledge and Inverse Machine Knowledge...

2008-12-10 Thread Russell Wallace
On Wed, Dec 10, 2008 at 5:47 AM, Steve Richfield
[EMAIL PROTECTED] wrote:
 I don't see how, because it is completely unbounded and HIGHLY related to
 specific platforms and products. I could envision a version that worked for
 a specific class of problems on a particular platform, but it would probably
 be more work than it was worth UNLESS the user-base were really large, e.g.
 it might work well for something like Microsoft Windows or Office.

Okay, Windows or Office would seem like reasonable targets.

 I already had circuit board repair in my sights. Perhaps you recall the
 story of my daughter Eleanor observing the incredible parallels between
 difficult circuit board repair and chronic illnesses?

I recall a mention of it, but no details; have a link handy?

 I could probably rough out a KB for this in ~1 week of work. I'm just not
 sure what to do with it once done. Did you have a customer or marketing idea
 in mind?

I hadn't thought that far ahead, but given how much money is spent
every year by people covering the size range from the punter trying to
keep an old banger on the road up to the armed forces of NATO on
maintaining and repairing equipment, I'd be astonished if there wasn't
a market there for any tool that could make a real contribution.




Re: [agi] Machine Knowledge and Inverse Machine Knowledge...

2008-12-10 Thread Russell Wallace
On Wed, Dec 10, 2008 at 5:35 PM, Steve Richfield
[EMAIL PROTECTED] wrote:
 Maybe I should adopt the ORCAD model, where I provide it for free for a
 while, then start inching the price up and UP and UP.

Bad for PR. I suggest providing a free trial but making it clear from
the outset there will be some sort of charge for use beyond that.




Re: [agi] Machine Knowledge and Inverse Machine Knowledge...

2008-12-09 Thread Russell Wallace
As an application domain for Dr. Eliza, medicine has the obvious
advantage of usefulness, but the disadvantage that it's hard to assess
performance -- specific data is largely unavailable for privacy
reasons, and most of us lack the expertise to properly assess it even
if it were available.

Is there any chance of applying it to debugging software, or repairing machines?




Re: [agi] JAGI submission

2008-11-25 Thread Russell Wallace
On Tue, Nov 25, 2008 at 12:58 AM, Trent Waddington
[EMAIL PROTECTED] wrote:
 summed up in the last two words of the abstract: without input.  Who
 ever said that RSI had anything to do with programs that had no input?

It certainly wasn't a strawman as of a couple of years ago; I've had
arguments with people who seemed to seriously believe in the
possibility of creating AI in a sealed box in someone's basement
without any feedback from the environment.

If nobody believes that anymore, well, then that is significant progress.




Re: [agi] Whole Brain Emultion (WBE) - A Roadmap

2008-11-05 Thread Russell Wallace
On Wed, Nov 5, 2008 at 3:27 PM, Bob Mottram [EMAIL PROTECTED] wrote:
 Brains however are not nearly so sensitive to small errors, and in
 some cases fairly extensive damage can be sustained without causing
 the entire system to fail.

Let's face it, they're not that insensitive; some debugging required
still applies.

The solution is obvious: we'll have to start with the easy end and
move up to harder problems. Once we have a working, debugged brain
(plus body plus environment) emulation of C. elegans, we'll be ready
to start thinking about fruit flies.




Re: [agi] Unification by index?

2008-11-03 Thread Russell Wallace
On Sun, Nov 2, 2008 at 7:14 AM, Benjamin Johnston
[EMAIL PROTECTED] wrote:
 The Prolog clause database effectively has this same problem. It solves it
 simply by indexing on the functor of the outermost term and the first
 argument of that term. This may be enough for your problem. As Donald Knuth
 puts it, premature optimization is the root of all evil.

*nods* Existing logic systems generally seem to use that sort of ad
hoc solution. It's certainly better than nothing, though I'd still
like to try for the full factor-of-N speedup if possible, since I expect
N to be in the millions in the short term and billions in the long term.
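
A sketch of that functor-plus-first-argument index in Common Lisp (the
variable convention and wildcard bucket are illustrative details of mine):

(defun variable-p (x)
  "By convention here, a variable is a symbol whose name starts with ?"
  (and (symbolp x)
       (plusp (length (symbol-name x)))
       (char= (char (symbol-name x) 0) #\?)))

(defparameter *index* (make-hash-table :test #'equal))

(defun index-key (expr)
  "Bucket key: the functor plus the first argument, with a wildcard
bucket for expressions whose first argument is a variable."
  (list (first expr)
        (if (variable-p (second expr)) '? (second expr))))

(defun add-expression (expr)
  (push expr (gethash (index-key expr) *index*)))

(defun candidates (query)
  "Every stored expression that could possibly unify with QUERY --
usually a small fraction of the database."
  (if (variable-p (second query))
      ;; a variable first argument in the query can match any bucket
      ;; with the same functor, so gather them all
      (loop for key being the hash-keys of *index*
            when (equal (first key) (first query))
              append (gethash key *index*))
      (append (gethash (index-key query) *index*)
              (gethash (list (first query) '?) *index*))))

Only the expressions CANDIDATES returns need to be passed to the real
unifier, which is where the saving over the O(N) scan comes from.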

 Even if you can't get your hand on a description of a unification tree, I
 wouldn't imagine it would be too difficult to build an appropriate
 tree-structured index (especially given that the unification algorithm does
 itself operate over trees). If your data doesn't change much, you can
 probably search efficiently, even in the case of unbound variables in the
 query term, by indexing your data in several places (i.e., indexing with
 some arguments ignored).

Googled references seemed to use "unification tree" in the conceptual
sense only, not as the name of a reified data structure; presumably the
articles that exist only on paper aren't showing up. But that is
basically the sort of approach I'm currently thinking of (or will be
when I get back to it from fixing bugs). Thanks!




[agi] Unification by index?

2008-10-31 Thread Russell Wallace
In classical logic programming, there is the concept of unification,
where one expression is matched against another, and one or both
expressions may contain variables. For example, (FOO ?A) unifies with
(FOO 42) by setting the variable ?A = 42.

Suppose you have a database of N expressions, and are given a new
expression, and want to find which of the existing ones unify against
the new one. This can obviously be done by unifying against each
expression in turn. However, this takes O(N) time, which is slow if
the database is large.

It seems to me that by appropriate use of indexes, it should be
possible to unify against the entire database simultaneously, or at
least to isolate a small fraction of it as potential matches so that
the individual unification algorithm need not be run against every
expression in the database.

I'm obviously not the first person to run into this problem, and
presumably not the first to think of that kind of solution. Before I
go ahead and work out the whole design myself, I figure it's worth
checking: does anyone know of any existing examples of this?
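
For concreteness, a minimal Common Lisp version of the unifier itself
(essentially the textbook algorithm: no occurs check, variables written
as symbols starting with ?):

(defun variable-p (x)
  (and (symbolp x)
       (plusp (length (symbol-name x)))
       (char= (char (symbol-name x) 0) #\?)))

(defun unify (x y &optional (bindings nil))
  "Return an alist of variable bindings unifying X with Y, or :FAIL."
  (cond ((eq bindings :fail) :fail)
        ((equal x y) bindings)
        ((variable-p x) (unify-variable x y bindings))
        ((variable-p y) (unify-variable y x bindings))
        ((and (consp x) (consp y))
         (unify (cdr x) (cdr y)
                (unify (car x) (car y) bindings)))
        (t :fail)))

(defun unify-variable (var val bindings)
  (let ((binding (assoc var bindings)))
    (if binding
        (unify (cdr binding) val bindings) ; already bound: values must unify
        (cons (cons var val) bindings))))  ; record a new binding

;; (unify '(foo ?a) '(foo 42)) => ((?A . 42))

The indexing question is then how to avoid running UNIFY against all N
stored expressions for every query.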




Re: [agi] Unification by index?

2008-10-31 Thread Russell Wallace
On Fri, Oct 31, 2008 at 8:00 PM, Pei Wang [EMAIL PROTECTED] wrote:
 The closest thing I can think of is Rete algorithm --- see
 http://en.wikipedia.org/wiki/Rete_algorithm

Thanks! If I'm understanding correctly, the Rete algorithm only
handles lists of constants and variables, not general expressions
which include nested lists?




Re: [agi] Cloud Intelligence

2008-10-30 Thread Russell Wallace
On Thu, Oct 30, 2008 at 6:45 AM,  [EMAIL PROTECTED] wrote:
 It sure seems to me that the availability of cloud computing is valuable
 to the AGI project.  There are some claims that maybe intelligent programs
 are still waiting on sufficient computer power, but with something like
 this, anybody who really thinks that and has some real software in mind
 has no excuse.  They can get whatever cpu horsepower they need, I'm pretty
 sure even to the theoretical levels predicted by, say, Moravec and
 Kurzweil.  It takes away that particular excuse.

Indeed, that's been the most important effect of computing power
limitations. It's not that we've ever been able to say "this program
would do great things, if only we had the hardware to run it". It's
that we learn to flinch away from the good designs, the workable
approaches, because they won't fit on the single cheap beige box we
have on our desks. The key benefit of cloud computing is one that can
be had before the first line of code is written: don't think in terms
of how your design will run on one box, think in terms of how it will
run on 10,000.




Re: [agi] Cloud Intelligence

2008-10-30 Thread Russell Wallace
On Thu, Oct 30, 2008 at 3:07 PM, John G. Rose [EMAIL PROTECTED] wrote:
 My suspicion though is this: say you had 100 physical servers and then 100
 physical cloud servers. You could hand-tailor your distributed application
 so that it is far more efficient when not running on the cloud substrate.

Why would you suspect that? My understanding of cloud computing is
that the servers are perfectly ordinary Linux boxes, with perfectly
ordinary network connections, it's just that you rent them instead of
buying them.




Re: [agi] Cloud Intelligence

2008-10-30 Thread Russell Wallace
On Thu, Oct 30, 2008 at 3:42 PM, John G. Rose [EMAIL PROTECTED] wrote:
 Not talking custom hardware, when you take your existing app and apply it to
 the distributed resource and network topology (your 100 servers) you can
 structure it to maximize its execution reward. And the design of the app
 should take the topology into account.

That would be a very bad idea, even if there were no such thing as
cloud computing. Even if there were a significant efficiency gain to be
had that way (and there isn't, in the usual scenario where you're
talking about Ethernet, not some custom grid fabric), as soon as the
next hardware purchase comes along, the design over which you sweated
so hard becomes useless or worse than useless.




Re: [agi] On programming languages

2008-10-25 Thread Russell Wallace
On Sat, Oct 25, 2008 at 9:29 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 There are systems that do just that, constructing models of a program
 and representing conditions of absence of a bug as huge formulas. They
 work with various limitations; theorem-prover based systems using
 counterexample-driven abstraction refinement (the most semantically
 accurate brute-force models) are able to work with programs of up to
 about tens of thousands of lines of code. They don't scale. And they
 don't even handle loops well. Then there are ways to make analysis more
 scalable or precise, usually in a tradeoff. Most of what used to be AI
 that enters this scene is theorem provers (which don't promise to solve
 all the problems), and cosmetic statistical analyses here and there.

Thanks, that was about where I understood the state-of-the-art to be.

 What I see as a potential way for AI in program analysis is cracking
 abstract interpretation: automatically inventing invariants and
 proving that they hold, using these invariants to interface between
 results of analysis in different parts of the program and to answer
 the questions posed before analysis. This task has interesting
 similarities with summarizing a world-model, where you need to perform
 inference on a huge network of elements of physical reality (starting
 with physical laws, if they were simple, or chess rules in a chess
 game), basically by dynamically applying summarizing events and
 matching simplified models.

Yes, that's the sort of thing I have in mind.

 But it all looks almost AI-complete.

It's a very hard problem, but it's a long way short of AI-complete. I
think it's worth aiming for as an intermediate stage between the
current state of the art and "good morning, Dr. Chandra".
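
For readers who haven't met the term: in abstract interpretation the
analyzer runs the program over an abstract domain, e.g. tracking each
integer variable as a (lo . hi) interval, and loop invariants fall out
of iterating the abstract loop body to a fixed point. A minimal Common
Lisp sketch, with the analyzed program (x := 0; while x < 10 do
x := x + 1) hard-wired in, and with the widening that real analyzers
need for termination omitted:

(defun interval-join (a b)
  "Smallest interval covering both A and B."
  (cons (min (car a) (car b)) (max (cdr a) (cdr b))))

(defun interval-add (a b)
  (cons (+ (car a) (car b)) (+ (cdr a) (cdr b))))

(defun analyze-counting-loop ()
  "Abstractly execute X := 0; WHILE X < 10 DO X := X + 1, iterating
until the interval for X stops growing. The fixed point is the loop
invariant: 0 <= x <= 10 at the head of the loop."
  (let ((x '(0 . 0)))                                  ; x := 0
    (loop
      (let* ((in-loop (cons (car x) (min (cdr x) 9)))  ; filter: x < 10
             (stepped (interval-add in-loop '(1 . 1))) ; x := x + 1
             (next (interval-join x stepped)))
        (when (equal next x)
          (return x))
        (setf x next)))))

;; (analyze-counting-loop) => (0 . 10)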




Re: [agi] On programming languages

2008-10-25 Thread Russell Wallace
On Sat, Oct 25, 2008 at 9:57 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 Note that people have been working on this specific technical problem
 for 30 years (see the scary amount of work by Cousot's lab,
 http://www.di.ens.fr/~cousot/COUSOTpapers/ ), and they are still
 tackling fixed invariants, finding ways to summarize program code as
 transformations on domains containing families of assertions about
 program state, to handle loops, and to work with more features of the
 programming languages they analyze. And it all is still imprecise and
 able to find only relatively weak assertions. Open-ended invention
 of assertions to reflect the effect of program code in a more adaptive
 way is not even on the horizon.

Look at it this way: at least we're agreed it's not such a trivial
problem as to be unworthy of a prototype AGI :-)

 I don't know, it looks like a long way there. I'm currently shifting
 towards probabilistic analysis of huge formal systems in my thinking
 about AI (which is why chess looks interesting again, in an entirely
 new light). Maybe I'll understand this area better in months to come.

That sounds like an interesting approach -- let us know when you have
results to share.




Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Russell Wallace
On Sat, Oct 25, 2008 at 11:14 PM, Mark Waser [EMAIL PROTECTED] wrote:
 Anyone else want to take up the issue of whether there is a distinction
 between competent scientific research and competent learning (whether or not
 both are being done by a machine) and, if so, what that distinction is?

Science is about public knowledge. I can learn from personal
experience, but it's only science if I publish my results in such a
way that other people can repeat them.




[agi] On programming languages

2008-10-24 Thread Russell Wallace
I understand that some here have already started a project in a given
language, and aren't going to change at this late date; this is
addressed to those for whom it's still an open question.

The choice of language is said to not matter very much, and there are
projects for which this is true. AGI is not among them, so I wrote up
my thoughts on the matter in case they may be of use to anyone. There
turns out to be a website collecting perspectives like mine, so I
posted it there:
http://wiki.alu.org/Russell_Wallace%27s_Road_to_Lisp

And copy below:

I'm doing research in AI on the problem of procedural knowledge, which
means dealing with (creating, debugging, reasoning about) program
code. This entails dealing with code in parse tree form (other ways to
specify computation turn out to be object code, so one way or another
you end up coming back to the parse tree whatever the desired end
product). This necessarily entails using Lisp if we define it in the
broadest sense as the high-level language family that exposes the
parse tree; the decision to make, then, is whether to use an existing
dialect or invent one's own.

So naturally, as hackers are wont to do, I chose the second option.

And I was happy as a lark for a while, putting a lot of work into
creating my own language, until in late summer 2008, compelled by
repetitive strain injury to take a break, I thought a bit more about
what I was doing.

Okay, I thought to myself, instead of using an existing language that
has a dozen mature, efficient implementations, thousands of users and
extensive documentation, you're spending time you haven't got to spare
on creating one that will have a single inefficient prototype
implementation, one user and no documentation. And for what? A nicer
syntax and ditching some historical baggage that isn't really doing
any harm in the first place?

Sharpen your wits, Russell. This is going to be a hard enough job even
if you're smart about it. Making mistakes like this, you haven't a
prayer.

After that, the decision to use Common Lisp over Scheme was dictated
by the fact that Common Lisp has the more comprehensive language
standard, which makes it easy to port code (at least code that
primarily performs computation rather than I/O) between
implementations. The ability to defer the choice of implementation is
a significant strategic advantage.
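
To make the point about exposing the parse tree concrete: in any Lisp,
program text reads in as ordinary list structure, so code that
manipulates code is just code that manipulates lists. A trivial
illustration (the rewriter below is my own example, not from the
original post):

(defparameter *fragment* '(+ (* x x) (* y y))) ; code, held as data

(defun rewrite-symbol (old new tree)
  "Walk a parse tree, replacing the symbol OLD with NEW throughout --
the kind of manipulation a code-handling AI does constantly."
  (cond ((eq tree old) new)
        ((consp tree) (cons (rewrite-symbol old new (car tree))
                            (rewrite-symbol old new (cdr tree))))
        (t tree)))

;; (rewrite-symbol 'x 'z *fragment*) => (+ (* Z Z) (* Y Y))
;; and the rewritten tree is itself runnable code, via EVAL or COMPILE.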




Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-24 Thread Russell Wallace
On Fri, Oct 24, 2008 at 12:14 AM, Trent Waddington
[EMAIL PROTECTED] wrote:
 Well as a somewhat good chess instructor myself, I have to say I
 completely agree with it.  People who play well against computers
 rarely rank above first time players.. in fact, most of them tend to
 not even know the rules of the game.. having had the computer there to
 coddle them at every move.

I have no difficulty believing someone who doesn't know the rules of
the game will perform badly against skilled human players.

However, considering that computers can defeat grandmasters, I find
myself rather skeptical of the proposition that someone who doesn't
even know the rules of the game can play well against computers :-)

I learned to play Go at the novice level with a little casual playing
against computers; then, when I took on a human player many ranks above
me, I won. So in that game at least, I can say playing against
computers does appear to constitute useful instruction.




Re: [agi] On programming languages

2008-10-24 Thread Russell Wallace
On Fri, Oct 24, 2008 at 10:56 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 Why mix AI-written code and your own code?

Example: you want the AI to generate code to meet a spec, which you
provided in the form of a fitness function. If the problem isn't
trivial and you don't have a million years to spare, you want the AI
to read and understand the spec so it can produce code targeted to
meet it, rather than rely on random trial and error.
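
A toy Common Lisp example of such a spec (the task, a sorting routine,
and the scoring scheme are invented for illustration):

(defun sorting-fitness (candidate)
  "Score CANDIDATE, a one-argument function, on random test lists.
A blind search can only try to maximize the returned number; an AI that
can read this spec's code can see that it is being asked for a sort."
  (let ((score 0))
    (dotimes (i 100 score)
      (let* ((input (loop repeat 10 collect (random 1000)))
             (expected (sort (copy-list input) #'<))
             (actual (ignore-errors
                       (funcall candidate (copy-list input)))))
        (when (equal actual expected)
          (incf score))))))

;; (sorting-fitness (lambda (list) (sort list #'<))) => 100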

 I meant: where does the need for primitives come from? What determines
 the choice of primitive operations you need?

*shrug* The usual: arithmetic, lists, data structures etc. etc.

 You can always compile your own language into an existing language
 where there's an existing optimizing compiler.

That's only easy if the semantics are a close match, in which case
making your own language didn't accomplish anything.

 Needing many different
 features just doesn't look like a natural thing for AI-generated
 programs.

No, it doesn't, does it? And then you run into this requirement that
wasn't obvious on day one, and you cater for that, and then you run
into another requirement, that has to be dealt with in a different
way, and then you run into another... and you end up realizing you've
wasted a great deal of irreplaceable time for no good reason
whatsoever.

So I figure I might as well document the mistake, in case it saves
someone having to repeat it.




Re: [agi] On programming languages

2008-10-24 Thread Russell Wallace
On Fri, Oct 24, 2008 at 10:24 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 Russell, in what capacity do you use that language?

In all capacities, for both hand written and machine generated content.

 Do AI algorithms
 write in it?

That's the idea, once said AI algorithms are implemented.

 Where primitive operations come from?

An appropriately chosen subset of the Lisp primitives.

 From
 what you described, depending on the answers, it looks like a simple
 hand-written lambda-calculus-like language with an interpreter might be
 better than a real Lisp with all its bells and whistles.

Yes, that's an intuitively appealing idea, which is why I started off
there. But it turns out there is no natural boundary; the simple
interpreted language always ends up needing more features until one is
forced to acknowledge that it does, in fact, have to be a full
programming language. Furthermore, much of the runtime ends up being
spent in the object language; while machine efficiency isn't important
enough to spend project resources implementing a compiler, given that
other people have already implemented highly optimizing Lisp
compilers, it's advantageous to use them.




Re: [agi] On programming languages

2008-10-24 Thread Russell Wallace
On Fri, Oct 24, 2008 at 11:49 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 Well, my point was that maybe the mistake is the use of additional
 language constructions and not their absence? You yourself should be
 able to emulate anything in lambda-calculus (you can add an interpreter
 for any extension as part of a program), and so should your AI, if
 it's ever to learn open-ended models.

Would you choose to program in raw lambda calculus if you were writing
a Web server or an e-mail client? If not, why would you choose to do
so when writing an AGI? It's not like it's an easier problem to start
with -- it's harder, so being handicapped with bad tools is an even
bigger problem.




Re: [agi] On programming languages

2008-10-24 Thread Russell Wallace
On Fri, Oct 24, 2008 at 2:50 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 I'd write it in a separate language, developed for human programmers,
 but keep the language with which AI interacts minimalistic, to
 understand how it's supposed to grow, and not be burdened by technical
 details in the core algorithm or fooled by the appearance of functionality
 where there is none but a simple combination of sufficiently
 expressive primitives. Open-ended learning should be open-ended from
 the start. It's a general argument of course, but you need specifics
 to fight it.

Okay, I'll repeat the specific example from earlier; how would you
handle it following your strategy?

Example: you want the AI to generate code to meet a spec, which you
provided in the form of a fitness function. If the problem isn't
trivial and you don't have a million years to spare, you want the AI
to read and understand the spec so it can produce code targeted to
meet it, rather than rely on random trial and error.




Re: [agi] On programming languages

2008-10-24 Thread Russell Wallace
On Fri, Oct 24, 2008 at 3:04 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 I'd write this specification in a language it understands, including a
 library that builds more convenient primitives from that foundation if
 necessary.

Okay, so you'd waste a lot of irreplaceable time creating a homebrew
language running on a slow interpreter stack when there are good
efficient languages already available. In other words, you'd make the
same mistake I did, and probably end up years down the line writing
posts on mailing lists to try to steer other people away from it :-)




Re: [agi] On programming languages

2008-10-24 Thread Russell Wallace
On Fri, Oct 24, 2008 at 3:24 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 Again, specifics. What is this specification thing? What kinds of
 tasks are to be specified in it? Where does it lead, where does it end?

At the low end, you could look at some of the fitness functions that
have been written for genetic programming. Moving up a bit, we could
take gameplaying as an example, where specifications would involve
writing the rules of games like chess and Go. Looking further ahead, a
highly desirable application area would be verifying, debugging and
configuring existing software, the interface to which would entail
writing specifications of existing languages and platforms. Fancy
writing a C compiler or a Java bytecode interpreter in raw lambda
calculus?

 In the context of my general argument I don't assume that you'd have
 to write that much. If you have to write so much, that is a deviation
 from my default, and you'd need to explain it to connect to this
 argument. Basically, it's a tradeoff between adding complexity in the
 core AI algorithm and adding complexity in a message that the AI must
 handle, in which I'd prefer to keep the core simple.

*nods* I know what you mean, that intuitively feels like it ought to
be the right approach. But it turns out the AI core can't be that
simple and still do anything interesting. It's still desirable to keep
it as simple as reasonably possible, but the threshold is above the
point where using lambda calculus instead of Common Lisp would help.




Re: [agi] On programming languages

2008-10-24 Thread Russell Wallace
On Fri, Oct 24, 2008 at 3:37 PM, Eric Burton [EMAIL PROTECTED] wrote:
 Due to a characteristic paucity of datatypes (all of them powerful) and a
 terse, readable syntax, I usually recommend Python for any project
 that is just out of the gate. It's my favourite way by far at present to
 mangle huge tables. By far!

Python is definitely a very good language. Unless this has changed
since I last looked at it, though, it doesn't expose the parse tree,
so isn't suitable for representing AI content?




Re: [agi] On programming languages

2008-10-24 Thread Russell Wallace
On Fri, Oct 24, 2008 at 3:48 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 This will be practical once we have a million-fold decrease in the cost of
 computation, based on the cost of simulating a brain-sized neural network.
 It could occur sooner if we discover more efficient solutions. So far we
 have not.

Nor have I stopped looking :-)

 If you are selecting a language for self improving AI using a genetic 
 algorithm

Oh, I gave up on that approach years ago, what with wanting a solution
before the sun burns out and all.

 If you are selecting a language to implement language or vision, good choices 
 are C, C++, and assembler. The primary concern is efficiency and the ability 
 to make good use of underlying parallelism in the hardware. The choice will 
 probably be less important as hardware gets faster.

The above are reasonable choices for low-level pixel crunching, if
that's all your project is doing. None of them is remotely suitable
for implementing anything related to language.




Re: [agi] On programming languages

2008-10-24 Thread Russell Wallace
On Fri, Oct 24, 2008 at 4:09 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 If that allows AI to understand the code, without directly helping it.
 In this case teaching it to understand these other languages might be
 a better first step.

And to do that you need to give it a specification of those languages,
and the ability to reason about the properties of a program given the
code plus the specification of what it's written in; and you need a
language in which to write the code to do all that; which brings us
back to where I started this thread.

 But, speaking of application to debugging software, I long ago came to
 conclusion that you'd need to include an unreasonable amount of
 background information, which you won't even be able to guess is
 relevant, to make AI do what you need with things that are not completely
 defined.

It's a hard problem isn't it? Science fiction about Friendly AI
rewriting the solar system is entertaining, but to really get to grips
with the matter, start with trying to figure out how to write one that
understands how to make the Firefox option "always perform this
action" work for all file types.

Where (if anywhere) do you see AGI going in our lifetimes, if you
think software debugging will remain too difficult an application for
the foreseeable future?




Re: [agi] On programming languages

2008-10-24 Thread Russell Wallace
On Fri, Oct 24, 2008 at 4:10 PM, Stephen Reed [EMAIL PROTECTED] wrote:
 Hi Russell,
 Although I've already chosen an implementation language for my Texai project
 - Java, I believe that my experience may interest you.

Very much so, thank you.

 I moved up one level of procedural abstraction to view program composition
 as the key intelligent activity.  Supporting this abstraction level is the
 capability to perform source code editing for the desired target language -
 in my case Java.  In my paradigm, it's not the program syntax tree that gets
 persisted in the knowledge base but rather the nested composition framework
 that bottoms out in primitives that generate Java program elements.  The
 nested composition framework is my attempt to model the conceptual aspects
 of program composition.   For example a procedure may have an initialization
 section, a main body, and a finalization section.  I desire Texai to be able
 to figure out for itself where to insert a new required variable in the
 source code so that it has the appropriate scope, and so forth.

But if it can't read the syntax tree, how will it know what the main
body actually does?




Re: [agi] On programming languages

2008-10-24 Thread Russell Wallace
On Fri, Oct 24, 2008 at 4:54 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 That's why you need a fault tolerant language that works well with 
 redundancy. However you still have the inherent limitation that genetic 
 algorithms can learn no faster than 1 bit per population doubling.

More to the point, being a form of blind search, the runtime is
generally exponential in the size of the solution being found. A
fault-tolerant language reduces the size of the exponent, but doesn't
solve the fundamental problem.

 That is not all I am doing. Look at the top ranked text compressors. They 
 implement fairly sophisticated language models, though still far below adult 
 level.

Right, sorry, I had momentarily forgotten that you classify file
compression under the heading of natural language. Yes, those programs
have some algorithmic subtlety, but they deal with fairly simple data
structures, so any programming language that can compile into fast
machine code will suffice.
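
For concreteness, the heart of such a model is nothing more exotic than
a table of counts keyed by recent context. A toy order-2 character model
in Python -- nothing like the real PAQ mixing machinery, just an
illustration of how simple the data structures are:

    from collections import defaultdict

    # Map the previous two characters to counts of the next character.
    counts = defaultdict(lambda: defaultdict(int))

    def train(text):
        for i in range(2, len(text)):
            counts[text[i - 2:i]][text[i]] += 1

    def predict(context):
        seen = counts[context]
        total = sum(seen.values())
        # Unsmoothed probability estimate for each next character.
        return {c: n / total for c, n in seen.items()} if total else {}

    train("the theory of the thing")
    print(predict("th"))  # 'e' dominates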




Re: [agi] On programming languages

2008-10-24 Thread Russell Wallace
On Fri, Oct 24, 2008 at 5:26 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 It's a specific problem: jumping right to the code generation to
 specification doesn't work, because you'd need too much specification.
 At the same time, a human programmer will need much less
 specification, so it's a question of how to obtain and use background
 knowledge, a general question of AI. The conclusion is that this is
 not the way.

Oh, it's not step one or step two, that's for sure! I did say it was a
prospect for the longer term.




Re: [agi] On programming languages

2008-10-24 Thread Russell Wallace
On Fri, Oct 24, 2008 at 5:37 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 You are describing it as a step one, with writing huge specifications
 by hand in formally interpretable language.

I skipped a lot of details because this thread is on programming
languages not my roadmap to AGI :-)




Re: [agi] On programming languages

2008-10-24 Thread Russell Wallace
On Fri, Oct 24, 2008 at 5:30 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 Interesting!  I have a good friend who is also an AGI enthusiast who
 followed the same path as you ... a lot of time burned making his own
 superior, stripped-down, AGI-customized variant of LISP, followed by a
 decision to just go with LISP ;-)

I'm not surprised :-)

 But I thought I'd mention that for OpenCog we are planning on a
 cross-language approach.  The core system is C++, for scalability and
 efficiency reasons, but the MindAgent objects that do the actual AI
 algorithms should be creatable in various languages, including Scheme or
 LISP.

*nods* As you know, I'm of the opinion that C++ is literally the worst
possible choice in this context. However...

 We can do self-modification of components of the system by coding these
 components in LISP or other highly manipulable languages.

This is good, and for what it's worth I think the best approach for
OpenCog at this stage is to aim to stabilize the C++ core as soon as
possible, and try to write AI code at the higher level in Lisp, Combo
or whatever.




Re: [agi] On programming languages

2008-10-24 Thread Russell Wallace
On Fri, Oct 24, 2008 at 5:37 PM, Mark Waser [EMAIL PROTECTED] wrote:
 Instead of arguing language, why don't you argue platform?

Platform is certainly an interesting question. I take the view that
Common Lisp has the advantage of allowing me to defer the choice of
platform. You take the view that .Net has the advantage of allowing
you to defer the choice of language, which is not unreasonable. As far
as I know, there isn't a version of Common Lisp for .Net, but there is
a Scheme, which would be suitable for writing things that the AI needs
to understand, and still allow interoperability with other chunks of
code written in C# or whatever.

The obvious fly in the ointment is that a lot of technical work is
done on Unix, so an AI project really wants to keep that option open
if at all possible. Is Mono ready for prime time yet?

 And as for Python?  Great for getting reasonably small projects up quickly
 and easily.  The cost is trade-offs on extensibility and maintenance --
  which means that, for a large, complex system, some day you're either going
 to rewrite and replace it (not necessarily a bad thing) or you're going to
 rue the day that you used it.

Why do you say that? Python code is concise and very readable, both of
which are positive attributes for extensibility and maintenance.




Re: [agi] On programming languages

2008-10-24 Thread Russell Wallace
On Fri, Oct 24, 2008 at 5:55 PM, Stephen Reed [EMAIL PROTECTED] wrote:
 Composed statements generate Java statements such as an assignment
 statement, block statement and so forth.  You can see that there is a tree
 structure that can be navigated when performing a deductive composition
 operation like is ArrayList imported into the containing class? - if not
 then compose that import in the right place.

Okay, so would it not be accurate to say that this tree structure is a
homebrew Lisp family language (with annotations of static type,
preconditions etc.) for which you have implemented a compiler into
Java?
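
To pin down what I'm asking, here is the kind of structure I'm
picturing -- a nested composition whose leaves bottom out in primitives
emitting Java program elements. This is purely my illustration in
Python, not Texai's actual representation:

    # Hypothetical composition tree: (operator, children...) tuples whose
    # leaves generate Java source text.
    def emit(node):
        op = node[0]
        if op == "assign":
            _, var, expr = node
            return var + " = " + emit(expr) + ";"
        if op == "add":
            _, a, b = node
            return "(" + emit(a) + " + " + emit(b) + ")"
        if op == "lit":
            return str(node[1])
        raise ValueError("unknown operator: " + repr(op))

    tree = ("assign", "total", ("add", ("lit", 1), ("lit", 2)))
    print(emit(tree))  # total = (1 + 2);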




Re: [agi] On programming languages

2008-10-24 Thread Russell Wallace
On Fri, Oct 24, 2008 at 6:08 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 No. Genetic algorithms implement a beam search. It is linear in the best case 
 and exponential in the worst case. It depends on the shape of the search 
 space.

It turns out that real search spaces are deceptive, so that genetic
algorithms are exponential in the typical case.

 Neurons are also simple data structures.

Which is why Fortran is a perfectly suitable language for implementing
neural nets and neuron simulations.
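
That's only half sarcasm: a layer of model neurons really is a weight
matrix and a dot product, which any Fortran-class array language handles
fine. A sketch in Python for concreteness, illustrative numbers
throughout:

    import math, random

    # One layer: n neurons over m inputs, i.e. a plain weight matrix.
    m, n = 4, 3
    weights = [[random.uniform(-1, 1) for _ in range(m)] for _ in range(n)]

    def forward(inputs):
        # Each neuron: weighted sum of inputs squashed through a sigmoid.
        return [1 / (1 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
                for row in weights]

    print(forward([0.5, -0.2, 0.1, 0.9]))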




Re: [agi] On programming languages

2008-10-24 Thread Russell Wallace
On Fri, Oct 24, 2008 at 6:00 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 If it's not supposed to be a generic language war, that becomes relevant.

Fair point. On the other hand, I'm not yet ready to write a detailed
road map out as far as "fix user interface bugs in Firefox". Okay,
here are some nearer term examples:

Verification of digital hardware against formal models. (Narrow AI
theorem provers, for all their limitations, are already making
significant contributions in this area.)
Better solutions to NP problems.
Finding buffer overrun, bad pointer and memory leak bugs in C/C++ programs.

All of these things can be formally defined without relying on large
amounts of ill-defined background knowledge.




Re: [agi] On programming languages

2008-10-24 Thread Russell Wallace
On Fri, Oct 24, 2008 at 6:54 PM, Stephen Reed [EMAIL PROTECTED] wrote:
 Russell,
 Let me conclude this particular point by agreeing that the Texai program
 composition framework is a domain-specific programming language whose
 purpose is to express algorithms in tree form, from which Java source code
 can be generated.

 This domain-specific programming language has capabilities not directly
 provided by any conventional programming language, e.g. it is suitable for
 HTN planning.  Its runtime necessarily includes a knowledge base and limited
 deductive inference for capability subsumption.

Okay, that makes sense. Thanks for the explanation!




Re: [agi] On programming languages

2008-10-24 Thread Russell Wallace
On Fri, Oct 24, 2008 at 6:49 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 I write software for analysis of C/C++ programs to find bugs in them
 (dataflow analysis, etc.). Where does AI come into this? I'd really
 like to know.

Wouldn't you find AI useful? Aren't there bugs that slip past your
software because it's not smart enough at figuring out what the code
is doing?




Re: [agi] On programming languages

2008-10-24 Thread Russell Wallace
On Fri, Oct 24, 2008 at 7:42 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 This general sentiment doesn't help if I don't know what to do specifically.

Well, given a C/C++ program that does have buffer overrun or stray
pointer bugs, there will typically be a logical proof of this fact;
current theorem provers are typically not able to discover this proof,
but that doesn't rule out the possibility of writing a program that
can. (If this doesn't clarify, then I'm probably misunderstanding your
question, in which case can you rephrase?)
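
To show in miniature what such a proof looks like: suppose the code
indexes buf[i] where len(buf) == 10, i == 2*k, and all the analyzer
knows is 0 <= k <= 8. The overrun exists iff those constraints plus the
bug condition are satisfiable, which an SMT solver settles mechanically.
A sketch using the z3 solver's Python bindings, assuming z3 is
installed; the encoding is mine, not from any particular tool:

    from z3 import Int, Or, Solver, sat

    i, k = Int("i"), Int("k")
    s = Solver()
    s.add(i == 2 * k, k >= 0, k <= 8)   # facts derived from the code
    s.add(Or(i < 0, i >= 10))           # bug condition: outside buf[0..9]
    if s.check() == sat:
        # The model is a concrete witness -- the "logical proof" that
        # the overrun is reachable (e.g. k = 5, i = 10).
        print("overrun reachable:", s.model())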




Re: [agi] On programming languages

2008-10-24 Thread Russell Wallace
On Fri, Oct 24, 2008 at 9:49 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 Only because it is hard to come up with representations that can be 
 incrementally modified (don't break when you flip 1 bit).

No, I came up with some representations that didn't break, a
sufficiently large percentage of the time. Not perfect of course, but
then neither is DNA. Good enough that wasn't the limiting factor.

 But nature has figured it out.

Nature took a billion years. I don't know about you gentlemen, but
that's more time than I'm prepared to devote to this enterprise.

 I prefer MMX and SSE2 assembler for x86. It allows you to evaluate 8 synapses 
 in parallel. In PAQ8 I got 6 times the speed of optimized C.

Cool ^.^




Re: [agi] constructivist issues

2008-10-21 Thread Russell Wallace
On Tue, Oct 21, 2008 at 4:53 PM, Abram Demski [EMAIL PROTECTED] wrote:
 As it happens, this definition of
 meaning admits horribly-terribly-uncomputable-things to be described!
 (Far worse than the above-mentioned super-omegas.) So, the truth or
 falsehood is very much not computable.

 I'm hesitant to provide the mathematical proof in this email, since it
 is already long enough... let's just say it is available upon
 request.

Now I'm curious -- can these horribly-terribly-uncomputable-things be
described to a non-mathematician? If so, consider this a request.




Re: [agi] constructivist issues

2008-10-21 Thread Russell Wallace
On Wed, Oct 22, 2008 at 3:11 AM, Abram Demski [EMAIL PROTECTED] wrote:
 I agree with you there. Our disagreement is about what formal systems
 a computer can understand.

I'm also not quite sure what the problem is, but suppose we put it this way:

I think the most useful way to understand the family of algorithms of
which AIXI is the best-known member, is that they effectively amount
to: create (by perfect simulation) all possible universes and select
the one that exhibits the desired behavior.
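
In toy form, the selection step goes like this: enumerate candidate
generators shortest-first, run each under a step budget, and keep those
whose output matches the data. A Python sketch, purely illustrative --
the real construction is of course uncomputable:

    from itertools import product

    # Toy "universes": short sequences of unary ops applied to a seed.
    OPS = {"inc": lambda x: x + 1, "dbl": lambda x: x * 2,
           "sq": lambda x: x * x}

    def run(program, seed=1, steps=10):
        x = seed
        for name in program[:steps]:  # step budget guards nontermination
            x = OPS[name](x)
        return x

    observed = 9  # the data our "universe" produced
    for length in range(1, 4):        # shortest programs first
        for program in product(OPS, repeat=length):
            if run(program) == observed:
                print("consistent universe:", program)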

Suppose we took a bunch of data from our universe as input, if the
amount of data were large enough to be specific enough, our universe
(or at least one with the same physical laws) would be created and
selected as producing results that match the data.

So the universe thus created would contain humans, and therefore
contain all the understanding of mathematics that actual humans have.

Of course, this understanding would not be contained in the original kernel.

But this should not be surprising. Consider a realistic AI which can't
create whole universes, but can learn about mathematics. Suppose the
kernel of the AI is written in Lisp, does the Lisp compiler understand
incomputable numbers? No, but that's no reason the AI as a whole
can't, at least to the extent that we humans do.

Does this help at all?




Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Russell Wallace
Split seems reasonable to me. Right now this is the closest there is
to a venue specifically for AGI engineering, whereas there are other
places to discuss AGI philosophy. (For example, AGI philosophy would
presumably be on topic for extropy-chat.)

As for the suggestions that we regress to the primitive and
impoverished web forum medium, the ideal would be a system that
provides both mailing list and forum interfaces, so that each
subscriber can choose his preferred interface. I'd be surprised if
there isn't something available off the shelf that can do this by now.




Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Russell Wallace
On Wed, Oct 15, 2008 at 5:54 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
 The below suggestion is a perfect illustration of why I have given up on the
 list:  it shows that the AGI list has become, basically, just a vehicle for
 the promotion of Ben's projects and preferences, while everything (and
 everyone) else is gradually being defined as a distraction.

For the record, this is of course utter rubbish, and I say that as
someone who disagrees with a number of aspects of Ben's approach.




Re: Defining AGI (was Re: AW: [agi] META: A possible re-focusing of this list)

2008-10-15 Thread Russell Wallace
I'm currently investigating the problem of theorem proving as an AGI
domain, not so much for its own sake as from the following reasoning:
AGI needs to learn procedural knowledge, which means program code; and
reasoning about program code requires formal logic.

From a programming viewpoint, formal logic is quite remarkably
slippery and messy. Right now I'm trying to formulate it in a
sufficiently elegant/orthogonal way that heuristics for guiding proof
search can be treated as content, rather than having to be
individually wired into the guts of the program.
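
To give the flavour of what I mean by heuristics as content: the prover
below is trivial forward chaining over atoms, but the guiding heuristic
arrives as ordinary data rather than being wired into the engine. A
Python sketch, everything illustrative:

    import heapq

    def prove(goal, facts, rules, heuristic):
        # Best-first forward chaining; `rules` maps premise tuples to a
        # conclusion, and `heuristic` is content: any callable on facts.
        known = set(facts)
        frontier = [(heuristic(f), f) for f in facts]
        heapq.heapify(frontier)
        while frontier:
            _, fact = heapq.heappop(frontier)
            if fact == goal:
                return True
            for premises, conclusion in rules:
                if conclusion not in known and all(p in known for p in premises):
                    known.add(conclusion)
                    heapq.heappush(frontier, (heuristic(conclusion), conclusion))
        return False

    rules = [(("p", "q"), "r"), (("r",), "s")]
    # The heuristic is swappable data: here, prefer later-named atoms.
    print(prove("s", {"p", "q"}, rules, heuristic=lambda f: -ord(f[0])))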




Re: AW: Defining AGI (was Re: AW: [agi] META: A possible re-focusing of this list)

2008-10-15 Thread Russell Wallace
On Thu, Oct 16, 2008 at 1:35 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
 Goedel and Turing showed that theorem proving is equivalent to solving the 
 halting problem. So a simple measure of intelligence might be to count the 
 number of programs that can be decided. But where does that get us? Either 
 way (as as set of axioms, or a description of a universal Turing machine), 
 the problem is algorithmically simple to describe. Therefore (by AIXI) any 
 solution will be algorithmically simple too.

This doesn't follow. Per Chaitin, you can't prove a 20-pound theorem
with 10 pounds of axioms, even if the formal system being discussed is
algorithmically simple.

More to the point, a program that can prove interesting theorems
before the sun burns out, will be much more complex than the axiom
system.




Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-11 Thread Russell Wallace
On Sat, Oct 11, 2008 at 4:37 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 Brad,

 Sorry if my response was somehow harsh or inappropriate, it really wasn't
 intended as such.  Your contributions to the list are valued.  These last
 few weeks have been rather tough for me in my entrepreneurial role (it's not
 the best time to be operating a small business, which is what Novamente LLC
 is) so I may be in a crankier mood than usual for that reason.

I don't think your response was in any way harsh or inappropriate;
honestly, you in a cranky mood are still generally more polite and
tolerant than most of us in a good mood!

Hoping Novamente LLC pulls through the current recession okay.




Re: [agi] open or closed source for AGI project?

2008-10-07 Thread Russell Wallace
On Tue, Oct 7, 2008 at 1:47 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
 But how do you explain the fact that many of today's top financially
 successful companies rely on closed-source software?  A recent example
 is Google's search engine, which remains closed source.

Nobody paid Google for their idea. Nobody paid them for their
prototype code. What they got paid for was *solving people's problems*
-- delivering a better search service.

I'm not saying every project has to be open source. I'm saying revenue
will accrue to an AI if and only if, when and because it solves
people's problems. If you think you can get to that stage by your own
labor alone, go for it. If you think you can persuade a venture
capitalist to fund you to hire a team to do it, go for it. If you
think you're charismatic enough to get people to fund you as charity,
go for it. If you think you can by some other method make it work as a
closed source project, go for it. If not, make it open source.

But whichever route you pick, follow it with conviction. If you flag
your project "open source" and then start talking about "protecting"
your ideas and trying to measure the exact value of everybody's
contributions so everybody gets just what's coming to them and no
more, people will avoid it like a week-dead rat. You might have the
best intentions in the world, but those intentions need to come across
clearly and unambiguously in how you present your strategy.




Re: [agi] open or closed source for AGI project?

2008-10-07 Thread Russell Wallace
A good idea and a euro will get you a cup of coffee. Whoever said you
need to protect ideas is just shilly-shallying you. Ideas have no
market value; anyone capable of taking them up, already has more ideas
of his own than time to implement them. Don't take my word for it,
look around you; do you see people on this list going, "I'm ready to
start work, someone give me an idea please"? No, you see people going,
"here are my ideas", and other people going, "great thanks, but I've
already got my own".

What people will pay for is to have their problems solved. If you want
to get paid for AI, I think the best hope is to make it an open-source
project, and offer support, consultancy etc. It's a model that has
worked for other types of open source software.




Re: [agi] universal logical form for natural language

2008-09-27 Thread Russell Wallace
Given that Cyc has accomplished far more in the logical encoding of
common sense than any other project, starting with OpenCyc and
building from there would seem to suggest itself as the obvious course
of action. Am I missing something?

On Sat, Sep 27, 2008 at 8:02 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
 Hi group,

 I'm starting an AGI project called G_0 which is focused on commonsense
 reasoning (my long-term goal is to become the world's leading expert
 in common sense).  I plan to use it to collect commonsense knowledge
 and to learn commonsense reasoning rules.

 One thing I need is a universal logical form for NL, which means every
 (eg English) sentence can be translated into that logical form.

 I can host a Wiki to describe the logical form, or we can use
 OpenCog's.  I plan to consult all AGI groups including OpenCog,
 OpenNARS, OpenCyc, and Texai.

 Any opinion on this?

 YKY







Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-22 Thread Russell Wallace
On Mon, Sep 22, 2008 at 1:34 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 On the other hand, if intelligence is in large part a systems phenomenon,
 that has to do with the interconnection of reasonably-intelligent components
 in a reasonably-intelligent way (as I have argued in many prior
 publications), then testing the intelligence of individual system components
 is largely beside the point: it may be better to have moderately-intelligent
 components hooked together in an AGI-appropriate way, than
 extremely-intelligent components that are not able to cooperate with other
 components sufficiently usefully.

I agree with this as far as it goes; certainly, progress in
integrating separately optimized AI components has hitherto been
somewhere between minimal and nonexistent. And one solution is to
develop a set of components as part of a system.

Another solution, which I'm currently looking at, is to develop a way
to turn code into procedural knowledge, so that separately optimized
components can be used in new contexts their programmers did not
envisage.




Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Russell Wallace
On Fri, Sep 19, 2008 at 11:46 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 So perhaps someone can explain why we need formal knowledge representations 
 to reason in AI.

Because the biggest open sub problem right now is dealing with
procedural, as opposed to merely declarative or reflexive, knowledge.
And unless you're trying to duplicate organic brains, procedural
knowledge means program code.  And reasoning about program code
requires formal logic.




Re: [agi] Artificial humor

2008-09-10 Thread Russell Wallace
The most plausible explanation I've heard is that humor evolved as a
social weapon for use by a group of low status individuals against a
high status individual. This explains why laughter is involuntarily
contagious, why it mostly occurs in conversation, why children like
watching Tom and Jerry and why it's always Tom rather than Jerry who
takes the fall. The snake & vine scenario is a derived application,
based on the general idea of something that had appeared badass,
turning out to not need to be taken seriously.




Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Russell Wallace
On Tue, Aug 26, 2008 at 2:38 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 The be-all and end-all here though, I presume is similarity. Is it a
 logic-al concept?  Finding similarities - rough likenesses as opposed to
 rational, precise, logicomathematical commonalities - is actually, I would
 argue, a process of imagination and (though I can't find a ready term)
 physical/embodied improvisation. Hence rational, logical, computing
 approaches have failed to produce any new (in the normal sense of
 surprising)  metaphors or analogies or be creative.

 Maybe you could give an example of what you mean by similarity

See AM, Eurisko, Copycat.




Re: [agi] Approximations of Knowledge

2008-07-03 Thread Russell Wallace
On Wed, Jul 2, 2008 at 5:31 AM, Terren Suydam [EMAIL PROTECTED] wrote:

 Nevertheless, generalities among different instances of complex systems have 
 been identified, see for instance:

 http://en.wikipedia.org/wiki/Feigenbaum_constants

To be sure, but there are also plenty of complex systems where
Feigenbaum's constants don't arise. I'm not saying there aren't
theories that say things about more than one complex system - clearly
there are - only that there aren't any that say nontrivial things
about complex systems in general.




Re: [agi] Approximations of Knowledge

2008-07-01 Thread Russell Wallace
On Mon, Jun 30, 2008 at 8:10 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
 My scepticism comes mostly from my personal observation that each complex
 systems scientist I come across tends to know about one breed of complex
 system, and have a great deal to say about that breed, but when I come to
 think about my preferred breed (AGI, cognitive systems) I cannot seem to
 relate their generalizations to my case.

That's not very surprising if you think about it. Suppose we postulate
the existence of a grand theory of complexity. That's a theory of
everything that is not simple (in the sense being discussed here) -
but a theory that says something about _every nontrivial thing in the
entire Tegmark multiverse_ is rather obviously not going to say very
much about any particular thing.




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread Russell Wallace
On Mon, Jun 30, 2008 at 8:31 AM, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
 P.S. The biggest issue that spoiled my joy of reading Permutation
 City is that you cannot simulate dynamic systems ( = solve
 numerically differential equations) out-of-order, you need to know
 time t to compute time t+1 (or, alternatively, you need to know
 t+2)

Yes...

 the same goes for space, I presume you need to know x-1,x,x+1
 to compute the next-step x.

No, x+1 is not a function of x. That's the _definition_ of time: a
dimension in which t+1 is a function of t.
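
Concretely, take one explicit step of the 1-D heat equation: each cell
at t+1 is a function of cells x-1, x, x+1 at time t. You can fill in a
given time slice in any spatial order, but you cannot produce slice t+1
before slice t exists. A Python sketch, assuming nothing about Egan's
actual setup:

    # next[x] depends on the whole previous *time* slice, never on
    # neighbours within its own slice: space is order-free, time isn't.
    def step(u, alpha=0.1):
        return ([u[0]]
                + [u[x] + alpha * (u[x - 1] - 2 * u[x] + u[x + 1])
                   for x in range(1, len(u) - 1)]
                + [u[-1]])

    u = [0.0, 0.0, 1.0, 0.0, 0.0]
    for t in range(3):  # strictly ordered sweep in time
        u = step(u)
    print(u)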




Re: [agi] Can We Start Somewhere was Approximations of Knowledge

2008-06-27 Thread Russell Wallace
On Fri, Jun 27, 2008 at 6:32 AM, Steve Richfield
[EMAIL PROTECTED] wrote:
 Unsupervised learning? This could be really good for looking for strange
 things in blood samples. Now, I routinely order a manual differential white
 count that requires someone to manually look over the blood cells with a
 microscope. These typically cost ~US$25. Note that the routine counting of
 cell types in blood samples is already done by camera-driven AI programs in
 most labs.

I hadn't thought of blood samples, but that's an excellent example, thanks.

 Something like AutoCAD's mechanical simulations?

Yes, except better. An engineer wrote an excellent post on what would
be useful here:
http://groups.google.com/group/sci.nanotech/browse_thread/thread/ada3a83d1a284969/b713922d343e5371?lnk=stq=#b713922d343e5371

 Present systems already highlight any changes.

Yep. Now let's extend that to highlighting suspicious changes while
ignoring a cat chasing a mouse.

 Similar to the program-by-example programming that is used with present
 automobile welding robots?

Yes, except able to work in more complex environments than an assembly line.

 This stuff all sounds pretty puny compared to the awe-inspiring hype of the
 Singularity people

Well, I'm a _former_ Singularitarian :) But...

 None of these things would seem to be worth devoting anyone's life
 toward. Am I missing something here?

Oh, you asked for specific examples, I thought you meant something a
little nearer term than ultimate visions.

My ultimate vision? I would break the bounds that currently trammel
our species, stem the global loss of fifty million lives a year and
open the path to space colonization. I would make Earth-descended
sentient life immortal. Imagine smart CAD programs helping design cell
repair nanomachines. Imagine an O'Neill habitat being built by a swarm
of robots with their human supervisors in pressurized environments.
None of this is beyond the conceptual limits of the human mind, but it
is beyond what humans can _reasonably_ do with present-day technology,
because it takes too much time. Unlike many posters here, I don't
believe human-equivalent AGI is feasible in any meaningful timescale.
Nor do I believe it's necessary. Humans can continue to make the
high-level decisions. What we need, to accomplish great things, is
machines that can handle the details.

That, I hope you'll agree, is worth devoting one's life toward?

 I believe that a complete revolution in man's dealing with his problems is
 right here to be had. Dr. Eliza certainly illustrates that there is probably
 enough low hanging fruit to be worth immediately redesigning the Internet to
 collect it and promptly extend the lives of most of the people on Earth.

That sounds interesting, can you be more specific on what you would do and how?




Re: [agi] Can We Start Somewhere was Approximations of Knowledge

2008-06-27 Thread Russell Wallace
On Fri, Jun 27, 2008 at 7:38 PM, Steve Richfield
[EMAIL PROTECTED] wrote:
 Just one gotcha

[two claimed gotchas snipped]

I disagree with your assessment - while I agree present government and
society have problems, as I see it history shows that the development
of technology in general, and computer technology accessible to
individuals in particular, tends to alleviate rather than exacerbate
such problems - but that's off topic for this list, and is the kind of
subject that tends to generate vast and heated digressions, so I'll
refrain from further comment on it here...

 All that would be needed to make Dr. Eliza work are:

Interesting. I should look at the code and data of Dr. Eliza to
understand exactly how the current version works; where would you
recommend starting, and do you have links/files handy?




Re: [agi] Can We Start Somewhere was Approximations of Knowledge

2008-06-26 Thread Russell Wallace
On Thu, Jun 26, 2008 at 6:12 AM, Steve Richfield
[EMAIL PROTECTED] wrote:
 Perhaps we could create a short database (maybe only a dozen or so entries)
 of sample queries, activities, tasks, etc., that YOU would like to see YOUR
 future AGIs performing to earn their electricity.

The approach I have in mind is to start with reasoning about
algorithms, so possible tasks for a medium-term AI might include:

Prove simple theorems.
Given a formal specification, write a program that meets it.
Given the rules of a game, write a program that can play it with modest skill.
Design cellular automata to carry out a given task, or estimate
whether a given CA has certain properties or can be made to do certain
things.
Estimate a lower bound for values of the busy beaver function for small N.
Analyze the correctness of a program relative to a formal spec.
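
As a concrete instance of the last two items, here is roughly what a
formal spec and a check against it amount to, in toy form -- an
exhaustive test on small inputs, where a real system would prove
instead:

    from collections import Counter
    from itertools import permutations

    # Formal spec of sorting: output is ordered and permutes the input.
    def meets_spec(candidate, xs):
        ys = candidate(list(xs))
        ordered = all(a <= b for a, b in zip(ys, ys[1:]))
        return ordered and Counter(ys) == Counter(xs)

    # Check a candidate on every permutation of small inputs.
    def check(candidate, n=4):
        return all(meets_spec(candidate, p)
                   for k in range(n + 1)
                   for p in permutations(range(k)))

    print(check(sorted))            # True
    print(check(lambda xs: xs))     # False: identity is not a sort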

In the longer term, once it gets to the point of being able to
usefully handle visual/spatial information, its capabilities might
include:

Searching photographs without being limited to human labeling.
Design of physical artifacts.
Checking of human-created or machine-assisted designs.
Watching a security camera feed, ignoring benign activity but alerting
a human operator in the event of suspicious activity.
Programming robots to carry out tasks in e.g. transport and construction.




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Russell Wallace
Philosophically, intelligence explosion in the sense being discussed
here is akin to ritual magic - the primary fallacy is the attribution
to symbols alone of powers they simply do not possess.

The argument is that an initially somewhat intelligent program A can
generate a more intelligent program B, which in turn can generate...
and so on to Z.

Let's stop and consider that first step, A to B. Clearly A cannot
already have B encoded within itself, or the process is mere
installation of already-existing software. So it must generate and
evaluate candidates B1, B2 etc and choose the best one.

On what basis does it choose? Most intelligent? But there's no such
function as Intelligence(S) where S is a symbol system. There are
functions F(S, E) where E is the environment, denoting the ability of
S to produce useful results in that environment; intelligence is the
word we use to refer to a family of such functions.

So A must evaluate Bx in the context of the environment in which B is
intended to operate. Furthermore, A can't evaluate by comparing Bx's
answers in each potential situation to the correct ones - if A knew
the correct answers in all situations, it would already be as
intelligent as B. It has to work by feedback from the environment.

If we step back and think about it, we really knew this already. In
every case where humans, machines or biological systems exhibit
anything that could be called an intelligence improvement - biological
evolution, a child learning to talk, a scientific community improving
its theories, engineers building better aeroplanes, programmers
improving their software - it involves feedback from the environment.
The mistake of trying to reach truth by pure armchair thought was
understandable in ancient Greece. We now know better.

Attractive as the image of a Transcendent Power popping out of a
basement may be to us geeks, it doesn't have anything to do with
reality. Making smarter machines in the real world is, like every
other engineering activity, a process that has to take place _in_ the
real world.




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Russell Wallace
On Mon, Jun 23, 2008 at 3:43 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 We are very inefficient in processing evidence, there is plenty of
 room at the bottom in this sense alone. Knowledge doesn't come from
 just feeding the system with data - try to read machine learning
 textbooks to a chimp, nothing will stick.

Indeed, but becoming more efficient at processing evidence is
something that requires being embedded in the environment to which the
evidence pertains. A chimp did not acquire the ability to read
textbooks by sitting in a cave and pondering deep thoughts for a
million years.




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Russell Wallace
On Mon, Jun 23, 2008 at 4:34 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 On Mon, Jun 23, 2008 at 6:52 PM, Russell Wallace
 Indeed, but becoming more efficient at processing evidence is
 something that requires being embedded in the environment to which the
 evidence pertains.

 Why is that?

For the reason I explained earlier. Suppose program A generates
candidate programs B1, B2... that are conjectured to be more efficient
at processing evidence. It can't just compare their processing of
evidence with the correct version, because if it knew the correct
results in all cases, it would already be that efficient itself. It
has to try them out.




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Russell Wallace
On Mon, Jun 23, 2008 at 5:22 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 But it can just work with a static corpus. When you need to figure out
 efficient learning, you only need to know a little about the overall
 structure of your data (which can be described by a reasonably small
 number of exemplars), you don't need much of the data itself.

Why do you think that? All the evidence is to the contrary - the
examples we have of figuring out efficient learning, from evolution to
childhood play to formal education and training to science to hardware
and software engineering, do not work with just a static corpus.




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Russell Wallace
On Mon, Jun 23, 2008 at 5:58 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 On Mon, Jun 23, 2008 at 8:32 PM, Russell Wallace
 Why do you think that? All the evidence is to the contrary - the
 examples we have of figuring out efficient learning, from evolution to
 childhood play to formal education and training to science to hardware
 and software engineering, do not work with just a static corpus.

 It is not evidence.

Yes it is.

 Evidence is an indication that depends on the
 referred event: evidence is there when referred event is there, but
 evidence is not there when referred event is absent.

And if the referred thing (entities acquiring intelligence from a static
corpus in the absence of an environment) existed, we would expect to see
it happening, if (as I claim) it does not exist then we would expect
to see all intelligence-acquiring entities needing interaction with an
environment; we observe the latter, which by the above criterion is
evidence for my theory.

 What would you
 expect to see, depending on correctness of your assumption? Literally,
 it translates to animals having a phase where they sit cross-legged
 and meditate on accumulated evidence, until they gain enlightenment,
 become extremely efficient learners and launch Singularity...

...er, I think there's a miscommunication here - I'm claiming this is
_not_ possible. I thought you were claiming it is?




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Russell Wallace
On Mon, Jun 23, 2008 at 8:48 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 There are only evolution-built animals, which is a very limited
 repertoire of intelligences. You are saying that if no apple tastes
 like a banana, therefore no fruit tastes like a banana, even banana.

I'm saying if no fruit anyone has ever tasted confers magical powers,
and theory says fruit can't do so, and there's no evidence whatsoever
that it can, then we should accept that eating fruit does not confer
magical powers.

 Whether a design is possible or not, you expect to see the same
 result, if it was never attempted. And so, the absence of an
 implementation of design that was never attempted is not evidence of
 impossibility of design.

But it has been attempted. I cited not only biological evolution and
learning within the lifetime of individuals, but all fields of science
and engineering - including AI, where quite a few very smart people
(myself among them) have tried hard to design something that could
enhance its intelligence divorced from the real world, and all such
attempts have failed.

Obviously I can't _prove_ the impossibility of this - in the same way
that I can't prove the impossibility of summoning demons by chanting
the right phrases in Latin; you can always say, well maybe there's
some incantation nobody has yet tried.

But here's a question for you: Is the possibility of intelligence
enhancement in a vacuum a matter of absolute faith, or is there some
point at which you would accept it's impossible after all? If the
latter, when will you accept its futility? Ten years from now? Twenty?
Thirty?




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Russell Wallace
On Mon, Jun 23, 2008 at 11:57 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 Oh yes, it can be proven. It requires an extended argument to do so
 properly, which I won't attempt here.

Fair enough, I'd be interested to see your attempted proof if you ever
get it written up.




Re: [agi] More Info Please

2008-05-26 Thread Russell Wallace
On Mon, May 26, 2008 at 6:26 PM, Stephen Reed [EMAIL PROTECTED] wrote:
 Regarding the best language for AGI development, most here know that I'm
 using Java in Texai.  For skill acquisition, my strategy is to have Texai
 acquire a skill by composing a Java program to perform the learned skill.  I
 hope that the algorithmic (e.g. Java statement & operation) knowledge that I
 teach it will be eventually portable to source code generation in machine
 language for the x64 architecture.  One might hope that by initially
 teaching the register set, machine instructions, and cache-line
 characteristics for x64, the code generation might subsequently  learn to
 perform many of the static and dynamic (e.g. execution profiling based)
 optimizations employed by the best compilers.

This sounds good, though a nitpick...

 Given algorithmic knowledge, it should be possible, for example, to avoid
 the need for type inference, or escape analysis to determine which objects
 can be allocated from the stack versus the heap.

By "avoid the need for..." you probably mean "have the AI figure out
how to do... by itself", thus avoiding the need to manually figure out
the rules?


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] Understanding a sick puppy

2008-05-18 Thread Russell Wallace
On Fri, May 16, 2008 at 4:10 PM, Steve Richfield
[EMAIL PROTECTED] wrote:
 Wouldn't it be better to provide a super-wiki that could be selected to ONLY
 display the professional content if that was what was wanted? How about a
 cookie on everyone's computer that could select out porn, unreferenced,
 challenged, unprofessional, unChristian, etc., etc., with separate flags for
 each of these? This way, EVERYONE could get what they want.

There's a project in progress to do that, and last I checked, they
were still looking for people to pitch in and lend a hand:

http://canonizer.com/



Re: [agi] Self-maintaining Architecture first for AI

2008-05-11 Thread Russell Wallace
On Sun, May 11, 2008 at 7:45 AM, William Pearson [EMAIL PROTECTED] wrote:
 I'm starting to mod qemu (it is not a straightforward process) to add
 capabilities.

So if I understand correctly, you're proposing to sandbox candidate
programs by running them in their own virtual PC, with their own
operating system instance? I assume this works recursively, so a
qemu-sandboxed program can itself run qemu?



Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-10 Thread Russell Wallace
On Sat, May 10, 2008 at 1:14 AM, Stan Nilsen [EMAIL PROTECTED] wrote:
 A test of understanding is if one can give a correct *explanation* for any
 and all of the possible outputs that it (the thing to understand) produces.

Unfortunately, "explanation" is just as ambiguous a word as
"understanding", so while the above may be true, it isn't necessarily
very helpful :P



Re: [agi] Self-maintaining Architecture first for AI

2008-05-10 Thread Russell Wallace
On Sat, May 10, 2008 at 8:38 AM, William Pearson [EMAIL PROTECTED] wrote:
 2) A system similar to automatic programming that takes descriptions
 in a formal language given from the outside and potentially malicious
 sources and generates a program from them. The language would be
 sufficient to specify new generative elements in and so extensible in
 that fashion. A system that cannot maintain itself trying to do this
 would quickly get swamped by viruses and the like.

If I'm understanding you correctly, what you need - or at least one
thing you need - is sandboxing: the ability to execute an arbitrary
program with the assurance that it can't compromise the environment.
This is trickier than it seems; I've been thinking about it on and
off for a while. Most systems don't even attempt to provide full
sandboxing; the most they try to do is restrict vulnerability to
denial of service - good enough to get by on for a web browser, but
probably not good enough for an AI system that might be running
millions of candidate programs over a long, unattended run. And
unless I'm missing something, you can't retrofit it after the fact:
e.g. you can't sandbox Java, you have to go back to C and write your
own VM, garbage collector, etc.

I'm told MIT Scheme provides sandboxing, though I haven't had a chance
to try it out. I don't know of any other nontrivial environment that
does so. (Lots of trivial ones do, of course: Corewars, Tierra, an
ICFP contest a couple of years ago etc.)
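
To make that concrete, here's a toy sketch of the sort of VM I mean -
the opcodes and limits are invented for illustration - where every
candidate program runs under an explicit instruction budget, so a
runaway program fails cleanly inside the VM instead of taking the
host down with it; memory would be policed the same way, by routing
every allocation through a quota check:

#include <stdio.h>

/* Opcodes for a deliberately tiny stack machine. */
enum op { OP_PUSH, OP_ADD, OP_HALT };

enum result { R_OK, R_OUT_OF_FUEL, R_STACK_ERROR };

struct vm {
    long stack[256];
    int  sp;
    long fuel;   /* instruction budget: runaway programs fail cleanly */
};

static enum result run(struct vm *vm, const long *code, size_t len) {
    size_t pc = 0;
    while (pc < len) {
        if (vm->fuel-- <= 0) return R_OUT_OF_FUEL;
        switch ((enum op)code[pc++]) {
        case OP_PUSH:
            /* refuse to overflow the stack or read past the program */
            if (vm->sp >= 256 || pc >= len) return R_STACK_ERROR;
            vm->stack[vm->sp++] = code[pc++];
            break;
        case OP_ADD:
            if (vm->sp < 2) return R_STACK_ERROR;
            vm->sp--;
            vm->stack[vm->sp - 1] += vm->stack[vm->sp];
            break;
        case OP_HALT:
            return R_OK;
        }
    }
    return R_OK;
}

int main(void) {
    struct vm vm = { .sp = 0, .fuel = 1000 };
    const long prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_HALT };
    if (run(&vm, prog, sizeof prog / sizeof prog[0]) == R_OK)
        printf("result: %ld\n", vm.stack[0]);   /* prints "result: 5" */
    return 0;
}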

Re: [agi] Self-maintaining Architecture first for AI

2008-05-10 Thread Russell Wallace
On Sat, May 10, 2008 at 10:10 PM, William Pearson [EMAIL PROTECTED] wrote:
 It depends on the system you are designing on. I think you can easily
 create as many types of sandbox as you want in the programming language E
 (1) for example. If the principle of least authority (2) is embedded
 in the system, then you shouldn't have any problems.

Sure, I'm talking about much lower-level concepts though. For example,
on a system with 8 gigabytes of memory, a candidate program has
computed a 5 gigabyte string. For its next operation, it appends that
string to itself, thereby crashing the VM due to running out of
memory. How _exactly_ do you prevent this from happening (while
meeting all the other requirements for an AI platform)? It's a
trickier problem than it sounds like it ought to be.
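
One way to prevent it - a toy sketch, not a full design, and every
name in it is invented - is to make the VM own every allocation a
candidate makes and check it against a per-candidate quota, so the
oversized append comes back as a recoverable failure inside the
candidate's world instead of an out-of-memory crash of the host:

#include <stdio.h>
#include <stdlib.h>

/* Per-candidate memory accounting. */
struct arena {
    size_t quota;  /* bytes this candidate may hold at once */
    size_t used;   /* bytes currently charged to it */
};

/* Returns NULL - an error the VM can report as "candidate failed" -
 * rather than letting the host exhaust its memory. */
static void *arena_alloc(struct arena *a, size_t n) {
    if (n > a->quota - a->used) return NULL;  /* over quota: refuse */
    void *p = malloc(n);
    if (p) a->used += n;
    return p;
}

int main(void) {
    /* Give the candidate 6 GB on the hypothetical 8 GB machine
     * (assumes a 64-bit host). */
    struct arena a = { .quota = (size_t)6 << 30, .used = 0 };

    char *s = arena_alloc(&a, (size_t)5 << 30);   /* the 5 GB string */
    if (!s) { puts("host refused the first allocation"); return 1; }

    /* Appending the string to itself needs ~10 GB more: refused. */
    char *t = arena_alloc(&a, (size_t)10 << 30);
    if (!t) puts("append refused: candidate fails, VM survives");
    free(s);
    return 0;
}

Getting the accounting right for copies, temporaries and
fragmentation is where it stops being simple - which is my point.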

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-08 Thread Russell Wallace
On Fri, May 9, 2008 at 1:51 AM, Jim Bromer [EMAIL PROTECTED] wrote:
 I don't want to get into a quibble fest, but understanding is not
 necessarily constrained to prediction.

Indeed, "understanding" is a fuzzy word that means lots of different
things in different contexts. In the context of Newcomb's paradox,
however, the relevant concept is prediction.

The logic here is similar to that of Goedel's theorem, and of Turing's
proof of the unsolvability of the halting problem. It also relates to
an even older question: if there exists an omniscient God, can He know
in advance what we will do? Answer: even God can, in general, only
know what the output of a program will be by actually running the
program. He can only know our actions by watching to see what we
actually do.
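
The diagonal argument behind this fits in a few lines of code.
Suppose, hypothetically, someone hands us a total, always-correct
predictor halts(); the stub below only stands in so the sketch
compiles, because no such function can exist - and the diagonal
program is why:

#include <stdio.h>

/* Hypothetical oracle: returns nonzero iff program p halts on input x.
 * The stub is a stand-in so this compiles; no correct, total version
 * of it can exist, which is the point. */
static int halts(const char *p, const char *x) { (void)p; (void)x; return 0; }

/* Diagonal program: do the opposite of whatever the oracle predicts
 * it will do when fed its own description. */
static void diagonal(const char *self) {
    if (halts(self, self))
        for (;;) ;   /* predicted to halt, so loop forever */
    /* predicted to loop forever, so halt immediately */
}

int main(void) {
    diagonal("diagonal");
    /* We only get here because the stub predicted non-termination;
     * whichever answer a "real" halts() gave would be wrong. */
    puts("the prediction was wrong either way");
    return 0;
}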

Re: [agi] Language learning, basic patterns, qualia

2008-05-05 Thread Russell Wallace
On Sun, May 4, 2008 at 1:55 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
  If we imagine a brain scanner with perfect resolution of space and time then
  we get all the information in the brain, including the phenomenon of
  qualia. But we will not be able to understand it.

That's an empirical question about the future; armchair reasoning has
shown itself to be an utterly unreliable method of answering such
questions. Let's invent full brain scanning and try it out for a
generation or two and see what we actually manage to explain after
that time.

Re: [agi] Language learning, basic patterns, qualia

2008-05-05 Thread Russell Wallace
On Mon, May 5, 2008 at 11:01 AM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
  Armchair reasoning is a bad word.

I think it's a rather good one ^.^

  It is not an empirical question. It is a question of what answers we
  can get from science in principle. Therefore it is a philosophical
  question. By the way: the idea of the existence of atoms also came
  from the armchair reasoning of philosophers, didn't it?

And people used to have arguments as to how atoms would remain forever
purely philosophical, and how the composition of the stars would be
forever unknowable, and yet here we are with STM pictures of atoms and
spectrographs of stars merely a Google image search away. Now you've
got an argument as to how qualia will be forever unexplainable, but
for all I know, a hundred years from now there may be a generally
accepted, quantitative, and tested explanation of qualia in terms of
ordinary mathematics.

  And I think we can now prove that any explanation of qualia must have self
  references and therefore will be no valid explanation.

An explanation in terms of ordinary mathematics, if we can find one,
will be plenty valid enough for me; I'll predict that such an
explanation, if we can find and verify it, will be generally accepted
as valid. You can call it self-referential and therefore invalid if
you like; I'll disagree and note that the truths of mathematics were
true before qualia, before the Big Bang itself. But all we're really
doing there is ascertaining that you're not a Platonist and I am :)
The important question is whether an explanation of the sort that most
cognitive scientists would consider an explanation can be found; and
to put that to the test we'll have to actually invent full brain
scanning first.

Re: [agi] Deliberative vs Spatial intelligence

2008-04-30 Thread Russell Wallace
On Tue, Apr 29, 2008 at 2:03 PM, Mike Tintner [EMAIL PROTECTED] wrote:
  Here, I think, is a more detailed start to what you're talking about: our
 different ways of perceiving and thinking about the world.

Okay...

  Yes all this is absolutely central to solving AGI. What have I left out?

An agenda. What order do you think things should be done in? Which
areas do you think should be included, and which ones excluded, for AI
version 1.0?

Re: [agi] Deliberative vs Spatial intelligence

2008-04-30 Thread Russell Wallace
On Tue, Apr 29, 2008 at 12:52 PM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
  I claim that we can and do think in each of the 16 modes implied by the above
  (and others as well).

That is certainly true...

  I think the key to AI is not so much to figure out how to operate in
  any given one of them, but how to operate in more than one, using one
  as a pilot wave or boundary condition for another.  *Creating* symbols
  from continuous experience. Forming a conditioned reflex by
  deliberation and practice.

Fair enough. Do you see this integration as being needed for version
1.0; that is, instead of trying first for a program that's good at one
mode, you'd try first for a program that has a minimal competence in
several, and can integrate between them?

If so, do you have any theories on how to achieve this integration?

Re: [agi] Deliberative vs Spatial intelligence

2008-04-30 Thread Russell Wallace
On Wed, Apr 30, 2008 at 4:11 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
  I think the more traditional classification is D = symbolic, S =
  pattern recognition/motor, or D = high level, S = low level.  The
  D-then-S approach has been popular not because it is biologically
  plausible, but because D by itself lends itself to optimizations that
  enable it to work on available hardware.  Unfortunately these
  optimizations make it incompatible with S, for example, Cyc's (D)
  failure to interface with natural language (S).  The most successful
  language models are statistical, a pattern recognition problem.

*nods* All of the above is true. I have some ideas about how to make
D-then-S work (essentially by getting D to the point where it can
reason about short programs, then encoding some S algorithms for it to
play with). Why hasn't this been done hitherto? Primarily because the
hardware wasn't up to it (S algorithms are computationally intensive,
and encoding them in accessible script incurs a hefty slowdown), and
we tend to flinch away from solutions that won't run on the hardware
we have in front of us. Even these days, I've had to train myself to
compensate for this bias by rejecting solutions that _will_ run on
available hardware.

I gather you're a proponent of S-then-D though. How do you propose
going from one to the other?

Re: [agi] Deliberative vs Spatial intelligence

2008-04-30 Thread Russell Wallace
On Wed, Apr 30, 2008 at 5:29 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
  By modeling symbolic knowledge in a neural network.  I realize it is
  horribly inefficient, but at least we have a working model to start
  from.

Inefficient is reasonable, but how do you propose to do it at all?

Re: [agi] Deliberative vs Spatial intelligence

2008-04-30 Thread Russell Wallace
On Wed, Apr 30, 2008 at 4:59 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
  Deliberative reasoning can be expressed as processing performed by an
  inference circuit, a network that propagates activation and calculates
  the result using logic gates. Particular deliberative algorithms can
  be programmed into the circuit's structure or the external memory-environment
  with which it interacts.

Well yes, like I said, we can recap the fact that logic gates are a
special case of neurons, but this fact is of limited use:

1) _You_ can set up circuits in this way, but the network itself can't,

2) Even then, it only works for purely combinatorial problems and
breaks down once e.g. recursion comes into play - how would you go
about implementing Quicksort in this way? (If the answer starts with
"well, von Neumann machines are made of logic gates...", I'll consider
my case made :))
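
For reference, the kind of control flow I have in mind - data-dependent
and unboundedly recursive, which is exactly what a fixed combinational
circuit can't express:

#include <stdio.h>

/* Quicksort with Lomuto partitioning: recursion depth and branch
 * targets depend on the data itself, not on a fixed wiring diagram. */
static void quicksort(long *a, int lo, int hi) {
    if (lo >= hi) return;
    long pivot = a[hi];
    int i = lo;
    for (int j = lo; j < hi; j++) {
        if (a[j] < pivot) {
            long tmp = a[i]; a[i] = a[j]; a[j] = tmp;
            i++;
        }
    }
    long tmp = a[i]; a[i] = a[hi]; a[hi] = tmp;
    quicksort(a, lo, i - 1);
    quicksort(a, i + 1, hi);
}

int main(void) {
    long a[] = { 5, 2, 9, 1, 7 };
    quicksort(a, 0, 4);
    for (int i = 0; i < 5; i++) printf("%ld ", a[i]);  /* 1 2 5 7 9 */
    putchar('\n');
    return 0;
}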

Re: [agi] Deliberative vs Spatial intelligence

2008-04-30 Thread Russell Wallace
On Wed, Apr 30, 2008 at 7:41 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
  Understanding can be as simple as matching terms in two documents, or
  something more complex, such as matching a video clip to a text or
  audio description.  However, there is an incentive to develop
  sophisticated solutions (e.g. distinguish TV programming from
  commercials).  This is the S part of the problem, essentially a
  hierarchical adaptive pattern recognition problem that could be
  implemented as a neural network or something similar on each peer.  For
  language, the pattern hierarchy is letters -> words -> semantic
  categories -> grammatical structures.  The task is divided by pattern.
  A peer whose expertise is recognizing when a picture contains an animal
  could route the message to peers that recognize cats or dogs.  I
  believe that extremely narrow domains are practical in a network with
  billions of peers.

  The D part is old school AI, calculators, databases, theorem provers,
  programs that play chess, etc.  Interfacing these to natural language
  is a job for the S peers, matching the most common expressions to their
  formal equivalents.  This is not a hard problem in narrow domains.

So my distinction between S-first and D-first isn't particularly
relevant to you, because you're not proposing a monolithic AGI; you're
instead proposing a community or marketplace of narrow AI modules
(some S-oriented, some D-oriented) that will hopefully constitute a
sort of loosely bound collective intelligence. Would that be an
accurate paraphrase of your view?
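
If so, the picture I get of your proposal looks something like this
toy dispatcher - every name in it is invented, and a real network
would of course route on learned pattern matches rather than exact
topic strings:

#include <stdio.h>
#include <string.h>

struct peer {
    const char *topic;                 /* domain this peer claims */
    void (*handle)(const char *msg);   /* its narrow-AI entry point */
};

static void handle_animal(const char *msg) { printf("animal peer: %s\n", msg); }
static void handle_math(const char *msg)   { printf("math peer: %s\n", msg); }

static const struct peer peers[] = {
    { "animal", handle_animal },
    { "math",   handle_math },
};

/* Route by naive keyword match; in the network you describe, an
 * "animal" peer would forward onward to still narrower cat or dog
 * specialists. */
static void route(const char *topic, const char *msg) {
    for (size_t i = 0; i < sizeof peers / sizeof peers[0]; i++)
        if (strcmp(peers[i].topic, topic) == 0) { peers[i].handle(msg); return; }
    printf("no expert for '%s'\n", topic);
}

int main(void) {
    route("animal", "does this picture contain a cat or a dog?");
    return 0;
}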

Re: [agi] Deliberative vs Spatial intelligence

2008-04-30 Thread Russell Wallace
On Wed, Apr 30, 2008 at 8:13 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
  That is why you can't learn to multiply numbers in your head like a
  calculator (or maybe it's possible with sufficient understanding of
  learning dynamics, but was never implemented...). You unfortunately
  don't have *memory*, so it's not a von Neumann machine, it's more
  limited in its ability. You can configure the circuit, but circuit
  can't be connected to a reliable tape. You can only sort of emulate
  the tape for some of the circuits by other circuits. Overall,
  processing is composed of reactive transitions. Is this what you meant
  by 'combinatorial problems'?

I think so. If I understand you correctly, you're agreeing with me
that it's not feasible to go directly from S to a full von Neumann or
Turing machine; so you propose concentrating on S, and then building a
version of D that is limited in the way we humans are, with the idea
that, because humans are useful despite those limits, your S-oriented
AI will be useful too?

Re: [agi] Deliberative vs Spatial intelligence

2008-04-30 Thread Russell Wallace
On Wed, Apr 30, 2008 at 8:22 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
  I know the argument that once you build one AGI, making copies is
  cheap.  No, it's not.  In an organization, every member has a unique
  job, so every member needs to be trained individually.  The costs may
  be indirect, i.e. correcting the inevitable novice mistakes, but they
  are there.  This is why AGI is expensive.  Software and training aren't
  subject to Moore's Law.

*nods* Would it be accurate to say that you see the encoding of
specific knowledge as the difficult and expensive thing, then, and so
you're trying to create an environment that maximizes the resources
that can be brought to bear on it, while also providing as many
opportunities as possible for incremental progress (individual
modules) to be rewarded and reused?

Re: [agi] Deliberative vs Spatial intelligence

2008-04-30 Thread Russell Wallace
On Wed, Apr 30, 2008 at 9:40 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
  Deliberative reasoning is not at the core of the system I'm thinking
  about, but for example given an external tape 'in the environment',
  such system can easily implement a finite state machine to drive a
  UTM. Like reasoning with pencil and paper. Individual actions that
  combine into trains of deliberative reasoning are reactive and can be
  learned incrementally. In the case of a program (AI), it will probably
  be possible to attach highly efficient *external* memory that will act
  as another modality and will be useful for deliberative reasoning. In
  time, such a system can learn to use a computer and fashion new
  modalities for specific computational tasks. So, I don't see this as a
  strong limitation, but as a feature.

*nods* So basically you propose to follow in evolution's footsteps:
start with S, layer some level of D on top of it, then close the gap
by connecting up external devices to supply the rest of the D
functionality. Fair enough.

[agi] Deliberative vs Spatial intelligence

2008-04-29 Thread Russell Wallace
There's been a lot of argument (some of it from me, indeed) about what
type of intelligence is necessary for AGI. Let me take a shot at
resolving it.

Suppose we say there are two types of intelligence (not in any
rigorous sense, just as a broad classification):

Deliberative. Able to prove theorems, solve the Busy Beaver problem
for small N, write and prove properties of small functions, construct
cellular automata computers for small functions, derive small
functions from specifications, notice what it's doing, accept symbolic
heuristics to improve its efficiency, think about said heuristics etc.
Symbolic intelligence that can, in some crude sense, copy some of the
things humans can symbolically do.

Spatial. Able to perceive patterns in two or three dimensions. Can be
used, with mods, for a robot visual cortex; image recognition; given a
series of photographs of a landmark from varying viewpoints, can
derive a 3d model and backtrack that to the 2d image visible from any
other viewpoint; can pathfind units around a map in a video game; can
make much better than random guesses as to likely folds of a new
protein chain; can animate a cartoon from the description "cat sits
on mat".

I think we should be able to agree on this: AGI should ultimately have
both Deliberative and Spatial faculties. After all, humans have both,
and there are jobs needing both that are currently done by humans;
many of those jobs are boring, and humans would rather be freed to do
something more creative. So there is certainly room for AI work in
both D and S.

So we may then disagree on which should come first.

In biological evolution, S came first, of course. It was hard - likely
a hard step in the Great Filter - to make D on top of S. Still, it was
done, and anyone who thinks we should try S first, then D, is not
necessarily irrational, even though I disagree.

I have some outline ideas on how to make S, but not scalably, not in a
way that would easily generalize. So I think D should come first; and
I think I now know how to make D, in a way that would hopefully then
scale to S. I do not, of course, expect anyone except me to believe
those personal claims; but they are my reasons for believing the right
path is D, then S.

Is there a consensus at least that AGI paths fall into the two
categories of D-then-S or S-then-D?

Re: [agi] Deliberative vs Spatial intelligence

2008-04-29 Thread Russell Wallace
On Tue, Apr 29, 2008 at 10:12 AM, Bob Mottram [EMAIL PROTECTED] wrote:
 In biological terms D came from S.  If you read about the history of
  numbers, or abstract concepts such as money, they have a clear origin
  in S but eventually transcended it.  Even within the D realm, S terms
  are still used, for example "the value of the dollar is dropping like
  a stone", or phrases like "dead cat bounce".

Indeed, and so I said. Do you take that to imply we are best off
following the same path - engineering S first, then deriving D from
it? As I said, I'm inclined to disagree with that strategy, but I
don't think you're irrational for believing it.

Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-28 Thread Russell Wallace
On Mon, Apr 28, 2008 at 2:50 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
  No, ya dummy ;-) ... I wasn't criticising the Biosphere project itself!

Ah! Fair enough, I misunderstood you, then.

  I was criticising your use of this as an example of how complexity can
  be overcome in an engineered system by sheer intuition and trial and
  error. That was the context in which it was raised by Ben.

Okay, well, take any nontrivial engineered system and you'll see
complexity being overcome by intuition plus trial and error. Here are
a couple of very good posts by someone who designs microwave
electronics for a living:

http://groups.google.com/group/sci.nanotech/browse_thread/thread/ada3a83d1a284969/b713922d343e5371?lnk=stq=#b713922d343e5371

The discussion is on CAD software, but the poster goes into a lot of
detail about exactly how complexity is dealt with (essentially a
mixture of avoiding it where possible and otherwise just going ahead
and doing the hard crunching or prototyping work).
