Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-18 Thread Samantha Atkins
Richard Loosemore wrote:
> Aki Iskandar wrote:
>>
>> Hello -
>>
>> I'm new on this email list.  I'm very interested in AI / AGI - but do
>> not have any formal background at all.  I do have a degree in
>> Finance, and have been a professional consultant / developer for the
>> last 9 years (including having worked at Microsoft for almost 3 of
>> those years).
>>
>> I am extremely happy to see that there are people out there that
>> believe AGI will become a reality - I share the same belief.  Most,
>> to all, of my colleagues see AI as never becoming a reality.  Some
>> that do see intelligent machines becoming a reality - believe that it
>> is hardware, not software, that will make it so.  I believe the
>> opposite ... in that the key is in the software - the hardware we
>> have today is ample.
>>
>> The reason I'm writing is that I am curious (after watching a couple
>> of the videos on google linked off of Ben's site) as to why you're
>> using C++ instead of other languages, such as C#, Java, or Python. 
>> The latter 2, and others, do the grunt work of cleaning up resources -
>> thus allowing for more time to work on the problem domain, as well as
>> saving time in compiling, linking, and debugging.
>>
>> I'm not questioning your decision - I'm merely curious to learn about
>> your motivations for selecting C++ as your language of choice.
>>
>> Thanks,
>> ~Aki
>
> It is not always true that C++ is used (I am building my own language
> and development environment to do it, for example), but if C++ is most
> common in projects overall, that probably reflects the facts that:
>
> (a) it is most widely known, and
> (b) for many projects, it does not hugely matter which language is used.
>
> Frankly, I think most people choose the language they are already most
> familiar with.  There just don't happen to be any Cobol-trained AI
> researchers ;-).
>
> Back in the old days, it was different.  Lisp and Prolog, for example,
> represented particular ways of thinking about the task of building an
> AI.  The framework for those paradigms was strongly represented by the
> language itself.
>

What do you have in mind?  Pretty much every mechanism in any known
computer language was initially developed, and often perfected, in Lisp.
Thus it does not seem to me that Lisp was at all tied to a particular form
of program or programming, much less to some particular forms of AI.

- samantha

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-18 Thread Samantha Atkins
Eugen Leitl wrote:
> On Sat, Feb 17, 2007 at 08:24:21AM -0800, Chuck Esterbrook wrote:
>
>   
>> What is the nature of your language and development environment? Is it
>> in the same neighborhood as imperative OO languages such as Python and
>> Java? Or something "different" like Prolog?
>> 
>
> There are some very good Lisp systems (SBCL) with excellent compilers,
> rivalling C and Fortran in code quality (if you avoid common pitfalls
> like consing). Together with code and data being represented by
> the same data structure and good support of code generation by code
> (more so than any other language I've heard of) makes Lisp an evergreen
> for classical AI domains. (Of course AI is a massively parallel
> number-crunching application, so Lisp isn't all that helpful here).
>
>   
Really?  I question whether you can get anywhere near the same level of
reflection and true data <-> code equivalence in any other standard
language.  I would think this capability might be very important
especially to a Seed AI.
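To make that concrete, here is a rough sketch in Python rather than Lisp
(not anything from this thread; the expression is a made-up example) of
what treating code as ordinary data looks like in practice:

import ast

# Build an expression as a data structure rather than as text:
# x * x + 1 represented as an AST the program can inspect and rewrite.
expr = ast.Expression(
    body=ast.BinOp(
        left=ast.BinOp(left=ast.Name(id="x", ctx=ast.Load()),
                       op=ast.Mult(),
                       right=ast.Name(id="x", ctx=ast.Load())),
        op=ast.Add(),
        right=ast.Constant(value=1),
    )
)
ast.fix_missing_locations(expr)

# The "code" is ordinary data: walk it, transform it, then compile and run it.
code = compile(expr, filename="<generated>", mode="eval")
print(eval(code, {"x": 7}))   # -> 50

In Lisp this is far more direct -- the source simply is the list
structure -- but the sketch shows the general shape of the capability
being claimed.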

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-18 Thread Samantha Atkins
Eugen Leitl wrote:
> On Sun, Feb 18, 2007 at 12:40:03AM -0800, Samantha Atkins wrote:
>
>   
>> Really?  I question whether you can get anywhere near the same level of
>> reflection and true data <-> code equivalence in any other standard
>> language.  I would think this capability might be very important
>> especially to a Seed AI.
>> 
>
> 
>
> However, the AI school represented here seems to assume a seed AI (an
> open-ended agent capable of directly extracting information from its
> environment) is sufficiently simple to be specified by a team of human
> programmers, and implemented explicitly by a team of human programmers.
> This type of approach is most clearly represented by Cyc, which is
> sterile. 

Cyc was never intended to be a Seed AI, to the best of my knowledge.  If
not, then it doesn't make a very clear case against seed AI.

> The reason is the assumption that the internal architecture of human
> cognition is fully inspectable by human analyst introspection alone, and
> that furthermore the resulting extracted architecture is below the
> complexity ceiling accessible to a human team of programmers.  I believe
> both assumptions are incorrect.
>   

I don't believe that any real intelligence will be reasonably
inspectable by human analysts.  As a working software geek these last
three decades or so, I am quite aware of the limits of human
understanding of even perfectly mundane, moderately large systems of
code.  I think the primary assumption with Seed AI is that humans can
put together something that has some small basis of generalizable
learning ability and the capacity to self-improve from there.  That is
still a tall order, but it doesn't require that humans are going to
understand the code very well, especially after an iteration or two.
> There are approaches which involve stochastic methods, information
> theory and evolutionary computation which appear potentially fertile,
> though the details of the projects are hard to evaluate, since they lack
> sufficient numbers of peer-reviewed publications, source code, or even
> interactive demonstrations.
> Lisp does not particularly excel at these numerics-heavy applications,
> though e.g. Koza used a subset of Lisp sexprs with reasonably good
> results. 
It is quite possible, where needed, to write numerics-heavy applications
in Lisp that approach the speed of C.  With suitable declarations and
tuned code generation there is no reason for any significant gap.
Unlike in most languages, such tuned subsystems can be created within
the language itself fairly seamlessly.   Among other things, Lisp excels
as a DSL environment.

What I find problematic with Lisp is that it has been stuck in the
academic/specialist closet too long.  Python, for instance, has a far
greater wealth of libraries and glue for many tasks.  The Common Lisp
standard doesn't even specify a threading and IPC model.  Too much is
done differently in different implementations.   Too much has to be
created from scratch, or pulled together from the efforts of others, in
order to produce many types of practical systems as efficiently.  That I
have a problem with.

- samantha

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-18 Thread Samantha Atkins
Mark Waser wrote:
>
>> And, from a practical programmatic way of having  code generate code,
>> those are the only two ways.  The way you  mentioned - a text file -
>> you still have to call the compiler (which  you can do through the
>> above namespaces), but then you still have to  bring the dll into the
>> same appdomain and process.  In short, it is a  huge performance hit,
>> and in no way would seem to be a smooth  transition.
>
> Spoken by a man who has clearly never tried it.  I have functioning
> code that does *exactly* what I outlined.  There is no perceptible
> delay when the program writes, compiles, links, starts a new thread,
> and executes the second piece of new code (the first piece generates a
> minor delay which I attribute to loading the compiler and other tools
> into memory).
>

I have tried it.  I was writing code, and especially classes, to files,
compiling and loading them into memory back in the mid-80s.  There is no
way that opening a file, writing the code to it, closing the file,
invoking another process or several to compile and link it, and still
another set of file I/O to load it is going to be of no real performance
cost.  There is also no way it will outperform creating code directly in
memory, in a language tuned for it, and immediately evaluating it with or
without JIT machine code generation.  .NET is optimized for certain
stack-based classes of languages.  Emulating other types of languages on
top of it is not going to be as efficient as implementing them closer to
the hardware.  If the IDL allowed creating a broader class of VMs than
it apparently does, I would be much more interested.
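For what it's worth, the two paths being argued over look roughly like
this minimal Python sketch (the helper names and the timing harness are
mine and purely illustrative; absolute and relative costs will vary by
platform and runtime):

import importlib.util
import os
import tempfile
import time

SOURCE = "def f(x):\n    return x * x + 1\n"

def load_via_file(source):
    # The path described above: write the source to disk, then compile
    # and load it back in as a module.
    fd, path = tempfile.mkstemp(suffix=".py")
    with os.fdopen(fd, "w") as fh:
        fh.write(source)
    spec = importlib.util.spec_from_file_location("generated_mod", path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    os.remove(path)
    return mod.f

def load_in_memory(source):
    # Generate and evaluate the code directly in memory.
    namespace = {}
    exec(compile(source, "<generated>", "exec"), namespace)
    return namespace["f"]

for loader in (load_via_file, load_in_memory):
    start = time.perf_counter()
    f = loader(SOURCE)
    elapsed = time.perf_counter() - start
    print(loader.__name__, f(6), "%.3f ms" % (elapsed * 1000))

The file round trip pays for file I/O and a module import on top of the
compile itself, which is exactly the overhead being described above.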

> Also, even if it *did* generate a delay, this function shouldn't happen
> often enough that it is a problem and there are numerous ways around
> the delay (multi-tasking, etc).
>
How would it help you that much to do a bunch of context switching or
IPC on top of the original overhead?

- samantha

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-18 Thread Samantha Atkins
Eliezer S. Yudkowsky wrote:
>
>
> If you know in advance what code you plan on writing, choosing a
> language should not be a big deal.  This is as true of AI as any other
> programming task.
>

It is still a big deal.  You want to choose a language that allows you to
express your intent as concisely and clearly as possible with a minimum
of language-choice-induced overhead.  Ideally you want a language that
actually helps you sharpen your thoughts as you express them.  You want
the result to run at reasonable speed and to be maintainable over time.
You almost never know fully what you plan on writing, much less what it
will need to handle an iteration or two down the road.  You learn what
kind of flexibility to build in to help with inevitable change.  But the
choice of programming language can make a very large difference in how
easy it is to create and maintain that. 

- samantha

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: **SPAM** Re: [agi] Development Environments for AI (a few non-religious comments!)

2007-02-20 Thread Samantha Atkins
Joel Pitt wrote:
>
> Postgresql and Mysql both allow a large variety of computing functions
> - possibly not exceedingly fast, but then you don't have to manage
> what table rows are cached  if the whole table can't be stored in
> memory.

A database table of data is not even approximately equivalent to a
matrix of data.  Many bulk operations are basically equivalent to series
of matrix operations.  To perform these operations you need to convert
efficiently from the disk representation to a memory representation.
Outside of math formulas, aggregates and the like, databases cannot do
any function magic directly on the data.  The only other gain is having
the more complex functions on the same machine as the data.  Even the
database cache doesn't really help, because it is a database page cache
rather than something more like an equivalent matrix.  So the
above-mentioned transformation still occurs.  Some types of in-memory
databases might fare a bit better.   By all means do with relational
algebra what can be done with relational algebra, and let that work be
handled by something optimized for relational algebra.  But that is a
small subset of the algorithm space.
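A toy sketch of that conversion cost, using Python's built-in sqlite3
(the table and the little mat-vec helper are made up for illustration
only):

import sqlite3

# A hypothetical table of row vectors stored relationally.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vectors (id INTEGER, c0 REAL, c1 REAL, c2 REAL)")
conn.executemany("INSERT INTO vectors VALUES (?, ?, ?, ?)",
                 [(i, i * 1.0, i * 2.0, i * 3.0) for i in range(4)])

# Relational algebra handles selection and aggregation well...
(total,) = conn.execute("SELECT SUM(c1) FROM vectors").fetchone()

# ...but a bulk matrix operation first requires converting the page/disk
# representation into an in-memory matrix, which is the cost in question.
matrix = [list(row[1:]) for row in
          conn.execute("SELECT * FROM vectors ORDER BY id")]

def matvec(m, v):
    # Multiply matrix m by vector v once the data is in memory.
    return [sum(a * b for a, b in zip(row, v)) for row in m]

print(total, matvec(matrix, [1.0, 1.0, 1.0]))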

- samantha


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Why C++ ?

2007-03-23 Thread Samantha Atkins
On Fri, 2007-03-23 at 22:48 -0400, Ben Goertzel wrote:
> Samantha Atkins wrote:
> > Ben Goertzel wrote:
> >>
> >> Regarding languages, I personally am a big fan of both Ruby and 
> >> Haskell.  But, for Novamente we use C++ for reasons of scalability.
> > I am  curious as to how C++ helps scalability.  What sorts of 
> > "scalability"?  Along what dimensions?  There are ways that C++ does 
> > not scale so well like across large project sizes or in terms of 
> > maintainability.   It also doesn't scale that well in terms of  space 
> > requirements if the  class hierarchy gets  too deep or uses much  
> > multiple inheritance  of  non-mixin classes.   It also doesn't scale 
> > well in large team development.  So I am curious what you mean.
> >
> 
> I mean that Novamente needs to manage large amounts of data in heap 
> memory, which needs to be very frequently garbage collected according to 
> complex patterns.
> 

So all collection is being done by hand, since C/C++ have no GC
facilities.  But complex heap management could be done in most any
language where you can get at the bits.  Heap management could logically
be handled separately from whatever is using the structures to be
managed, as long as there is good enough binding between the languages.
But I see here that having a language relatively "close to the metal"
was useful. 

> We are doing probabilistic logical inference IN REAL TIME, for real time 
> embodied agent control.  This is pretty intense.  A generic GC like 
> exists in LISP or Java won't do.
> 

Lisp can, though, handle allocating large arrays in memory that are then
subdivided.  Exactly what you need can be modeled in Lisp, and the code
generation can then be tweaked to make it as efficient as needed.  It
would be a bit unusual, but doable. 
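The "allocate one large array and subdivide it yourself" idea looks
roughly like this toy Python sketch (a fixed-size-slot arena of my own
invention for illustration, not a real allocator and not anything from
Novamente):

import array

class Arena:
    # Grab one large block up front and hand out slot offsets from it,
    # reusing freed slots instead of going back to the general heap.
    def __init__(self, slot_size, slot_count):
        self.slot_size = slot_size
        self.storage = array.array("d", [0.0] * slot_size * slot_count)
        self.free_slots = list(range(slot_count))

    def alloc(self):
        if not self.free_slots:
            raise MemoryError("arena exhausted")
        return self.free_slots.pop() * self.slot_size  # offset into storage

    def free(self, offset):
        self.free_slots.append(offset // self.slot_size)

arena = Arena(slot_size=16, slot_count=1024)
off = arena.alloc()
arena.storage[off] = 3.14     # use the slot
arena.free(off)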

> Aside from C++, another option might have been to use LISP and write our 
> own custom garbage collector for LISP.  Or, to go with C# and then use 
> unsafe code blocks for the stuff requiring intensive memory management.
> 

Yes.  Similar path.  It could almost be done in Java at the byte code
level but that would arguably be more unfriendly than C++. 

> Additionally, we need real-time, very fast coordinated usage of multiple 
> processors in an SMP environment.  Java, for one example, is really slow 
> at context switching between different threads.
> 

Depending on the threading model I can see that. Clever hacking can get
around some needs for context switching but then you start stepping
beyond the things Java is good for.  Did you have much opportunity to
form a judgment about Erlang?

> Finally, we need rapid distributed processing, meaning that we need to 
> rapidly get data out of complex data structures in memory and into 
> serialized bit streams (and then back into complex data structures at 
> the other end).  This means we can't use languages in which object 
> serialization is a slow process with limited 
> customizability-for-efficiently.
> 

Lisp could excel at streaming data.  Java data streaming isn't that bad
either, as it can be customized to stream only what you wish streamed
for your specific needs, with custom readers and writers per object.  It
is relatively easy to do this custom approach in Java.  You aren't stuck
with the defaults. 
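The custom per-object reader/writer idea, sketched minimally in Python
with struct (the Node class and its 12-byte wire format are invented
purely for illustration):

import struct

class Node:
    # Toy object with a compact, hand-written wire format instead of the
    # default, more general serializer.
    FORMAT = "!id"   # network byte order: 32-bit id, 64-bit float weight

    def __init__(self, node_id, weight):
        self.node_id = node_id
        self.weight = weight

    def write(self):
        # Custom writer: only the fields actually needed on the wire.
        return struct.pack(self.FORMAT, self.node_id, self.weight)

    @classmethod
    def read(cls, data):
        node_id, weight = struct.unpack(cls.FORMAT, data)
        return cls(node_id, weight)

wire = Node(42, 0.75).write()
print(len(wire), Node.read(wire).weight)   # 12 bytes -> 0.75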

> When you start trying to do complex learning in real time in a 
> distributed multiprocessor context, you quickly realize that 
> C-derivative languages are the only viable option.   Being mostly a 
> Linux shop we didn't really consider C# (plus back when we started, .Net 
> was a lot less far along, and Mono totally sucked).
> 
> C++ with heavy use of STL and Boost is a different language than the C++ 
> we old-timers got used to back in the 90's.   It's still a large and 
> cumbersome language but it's quite possible to use it elegantly and 
> intelligently.  I am not such a whiz myself, but fortunately some of our 
> team members are.
> 

Hehehe.  That was the late 80's for me with another C++ stint from
96-99.  STL and Boost give it some of the power of Lisp but in a much
more difficult to understand and extend manner.  :-) 

I can see the choice for tight management of memory for sure.  I have
some thoughts on using C#, at least for optimizing a general cache I
devised myself.  I know less about the story of C/C++ utilization of
newer multi-core processors.  It is my understanding that most of the
compilers still suck at taking advantage of such things.  

Lisp or Java should do fine with some tweaking at interprocess streaming
of arbitrarily complex data.  

Thanks for the interesting and informative answer.  

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] How should an AGI ponder about mathematics

2007-04-23 Thread Samantha  Atkins


On Apr 23, 2007, at 2:05 PM, J. Storrs Hall, PhD. wrote:


On Monday 23 April 2007 15:40, Lukasz Stafiniak wrote:

... An AGI working with bigger numbers had better discover binary
numbers. Could an AGI do it? Could it discover rational numbers? (It
would initially believe that irrational numbers do not exist, as the
early Pythagoreans believed.) After having discovered the basic
grounding, it could be taught the more advanced things.


How many people on this list have discovered anything as fundamental
as binary numbers, I wonder?


Many I would suspect.  I learned math by ignoring most of what went on  
in junior high and early high school classes.  My school ran out of  
math to teach me by my junior year. I would look up now and then from  
my SF book once a week or so to see what was being taught.  If it was  
new I would take it, abstract it, play with the abstractions and  
generally figure out what was likely to be taught the next week or  
month.  If I saw something new I would figure out at least one way it  
could have been discovered for myself.  This kept math interesting.  I  
very much doubt I am unique in that respect around these parts.



We take a lot of stuff for granted but we *learned* almost
all of it, we didn't discover it.


I generally got less happy when I couldn't figure out a way to derive  
what was being taught.   I wasn't big on memorization or applying  
things I did not understand.



There's a lot of hubris in the notion that we, working from a
technology base that can't build an AI with the common sense of a
5-year-old, will turn around and build a system that will duplicate
3000 years of the accumulated efforts of humanity's greatest geniuses
in a year or two.


Yay for hubris!  A lot has been done throughout history by people who  
didn't know any better than to assume it was possible to do what they  
desired and  not give up.   What would it serve us to assume that  
creating at least a seed AI is impossible?


- samantha

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936


Re: [agi] Open-Source AGI

2007-05-10 Thread Samantha  Atkins


On May 10, 2007, at 6:29 PM, Benjamin Goertzel wrote:





Ben, I imagine, more or less knows the open-source truth in talking
about an AGI "Manhattan Project." But even that would be too small.
The whole world - the whole Internet - will have to be involved...

I don't really agree with this.

A Manhattan project would be awesome and would maximize odds of
success ... but I'm confident that with brilliance, a good AGI design
and a bit of luck, a small team can get to the finish line ;-)



I tend to agree.  Many hands and eyeballs are great for a project of
many relatively isolatable components whose requirements and
interactions are relatively well understood.  But AGI is pushing the
envelope tremendously and, to the degree I understand current designs
and design strategies, a set of very tightly inter-related parts needs
to be designed and built.  Many of the parts themselves, much less
their interactions, are being created and integrated out of whole
cloth.   Small, high-bandwidth, concentrated and brilliant teams are
required.  The vast majority of all programmers/hackers are not
qualified.  Even of that number, only a small subset can be formed
into a cohesive enough team for this intense a task.  If anything is
likely to be a natural cathedral rather than a bazaar, it is AGI.


- samantha

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936


Re: [agi] Open-Source AGI

2007-05-10 Thread Samantha  Atkins


On May 10, 2007, at 6:49 PM, Russell Wallace wrote:


Well there are two phases, framework and content. The framework is  
as you say: it needs to be a cathedral. The content needs to be of  
volume such that only a whole industry can create it: definitely a  
bazaar. The hard part then is designing a framework such as to allow  
content to easily flow together. Compare it to the Web: the first  
browser was created by an individual or small team, but the Web  
itself was not.


I think (could be wrong) that part of the goal of the core team is to  
create a mind that can largely navigate huge amounts of data for  
itself, something that has the basis to learn autonomously on the  
Web.  It may take a phase of a lot of input from many hands to get  
there but then again it may not.


- samantha

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936


Re: [agi] Opensource Business Model

2007-05-30 Thread Samantha Atkins

YKY (Yan King Yin) wrote:

Hi Ben and others,
 
Let's analyse the opensource vs closed-source issue in more detail...  
(limericks are not arugments!)
 
1.  I guess the biggest turn-off about opensource is that it may allow 
competitors to look at our source and steal our ideas / algorithms. 

We used to joke in closed-source companies that we could set our
competitors back several months at least by exposing our source code to
them.  There is some truth to this, in that most source code does not
easily yield up the underlying design principles or any but the most
localized algorithms.  The rest is more diffuse, spread across too much
code and too many little details.  It isn't that easy to extract.  As AI
code is some of the most sophisticated around, I find it not so
troublesome that others see the code.   Too much of the explanatory
design level is obscure once it is coded in most software languages.
Now if the code were a sophisticated set of interlocking DSLs written in
a language like Lisp, I might be a bit more worried. :-)


- samantha

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Opensource Business Model

2007-05-30 Thread Samantha Atkins

J. Andrew Rogers wrote:


On May 30, 2007, at 4:24 PM, Russell Wallace wrote:
I don't think patenting algorithms is a good thing - algorithms are 
essentially ideas, and ideas should not be treated as property.



All patents are ideas and algorithms.


Not quite.  Would you patent the quadratic equation?  How about 
Newtonian approximation?  Means of computing logarithms?  Patents are 
meant to spread the fruit of innovation while encouraging more 
innovation.  Software patents quite arguably fail at both. 

There is nothing to distinguish a classical CompSci algorithm from 
those in any other area, never mind that hardware is software is 
data.  For example, chemical process patents, which no one seems to 
object to, are indistinguishable from software algorithm patents in 
every meaningful detail both in practice and in theory.  From this I 
gather that most of the objections to so-called software patents that 
are not also applied to other "types" of patents are based on 
ignorance or veiled self-interest.


Really?  When I would have to consult a considerable patent database to
build most any middleware system that I would normally just start
designing and implementing on top of some commonly available parts?  In
what way would my work, productivity and creativity be improved by this
overhead, much less by negotiating licenses for each and every piece
that might be useful or that, more commonly, was claimed broadly enough
in the patent to cover huge families of possible solutions to similar
problems? 



- samantha

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Opensource Business Model

2007-05-31 Thread Samantha Atkins

On 5/31/07, J. Andrew Rogers <[EMAIL PROTECTED]> wrote:



On May 30, 2007, at 11:57 PM, Samantha Atkins wrote:
> J. Andrew Rogers wrote:
>> All patents are ideas and algorithms.
>
> Not quite.  Would you patent the quadratic equation?  How about
> Newtonian approximation?  Means of computing logarithms?


The nice thing about algorithms is that there are so many of them.
One of the disingenuous arguments against algorithm patents is that
it prevents people from doing things.  That is patently false.



Actually patents are commonly filed to be as broad as possible.  So a very
specific way of doing X will be filed as a patent on X.  Also some things
are so obvious that they are very likely to be invented over and over
again.  The 1-Click patent held by Amazon is a good case in point.  Why
should everyone have to license or not use something so obvious?



All

algorithm patents do is, at worst, make you use an older and less
efficient algorithm to accomplish the exact same thing or expend the
effort to invent your own version.




"All"?  Than is certainly not  "all".


If any average person can churn

out fantastic new algorithms with only nominal effort then it means
that virtually everyone in the software industry is a bloody imbecile
because virtually no one does it.  Yes, there are a lot of frivolous
patents (of all types), but there are also non-frivolous ones (of all
types) -- a separate issue.



There is also the small matter of prior art.  I did a LOT of work in
distributed objects and object persistence in the mid-80s.  But at the time
software patents were just not done, at least not by my company.  About
eight years ago I looked up patents in this area and saw that Sun and IBM
held a number of them that my much earlier work in the 80s was certainly
relevant to.  But since the company and I did not keep sufficient
records, and since I cannot afford to challenge them myself, the current
practice would restrict me in some cases from using what I myself invented
long ago.   That is not healthy.




Patents are meant to spread the fruit of innovation while
> encouraging more innovation.  Software patents quite arguably fail
> at both.


Nothing in this assertion is not equally applicable to *all*
patents.  Most non-software patents are frivolous, so an argument on
that basis would be pretty irrelevant.  As I originally stated, I'm
looking for consistency and nothing more.  Any defense of non-
software patents is equally applicable to software patents, potential
ignorance of that fact notwithstanding.



I do not agree that all patentable things are equal.  I believe that
software algorithms are much more fine-grained, inter-related and
independently discoverable than, say, new machine inventions.




A better question is this: what new applications are magically
enabled by a new algorithm, and if the effort is so trivial why
haven't you developed these algorithms?



Triviality is not remotely the only argument against software patents.

- samantha

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e

Re: [agi] Opensource Business Model

2007-06-01 Thread Samantha  Atkins


On Jun 1, 2007, at 9:16 AM, YKY (Yan King Yin) wrote:




On 6/2/07, Russell Wallace <[EMAIL PROTECTED]> wrote:
> > Robbing a bank is morally unjustified because it takes away  
wealth from people who are entitled to the wealth *legitimately*.

>
> And extorting money by software patents is morally unjustified for  
the same reason: it takes away wealth from people (the authors of  
software) who are entitled to it legitimately.


That is rather problematic.  I am generally against software patents
but it is not valid to imply that all software patents are extortion.
Nor is it valid to say that anyone is entitled to anything she might
have created had things been different than they are.  I do agree that
software patents in many cases do block areas of creation though.  But
the right being blocked is the creation, not any presumed wealth the
creation would bring.




> >
> > Your criticism of patents is based on the fact that it may be  
negative to some people.  But your ideal of not doing *anything*  
negative to *anyone* is unrealistic

>
> I do not have an ideal of not doing anything negative to anyone -  
I believe in the use of armed force in self-defense for example.  
What I do not believe in is unprovoked aggression.


But why do you accept the right of the authors of software to make  
money, yet deny the right of intellectual workers who create  
intellectual capital (such as *novel* algorithms that are *non- 
obvious*)?




Who is really denying this?  The creator of such work can exploit their
invention in many different ways without having to forbid anyone else
from inventing something similar without their permission.


- samantha

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e

Re: [agi] Opensource Business Model

2007-06-01 Thread Samantha  Atkins


On Jun 1, 2007, at 9:40 AM, YKY (Yan King Yin) wrote:




On 6/2/07, Russell Wallace <[EMAIL PROTECTED]> wrote:
> > But why do you accept the right of the authors of software to  
make money, yet deny the right of intellectual workers who create  
intellectual capital (such as *novel* algorithms that are *non- 
obvious*)?

>
> If an algorithm is accompanied by an implementation, the people  
who created it can make money from that. An algorithm that is not  
accompanied by an implementation, and which other people are not  
free to use to create their own implementations, does not constitute  
capital - on the contrary, its value is negative, because it  
prevents other people from independently inventing it.


Isn't this a definition of what constitutes "work" and "property"?   
I think there's little doubt that intellectual work should be  
considered a form of work.  And intellectual property seems to be a  
reasonable way of rewarding inventors -- think of other forms of  
patents such as Edison's patent of the light bulb, that also  
restricted others from copying;  so why should software be an  
exception to this?


Who can own an idea?  Ideas increase in value the more they are shared
and built upon.  Why limit their value artificially?   Edison's patent
was on a physical invention, not the very idea of such a thing.
Patenting software is a bit like patenting a mathematical technique.  It
decreases the applicability of the invention and places fences across
the intellectual landscape of the discipline.



  Also, please remember that patents do not completely "prevent"  
people from practicing something -- licensing is always an option.


Licensing is not mandatory, nor necessarily offered at reasonable
prices.  More and more, patents are collected so that one's company may
avoid being burdened by the software patents of others, by trading the
mutual right not to enforce one's own patents.   How much energy and
effort is wasted by such friction?  How many relatively small concerns,
individuals and projects are stalled by not having sufficient funds,
patent portfolios or legal fees to play such games?


- samantha

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e

Re: [agi] Opensource Business Model

2007-06-01 Thread Samantha  Atkins


On Jun 1, 2007, at 10:14 AM, J. Andrew Rogers wrote:




Forward some decades to the problem of writing a conventional  
relational database indexed only by scalar data. Show the  
programmers (notice the plural - we're now at the stage where teams  
are typically involved) a stack of computer science papers, any  
papers you like. How much of the job have you done for them? Very  
little. B-trees are all very fine, but "now I know B-trees" doesn't  
actually help all that much. The hard part is in the implementation  
details, in software engineering not computer science.



You call them "implementation details", but the reason we do not  
actually use vanilla B-Trees is because they have a few pathological  
characteristics e.g. poor concurrency.  After B-Trees were invented  
it took another ten years before someone figured out the trick to  
make them support high-concurrency (Lehman & Yao, 1981).  The trick  
is obvious and simple in retrospect, but it nonetheless took a  
decade for anyone to figure it out.  On the other hand, it took no  
time at all to go from B-Trees to B+Trees.


In computer science today, high-concurrency B+Tree implementations  
are among the more ubiquitous constructs, probably far more  
ubiquitous than a vanilla B-Tree.  These contain two significant  
improvements over B-Trees:  the B+Tree data structure and the Lehman- 
Yao concurrency algorithm.  Based on the evidence at hand and the  
nature of the problems, I think an argument could be made that given  
the B-Tree as a starting point the B+Tree was obvious and might be  
considered an "implementation detail" but the Lehman-Yao concurrency  
algorithm was not.




Well, in my graduate database implementation class we had to design
parts of a relational database from scratch.  My design for handling
the B+-tree concurrency was almost exactly like the Lehman-Yao
algorithm.   So it isn't all that obscure.   I see their algorithm was
published in 1981.  My class was in the fall of 1980.  Yet another
place where not knowing how the academic game is played was a bit of a
handicap.  I just figured stuff out as I needed it, for the pure joy of
it.
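For anyone curious, the heart of the Lehman-Yao trick is just a high key
plus a right-sibling link in each node, so a reader that lands on a node
after a concurrent split chases the link instead of restarting.  A toy
Python sketch of that one idea (leaves only; no locks, inserts or
interior levels, so nothing like a usable B-link tree):

class Node:
    # Leaf of a toy B-link structure: sorted keys plus the two additions
    # Lehman & Yao made -- a high key and a right-sibling link.
    def __init__(self, keys, high_key, right=None):
        self.keys = keys          # sorted keys stored in this node
        self.high_key = high_key  # upper bound of keys this node may hold
        self.right = right        # link to the right sibling

def search(node, key):
    # If the key is beyond this node's high key, a split has moved it to
    # the right: follow the right-link rather than restarting the search.
    while node is not None and key > node.high_key:
        node = node.right
    return node is not None and key in node.keys

# A node that has been split in two; the left half keeps a link to the right.
right = Node(keys=[40, 50], high_key=float("inf"))
left = Node(keys=[10, 20, 30], high_key=30, right=right)

print(search(left, 20), search(left, 50))   # True True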



I put it to you that a spatial database is like this except even  
more so. I predict that even if you could photocopy a stack of  
computer science papers from the year 2057 and put it on the desks  
of a team attempting to write a spatial database, you would have  
done only a small fraction of the job for them - most of the effort  
remains in the engineering.





I mostly disagree.  Most engineers are not all that good at
mathematical reasoning.  They aren't so great at stepping back from
the details and seeing the working abstractions and patterns behind
the details in reasonably full generality, and formally capturing and
manipulating those patterns.  Without this ability they often produce
sub-optimal, brittle results that cannot be easily adapted to somewhat
different but quite related cases.  While I am better at this than
many working software engineers, I learned a long time ago to respect
the more mathematical, abstract and theoretical work in Computer
Science.  So I mine those papers for ideas, abstractions and
approaches I missed and was often too heads-down to see.  I would pay a
lot for comp-sci papers from a few years from now, much less fifty
years out!


- samantha

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Opensource Business Model

2007-06-01 Thread Samantha  Atkins


On Jun 1, 2007, at 10:25 AM, Russell Wallace wrote:


On 6/1/07, J. Andrew Rogers <[EMAIL PROTECTED]> wrote:
This argument is neither here nor there.  Do you need CS papers from
2057 today because the problem is not an "implementation detail"
today?  You are still using "implementation detail" in a vague and
poorly defined way.

I'm talking about the process of going from a stack of CS papers to  
a working, useful program. I'm pointing out that most of the  
difficulty lies in that process, not in generating the CS papers.


Really?  So where is your stack of so easily generated papers?   I  
grant that implementation issues can be huge and very burdensome  
beyond the algorithms used of course.  But many of those issues have  
more to do with the still (STILL!) very primitive software languages  
and methodologies used and with primitive project knowledge management  
tools and with the vagaries of managing a population of implementation  
folks and other stakeholders.


One of the ugly secrets of this business is how little real innovation  
is in many highly touted products.   An uglier secret is how little  
computer science most professional software engineers actually apply  
to their work.   There is a lot of banging away on a bit of software  
so it more or less meets the spec in at least some threshold number of  
cases.


- samantha

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e

Re: [agi] Opensource Business Model

2007-06-01 Thread Samantha  Atkins


On Jun 1, 2007, at 12:48 PM, J. Andrew Rogers wrote:



On Jun 1, 2007, at 11:40 AM, Russell Wallace wrote:
A week of effort will get you a piece of test code that runs in a  
harness to prove the algorithm works. In other words, it will get  
you nothing whatsoever that is of any use by itself. Creating  
software that does something useful typically takes much more than  
six months of effort, and I assure you, it is not work that a  
monkey could do.



The prototype functions as a template that can be utilized in  
building the final product, and I would hardly call demonstrating  
something previously not possible in computer science "nothing  
whatsoever".  It is what separates yet another boring business app  
from novel new app spaces.  It requires nothing more than an  
experienced software engineer to get a production implementation.


There are experienced engineers and then there are experienced
engineers.  A few are 10x to 100x more productive than the average
experienced engineer.  Since in the real world time to implementation
is the difference between success and failure, it is not exactly true
that experienced software engineers are mere commodities of no great
importance.   I suspect you know that.



The point is that this part is pure commodity, actually solving  
algorithm problems is not.  You cannot pay X dollars to Y computer  
scientists and get a result in Z months.  For this reason virtually  
all of the economic value is in the algorithms and not the  
implementation.




No.  It is not a pure commodity.  Brains that can hold and organize the
necessary levels of complexity in a moderately sophisticated system
are not as common as people who can merely program.  I have been on
teams that were top-heavy with computer scientists and had too few
good implementers, and particularly had no architect.   If it were a
pure commodity, our industry would not be beset with its well-known
high failure and defect rates.


Most algorithm design work these days is done with the abstract  
system design context in mind out of necessity.  It is often that  
context which breaks conventional algorithms, so there is less  
"systems engineering" to it when finished than you might expect.





Yes and no.  Some of those abstract system models are quite difficult  
to implement in reality with sufficient scalability, dependability and  
other desirable motherhoods.




If we are allowed to dismiss those parts of reality that we wish to  
ignore by calling them "window dressing" and "irrelevant", then  
algorithm research is irrelevant window dressing, so let's forget  
about it.



Nonsense.  One is fungible, the other is not.  That is a distinction
with a very important economic difference.  Algorithm research has
an unbounded and unpredictable cost; systems engineering costs are
generally quite predictable.


So all those software project cost overruns come from what exactly?

 I can go to any competent software engineer and get a production  
implementation of an algorithm with well-bounded costs.  If I need a  
new algorithm, many computer scientists will never deliver anything  
useful and it could take anywhere from a month to a decade to an  
eternity to actually deliver that new algorithm even if they are  
capable in theory.  The comparative risk between algorithm R&D and  
implementation of an algorithm that already exists is separated by  
an astronomical gap, and "risk" plays a major role in economics.




Can you get well-bounded costs on entire systems?  Not really.  How  
come?







- samantha

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Open AGI Consortium

2007-06-02 Thread Samantha  Atkins


On Jun 1, 2007, at 2:33 PM, YKY (Yan King Yin) wrote:



How about some brainstorming...?

My proposal is this:

1.  People post their ideas onto a wiki and discuss them, while  
carefully keeping a record of who has said what.  Also, each person  
suggests an amount of how much the contribution is worth.  If the  
amount is outrageous people can make complaints about it.


2.  Suppose the group end up with some useful ideas / algorithms.   
Each result will be collectively owned by that result's contributors.


Lots of luck keeping that straight.



3.  Suppose someone (a developer) wants to take a result and  
implement it?  The developer will have to pay a license fee to the  
contributors, the fee being proportional to the total estimated  
worth of its "constituents".


A result?   A group of ideas and theory is not a result in my mind  
until it is successfully implemented.  As the developer would be more  
or less working for free or for far less than her normal rate I think  
it is ludicrous that she also be expected to pay for ideas to develop.




4.  Also, everyone who participates, must sign a non-disclosure and  
non-competitive agreement (NDA & NCA).  There should also be some  
way to verify the person's identity.




Non-disclosure is one thing.  But I will not promise to never branch  
out on my own using in part things I have learned.  I will not shackle  
my mind like that and certainly not without compensation.




5.  I think this scheme can work for existing AGI projects like  
Novamente.  It will not compromise the control over their ideas /  
intellectual property because of the NDA & NCA.


6.  If something is deemed patent-worthy, the patent will be  
collectively owned as in (2).  The licensing price will be set  
analogous to (2), so it won't be outrageous.


It looks like the brainstormers and idea people get some ownership but  
implementors get less than zilch as they have to pay to participate.   
Was that on purpose?




How's that?



It sucks.

- samantha

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e

Re: [agi] Open AGI Consortium

2007-06-02 Thread Samantha  Atkins


On Jun 1, 2007, at 4:07 PM, Bob Mottram wrote:



Although I'm an open source fan I don't think I would ever sign up to
the things you're proposing.  Forcing developers to pay a fee before
they use your system simply ensures that no developers will join your
project.


Yep.  Calling such a system "open source" is a bad joke.  It certainly  
can't be certified as Open Source.



 The whole saga of non-disclosure, identity verification,
anti-competitiveness and software patents I find quite nauseating, as
the saying goes "like a monstrous carbuncle on the face of a much
loved friend".  When true AGI emerges I sincerely hope that it does
not appear within the confines of this kind of restrictive system.


Yes.  It is hard to get to anything at all utopian when massively  
better technology gets applied in the service of such restriction on  
thoughts, ideas and the flow of information and creativity.




Powerful new technology concentrated into the hands of a few
individuals who exclusively monopolise its use could cause a great
deal of damage in my opinion, and hinder its wider application
especially within developing countries.


We see a lot of this today.  They become "pirates" or go with F/OSS  
solutions.


- samantha

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] Pure reason is a disease.

2007-06-06 Thread Samantha  Atkins


On Jun 5, 2007, at 9:17 AM, J Storrs Hall, PhD wrote:


On Tuesday 05 June 2007 10:51:54 am Mark Waser wrote:
It's my belief/contention that a sufficiently complex mind will be
conscious and feel -- regardless of substrate.


Sounds like Mike the computer in The Moon is a Harsh Mistress (Heinlein).
Note, btw, that Mike could be programmed in Loglan (predecessor of Lojban).

I think a system can get arbitrarily complex without being conscious --
consciousness is a specific kind of model-based, summarizing,
self-monitoring architecture.


That matches my intuitions mostly.  If the system must model itself in
the context of the domain it operates upon, and especially if it must
model perceptions of itself from the point of view of other actors in
that domain, then I think it very likely that it can become
conscious / self-aware.   It might be that a requirement to explain
itself to other self-aware beings is needed to kick this off.   I am
not sure whether some of the feral children studies lend support to
that.  If a human being, which we know (ok, no quibbles for a moment)
is conscious / self-aware, has less self-awareness without significant
interaction with other humans, then this may say something interesting
about how and why self-awareness develops.




- samantha

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e


Re: [agi] AGI Consortium

2007-06-08 Thread Samantha Atkins

Apache and its various offshoots?  Linux itself?  KDE?  JBoss and its
subprojects?  Hibernate?  None of these came from some academic thesis work
and all are wildly successful.  So I do not agree with the characterization
of Open Source.

- s

On 6/8/07, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:


On Friday 08 June 2007 08:21:28 am Mark Waser wrote:
> Opening your project up to an unreliable parade of volunteer
> contributors allows for a great, lowest-common-denominator consensus
> product. That's fine for Wikipedia, but I wouldn't count on any grand
> intellectual discourse arising therein. Same goes for most software
> developed by this method -- almost all the great open source apps are
> me-too knockoffs of innovative proprietary programs, and those that
> are original were almost always created under the watchful eye of a
> passionate, insightful overseer or organization. Firefox is actually
> Mozilla Firefox, after all.

This is basically right. There are plenty of innovative Open Source
programs out there, but they are typically some academic's thesis work.
Being Open Source can allow them to be turned into solid usable
applications, but it can't create them in the first place.

Josh

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e

Re: [agi] AGI Consortium

2007-06-08 Thread Samantha Atkins

Really, Open Source software projects almost never have a totally open
door policy on the contributions that are accepted.   There is usually a
small group that determines whether contributed changes are good enough
and fit the overall project goals and architecture well enough.

Wikipedia is one of the best innovations in information aggregation ever.  I
think many of us are very happy that it exists and use it extensively.  It
does work to filter wheat from chaff over time.

Claiming most Open Source is me-too knock-offs is simply wrong.  Apache
and many of its subprojects took the market by storm because they are
significantly better than the closed-source solutions they replaced, to
give one example among many.

You understand that Mozilla is open source, right?   Most of the
innovation we enjoy in Firefox today came long after the Netscape days
and long after lingering Netscape/AOL control.

But I don't expect any great understanding about Open Source here.   It is
not the expertise or prime interest of the group.


On 6/8/07, Mark Waser <[EMAIL PROTECTED]> wrote:


 from http://blogs.techrepublic.com.com/geekend/?p=696

Bruce Sterling: All blogs will die by 2018

   - *Date*: June 5th, 2007
   - *Blogger*: The Trivia Geek

Security expert and tech curmudgeon Bruce Sterling famously quipped at
this year's South-by-Southwest conference that "I don't think there will
be that many [blogs] around in 10 years. I think they are a passing
thing." This got the blogosphere all a-twitter (ahem), but I think
enough time has passed that we can look past this ill-worded point from
Sterling's SXSW rant and get to the real moneyline:

"You are never going to see a painting by committee that is a *great*
painting."

And he's right. This was Sterling's indictment of Wikipedia–and to the
"wisdom of crowds" fad sweeping the Web 2.0 pitch sessions of Silicon
Valley–but it's also a fair assessment of what holds most (not all) open
source enterprises back: *Lack of vision*.

Nearly all great innovation comes from a singular vision pursued doggedly
until it achieves success. Apple is a great example of this, as the company
didn't really resume its cutting-edge status (for better or worse) until
Steve Jobs returned, and gave us the iMac and iPod (for better or worse).
And say what you will about Microsoft, but it was Bill Gates singular vision
for Windows and the software industry that drove his company to its
excess…er, success.

Opening your project up to an unreliable parade of volunteer contributors
allows for a great, lowest-common-denominator consensus product. That's fine
for Wikipedia, but I wouldn't count on any grand intellectual discourse
arising therein. Same goes for most software developed by this method–almost
all the great open source apps are me-too knockoffs of innovative
proprietary programs, and those that are original were almost always created
under the watchful eye of a passionate, insightful overseer or organization.
Firefox is actually *Mozilla* Firefox, after all.
--
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e

Re: [agi] AGI Consortium

2007-06-12 Thread Samantha Atkins

No LOC-based credit, please.  That measure is totally bogus.   Ten lines
of beautifully crafted, spot-on code can be more important than 1,000
lines of more ordinary code.   The real measures are pretty subjective,
and the quality of the measure is utterly dependent on the quality and
insight of the measurer.  Any sort of averaging out of the
measuring/measurers will average out the quality of the measure.
- samantha


On 6/11/07, James Ratcliff <[EMAIL PROTECTED]> wrote:


Even if they received credit for the 7,000 lines, it would be worth very
little in the overall scheme, and any code that was not good could be marked
as "to be fixed" or optimized fairly easily (similar again to the Wiki
markups), to where that credit could be diminished...
and any obvious spam or dragging out of larger code would be removed.

Also a time delay could be in place, so no credit is applied until 3-5
people have looked over the code and a month has passed by, so any new
spammy code would fall thru the cracks.

James





-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e

Re: [agi] a2i2 news update

2007-08-05 Thread Samantha Atkins
On 7/26/07, Robert Wensman <[EMAIL PROTECTED]> wrote:
>
>  What worries me is that the founder of this company subscribes to the
> philosophy of Objectivism, and the implications this might have for the
> company's possibility at achieving friendly AI. I do not know about the rest
> of their team, but some of them use the word "rational" a lot, which could
> be a hint.
>


You do not wish the AGI to be rational?  :-)  Seriously, if you knew Peter
even lightly you would know he is in no way of the ilk of the worst of
those who may call themselves "objectivist".   He is eminently sensible,
ethical and committed.  Your remark also imho displays a very shallow
notion of objectivist philosophy.

I am well aware of that Ayn Rand, the founder of Objectivism, uses slightly
> non-standard meaning when using words like "selfishness" and "altruism", but
> her main point is that altruism is the source of all evil in the world, and
> selfishness ought to be the main virtue of all mankind. Instead of altruism
> she often also uses the word "selflessness" which better explains her
> seemingly odd position. What she essentially means is that all evil of the
> world stems from people who "give up their values, and their self" and
> thereby become mindless evildoers that respect others as little as they
> respect themselves. While this psychological statement in isolation could be
> worth noting, and might help understand some collective madness, especially
> from the last century, I still feel her philosophy is dangerous because she
> mixes up her very specific concept of "selflessness" with the
> commonly understood concept of altruism, in the sense of valuing the well
> being and happiness of others. Is this mix-up accidental or intended? In her
> novel The Fountainhead you even get the impression that she doesn't think it
> is possible to combine altruism with creativity and originality, as all
> "altruistic" characters of her book are incompetent copycats who just
> imitate others.
>

If you had actually read her works on this subject, especially "The
Virtue of Selfishness" in this case, I think you would not have a
problem like the above.



Her view of the world also seems to completely ignore another category of
> potential evil-doers: Selfish people who just do not see any problem with
> using whatever means they see fit, including violence, to achieve their
> goals. People who just do not see there is "any problem" in killing or
> torturing others. Why does she ignore this group of people, because she does
> not think they exist?
>

OK.   You obviously have no real knowledge of objectivism.



So because this philosophy is controversial, it raises some interesting
> questions about Adaptive AI's plans for friendly AI. *What values
> an objectivist would give to an AGI seems like a complete paradox to me? * 
> Would
> he make an AGI that is only obedient to its master and creator, or would he
> make an AGI system that to only cares about protecting and sustaining the
> life of itself? But in the first case, the AGI would truly become a
> selfless, and therefore evil soul in Ayn Rands very meaning, an evil soul
> that is also super intelligent.
>


If you actually understood objectivism you would understand that reason,
intelligence and ability are seen as virtues, and that real objectivists
deeply desire to see them increase regardless of whether that
manifestation is in themselves or in others.  It is not remotely about
being King of the Hill or some such nonsense.  It is not at all clear
whether a real AGI would be selfless.   You are, btw, mistaken that
selflessness per se is the essence of evil in objectivism.


On the other hand I cannot understand what selfish interest the objectivist
> AGI designer could find in creating a selfish super intelligent AGI system
> that would likely become a superior competitor? Maybe such an AGI system
> would decide, much like the fictitious Skynet, that humans are the most
> imminent threat to its survival, and make us its enemy?
>

Objectivists welcome superior ability, since all profit from greater
intelligence and productive ability in the world.  That an AGI may turn
against us is as much a concern for an objectivist as for anyone else.




I bet a strong enough AGI system could kill us even without the use
> of offensive violence in the sense Ayn Rand uses the word. I guess it just
> needs to obtain exclusive legal ownership on all the land that we need to
> live on, on all the food we need to eat, and on all the air we need to
> breathe. Then it could just kill us in self-defence because we trespass on
> its property. I know even Ayn Rand sees no moral problem in using defensive
> violence to defend material property that is being stolen.
>

Again, you do not know what you are talking about.


- samantha


Re: [agi] Timing of Human-Level .. The Uncertainty...

2006-05-10 Thread Samantha Atkins


On May 10, 2006, at 7:39 AM, [EMAIL PROTECTED] wrote:



I do believe that any society is better with a very knowledgeable AI.
That AI can do a lot of analysis for a society and take out the
Non-Rational decisions of government.


Not unless it is the government (or there is no government as you may
think of it).  Human governments are very adept at ignoring rational
arguments, science and facts they find inconvenient.


-s



[agi] ping

2006-07-05 Thread Samantha Atkins

No mail seen since 6/30.  Testing.



Re: [agi] Re: Google wins

2006-07-31 Thread Samantha Atkins


On Jul 31, 2006, at 10:21 AM, Philip Goetz wrote:



I think the 2 Google founders appear to be among the only
dot-billionaires who might put their money into AGI rather than into,
say, building a spaceship (Bezos, Carmack, that eBay guy).


Remember that Palm guy, Jeff Hawkins?   Well, maybe he is not a  
billionaire but he did alright by himself and is after AGI of sorts.


- samantha



Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Samantha Atkins


On Nov 30, 2006, at 12:21 PM, Richard Loosemore wrote:


Recursive Self Improvement?

The answer is yes, but with some qualifications.

In general RSI would be useful to the system IF it were done in such  
a way as to preserve its existing motivational priorities.




How could the system anticipate whether or not significant RSI would
lead it to question or modify its current motivational priorities?   
Are you suggesting that the system can somehow simulate an improved  
version of itself in sufficient detail to know this?  It seems quite  
unlikely.



That means:  the system would *not* choose to do any RSI if the RSI  
could not be done in such a way as to preserve its current  
motivational priorities:  to do so would be to risk subverting its  
own most important desires.  (Note carefully that the system itself  
would put this constraint on its own development, it would not have  
anything to do with us controlling it).




If the improvements increased its capabilities, and that increase led to
changes in its priorities, why would those improvements be undesirable
merely because they showed the current motivational priorities to be
lacking in some way?  Why is protecting current beliefs or motivational
priorities more important than becoming more capable, and more capable of
understanding the reality the system is immersed in?



There is a bit of a problem with the term "RSI" here:  to answer  
your question fully we might have to get more specific about what  
that would entail.


Finally:  the usefulness of RSI would not necessarily be indefinite.  
The system could well get to a situation where further RSI was not  
particularly consistent with its goals.  It could live without it.




Then are its goals more important to it than reality?

- samantha



Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Samantha Atkins


On Nov 30, 2006, at 10:15 PM, Hank Conn wrote:


Yes, now the point being that if you have an AGI and you aren't in a  
sufficiently fast RSI loop, there is a good chance that if someone  
else were to launch an AGI with a faster RSI loop, your AGI would  
lose control to the other AGI where the goals of the other AGI  
differed from yours.




Are you sure that "control" would be a high priority of such systems?



What I'm saying is that the outcome of the Singularity is going to  
be exactly the target goal state of the AGI with the strongest RSI  
curve.


The further the actual target goal state of that particular AI is  
away from the actual target goal state of humanity, the worse.




What on earth is "the actual target goal state of humanity"?   AFAIK  
there is no such thing.  For that matter I doubt very much there is or  
can be an unchanging target goal state for any real AGI.




The goal of ... humanity... is that the AGI implemented that will  
have the strongest RSI curve also will be such that its actual  
target goal state is exactly congruent to the actual target goal  
state of humanity.




This seems rather circular and ill-defined.

- samantha




Re: [agi] AGI and Deity

2007-12-26 Thread Samantha Atkins


On Dec 8, 2007, at 7:34 PM, John G. Rose wrote:


It'd be interesting, I kind of wonder about this sometimes, if an AGI,
especially one that is heavily complex-systems based, would independently
come up with the existence of some form of a deity. Different human cultures
come up with deity(s), for many reasons; I'm just wondering if it is like
some sort of mathematical entity that is natural to incompleteness and
complexity (simulation?) or is it just exclusively a biological thing based
on related limitations.


I am more curious whether a self-improving AGI is likely to attain
many characteristics associated with a deity.  That seems rather
likely, at least relative to less capable beings.  If the AGI actually
runs virtual worlds then the parallels between it and a deity become
stronger.  Who knows.  We may eventually upload into such an AGI Mind
and in some guise become One with It.





An AGI is going to be banging its head against the same limitations that we
know of, though it will find ways around them or redefine limits. Like the
speed of light: if it can't figure out a way around this, it's stuck.


Indeed.  Some form of instantaneous information transfer would be
required for unlimited growth.   If it also turned out that true time  
travel was possible then things would get really spooky.  Alpha and  
Omega.  Mind without end.


- samantha



Re: [agi] AGI and Deity

2007-12-26 Thread Samantha Atkins


On Dec 20, 2007, at 9:18 AM, Stan Nilsen wrote:


Ed,

I agree that machines will be faster and may have something  
equivalent to the trillions of synapses in the human brain.


It isn't the modeling device that limits the "level" of  
intelligence, but rather what can be effectively modeled.   
"Effectively" meaning what can be used in a real time "judgment"  
system.


Probability is the best we can do for many parts of the model.  This  
may give us decent models but leave us short of "super" intelligence.


In what way?  The limits of human probability computation to form
accurate opinions are rather well documented.  Why wouldn't a mind
that could compute millions of times more quickly and with far greater
accuracy be able to form much more complex models that were far better
at predicting future events and explaining those aspects of reality
which are its inputs?  Again we need to get beyond the [likely
religion-instilled] notion that only "absolute knowledge" is real (or
"super") knowledge.





Deeper thinking - that means considering more options doesn't it?   
If so, does extra thinking provide benefit if the evaluation system  
is only at level X?


What does this mean?  How would you separate "thinking" from the  
"evaluation system"?  What sort of "evaluation system" do you believe  
can actually exist in reality that has characteristics different from  
those you appear to consider woefully limited?





Yes, "faster" is better than slower, unless you don't have all the  
information yet.  A premature answer could be a jump to conclusion  
that   we regret in the near future. Again, knowing when to act is  
part of being intelligent.  Future intelligences may value high  
speed response because it is measurable - it's harder to measure the  
quality of the performance.  This could be problematic for AI's.


Beliefs also operate in the models.  I can imagine an intelligent  
machine choosing not to trust humans.  Is this intelligent?


If they have no more clarity than is exhibited here then yes, that is  
probably an intelligent decision.


- samantha



Re: [agi] AGI and Deity

2007-12-26 Thread Samantha Atkins


On Dec 9, 2007, at 3:05 PM, Ed Porter wrote:


John,

What I found most interesting in the article, from an AGI  
standpoint, is the evidence our brain is wired for explanation and  
to assign a theory of mind to certain types of events.  A natural  
bias toward explanation would be important for an AGI’s credit  
assignment and ability to predict.  Having a theory of minds would  
be important for any AGIs that have to deal with humans and other  
AGIs, and, in many situations, it actually makes sense to assume  
certain types of events are likely to have resulted from another  
agent with some sort of mental capacities and goals.


Why do you find Dawkins so offensive?

I have heard both Dawkins and Sam Harris preach atheism on Book TV.   
I have found both their presentations interesting and relatively  
well reasoned.  But I find them a little too certain and a little  
too close-minded, given the lack of evidence we humans have about  
the big questions they are discussing.  Atheism requires a leap of  
faith, and it requires such a leap from people who, in general,  
ridicule them.


A leap of faith is precisely what it DOES NOT require.  It does  
require a leap of faith to proclaim that there is a God.   That is  
part of their point.




I personally consider knowing whether or not there is a god and, if  
so, what he, she, or it is like way above my mental pay grade, or  
that of any AGI likely to be made within the next several centuries.


You don't know if there is an invisible pink unicorn in the room but I  
doubt you would claim it is above your mental pay grade to deny such  
without very strong evidence.   So why the reticence over something  
much more fantastic than the invisible unicorn?   This seems a very
fine question.





But I do make some leaps of faith.  As has often been said, any AI  
designed to deal with any reasonably complex aspect of the real  
world is likely to have to deal with uncertainty and will need to  
have a set of beliefs about uncertain things.


It will need a probability matrix about such things but this is not  
any sort of leap of faith.


My leaps of faith include my belief in most of the common-sense  
model of external reality my mind has created (although I know it is  
flawed in certain respects).  I find other humans speak as if they  
share many of the same common sense notions about external reality  
as I do.  Thus, I make the leap of faith that the minds of other  
humans are in many ways like my own.


What?  It is certain the minds of other humans are much like your own  
since you are of the same species.  Where is there any necessary leap  
of faith there?




Another of my basic leaps of faith is that I believe largely in the  
assembled teachings of modern science, although I am aware that many  
of them are probably subject to modification and clarification by  
new knowledge,


The fact that knowledge is contextual and grows as we gain new abilities and
facts in no way necessitates any "leap of faith" that its results to
date are valid.  To see it otherwise seems to require a very strange
epistemology, perhaps one warped by notions of religious revelation of
"absolute Truth".



just as Newtonian Physics was by the theories of relativity.  I  
believe that our known universe is something of such amazing size  
and power that it matches in terms of both scale any traditional  
notions of god.


I see no direct evidence for any spirit beyond mankind (and perhaps  
other possible alien intelligences) that we can pray to and that can  
intervene in the computation of reality in response to such  
prayers.  But I see no direct evidence to the contrary  -- just a  
lack of evidence.


No evidence means that you have no rational basis for entertaining  
such a possibility whatsoever.   So what are you doing exactly?  Why  
would such a powerful Being care to rearrange parts of the universe  
due to your pleadings anyway?


  I do pray on occasion.  Though I do not know if there is a God  
external to human consciousness that can understand or that even  
cares about human interests, I definitely do believe most of us,  
myself included, underestimate the power of the human spirit that  
resides in each of us.


How does underestimating the human spirit lead you to pray to you know
not what, which you doubt exists or is interested, and for which you have
utterly no evidence?  Are you hedging in some odd way, or what?


  And I think as a species we are amazingly suboptimal at harnessing  
the collective power of our combined human spirits.


So?  How does this connect?

- s


Re: [agi] AGI and Deity

2007-12-26 Thread Samantha Atkins


On Dec 10, 2007, at 6:29 AM, Mike Dougherty wrote:


On Dec 10, 2007 6:59 AM, John G. Rose <[EMAIL PROTECTED]> wrote:
Dawkins trivializes religion from his comfortable first world  
perspective ignoring the way of life of hundreds of millions of  
people and offers little substitute for what religion does and has  
done for civilization and what has came out of it over the ages.  
He's a spoiled brat prude with a glaring self-righteous desire to  
prove to people with his copious superficial factoids that god  
doesn't exist by pandering to common frustrations. He has little  
common sense about the subject in general, just his



Wow.  Nice to see someone take that position on Dawkins.  I'm  
ambivalent, but I haven't seen many rational comments against him  
and his views.


Wow, you consider the above remotely rational?



Re: [agi] AGI and Deity

2007-12-28 Thread Samantha Atkins


On Dec 28, 2007, at 5:34 AM, John G. Rose wrote:


Well I shouldn't berate the poor dude... The subject of rationality is
pertinent though, as the way that humans deal with the unknown involves
irrationality, especially in relation to deitical belief establishment.
Before we had all the scientific instruments and methodologies, irrationality
played an important role. How many AGIs have engineered irrationality as
functional dependencies? Scientists and computer geeks sometimes overly
apply rationality in irrational ways. The importance of irrationality
perhaps is underplayed, as before science, going from primordial sludge to
the age of reason was quite a large percentage of man's time spent in
existence... and here we are.


Methinks there is no clear notion of "rationality" or "rational" in
the above paragraph.  Thus I have no idea what you are actually
saying.  Rational is not synonymous with science.  What forms of
irrationality do you think have a place in an AGI, and why?  What does
the percentage of time supposedly spent in some state have to do with
the importance of such a state, especially with respect to an AGI?


- samantha



Re: [agi] AGI and Deity

2007-12-29 Thread Samantha Atkins


On Dec 26, 2007, at 11:56 AM, Charles D Hixson wrote:


Samantha Atkins wrote:


On Dec 10, 2007, at 6:29 AM, Mike Dougherty wrote:

On Dec 10, 2007 6:59 AM, John G. Rose <[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED] 
>> wrote:


   Dawkins trivializes religion from his comfortable first world
   perspective ignoring the way of life of hundreds of millions of
   people and offers little substitute for what religion does and
   has done for civilization and what has came out of it over the
   ages. He's a spoiled brat prude with a glaring self-righteous
   desire to prove to people with his copious superficial factoids
   that god doesn't exist by pandering to common frustrations. He
   has little common sense about the subject in general, just his


Wow.  Nice to see someone take that position on Dawkins.  I'm  
ambivalent, but I haven't seen many rational comments against him  
and his views.


Wow, you consider the above remotely rational?
A reasonable point, but Dawkins *does* frequently engage in  
"premature certainty", at least from my perspective.  I would find  
him less offensive than the theistic preachers if he weren't making  
pronouncements based on his authority as a scientist.


I don't agree he is doing anything wrong or sleazy.  He is a scientist  
but his arguments are based on reason and pointing out religious  
absurdities and dangers.   As a scientist he also points out that  
science does explain many things without dogma that religion claims to  
explain but does not.   That all seems perfectly legit to me.


 He is a good scientist, and I respect him in the realm of biology  
and genetics.  When he delves into psychology and religion I feel  
like he is using his authority in one area to bolster his opinions  
in another area.


I disagree.  This is precisely what I don't see him doing.

If he were to make similar pronouncements for or against negative  
energy, people would be appalled, and he's just as out of his field  
in religion.


I don't agree that only specialists should speak about religion or its
place in modern society.  Also, he is speaking up in favor of a
naturalistic and religion-free world view, which I think is a very
good thing to have some active proponents for.  Religion has been
treated with kid gloves for much too long.  A good airing out of the
odious aspects of religion is long overdue.  If it does contain
"eternal verities" then they will survive.  But much rot can and
should be disposed of.


Unfortunately, so is everyone else.  So he's got as much right to  
his opinion as anyone else, but no more.   Ditto for Billy Graham,
the Pope, or any other authority you might cite.


So who would you consider qualified?  Or is it just a pointless  
subject?  If so shouldn't someone at least be bothered to say so in  
the face of so many claiming it is the only important subject?




People don't usually even bother to use well defined terms, so  
frequently you can't even tell whether they are arguing or  
agreeing.  When I'm feeling cynical I feel this is on purpose, so  
that they can pick and choose their allies based on expediency.


When the terms are murky but claimed as infallible certainties  
overriding all else someone had best speak against them.


 Clearly much of what is passed off as religious doctrine is  
political expediency, and has no value whatsoever WRT arguments  
about truth.


So Dawkins is less offensive than most...but nearly equally wrong- 
headed.  OTOH, he's probably not lying about what his real beliefs  
are.  He has that over most preachers.




I don't agree he is equally wrong-headed as he actually bothers to  
question his beliefs and is open to discussion.  This is very  
refreshing compared to most religious folks I have dealt with.   He  
actually has reason and evidence for his positive beliefs.   Again  
this is a large improvement.


I also hold with a naturalistic view although I think "nature" has  
quite a few surprises up "her" sleeve yet.   In any event I don't  
think we will be "in Kansas" for a great deal longer.


- samantha



Re: [agi] AGI and Deity

2007-12-29 Thread Samantha Atkins


On Dec 26, 2007, at 7:21 AM, Stan Nilsen wrote:


Samantha Atkins wrote:




In what way?  The limits of human probability computation to form  
accurate opinions are rather well documented.  Why wouldn't a mind  
that could compute millions of times more quickly and with far  
greater accuracy be able to form much more complex models that were  
far better at predicting future events and explaining those aspects  
of reality with are its inputs?Again we need to get beyond the  
[likely religion instilled] notion that only "absolute knowledge"  
is real (or "super") knowledge.


Allow me to address what I think the questions are (I'll paraphrase):

Q1. in what way are we going to be "short" of super intelligence?

resp:  The simple answer is that the most intelligent of future  
intelligences will not be able to make decisions that are clearly  
superior to the best of human judgment.  This is not to say that  
weather forecasting might not improve as technology does, but meant  
to say that predictions and decisions regarding the "hard" problems  
that fill reality, will remain hard and defy the intelligentsia's  
efforts to fully grasp them.


This is a mere assertion.  Why won't such computationally much more  
powerful intelligences make better decisions than humans can or will?





Q2. why wouldn't a mind with characteristics of ... be able to form  
more complex models?


resp:  By "more complex" I presume you mean having more "concepts"  
and "relevance" connections between concepts.  If so, I submit that  
wikipedia estimate of synapse of the human brain at 1 to 5  
quadrillion is major complexity, and if all those connections were  
properly tuned, that is awesome computing.  Tuning seems to be the  
issue.




I mean having more active data, better memory, and tremendously more
accurate and powerful computation.  How complex our brain is at the
synaptic level has not all that much to do with how complex a model we
can hold in our awareness and manipulate accurately.  We have no way
of "tuning the mind", and you would likely get a biological computing
vegetable if you could.  A great deal of our brain is designed for, and
supports, functions that have nothing to do with modeling or abstract
computation.



Q3 why wouldn't a mind with characteristics of ... be able to build  
models that "are far better at predicting future events"?


resp:  This is very closely related to the limits of intelligence,  
but not the only factor contributing to intelligence.  Predictable  
events are easy in a few domains, but are they an abundant part of  
life? Abundant enough to say that we will be able to make "super"  
predictions?  Billions of daily decisions are made, and any one of  
them could have a butterfly effect.




Not really, and it ignores the actual question.  If a given set of
factors of interest is inter-related with a larger number of
variables than humans can deal with, then an intelligence that can work
with such more complex inter-dependencies will make better decisions
in those areas.  We already have expert systems that make better
decisions more dependably in specialized areas than even most human
experts in those domains.  I see no reason to expect this to decrease
or hit a wall.  And this is just using weak AI.


Q4 why wouldn't a mind... be far better able to explain "aspects of  
reality"?


resp:  may I propose a simple exercise?  Consider yourself to be  
Bill Gates in philanthropic mode (ready to give to the world.)  Make  
a few decisions about how to do so, then explain why you chose the  
avenue you took.  If you didn't delegate this to committee, would  
you be able to explain how the checks you wrote were the best  
choices in "reality"?




This is not relevant to the question at hand.  Do you think an
intelligence with greater memory, computational capacity and vastly
greater speed can keep track of more data, and generate better
hypotheses to explain the data, along with better tests and refinements
of those hypotheses?  I think the answer is obvious.








Deeper thinking - that means considering more options doesn't it?   
If so, does extra thinking provide benefit if the evaluation  
system is only at level X?



What does this mean?  How would you separate "thinking" from the  
"evaluation system"?  What sort of "evaluation system" do you  
believe can actually exist in reality that has characteristics  
different from those you appear to consider woefully limited?


Q5 - what does it mean, or how do you separate thinking from an  
evaluation system?


resp:  Simple example in two statements:
1.  Apple A is bigger than Apple B.
2.  Apples are better than oranges.

Does it matter how much you know about apples and oranges?  Will  
deep thinki

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-20 Thread Samantha Atkins


On Jan 19, 2008, at 5:24 PM, Matt Mahoney wrote:


--- "Eliezer S. Yudkowsky" <[EMAIL PROTECTED]> wrote:




http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all

Turing also committed suicide.


In his case I understand that the British government saw fit to  
sentence him to heavy hormonal medication because they couldn't deal  
with the fact that he was gay.  Arguably that unhinged his libido and  
other aspects of his psychology, was very upsetting and set up his  
suicide.   In his case I think he was slowly murdered by intolerance  
backed by force of law and primitive medicine.





Building a copy of your mind raises deeply troubling issues.   
Logically, there
is no need for it to be conscious; it only needs to appear to others
to be
conscious.  Also, it need not have the same goals that you do; it is  
easier to
make it happy (or appear to be happy) by changing its goals.   
Happiness does
not depend on its memories; you could change them arbitrarily or  
just delete
them.  It follows logically that there is no reason to live, that  
death is

nothing to fear.



Those of us who have meditated a bit (and/or experimented with
consciousness in other ways in our youth) are aware of how much of our
vaunted self can be seen as construct and phantasm.  Rarely does
seeing that alone drive someone over the edge.


Of course your behavior is not governed by this logic.  If you were  
building
an autonomous robot, you would not program it to be happy.  You  
would program
it to satisfy goals that you specify, and you would not allow it to  
change its

own goals, or even to want to change them.


That would depend greatly on how deeply "autonomous" I wanted it to be.


 One goal would be a self
preservation instinct.  It would fear death, and it would experience  
pain when
injured.  To make it intelligent, you would balance this utility  
against a
desire to explore or experiment by assigning positive utility to  
knowledge.
The resulting behavior would be indistinguishable from free will,  
what we call

consciousness.



I don't think simply avoiding death or injury as counterposed with  
exploring and experimenting is sufficient to arrive at what we  
generally term free will.



This is how evolution programmed your brain.  Your assigned  
supergoal is to

propagate your DNA, then die.  Understanding AI means subverting this
supergoal.



That is a bit blunt, and very inaccurate if seen as analogous to giving
goals to an AI.  Besides, this is not an "assigned" supergoal.  It is
just the fitness function applied to a naturally occurring, wild GA.
There is no reason to read more into it than that.


In http://www.mattmahoney.net/singularity.html I discuss how a  
singularity
will end the human race, but without judgment whether this is good  
or bad.

Any such judgment is based on emotion.


Really?  I can think of arguments why this would be a bad thing  
without even referencing the fact that I am human and do not wish to  
die.   That wish is not equivalent to an emotion if you consider it,  
as you appear to have done above, as one of your deepest goals.  Goals
per se do not equate to emotions.


- samantha



Re: [agi] Wozniak's defn of intelligence

2008-02-10 Thread Samantha Atkins
Personally I would rather shoot for a world where the ever present 
nano-swarm saw that I wanted a cup of good coffee and effectively 
created one out of thin air on the spot, cup and all.  Assuming I still 
took pleasure in such archaic practices and ways of changing my internal 
state of course. :-)


I am not well qualified to give a good guess on the original question.
But given the intersection of current progress in general environment
comprehension and navigation, better robotic bodies, common sense databases,
current task training by example, and guesses on learning algorithm
advancement, I would be surprised if a robot with such ability were more
than a decade out.


- samantha



Re: [agi] Wozniak's defn of intelligence

2008-02-11 Thread Samantha Atkins

Bob Mottram wrote:

On 10/02/2008, Matt Mahoney <[EMAIL PROTECTED]> wrote:
  

It seems we have different ideas about what AGI is.  It is not a product that
you can make and sell.  It is a service that will evolve from the desire to
automate human labor, currently valued at $66 trillion per year.



Yes.  I think the best way to think about the sort of robotics that we
can reasonably expect to see in the near future is as physical
artifacts which provide a service.  Most robotics intelligence will be
provided as remotely hosted services, because this means that you can
build the physical machine very cheaply with minimal hardware onboard,
and also to a large extent make it future-proof.
I can see this for managing the download/installation of capabilities 
with periodic feedback of experience.   It is less likely that 
centralized systems would effectively teleoperate large numbers of 
remote robots.   The bandwidth and complexity would go up rapidly.  


  It also enables the
kinds of "collective subconscious" which Ben has talked about in the
context of Second Life agents.  As more computational intelligence
comes online a dumb robot just subscribes to the new service (at a
cost to the user, of course) 
What for?  It may be part of the selling point of general robotics that 
your unit gains abilities at no additional charge over time. 


and with no hardware changes it's
suddenly smarter and able to do more stuff.
  
Ugly things like Sarbanes-Oxley accounting rules could come into play
limiting what sorts of mods are allowed or how they are priced. 


- samantha



Re: [agi] the uncomputable

2008-06-19 Thread Samantha Atkins

Abram Demski wrote:

On Wed, Jun 18, 2008 at 9:54 AM, Benjamin Johnston
<[EMAIL PROTECTED]> wrote:
[...]
  

In any case, this whole conversation bothers me. It seems like we're
focussing on the wrong problems; like using the Theory of Relativity to
decide on an appropriate speed limit for cars in school zones. If it could
take 1,000 years of thought and creativity to go from BB(n) to BB(n+1) for
some n, we're talking about problems of an incredible scale, far beyond what
most of us have in mind for our first prototypes. A challenge with the busy
beaver problem is that when n becomes big enough, you start being able to
encode long-standing and very difficult mathematical conjectures.

-Ben



My point is simply that an AGI should be able to think about such
concepts, like we do. It doesn't need to solve them. In this sense I
think it is a fundamental concern: how is it possible to have a form
of knowledge representation that can in principle capture all ideas a
human might express? 



Intuition suggests that there should be a simple
sufficient representation, like 1st-order logic. But 1st-order logic
isn't enough, and neither are 2nd-order logics, 3rd order...

  



Well, what exactly are the constraints you wish to place on "capture"?
Clearly humans can express the ideas, so in some sense they are trivially
captured (say, as text and graphics).  :-)
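
(For reference, since the thread takes it for granted: the busy beaver
function being discussed is standardly defined, for n-state, 2-symbol Turing
machines started on a blank tape, as

  \mathrm{BB}(n) \;=\; \max\{\,\text{number of 1s left on the tape by any such machine that eventually halts}\,\}

with a common variant counting maximum steps instead of 1s.  Either version
grows faster than any computable function, which is part of why, as noted
above, large enough n lets you encode long-standing mathematical conjectures.)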


- samantha






Re: [agi] Re: AI isn't cheap

2008-09-11 Thread Samantha Atkins


On Sep 9, 2008, at 7:54 AM, Matt Mahoney wrote:


--- On Mon, 9/8/08, Steve Richfield <[EMAIL PROTECTED]> wrote:
On 9/7/08, Matt Mahoney <[EMAIL PROTECTED]> wrote:

The fact is that thousands of very intelligent people have been  
trying
to solve AI for the last 50 years, and most of them shared your  
optimism.



Unfortunately, their positions as students and professors at various
universities have forced almost all of them into politically correct
paths, substantially all of which lead nowhere, for otherwise they  
would

have succeeded long ago. The few mavericks who aren't stuck in a
university (like those on this forum) all lack funding.


Google is actively pursuing AI and has money to spend. If you have  
seen some of their talks, you know they are pursuing some basic and  
novel research.


Google to the best of my knowledge is pursuing some areas of narrow
AI.  I do not believe they are remotely after AGI.






Perhaps it would be more fruitful to estimate the cost of  
automating the
global economy. I explained my estimate of 10^25 bits of memory,  
10^26

OPS, 10^17 bits of software and 10^15 dollars.


You want to replicate the work currently done by 10^10 human brains.


Hmm.  Actually probably only some 10^6 of them at most are doing  
anything much worth replicating.  :-)


A brain has 10^15 synapses. A neuron axon has an information rate of  
10 bits per second. As I said, you can argue about these numbers but  
it doesn't matter much. An order of magnitude error only changes the  
time to AGI by a few years at the current rate of Moore's Law.


Software is not subject to Moore's Law so its cost will eventually  
dominate.


So software that creates software may be a high-payoff subtask.

A human brain has about 10^9 bits of knowledge, of which probably  
10^7 to 10^8 bits are unique to each individual.


How much of this uniqueness is little more than variations on a much  
smaller number of themes and/or irrelevant to the task?


That makes 10^17 to 10^18 bits that have to be extracted from human  
brains and communicated to the AGI.


What for?  That seems like a very slow path that would pollute your  
AGI with countless errors and repetition.


This could be done in code or formal language, although most of it  
will probably be done in natural language once this capability is  
developed.


Natural languages are ridiculously slow and ambiguous.  There is no  
way the 10^7 guesstimated unique bits per individual will ever get  
encoded in natural language anyway (or much of anything else other  
than its encoding in those brains).


Since we don't know which parts of our knowledge is shared, the most  
practical approach is to dump all of it and let the AGI remove the  
redundancies.


Actually, of the knowledge the AGI needs we have pretty good ideas of  
how much is shared.


This will require a substantial fraction of each person's life time,  
so it has to be done in non obtrusive ways, such as recording all of  
your email and conversations (which, of course, all the major free  
services already do).


What exactly is your goal?  Are you attempting to simulate all of  
humankind?   What for when the real thing is up and running?  If you
want uploads there are more direct possible paths after the AGI has  
perfected some crucial technologies.






The cost estimate of $10^15 comes by estimating the world GDP ($66  
trillion per year in 2006, increasing 5% annually) from now until we  
have the hardware to support AGI. We have the option to have AGI  
sooner by paying more. Simple economics suggests we will pay up to  
what it is worth.


Why believe that the real productive intellectual output of the entire  
human world is anywhere close to or represented by the world GDP?   It  
is not likely that we need to download the full contents of all human  
brains including the huge part that is mere variation on human primate  
programming to effectively meet and exceed this productive  
intellectual output.  I find this method of estimating costs utterly  
unconvincing.
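
For concreteness, here is a minimal sketch (Python) of the back-of-envelope
arithmetic being quoted above; every figure in it is one of Matt's stated
assumptions, not a measurement or an endorsement:

    # All inputs are the assumptions quoted above.
    brains      = 1e10   # number of human brains to replicate
    synapses    = 1e15   # synapses per brain
    axon_rate   = 10     # bits per second per axon
    unique_bits = 1e7    # knowledge bits unique to each individual (low end)

    print("memory:   %.0e bits" % (brains * synapses))              # ~1e25
    print("speed:    %.0e OPS"  % (brains * synapses * axon_rate))  # ~1e26
    print("software: %.0e bits" % (brains * unique_bits))           # ~1e17

    # Cumulative world GDP from 2006 ($66e12/year, growing 5% annually).
    def cumulative_gdp(years, gdp0=66e12, growth=0.05):
        return gdp0 * ((1 + growth) ** years - 1) / growth

    print("cost: %.1e to %.1e dollars" % (cumulative_gdp(12), cumulative_gdp(25)))
    # roughly 1e15 over a bit more than a decade, ~3e15 over 25 years

The arithmetic reproduces the quoted orders of magnitude easily enough;
whether cumulative world GDP is the right way to cost the project is exactly
what I am questioning above.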


- samantha





Re: [agi] Artificial humor

2008-09-11 Thread Samantha Atkins


On Sep 10, 2008, at 12:29 PM, Jiri Jelinek wrote:

On Wed, Sep 10, 2008 at 2:39 PM, Mike Tintner <[EMAIL PROTECTED] 
> wrote:

Without a body, you couldn't understand the joke.


False. Would you also say that without a body, you couldn't understand
3D space ?


It depends on what is meant by, and the value of, "understand 3D  
space".   If the intelligence needs to navigate or work with 3D space  
or even understand intelligence whose very concepts are filled with 3D  
metaphors, then I would think yes, that intelligence is going to need  
at least simulated detailed  experience of 3D space.


- samantha





Re: [agi] META: A possible re-focusing of this list

2008-10-19 Thread Samantha Atkins
This sounds good to me.  I am much more drawn to topic #1.  Topic #2 I
have seen discussed recursively, and in dozens of variants, in multiple
places.  The only thing I will add to topic #2 is that I very seriously
doubt current human intelligence, individually or collectively, is
sufficient to address, meaningfully resolve, or even crisply articulate
such questions.  Much more is accomplished by actually "looking into
the horse's mouth" than by philosophizing endlessly.


- samantha


Ben Goertzel wrote:


Hi all,

I have been thinking a bit about the nature of conversations on this list.

It seems to me there are two types of conversations here:

1)
Discussions of how to design or engineer AGI systems, using current 
computers, according to designs that can feasibly be implemented by 
moderately-sized groups of people


2)
Discussions about whether the above is even possible -- or whether it 
is impossible because of weird physics, or poorly-defined special 
characteristics of human creativity, or the so-called "complex systems 
problem", or because AGI intrinsically requires billions of people and 
quadrillions of dollars, or whatever


Personally I am pretty bored with all the conversations of type 2.

It's not that I consider them useless discussions in a grand sense ... 
certainly, they are valid topics for intellectual inquiry.  

But, to do anything real, you have to make **some** decisions about 
what approach to take, and I've decided long ago to take an approach 
of trying to engineer an AGI system.


Now, if someone had a solid argument as to why engineering an AGI 
system is impossible, that would be important.  But that never seems 
to be the case.  Rather, what we hear are long discussions of peoples' 
intuitions and opinions in this regard.  People are welcome to their 
own intuitions and opinions, but I get really bored scanning through 
all these intuitions about why AGI is impossible.


One possibility would be to more narrowly focus this list, 
specifically on **how to make AGI work**.


If this re-focusing were done, then philosophical arguments about the 
impossibility of engineering AGI in the near term would be judged 
**off topic** by definition of the list purpose.


Potentially, there could be another list, something like 
"agi-philosophy", devoted to philosophical and weird-physics and other 
discussions about whether AGI is possible or not.  I am not sure 
whether I feel like running that other list ... and even if I ran it, 
I might not bother to read it very often.  I'm interested in new, 
substantial ideas related to the in-principle possibility of AGI, but 
not interested at all in endless philosophical arguments over various 
peoples' intuitions in this regard.


One fear I have is that people who are actually interested in building 
AGI, could be scared away from this list because of the large volume 
of anti-AGI philosophical discussion.   Which, I add, almost never has 
any new content, and mainly just repeats well-known anti-AGI arguments 
(Penrose-like physics arguments ... "mind is too complex to engineer, 
it has to be evolved" ... "no one has built an AGI yet therefore it 
will never be done" ... etc.)


What are your thoughts on this?

-- Ben




On Wed, Oct 15, 2008 at 10:49 AM, Jim Bromer <[EMAIL PROTECTED] 
> wrote:


On Wed, Oct 15, 2008 at 10:14 AM, Ben Goertzel <[EMAIL PROTECTED]
> wrote:
>
> Actually, I think COMP=false is a perfectly valid subject for
discussion on
> this list.
>
> However, I don't think discussions of the form "I have all the
answers, but
> they're top-secret and I'm not telling you, hahaha" are
particularly useful.
>
> So, speaking as a list participant, it seems to me this thread
has probably
> met its natural end, with this reference to proprietary
weird-physics IP.
>
> However, speaking as list moderator, I don't find this thread so
off-topic
> or unpleasant as to formally kill the thread.
>
> -- Ben

If someone doesn't want to get into a conversation with Colin about
whatever it is that he is saying, then they should just exercise some
self-control and refrain from doing so.

I think Colin's ideas are pretty far out there. But that does not mean
that he has never said anything that might be useful.

My offbeat topic, that I believe that the Lord may have given me some
direction about a novel approach to logical satisfiability that I am
working on, but I don't want to discuss the details about the
algorithms until I have gotten a chance to see if they work or not,
was never intended to be a discussion about the theory itself.  I
wanted to have a discussion about whether or not a good SAT solution
would have a significant influence on AGI, and whether or not the
unlikely discovery of an unexpected breakthrough on SAT would serve as
rational evid

Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-19 Thread Samantha Atkins

Matt Mahoney wrote:

--- On Tue, 10/14/08, Charles Hixson <[EMAIL PROTECTED]> wrote:

  

It seems clear that without external inputs the amount of
improvement 
possible is stringently limited.  That is evident from
inspection.  But 
why the "without input"?  The only evident reason
is to ensure the truth 
of the proposition, as it doesn't match any intended
real-world scenario 
that I can imagine.  (I've never considered the
"Oracle AI" scenario [an 
AI kept within a black box that will answer all your
questions without 
inputs] to be plausible.)



If input is allowed, then we can't clearly distinguish between self improvement 
and learning. Clearly, learning is a legitimate form of improvement, but it is 
not *self* improvement.

What I am trying to debunk is the perceived risk of a fast takeoff singularity 
launched by the first AI to achieve superhuman intelligence. In this scenario, 
a scientist with an IQ of 180 produces an artificial scientist with an IQ of 
200, which produces an artificial scientist with an IQ of 250, and so on. I 
argue it can't happen because human level intelligence is the wrong threshold. 
There is currently a global brain (the world economy) with an IQ of around 
10^10, and approaching 10^12.


Oh man.  It is so tempting in today's economic morass to point out the
obvious stupidity of this purported super-super-genius.  Why would you
assign such an astronomical intelligence to the economy?  Even from the
POV of the best of Austrian micro-economic optimism, it is not at all
clear that billions of minds of human-level IQ interacting with one
another can be said to produce some such large exponential of the
average human IQ.  How much of the advancement of humanity is the
result of a relatively few exceptionally bright minds rather than the
billions of lesser intelligences?  Are you thinking more of the entire
cultural environment rather than specifically the economy?



- samantha





Re: AW: [agi] Re: Defining AGI

2008-10-19 Thread Samantha Atkins
Hmm.  After the recent discussion it seems this list has turned into the 
"philosophical musings related to AGI" list.   Where is the AGI 
engineering list?


- samantha





Re: [agi] Cloud Intelligence

2008-10-29 Thread Samantha Atkins

John G. Rose wrote:


Has anyone done some analysis on cloud computing, in particular the 
recent trend and coming out of clouds with multiple startup efforts in 
this space? And their relationship to AGI type applications?


 


Or is this phenomena just geared to web server farm resource grouping?

 

I suppose that it is worth delving into... at least evaluating. But my 
first thoughts are that the hardware nodes have interrelationships 
that require compatibility layers for service offerings versus custom
clusters hand tweaked for app specific - AGI in this case, 
optimizations and caterings.


 

From playing around a little in the Amazon cloud you can do anything 
you can do on a standard TCP/IP network of off the shelf boxes.   
Granted you can't hook up a faster network as you certainly could in 
your own cluster.  But it still seems pretty intriguing.


What happens though over time is that the cloud generalization 
substrate made for software and competitive efficiencies eventually 
come close to or exceed the abilities of the hand developed and 
tweaked. That is the problem - determining whether to wait, pay, or to 
develop a custom solution.


Well, most of us have no choice but to do whatever we can, as soon as we
can, on top of free/cheap but relatively plentiful resources.


 

Isn't software development annoying because of this? Big guys like MS 
have the umph to shrug off the little guys using their development 
resource power. Sometimes the only choice is to eat dust and like it. 
Suck up the dust, it's nutritional silicon value is there, feed off of 
it, the perpetuity of a naked quartz lunch.


 

Actually I think software is very exciting, and have for 30 years, because
the "little guy" can, and often does, come up with something on a relative
shoestring that blows MS out of the water in some market that MS often
didn't even see coming.


- samantha






Re: [agi] virtual credits again

2008-10-29 Thread Samantha Atkins

YKY (Yan King Yin) wrote:

Hi Ben and others,

After some more thinking, I decide to try the virtual credit approach afterall.

Last time Ben's argument was that the virtual credit method confuses
for-profit and charity emotions in people.  At that time it sounded
convincing, but after some thinking I realized that it is actually
completely untrue.  My approach is actually more unequivocally
for-profit, and Ben's accusation actually applies to OpenCog's stance
more aptly.  I'm afraid OpenCog has some ethical problems by
straddling between for-profit and charity.  For example:  why do you
need funding to do charity?  If you want to do charity why not do it
out of your own pockets?  
Why do you think people create foundations?  Or for that matter why do 
they seek funding for normal businesses, even open source based ones, 
while starting up on nearly nothing except what change they found in the 
lint of their pockets and enthusiasm?   There is nothing impure about
seeking funding.  Funding allows you to go further faster.



Why use a dual license if the final product
is supposed to be free for all?  etc.

  
I am not sure you understand dual licensing.  Typically, although I
can't speak for this case, the source is completely open and all
non-commercial users can do whatever they want gratis.  Commercial users
pay a fee for using it commercially.  This is one way to have all the
benefits of open source, plus support the effort and even turn a healthy
profit.  MySQL is (or was) a well-known example of such licensing.


- samantha





Re: [agi] Walker Lake

2010-08-03 Thread Samantha Atkins

Matt Mahoney wrote:

Steve Richfield wrote:
> How about an international ban on the deployment of all unmanned and 
automated weapons?
 
How about a ban on suicide bombers to level the playing field?


> 1984 has truly arrived.

No it hasn't. People want public surveillance.


Guess I am not people then.  Actually I think surveillance is inevitable
given current and all-but-certain future tech.  However, I recognize
that human beings today, and especially their governments, are not
remotely ready for it.  To be ready for it, at the very least the State
would have to consider a great number of things none of its business to
attempt to legislate for or against.  As it is, with the current
incredible number of arcane laws on the books, it would be very easy to
see the already ridiculously large prison population of the US double.
Also, please note that full surveillance means no successful rebellion,
no matter how bad the powers that be become and how ineffectual the
means left legal to change things are.  Ever.


It is also necessary for AGI. In order for machines to do what you 
want, they have to know what you know.


It is not necessary to have every waking moment surveilled in order to 
have AGI know what we want.



In order for a global brain to use that knowledge, it has to be public.


I don't think the global brain needs to know exactly how often I have 
sex or with whom or in what varieties.  Do you? 

AGI has to be a global brain because it is too expensive to build any 
other way, and because it would be too dangerous if the whole world
didn't control it.


No humans will control it and it is not going to be that expensive.

- samantha



