Re: [agi] RSI - What is it and how fast?

2006-12-07 Thread Brian Atkins

sam kayley wrote:


'integrable on the other end' is a rather large issue to shove under the
carpet in five words ;)


Indeed :-)



For two AIs recently forked from a common parent, probably, but for AIs 
with different 'life experiences' and resulting different conceptual 
structures, why should a severed mindpart be meaningful without 
translation into a common representation, i.e. a language?


If the language could describe things that are not introspectable in 
humans, it would help, but there would still be a process of translation 
which I see no reason to think would be anything like as fast and 
lossless as copying a file.


And as Hall points out, even if direct transfer is possible, it may 
often be better not to do so to make improvement of the skill possible.




Well, one relatively easy way to get at least partway around this would be for
the two AGIs to define beforehand a common format for sharing skill data. Such a
format could specify things like labeled inputs and outputs, the data formats the
skill module expects and produces, etc. If one AGI then exported the skill to this
format, and the other wrote an import function, then I think this should be
plausible. Or, if an import function is too hard for some reason, the receiving
AGI could run the exported skill on a skill-format virtual machine and just feed
it the right inputs and collect and use the outputs.
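
To make that concrete, here is a minimal sketch in Python of what such a
pre-agreed interchange format might look like. Everything in it is an
illustrative assumption of mine (the field names, the tiny "skill-format
virtual machine" runner, and the linear-model payload); it only shows the
export / import / run-in-VM idea, not how a real AGI would represent a skill.

import json
import hashlib
from dataclasses import dataclass, asdict

@dataclass
class SkillPackage:
    """Hypothetical pre-agreed interchange format for a learned skill."""
    name: str
    input_spec: dict    # labeled inputs, e.g. {"capacitance": "float"}
    output_spec: dict   # labeled outputs, e.g. {"gate_delay_ns": "float"}
    payload: dict       # the skill's learned parameters, in the shared format

    def export(self) -> bytes:
        """AGI #1 serializes the skill into the agreed wire format."""
        return json.dumps(asdict(self), sort_keys=True).encode()

    @staticmethod
    def import_(blob: bytes) -> "SkillPackage":
        """AGI #2 parses the wire format back into a package it can use."""
        return SkillPackage(**json.loads(blob.decode()))

def run_in_vm(skill: SkillPackage, inputs: dict) -> dict:
    """Fallback path: instead of writing a full import/integration step,
    run the exported skill on a 'skill-format virtual machine', feed it
    inputs, and collect outputs. The toy payload here is a linear model,
    purely for illustration."""
    weights, bias = skill.payload["weights"], skill.payload["bias"]
    xs = [inputs[k] for k in sorted(skill.input_spec)]
    y = sum(w * x for w, x in zip(weights, xs)) + bias
    return {next(iter(skill.output_spec)): y}

# Round trip: export on one side, import (or run in the VM) on the other.
skill = SkillPackage(
    name="gate_delay_estimator",
    input_spec={"capacitance": "float", "voltage": "float"},
    output_spec={"gate_delay_ns": "float"},
    payload={"weights": [0.8, -0.3], "bias": 1.2},
)
blob = skill.export()
print(hashlib.sha256(blob).hexdigest()[:16])   # the bits arrive losslessly
received = SkillPackage.import_(blob)
print(run_in_vm(received, {"capacitance": 2.0, "voltage": 1.1}))

A real import function would of course be vastly more involved; the point is
only that once both sides agree on labels and formats, the transfer itself is
just moving and parsing bits.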


Could such a chunk of bits then also be further learned from and improved, rather
than just used as an external tool by the new AGI? I'm not sure, but I would lean
towards saying yes, provided the import code used by the second AGI takes the
skill-format bits and uses them to generate an integrated mind module in its own
special internal format.


I think getting much further into these technical details is beyond my own
skills, so I should stop here and just retreat to my original idea: because the
bits for a skill are there in the first AGI, and because two AGIs can transmit
lossless data bits directly between themselves quickly (compared to humans),
this could create, at least hypothetically, a "direct" skill-sharing capability
which humans do not have.


We all have a lot of hypotheses at this point in history; I am just trying to
err towards caution rather than towards hypotheses that could be dangerous if
proven wrong.

--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/



Re: [agi] RSI - What is it and how fast?

2006-12-06 Thread Brian Atkins

Huh, that doesn't look right when I received it back. Here's a rewritten
sentence:

Whatever the size of that group, do you claim that _all_ of these learning 
universalists would be capable of coming up with Einstein-class (or take your 
pick) ideas if they had been in his shoes during his lifetime?

--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/



Re: [agi] RSI - What is it and how fast?

2006-12-06 Thread Brian Atkins

Small correction:

Brian Atkins wrote:


So there is some group of humans you would say don't pass your learning 
universal test. Now, of the group that does pass, how big is that group 
roughly? The majority of humans? (IQ 100 and above) Whatever the size of 
that group, do you claim that any of these learning universalists would 

^^^
Should be "all" I suppose.
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/



Re: [agi] RSI - What is it and how fast?

2006-12-06 Thread Brian Atkins

J. Storrs Hall wrote:

On Monday 04 December 2006 07:55, Brian Atkins wrote:

Also, what is really the difference between an Einstein/Feynman brain, and 
someone with an 80 IQ?


I think there's very likely a significant structural difference and the IQ80 
one is *not* learning universal in my sense.  


So there is some group of humans you would say don't pass your learning 
universal test. Now, of the group that does pass, how big is that group roughly? 
The majority of humans? (IQ 100 and above) Whatever the size of that group, do 
you claim that any of these learning universalists would be capable of coming up 
with Einstein-class (or take your pick) ideas if they had been in his shoes 
during his lifetime? In other words, if they had access to his experiences, 
education, etc.


I would say no. I'm not saying that Einstein is the sole human who could have
come up with his ideas, but I am saying that it's unlikely someone with an IQ of
110 would be able to do so even if given every help. I would say there are yet
more differences between human minds, beyond your learning-universal idea, which
separate us and which make the difference, for example, between an IQ of 110 and
one of 140.




For instance, let's say I want to design a new microprocessor. As part of that
process I may need to design a multitude of different circuits, test them, and
then integrate them. To humans, this is not a task that can run on autopilot.
What if, though, I find doing this job kinda boring after a while and wish I
could split off a largish chunk of my cognitive resources to chew away on it at
a somewhat slower speed unconsciously in the background and get the work done
while the conscious part of my mind surfs the web? Humans can't do this, but an
AGI likely could.


At any given level, a mind will have some tasks that require all its attention
and resources. If the task is simple enough that it can be done with a fraction
of the resources (e.g. driving), we learn to turn it into a habit/skill and do
it more or less subconsciously. An AI might do that faster, but we're assuming
it could do lots of things faster. On the other hand, it would still have to pay
attention to tasks that require all its resources.


This isn't completely addressing my particular scenario. Let's say we have a
roughly human-level AGI, and it has to work on a semi-repetitive design task,
the kind of thing a human is forced to stare at a monitor for, yet which doesn't
take their absolute maximum brainpower. The AGI should theoretically be able to
divide its resources in such a way that the design task gets done unconsciously
in the background, while it uses whatever resources remain to do other things at
the same time.


The point being that although this task takes only part of the human's maximum
abilities, by their nature humans can't split it off, automate it, or otherwise
escape letting some brain cycles go to "waste". The human mind is too monolithic
in such cases, which go beyond simple habits yet fall below maximum output.
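
To show the software flavor of the split I have in mind, here is a rough Python
sketch. It is only an analogy (the task and the numbers are made up): a
"foreground" loop stays free for other things while a background worker chews
through a semi-repetitive job.

import time
from concurrent.futures import ThreadPoolExecutor

def semi_repetitive_design_task(n_circuits: int) -> list:
    """Stand-in for the boring design/test/integrate loop."""
    results = []
    for i in range(n_circuits):
        time.sleep(0.01)                      # pretend each circuit takes work
        results.append(f"circuit_{i}_ok")
    return results

def foreground_activity() -> str:
    """Stand-in for whatever the 'conscious' part does meanwhile."""
    return "read another interesting page"

with ThreadPoolExecutor(max_workers=1) as background:
    # Split off a chunk of resources to chew on the design task...
    future = background.submit(semi_repetitive_design_task, 50)
    # ...while the foreground keeps doing other things at the same time.
    while not future.done():
        foreground_activity()
        time.sleep(0.05)
    print(len(future.result()), "circuits finished in the background")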




Again, aiming off the bullseye. Attempting to explain to someone about the
particular clouds you saw yesterday, the particular colors of the sunrise, etc.,
you can of course not transfer the full information to them. A second example
would be with skills, which could easily be shared among AGIs but cannot be
shared between humans.


Actually the ability to copy skills is the key item, imho, that separates 
humans from the previous smart animals. It made us a memetic substrate. In 
terms of the animal kingdom, we do it very, very well.  I'm sure that AIs 
will be able to as well, but probably it's not quite as simple as simply 
copying a subroutine library from one computer to another.


The reason is learning. If you keep the simple-copy semantics, no learning 
happens when skills are transferred. In humans, a learning step is forced, 
contributing to the memetic evolution of the skill. 


IMO, AGIs plausibly could actually transfer full, complete skills, including
whatever learning is part of them. It's all computer bits sitting somewhere, and
those bits should be transferable and then integrable on the other end.


If so, this is far more powerful, new, and distinct than a newbie tennis player
watching a pro and trying to learn to serve that well over a period of years, or
a math student trying to learn calculus. Even aside from the dramatic difference
in time scale, humans can never transfer their skills fully and exactly, in
anything like a lossless fashion.
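
A trivial sketch of the contrast, under the obvious simplification that a
"skill" boils down to some parameters (the numbers are invented): a digital copy
is bit-for-bit identical, while imitation-style transfer necessarily re-learns
with error.

import hashlib
import random

# A learned skill reduced to its parameters (purely illustrative numbers).
pro_skill = [0.82, -1.37, 2.05, 0.11]

# AGI-style transfer: the receiver gets exactly the same bits.
copied = list(pro_skill)
lossless = (hashlib.sha256(repr(pro_skill).encode()).digest()
            == hashlib.sha256(repr(copied).encode()).digest())
print("lossless copy:", lossless)                     # True

# Human-style transfer: watching the pro and re-learning, with noise.
imitated = [w + random.gauss(0, 0.2) for w in pro_skill]
worst_error = max(abs(a - b) for a, b in zip(pro_skill, imitated))
print("imitation error after years of practice:", round(worst_error, 3))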




Currently all I see is a very large and rapidly growing very insecure network of
rapidly improving computers out there ripe for the picking by the first smart
enough AGI.


A major architectural feature of both the brain and existing supercomputers is 
that the majority of the structure/cost is in the communications fabric, not 
the processors themselves. A botnet usin

Re: [agi] RSI - What is it and how fast?

2006-12-03 Thread Brian Atkins
ed by certain people and organizations who 
stand to profit by people's being concerned. There's no credible evidence 
that such a thing is even possible, much less likely. I've studied AI at the 
postgraduate level for 40 years; believe me, there are lots of major 
disagreements in the field and there are people who will listen to any 
reasonable idea. NO ONE with a serious research background in AI subscribes 
to the hard take-off idea. 



Well, unfortunately for AGI researchers, this item is an existential risk in many
scenarios. That puts a certain onus on them, IMO, to provide proof of a very high
level of safety. Physicists can do this fairly well when it comes to determining
whether the supercollider flavor of the decade will create an Earth-destroying
particle or not. I'm afraid that without something quite a bit stronger than the
handwaving quoted above, I must indeed rationally remain a bit concerned.


Our organization, for one, would certainly welcome research proposals to work
seriously on this issue and see what the limitations really might be. Currently
all I see is a very large and rapidly growing very insecure network of rapidly
improving computers out there ripe for the picking by the first smart enough
AGI. What comes after that is, I think, currently unprovable. We do know of
course that relatively small tweaks in brain design led from apes to us, a
rather large difference in capabilities that apparently did not require too many
more atoms in the skull. Your handwaving regarding universality aside, this also
worries me.


...



IQ tests measure *something* that has highly significant correlations to 
criminality, academic success, earning potential, and so forth. Like any 
scientific property, this was originally based on some intuitive notions; a 
century of research has refined them; I expect they'll be refined further in 
the future.


On the other hand, the notion of what a "superintelligence" will or won't be 
able to do appears to be based entirely on unfounded speculation and some 
very shaky analogies. The actual experience we have with entities of arguably 
greater-than-human intelligence is with organizations like corporations. I've 
seen no substantive arguments from the "superAI takes over" side in support 
of any model that I find remotely more plausible.


Your analogy rests solely on your own model, in which some number X of fairly
interchangeable humans equals one AGI with a given amount of computing resources.


It absolutely breaks if the AGI instead can think thoughts that no human or
organized human group can plausibly come up with. That would be a qualitative
difference in capabilities which humans could only match by creating an equally
powerful AGI to use as their "tool", or by upgrading themselves past the point
where they could still claim to be human.


Much of your argumentation seems to rely on groups of AGIs forming, interacting
with society over the long term, etc., but it seems to completely dismiss the
idea of an initial singleton grabbing significant computing resources and then
going further. The problem I have with this is that the "story" you want to put
across relies on multiple things all having to go just right in order for the
overall outcome to turn out just so. That is highly unlikely, of course. I feel
it is safer to try to work out what the real maximum limits might be in terms of
possible events, and to ask (and prove) what exactly prevents those events from
happening.

--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/



Re: [agi] Natural versus formal AI interface languages

2006-11-09 Thread Brian Atkins

Matt Mahoney wrote:

Protein folding is hard.  We can't even plug in a simple formula like H2O and 
compute physical properties like density or melting point.
 


This seems to be a rapidly improving area:

http://tech.groups.yahoo.com/group/transhumantech/message/36865
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/



Re: A Mind Ontology Project? [Re: [agi] method for joining efforts]

2006-10-15 Thread Brian Atkins
I think it sounds like a good idea, but I would suggest making it clear on your 
wiki what kind of usage and redistribution terms you will allow if you want 
other people to contribute. Preferably I guess it would be completely free to 
copy and reuse.


If this policy will be different from that of the rest of the AGIRI site and/or
wiki, then I'd suggest placing this project under its own domain or otherwise
keeping it separate.

--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/



[agi] [Fwd: [>Htech] comp.ai.nat-lang: Marcus Hutter's lossless compression of human knowledge prize]

2006-08-08 Thread Brian Atkins
... progress in keeping mice alive the longest. Here, modified for compression
ratios, is the formula:

S = size of program outputting the uncompressed knowledge
Snew = new record
Sprev = previous record
P = [Sprev - Snew] / Sprev = percent improvement

Award monies:

Fund contains: Z at noon GMT on day of new record
Winner receives: Z * P

Initially Z is 50,000 Euro with a minimum payout of 500 Euro (or
minimum improvement of 1% over the prior winner).
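
For a concrete reading of the rule above, here is a short Python sketch of the
payout arithmetic; the record sizes in the example are invented purely for
illustration.

def hutter_payout(s_prev: int, s_new: int,
                  fund_euro: float = 50_000.0, min_payout: float = 500.0):
    """P = (Sprev - Snew) / Sprev; winner receives Z * P, subject to the
    500 Euro minimum payout (about a 1% improvement when Z = 50,000 Euro)."""
    p = (s_prev - s_new) / s_prev            # fractional improvement
    payout = fund_euro * p
    return p, payout, payout >= min_payout

# Invented example: a 3% improvement over the previous record size.
p, payout, qualifies = hutter_payout(s_prev=18_000_000, s_new=17_460_000)
print(f"improvement {p:.1%}, payout {payout:.0f} Euro, qualifies: {qualifies}")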

Donations are welcome. The history of improvement in The Calgary
Corpus Compression Challenge*** is about 3% per year. The larger the
commitment from donors to the fund, the greater the rate of progress
toward a high quality body of human knowledge and, quite possibly, the
long-held promise of artificial intelligence.

For further details of the Hutter Prize see:

http://prize.hutter1.net

For discussion of the Hutter Prize see:

http://groups.google.com/group/Hutter-Prize

-- Jim Bowery

* http://www.hutter1.net/ai/uaibook.htm

** Hutter's Razor has some caveats relating to the nature of the
universe and computability, but those conditions must be met for any
computer-based intelligence.




--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/



Re: [agi] Superrationality

2006-05-24 Thread Brian Atkins
Coincidentally, there is currently an ongoing discussion of this on the
extropy-chat list, in case you are not aware of it.

--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/



Re: [agi] AGI Theory Development Projects

2006-02-20 Thread Brian Atkins

Just in case there might be a few folks here unaware:

<http://www.nothingisreal.com/mentifex_faq.html>

I suggest banning him from the list now, as in the past he has demonstrated in
many, many other places only a capacity for spam, self-promotion of his spam
websites, and useless babble. Why wait for more of it?

--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/



Re: [agi] Notification of Limited Account Access (Case ID Number: PP-071-362-832)

2005-12-16 Thread Brian Atkins
If you don't have the mailing list configured to only allow subscribers to post, 
please do so. Otherwise, please figure out which subscriber is sending this and 
remove them. Looks like it came from a Bellsouth user: 
adsl-068-016-138-221.sip.bix.bellsouth.net if that helps.

--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/



Re: [agi] SOTA 50 GB memory for Windows

2004-08-31 Thread Brian Atkins
AMD demonstrates the first x86 dual-core processor
http://www.digitimes.com/news/a20040831PR200.html
Confirms it will re-use the current Opteron 940-pin socket
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/


Re: [agi] SOTA 50 GB memory for Windows

2004-08-24 Thread Brian Atkins
Opteron systems are definitely the sweet spot currently, and for the near 
future. Rumors are that major server companies are working on 32-way 
systems to be released soon. Also of course Cray bought that OctigaBay 
company and now has this:

http://www.cray.com/products/systems/xd1/
Also rumor has it that when the dual-core Opterons come out in 2H 2005 
you will be able to drop them into most existing motherboards/servers 
that you can buy right now. So upgradability looks potentially excellent.
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/



Re: [agi] Hard Wired Switch

2003-03-03 Thread Brian Atkins
Kevin Copple wrote:

Ben said,


When the system is smart enough, it will learn to outsmart the posited
Control Code, and the ethics-monitor AGI


This isn't apparent at all, given that the Control Code could be pervasively
embedded and keyed to things beyond the AGI's control. The idea is to limit
the AGI and control its progress as we wish. I just don't see the risk that
the AGI will suddenly become so intelligent that it is able to "jump out of
the box" in a near-supernatural fashion, as some seem to fear.

Someone once said that a cave can trap and control a man, even though the
cave is dumb rock. We are considerably more intelligent than granite, so I
would not hesitate to believe that we could control an AGI that we create.

Of course, the details of a sophisticated "kill switch" would depend on the
architecture of the system, and would be beyond the scope of this casual
conversation. But to dismiss it out of hand as conceptually ineffectual is
rather puzzling.

Hi Kevin, you may not realize that you want to turn off the kill switch, 
but you do.

http://www.sysopmind.com/essays/aibox.html

In general, the idea that any lesser intelligences can communicate in any way
with a smarter AGI and still manage to keep control over it is highly likely to
be wrong. I certainly wouldn't want to bet on it.
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/



Re: [agi] Breaking AIXI-tl

2003-02-15 Thread Brian Atkins
Ben Goertzel wrote:


So your basic point is that, because these clones are acting by simulating
programs that finish running in 

From my bystander POV I got something different out of this exchange of
messages... it appeared to me that Eliezer's point was not about having more
time for simulating, but rather that humans possess a qualitatively different
"level" of reflectivity that allows them to "realize" the situation they're in,
and therefore to come up with a simple strategy that probably doesn't even
require much simulating of their clone. It is this reflectivity difference that
I thought was more important to understand... or am I wrong?
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/
