Derek Zahn wrote:
> Richard Loosemore:
>> I am not sure I understand.
>>
>> There is every reason to think that "a currently-envisionable AGI would
>> be millions of times 'smarter' than all of humanity put together."
>>
>> Simply build a human-level AGI, then get it to bootstrap to a level of,
>> say, a thousand times human speed (easy enough: we are not asking for
>> better thinking processes, just faster implementation), then ask it to
>> compact itself enough that we can afford to build and run a few billion
>> of these systems in parallel.
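For concreteness, the bootstrap arithmetic above works out roughly as in the
short Python sketch below. The speedup factor, copy count, and world
population are illustrative assumptions (the message only says "a thousand
times" and "a few billion"), and raw throughput is at best a crude proxy for
"smarter":

    # Back-of-envelope sketch of the scaling in the quoted paragraph.
    # All figures are illustrative assumptions, not claims from the thread.
    speedup = 1_000                    # assumed per-copy speed multiple over one human
    copies = 3_000_000_000             # "a few billion" parallel copies, taken as 3e9
    world_population = 7_000_000_000   # rough figure, for comparison only

    human_equivalents = speedup * copies          # serial-equivalent human-hours per hour
    ratio_to_humanity = human_equivalents / world_population

    print(f"aggregate throughput ~ {human_equivalents:.1e} human-equivalents")
    print(f"~ {ratio_to_humanity:,.0f}x the raw serial throughput of everyone alive")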
> This viewpoint assumes that human intelligence is essentially trivial; I
> see no evidence for this, and tend to assume that a properly programmed
> Game Boy is not going to pass the Turing test. I realize that people on
> this list tend to be more optimistic on this subject, so I do accept your
> answer as one viewpoint. It is surely a minority view, though, and my
> question only makes sense if you assume significant limitations in the
> capability of near-term hardware.
But if you want to make a meaningful statement about limitations, would
it not be prudent to start from a clear understanding of how the size of
the task can be measured, and how those measurements relate to the
available resources? If there were no information at all, we could not
make a statement either way.
Without knowing how to bake a cake, or what the contents of your pantry
are, I don't think you can state that "We simply do not have what it
takes to bake a cake in the near future".
I am only saying that I see no particular limitations, given the things
that I know about how to build an AGI. That is the best I can do.
Richard Loosemore