From biological conception to old age, the mind changes quite a bit already.
Consciousness, sense of self, free will: all illusions. Fear of death: if
the mind agent lost it, perhaps it would choose to terminate, unless something
else supported its intent to keep running.
> From: Matt Mahoney
1. They will probably create more problems than they fix... as usual. But
they should be able to assist man with his issues. Kind of like machines
did.
2. You would have to imagine an ideal pure intelligence and bridge the gap
somewhat.
> 1. What are your AGIs going to do with their intelligence?
On Monday 23 April 2007 19:45, Matt Mahoney wrote:
>... How do you distinguish between consciousness (sense of self) and the
> programmed belief in consciousness, free will, and fear of death that all
> animals possess because it confers a survival advantage?
A distinction without a difference.
Hmmm. Design a combinational logic circuit that has inputs a, b, and c, and
outputs not(a), not(b), and not(c); its function is just three inverters in
parallel. But while you may use as many AND and OR gates as you like, you may
use at most two NOT gates.
Josh
On Monday 23 April 2007 17:
"He who refuses to do arithmetic is doomed to talk nonsense. "
- John McCarthy
We're talking about relative numbers here. Suppose you had an AI algorithm
that was exactly as good as the one the human brain uses. In fact, let's
suppose you had one that was two orders of magnitude better, sin
--- "John G. Rose" <[EMAIL PROTECTED]> wrote:
> A baby AGI has immense advantage. It's starting (life?) after billions of
> years of evolution and thousands of years of civilization. A 5 YO child
> can't float all languages, all science, all mathematics, all recorded
> history, all encyclopedia
John: Our brains are good, I mean, they are us, but aren't they just
biological blobs of goop that are half-assed excuses for intelligence?
I mean, why are AGIs coming about anyway? Is it because our brains are
awesome and fulfill all of our needs? No. We need to be uploaded,
otherwise we die.
A baby AGI has immense advantage. It's starting (life?) after billions of
years of evolution and thousands of years of civilization. A 5 YO child
can't float all languages, all science, all mathematics, all recorded
history, all encyclopedia, etc. in sub-millisecond RAM and be able to
interconnect
On Apr 23, 2007, at 2:05 PM, J. Storrs Hall, PhD. wrote:
On Monday 23 April 2007 15:40, Lukasz Stafiniak wrote:
> ... An AGI working with bigger numbers had better discover binary
> numbers. Could an AGI do it? Could it discover rational numbers? (It
> would initially believe that irrational numbers do not exist, as the
> early Pythagoreans believed.)
--- Lukasz Stafiniak <[EMAIL PROTECTED]> wrote:
> Perhaps CIC is simply too impractical.
Probably. Deriving multiplication from zero and S() is like computing m*n
using:

for (i=0; i<m; i++) ...

This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?me
On 4/23/07, J. Storrs Hall, PhD. <[EMAIL PROTECTED]> wrote:
We really are pigs in space when it comes to discrete symbol manipulation such
as arithmetic or logic. It's actually harder (mentally) to do a
multiplication step such as 8*7=56 than to catch a Frisbee -- and I claim
I've learnt multi
On 4/23/07, John G. Rose <[EMAIL PROTECTED]> wrote:
Hi,
Adding some thoughts on AGI math - If the AGI or a sub processor of the AGI
is allotted time to "sleep" or idle process it could lazily postulate and
construct theorems with spare CPU cycles (cores are cheap nowadays), put
things together a
Hopefully not the future of AGI...
http://www.crooksandliars.com/2007/04/22/torboto-the-robot-that-tortures-people/
[Warning: contents could be offensive to some... crude humor etc. ...]
-- Ben G
On Monday 23 April 2007 10:03, Matt Mahoney wrote:
> ... The brain is a billion times slower per step, has only about 7
> words of short term memory, ...
For some appropriate meaning of "word" -- I'd suggest that "frame" might be
more useful in thinking about what's going on. One of Miller's mag
Hi,
Adding some thoughts on AGI math - If the AGI or a sub processor of the AGI
is allotted time to "sleep" or idle process it could lazily postulate and
construct theorems with spare CPU cycles (cores are cheap nowadays), put
things together and use those theorems to further test the processing o
--- Lukasz Stafiniak <[EMAIL PROTECTED]> wrote:
> On 4/23/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > Ontic looks like an interesting and elegant formalism, but I don't see
> > how it would help an AGI learn mathematics. We are not yet at the point
> > where we can solve word problems like