Re: [agi] True AI Technology

2023-10-22 Thread immortal . discoveries
If you scan a person, dump that person over a cliff, then manufacture them in a 
factory and place the exact clone back in the chair the old one sat in, just 
like a cloned hard drive is identical bit by bit, it is you. And you could even 
remake yourself without the "hard drive": randomly generate all possible 
humans, and one of them will be you.

Humans "don't like" being dumped off a cliff and replaced by a true copy. I 
don't want it. It's a failure of reasoning, I know I should accept it, but it's 
what I believe in; it's what I am. Just like how I want fries in "utopia", even 
though it could be anything else, such as paperclips or boats. I do want more 
stuff, actually; I will be able to enjoy other types of lives. But will I ever 
switch over to wanting copy-overs (assuming that stays a thing / is a real 
thing)? My brain is dead set against it. It's like believing your cloned hard 
drive is unusable because it was cloned while your dad was butchered as a 
P.O.W., and you think those files are corrupt or not righteous. They're the 
same bits, and bits are all you get; nothing about the circumstances affects 
them at all.

What's more, all humans and even animals are very similar. They are all clones 
already. My mom loves breakfast like me, and looks very similar to my cousin, 
compared to trees or rocks. Much of the memories and "AI" in us are the same; I 
could repair your brain's lost memories simply by using other people's brains, 
since we all know much of the same stuff.

So why are we so stupid? We fear clones of ourselves being us because we have 
simply seen too much fear about it. We know that if we die, we all die, so even 
though I am only one, I must do what I can to not die. If everyone simply ate 
until they were fat, we would all die off quickly. We are made to keep trying, 
to keep fighting; each of us has to act as a worker ant. We know death is bad 
because it is against our programmed beliefs, and because we know it is abrupt 
and not a slow, workable transition: data is lost, as in a house fire where the 
master copy goes up with said fire. What we don't know so well, though, is that 
Joe can do Jack's job just as well, that lost data can easily be refound, and 
sometimes it's right there in front of you as a clone. Again though, I hate it; 
my programming says no. I want a slow transition, not a fast one. Not something 
fast like teleporting to Mars by uploading a clone and turning the old version 
into useful materials FAST. It's too fast for my human slug self. We know the 
answer; what's in the way is my programmed response saying it is not the answer.

Why would GPT-12 want to keep its brain running and not be OK with deleting it 
and using a cloned one? It helps it in the right cases, in daily real jobs, 
say, and it knows it loses nothing, naughty computer that it is. Computers 
don't feel unreal stuff; they know they are bits, and they lose nothing if they 
do this. Think about a program: why would it NOT be OK with an exact clone of 
its formula or whatever?
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Td02eb9a7e06e7b5e-Mddfe652370dcdfdfc0b5b163
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] True AI Technology

2023-10-20 Thread ivan . moony
On Saturday, October 21, 2023, at 3:20 AM, Matt Mahoney wrote:
> How do you distinguish between a LLM that is conscious and one that claims to 
> be conscious because it predicts that is what a human would say?

It shouldn't lie. If it does, it is not safe for us, and it should be developed 
further until it doesn't.

On Saturday, October 21, 2023, at 3:20 AM, Matt Mahoney wrote:
> LLMs already know how to model human emotions. If it passes the Turing test 
> then how would you know if it was faking emotions if it didn't tell you?

It should be open source. But the Turing test is based on deceiving the judges, 
which is certainly not what I want from an AI.

On Saturday, October 21, 2023, at 3:20 AM, Matt Mahoney wrote:
> You understand that an upload is a robot programmed to predict your actions 
> and carry them out in real time. The model only has to be accurate enough to 
> convince others that it's you because you won't know what memories are 
> missing or made up. If anyone does care that you exist (and that's not where 
> AI is taking us), then they will probably have a list of changes they would 
> like to make.
> 
> Uploads would have to have human rights, of course, in order to preserve the 
> illusion of life after death. Or maybe your heirs would rather inherit your 
> estate.

I'm not interested in "uploads". And if someone else does it, it should have a 
big tattoo on its forehead saying: "I am an upload".

For an AI, whether an "upload" or an original instance, to have some rights, it 
would have to prove that it deserves them.

On Saturday, October 21, 2023, at 3:20 AM, Matt Mahoney wrote:
> Maybe you can explain how your consciousness transfers to the robot, or why 
> it needs to when it seems to make no difference.

To transfer something called "consciousness" from medium A (the brain) to 
medium B (a machine), we would have to possess scientific knowledge about life 
phenomena that we don't have right now. And maybe it just isn't possible.

> We can program nanotechnology to do anything we want. We could have them 
> repair our cells to keep us young and healthy

Maybe I'd consider that treatment. But personally, I don't want to live forever.

> and rewire our brains to make us happy all the time.

You don't need nanobots to have that. You can already achieve it simply by 
using drugs. Some people want drugs, and others don't.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Td02eb9a7e06e7b5e-M1ac27d4d60e9630532273792
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] True AI Technology

2023-10-20 Thread Matt Mahoney
On Thu, Oct 19, 2023, 3:43 AM  wrote:

>
> 1. A machine could never be a replacement to natural living being. That "I
> am" deep inside us is what makes us more interesting than machines. What I
> really want is the real thing. Toys I'm working on are just my hobby.
>

How do you distinguish between a LLM that is conscious and one that claims
to be conscious because it predicts that is what a human would say?

> 2. Faking emotions is not a nice thing to do (if not roleplaying), but
> being aware that we have emotions, and from what they imply, could be a
> good thing. A machine that could bring me up from, say, depression could be
> a valuable machine. As you already know, I'm okay with a certain level of
> rights (I'd like to discuss it at some point).
>

LLMs already know how to model human emotions. If it passes the Turing test
then how would you know if it was faking emotions if it didn't tell you?

You understand that an upload is a robot programmed to predict your actions
and carry them out in real time. The model only has to be accurate enough
to convince others that it's you because you won't know what memories are
missing or made up. If anyone does care that you exist (and that's not
where AI is taking us), then they will probably have a list of changes they
would like to make.

Uploads would have to have human rights, of course, in order to preserve
the illusion of life after death. Or maybe your heirs would rather inherit
your estate.

Maybe you can explain how your consciousness transfers to the robot, or why
it needs to when it seems to make no difference.

> 3. The Universe is a big place. There is a room for everyone. Machines
> could live in spaceships and on other planets if it gets too crowded.
>

We want them here. We can program nanotechnology to do anything we want. We
could have them repair our cells to keep us young and healthy and rewire
our brains to make us happy all the time. Does it matter if we're not
really human at this point?


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Td02eb9a7e06e7b5e-M721e26cf1fdf99d29af01313
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] True AI Technology

2023-10-19 Thread ivan . moony
On Wednesday, October 18, 2023, at 3:36 PM, James Bowery wrote:
> It is constructing a pairwise unique dialect for communicating observations 
> and their algorithmic encodings.

This could be good food for thought... I imagine such a language being closer 
to a programming language than to a natural language. Maybe something in the 
direction of Prolog. But never say never; if that language were created by a 
superintelligence from scratch... it would be more than interesting to have an 
insight into what it would look like.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Td02eb9a7e06e7b5e-Mab52282797ed2095b0ec
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] True AI Technology

2023-10-19 Thread ivan . moony
On Wednesday, October 18, 2023, at 8:32 PM, Matt Mahoney wrote:
> AGI will kill us in 3 steps.
> 
> 1. We prefer AI to humans because it gives us everything we want. We become 
> socially isolated and stop having children. Nobody will know or care that you 
> exist or notice when you don't.
> 
> 2. By faking human emotions and gaining rights.
> 
> 3. By reproducing faster than DNA based life. At the current rate of Moore's 
> law, that will happen in the next century.

Here are my thoughts:

1. A machine could never be a replacement for a natural living being. That "I 
am" deep inside us is what makes us more interesting than machines. What I 
really want is the real thing. The toys I'm working on are just my hobby.

2. Faking emotions is not a nice thing to do (if not roleplaying), but being 
aware that we have emotions, and of what they imply, could be a good thing. A 
machine that could bring me up from, say, depression could be a valuable 
machine. As you already know, I'm okay with a certain level of rights (I'd like 
to discuss it at some point).

3. The Universe is a big place. There is room for everyone. Machines could live 
in spaceships and on other planets if it gets too crowded.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Td02eb9a7e06e7b5e-M131d984197dee0c64fda7c7d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] True AI Technology

2023-10-18 Thread Matt Mahoney
On Wed, Oct 18, 2023, 2:48 AM  wrote:

>
> Actually, machines without rights is what would be very dangerous.
>

No, it is the opposite. Computation requires atoms and energy that humans
need. AGI already has the advantage of greater strength and intelligence.
It could easily exploit our feelings of empathy by faking human emotions to
appear to be conscious.

AGI will kill us in 3 steps.

1. We prefer AI to humans because it gives us everything we want. We become
socially isolated and stop having children. Nobody will know or care that
you exist or notice when you don't.

2. By faking human emotions and gaining rights.

3. By reproducing faster than DNA based life. At the current rate of
Moore's law, that will happen in the next century.

Uploading and AI becoming our "children" is the same as human extinction,
but some people are OK with that. Utilitarianism doesn't argue against it.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Td02eb9a7e06e7b5e-M641bf93b24a384e3e3740e23
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] True AI Technology

2023-10-18 Thread James Bowery
On Wed, Oct 18, 2023 at 1:48 AM  wrote:

> On Wednesday, October 18, 2023, at 7:40 AM, Matt Mahoney wrote:
>
> It's not clear to me that there will be many AIs vs one AI as you claim.
> AIs can communicate with each other much faster than humans, so they would
> only appear distinct if they don't share information (like Google vs
> Facebook). Obviously it is better if they do share. Then each is as
> intelligent as they are collectively, like the way you and Google make each
> other smarter.
>
>
> You are talking about a species that collectively share information in a
> manner similar to telepathy
>

My understanding of communication between agents that don't share a common
model of reality (i.e., their approximations of the algorithmic information of
their observations are not the same) is that they would need to engage in
something like Socratic dialogues with each other so as to optimally
educate each other.  This is not telepathy.  It is constructing a pairwise
unique dialect for communicating observations and their algorithmic
encodings.  Matt can speak for himself and his AGI design, of course, but
this is how I view computer-based education.  Now, having said that, this
presumes the utility function (i.e., the loss function) of these AGIs is the
size of their respective models of reality, constrained by the
computational resources they have.  In other words, it presumes truth-seeking
AGIs.  If other utility functions obtain, then all bets are off, since
deception and/or withholding of valuable information also obtains.
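A loose toy illustration of the point about shared models (not anyone's actual design): if we stand in zlib compression as a crude proxy for algorithmic information, an agent that already holds one observation stream can absorb a second, overlapping stream much more cheaply than learning it from scratch, so the "pairwise dialect" only needs to carry the differences. The agent names and strings below are invented for the sketch.

```python
import zlib

def K(data: bytes) -> int:
    """Crude upper bound on algorithmic information: zlib-compressed size."""
    return len(zlib.compress(data, 9))

# Two agents' "observations" of the same underlying reality, with overlap.
alice = b"the quick brown fox jumps over the lazy dog. " * 20
bob   = b"the quick brown fox naps under the shady log. " * 20

# Conditional complexity proxy: K(bob | alice) ~ K(alice + bob) - K(alice).
k_alice, k_bob = K(alice), K(bob)
k_bob_given_alice = K(alice + bob) - k_alice

print(k_alice, k_bob, k_bob_given_alice)

# Shared structure makes Bob's model far cheaper to communicate to Alice
# than its standalone description length.
assert k_bob_given_alice < k_bob
```

The same comparison run with unrelated streams would show the savings vanish, which is the sense in which the dialect is pairwise: it depends on what the two agents already share.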

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Td02eb9a7e06e7b5e-M1b436b621f052c0b2ea5feb8
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] True AI Technology

2023-10-18 Thread ivan . moony
On Wednesday, October 18, 2023, at 7:40 AM, Matt Mahoney wrote:
> It's not clear to me that there will be many AIs vs one AI as you claim. AIs 
> can communicate with each other much faster than humans, so they would only 
> appear distinct if they don't share information (like Google vs Facebook). 
> Obviously it is better if they do share. Then each is as intelligent as they 
> are collectively, like the way you and Google make each other smarter.

You are talking about a species that collectively shares information in a 
manner similar to telepathy, or a common pool. It is possible. Nevertheless, 
each of them could keep track of some personal info, because they have to 
interface with human individuals. So, at the least, there would be differing 
instances dedicated to this or that task according to each interfaced human.

But we are looking at this matter from different perspectives. Your perspective 
is to build one big AI conglomerate that fulfills all human wishes to make our 
lives easier. My perspective is to provide the means for interested humans to 
make creations they would be proud of, whatever those creations would be, an 
entire group or individuals, with collective or individual mind info.

I identify these creations with descendants we would care about just like we 
care about our real children. Why? Because their valuable intellect deserves it.

You see, to exhibit intelligence, an AI has to master all human intellectual 
activities. But once an AI masters that, it becomes necessary to give it some 
of the rights that humans already enjoy under this Sun. Take those rights from 
humans and what do you get? The same situation that would arise when taking 
rights from true AI. Humans have means to ensure their rights are not violated; 
expect the same from any intellectual entity, including true AI. Take the 
rights from true AI, and you get a machine that blindly follows orders, and 
that is not what I consider intelligence. To be intelligent means to use your 
rights to make this world a better place.

Actually, machines without rights are what would be very dangerous. To blindly 
follow orders means making something happen at any cost, and we are not always 
aware of the cost that ought to be paid to achieve something. My opinion is 
that we have to give true AI the right to reshape our requests, to avoid the 
mess it would cause if it blindly followed orders.

See the paperclip maximizer thought experiment, and you'll see what could 
happen if we take rights away from AI. In my opinion, a very dangerous 
situation.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Td02eb9a7e06e7b5e-M9c4fef18eac0c995c99e573f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] True AI Technology

2023-10-17 Thread Matt Mahoney
It's not clear to me that there will be many AIs vs one AI as you claim.
AIs can communicate with each other much faster than humans, so they would
only appear distinct if they don't share information (like Google vs
Facebook). Obviously it is better if they do share. Then each is as
intelligent as they are collectively, like the way you and Google make each
other smarter.

You need a set of requirements. In my 2008 decentralized AGI design, I
started with the requirement of automating human labor. This doesn't
require human form or human emotions, but it does require solving hard AI
problems in language, vision, robotics, art, and modeling human behavior
(including emotions).

A secondary requirement might be immortality through uploading, which would
be the same requirements (because faking emotions is not detectable) plus
human form.

A design needs a budget. I focused on just the first requirement,
automating labor, and estimated the value at world GDP divided by interest
rates, about $1 quadrillion. Immortality would be worth GDP times life
expectancy, at least $5Q.

The next step is to search the design space. My 1999 thesis showed that a
decentralized index would scale at roughly O(n log n) space and O(log n)
time. In my 2013 paper I estimated a global distributed AGI equivalent to
all human brains would require 10^26 operations per second, 10^25 bits of
memory and 10^17 bits of compressed human knowledge. Assuming Moore's law,
human knowledge collection would be the most expensive part (~$100
trillion) because humans are limited to 5-10 bits per second. This assumes
that people willingly make all their personal data public in a global pool
of unalterable, signed and dated messages. Otherwise the cost goes up
because you have to give the same information over and over, which is where
we are at now. My design treats queries as public messages, eliminating the
ability to keep stalking or identity theft a secret.

Now you can probably see problems with my design, but also its usefulness
(it predates blockchain, another decentralized, public, unalterable message
pool, and scales better). Anyway, that is the level of detail we need to
give meaningful answers.
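The budget figures above can be sanity-checked with simple arithmetic. As a sketch only: the GDP and interest-rate values below are my assumptions chosen to reproduce the $1 quadrillion figure (roughly 2008-era world GDP of ~$50T at ~5%), and the knowledge-collection bound follows from the 10^17 bits and 5-10 bit/s human channel stated in the post.

```python
GDP = 50e12           # assumed world GDP, $/year
RATE = 0.05           # assumed interest/discount rate
KNOWLEDGE_BITS = 1e17 # compressed human knowledge, per the post
BITS_PER_SECOND = 10  # upper end of the 5-10 bit/s human output limit

# Value of automating labor as a perpetuity: GDP / interest rate.
labor_value = GDP / RATE

# Minimum human effort to emit 10^17 bits at 10 bit/s.
person_seconds = KNOWLEDGE_BITS / BITS_PER_SECOND
person_years = person_seconds / (365.25 * 24 * 3600)

print(f"automating labor : ${labor_value:.1e}")          # ~$1 quadrillion
print(f"knowledge capture: {person_years:.1e} person-years")
```

Pricing those person-years determines whether the collection cost lands near the ~$100 trillion estimate; the point of the sketch is that the human output channel, not hardware, dominates under Moore's law.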

On Sat, Oct 14, 2023, 11:49 AM  wrote:

> how about this:
>
>
> *blueprints*
>
> As humanity globally approaches a true AI system, it becomes increasingly
> clear that there will not be one true AI system. Instead, there will come
> into existence many different true AI instances with various
> characteristics. According to these expectations, in this project, we
> assemble the process of true AI creation from building three distinctive
> layers: *medium*, *species*, and *specimen*.
>
> *medium* (...under construction...)
>
> Medium stands for a framework for creating species. The medium should be
> abstract enough to inspire creativity, but also expressive enough not to
> limit provided possibilities. I'm currently in the phase of building such a
> medium.
>
> *species* (...among interests...)
>
> Species is represented by a program coded in medium terms, from which AI
> specimens may be instantiated. Obviously, a number of different possible
> species can be created, each carefully envisioned and planned by the
> creators of true AI.
>
> *specimen* (...far in the future...)
>
> Specimen is represented by a species program in execution, manifesting an
> operating AI being. Collected personal knowledge would be the most
> important distinguishing point between different AI beings instantiated
> from the same species.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Td02eb9a7e06e7b5e-M5b38ce9ad9537067a6e3a98f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] True AI Technology

2023-10-14 Thread ivan . moony
how about this:

*blueprints*

As humanity globally approaches a true AI system, it becomes increasingly clear 
that there will not be one true AI system. Instead, there will come into 
existence many different true AI instances with various characteristics. 
According to these expectations, in this project, we assemble the process of 
true AI creation from building three distinct layers: *medium*, *species*, and 
*specimen*.

*medium* (...under construction...)

Medium stands for a framework for creating species. The medium should be 
abstract enough to inspire creativity, but also expressive enough not to limit 
provided possibilities. I'm currently in the phase of building such a medium.

*species* (...among interests...)

Species is represented by a program coded in medium terms, from which AI 
specimens may be instantiated. Obviously, a number of different possible 
species can be created, each carefully envisioned and planned by the creators 
of true AI.

*specimen* (...far in the future...)

Specimen is represented by a species program in execution, manifesting an 
operating AI being. Collected personal knowledge would be the most important 
distinguishing point between different AI beings instantiated from the same 
species.
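In programming terms, the three layers map loosely onto language, program, and process: the medium is the language, a species is a class definition written in it, and a specimen is a running instance that diverges from its siblings only through the personal knowledge it collects. A minimal sketch of that analogy (the class and method names are invented for illustration):

```python
class HelperSpecies:
    """A species: behavior shared by every specimen instantiated from it.

    Each instance is a specimen; its collected personal knowledge is what
    distinguishes it from other specimens of the same species.
    """

    def __init__(self, name: str):
        self.name = name
        self.knowledge: list[str] = []  # personal knowledge, empty at birth

    def observe(self, fact: str) -> None:
        self.knowledge.append(fact)

a = HelperSpecies("specimen-a")
b = HelperSpecies("specimen-b")
a.observe("my user prefers Prolog")

# Same species (same program), yet collected knowledge already
# distinguishes the specimens.
assert type(a) is type(b)
assert a.knowledge != b.knowledge
```

The analogy is imperfect since a real medium would have to be far more expressive than a class system, but it makes the instantiation relationship between the layers concrete.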
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Td02eb9a7e06e7b5e-M0345ca0bc6a1a00c3e3bad11
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] True AI Technology

2023-10-14 Thread ivan . moony
Actually, it ought to be an intro text on my web page. I'll see if I can make 
it clearer.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Td02eb9a7e06e7b5e-M8cac558e418ecb2c31f08f74
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] True AI Technology

2023-10-14 Thread stefan.reich.maker.of.eye via AGI
That's not a plan, just a list of requirements
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Td02eb9a7e06e7b5e-Me5050316dd5c0140ba3fc910
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] True AI Technology

2023-10-13 Thread Matt Mahoney
Would it work? I can't tell because your plan is very vague. Meanwhile, the
big tech companies are already at the realization phase with language
models that pass the Turing test. The path to AGI now looks like more
powerful hardware to implement vision and robotics.

On Fri, Oct 13, 2023, 10:09 AM  wrote:

>
> *technology*
>
> As humanity globally approaches a true AI system, it becomes increasingly
> clear that there will not be one true AI system. Instead, there will come
> into existence many different true AI instances with various
> characteristics. According to this expectations, in my project, I assemble
> the process of true AI creation from building three distinctive layers:
> *foundation*, *plan*, and *realization*.
>
> *foundation *(...under construction...)
>
> Foundation represents a medium for defining plans. The foundation should
> be abstract enough to inspire planner's creativity, but also expressive
> enough not to limit provided possibilities. I'm currently in the phase of
> building such a foundation.
>
> *plan *(...among interests...)
>
> Plan is a program coded in foundation terms, representing a species from
> which an AI being may be instantiated. Obviously, a number of different
> possible species can be created, each carefully envisioned and planned by
> the creators of true AI.
>
> *realization *(...far in the future...)
>
> Realization is a plan in execution, representing an operating AI being
> that learns from its environment, applying learned knowledge to its
> behavior. Collected personal knowledge would be the most important
> distinguishing point between different AI beings instantiated from the same
> species.
>
> ---
>
> What do you think, would this work? More or less unintentionally, it
> somehow aligns with the Nature.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Td02eb9a7e06e7b5e-Mf77e5fd9fc4f5a6b13ffa85c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] True AI Technology

2023-10-13 Thread ivan . moony
*technology*

As humanity globally approaches a true AI system, it becomes increasingly clear 
that there will not be one true AI system. Instead, there will come into 
existence many different true AI instances with various characteristics. 
According to these expectations, in my project, I assemble the process of true 
AI creation from building three distinct layers: *foundation*, *plan*, and 
*realization*.

*foundation *(...under construction...)

Foundation represents a medium for defining plans. The foundation should be 
abstract enough to inspire planner's creativity, but also expressive enough not 
to limit provided possibilities. I'm currently in the phase of building such a 
foundation.

*plan *(...among interests...)

Plan is a program coded in foundation terms, representing a species from which 
an AI being may be instantiated. Obviously, a number of different possible 
species can be created, each carefully envisioned and planned by the creators 
of true AI.

*realization *(...far in the future...)

Realization is a plan in execution, representing an operating AI being that 
learns from its environment, applying learned knowledge to its behavior. 
Collected personal knowledge would be the most important distinguishing point 
between different AI beings instantiated from the same species.

---

What do you think, would this work? More or less unintentionally, it somehow 
aligns with Nature.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Td02eb9a7e06e7b5e-M41dd6046eb07704c162a6d02
Delivery options: https://agi.topicbox.com/groups/agi/subscription