Re: [agi] Can symbolic approach entirely replace NN approach?

2024-06-16 Thread John Rose
On Sunday, June 16, 2024, at 7:09 PM, Matt Mahoney wrote:
> Not everything can be symbolized in words. I can't describe what a person 
> looks like as well as showing you a picture. I can't describe what a novel 
> chemical smells like except by letting you smell it. I can't tell you how to 
> ride a bicycle without you practicing.

That’s the point. You emit symbols that reference the qualia you experienced 
of what the person looks like. The symbols or words are a compressed, lossy 
representation of the original full symbol that you experienced in your mind. 
Your original qualia are your unique experience; another person receives your 
transmission or description and uses it to reference their own qualia, which 
are also unique. It’s hit or miss, since you can’t transmit the full qualia, 
but you can transmit more words to paint a more accurate picture and increase 
accuracy. There isn’t enough bandwidth, sampling capacity, or immediacy, but 
you have to reference something in order to transmit information 
spatiotemporally. A “thing” is a reference, and it seems a reference can only 
ever be a symbol, unless the thing is the symbol itself, which would be the 
original unique qualia. Maybe there are exceptions, like numbers? But they are 
still references to qualia going back through history... or computations? They 
are still derivatives. And no transmission is 100% reliable, since there is 
always some small chance of error, AFAIK. If I'm wrong I would like to know.
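
A toy sketch of that lossy-compression view (purely illustrative: it treats a 
quale as a numeric vector and a verbal description as keeping only its 
strongest components, so every name and size below is made up):

import numpy as np

rng = np.random.default_rng(0)
quale = rng.normal(size=256)   # stand-in for the private "full experience"

def describe(q, n_words):
    # Keep only the n strongest components: a lossy "verbal" encoding.
    idx = np.argsort(-np.abs(q))[:n_words]
    desc = np.zeros_like(q)
    desc[idx] = q[idx]
    return desc

for n in (4, 16, 64, 256):
    err = np.linalg.norm(quale - describe(quale, n)) / np.linalg.norm(quale)
    print(f"{n:3d} words -> relative error {err:.2f}")

More symbols paint a more accurate picture; only the full vector is exact, and 
a real channel would add noise on top of that.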



Re: [agi] Can symbolic approach entirely replace NN approach?

2024-06-16 Thread Matt Mahoney
Not everything can be symbolized in words. I can't describe what a person
looks like as well as showing you a picture. I can't describe what a novel
chemical smells like except by letting you smell it. I can't tell you how to
ride a bicycle without you practicing.

On Sun, Jun 16, 2024, 5:36 PM John Rose  wrote:

> On Friday, June 14, 2024, at 3:43 PM, James Bowery wrote:
>
> Etter: "Thing (n., singular): anything that can be distinguished from
> something else."
>
>
> I simply use “thing” to mean anything that can be symbolized. A unique case
> is qualia, where from a first-person experiential viewpoint the experiential
> symbol equals the symbolized, but for transmission the qualia are fitted, or
> compressed, into symbol(s). So, for example, “nothing” is a thing simply
> because it can be symbolized. Is there anything that cannot be symbolized?
> If there were things that could not be symbolized, what would they be?
> Pre-qualia? But then they are already symbolized, since they are referenced…
> You could generalize this and say that all things are ultimately derivatives
> of qualia; I speculate that it is impossible to name one that is not. Note
> that in ML a perceptron, or a set of perceptrons, could be considered an
> artificial qualia symbol emitter, and perhaps that is why they are named as
> they are: percept -> tron. A basic binary classifier emits an experiential
> symbol as a bit, and more sophisticated perceptrons emit higher symbol
> complexity such as color codes or text characters.
>



Re: [agi] Re: Internal Time-Consciousness Machine (ITCM)

2024-06-16 Thread Matt Mahoney
It is an interesting paper. But even though it references Tononi's
integrated information theory, I don't think it says anything about
consciousness. That is just the name they gave to part of their model. They
refer to a "consciousness vector" that is the concatenation of vectors
representing perceptions and short- and long-term memory, so it is really
just a state-machine vector. They show that their model, which also includes
models of space and time, improves the task completion rate of robots given
natural-language instructions via LLMs. It also shows just how far advanced
China is in the AI race.
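
In code, that reading amounts to something like the following (a minimal
sketch of my interpretation, not the paper's implementation; the dimensions
and names are made up):

import numpy as np

# Hypothetical component vectors; dimensions are illustrative only.
perception = np.random.rand(128)   # current sensory encoding
short_term = np.random.rand(64)    # recent context
long_term = np.random.rand(256)    # retrieved memories

# The "consciousness vector" is just their concatenation,
# i.e. the state vector of a state machine.
state = np.concatenate([perception, short_term, long_term])
print(state.shape)   # (448,)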

Any LLM that passes the Turing test is conscious as far as you can tell, as
long as you assume that humans are conscious too. But this proves that
there is nothing more to consciousness than text prediction. Good
prediction requires a model of the world, which can be learned given enough
text and computing power, but can also be sped up by hard coding some basic
knowledge about how objects move, as the paper shows.
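
As a toy illustration of how prediction forces a model (nothing from the
paper; a bigram counter standing in for "a model of the world"):

from collections import Counter, defaultdict

# Learn a bigram model, the crudest possible world model, from text.
text = "the cat sat on the mat the cat ran".split()
model = defaultdict(Counter)
for a, b in zip(text, text[1:]):
    model[a][b] += 1

def predict(word):
    # Most likely next word given what was learned.
    return model[word].most_common(1)[0][0]

print(predict("the"))   # -> "cat", because "the cat" outnumbers "the mat"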

If you are looking for answers to the mystery of phenomenal consciousness,
you need to define it first. The test should be appropriate for humans,
animals, and machines. Of course nobody does this (including the authors)
because there isn't a test. We define consciousness as the difference
between a human and a philosophical zombie. We define a zombie as exactly
like a human in every observable way, except that it lacks consciousness.
If you poke one, it will react like a human and say "ouch" even though
it doesn't experience pain.

But of course we are conscious, right? If I poke you in the eye, are you
going to tell me it didn't hurt? Then what is it?

What you actually have is a sensation of consciousness. It feels like
something to think, recall memories, or solve problems. Likewise, qualia
are what perception feels like, and free will is what action feels like.
These feelings are usually a net positive, which motivates us to not lose
them by dying. This results in more offspring.

Feelings have a physical explanation that we know how to encode in
reinforcement learning algorithms. If you do X and that is followed by a
positive (negative) signal, then you are more (less) likely to do X again.
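
A minimal sketch of that rule (illustrative only; a two-action preference
update with made-up numbers, not any particular published algorithm):

import random

prefs = {"X": 0.0, "Y": 0.0}   # learned action preferences
alpha = 0.1                    # learning rate

def act():
    # Mostly pick the preferred action, with a little exploration.
    if random.random() < 0.1:
        return random.choice(list(prefs))
    return max(prefs, key=prefs.get)

for _ in range(1000):
    a = act()
    reward = 1.0 if a == "X" else -1.0   # X is followed by a positive signal
    prefs[a] += alpha * (reward - prefs[a])

print(prefs)   # the preference for X rises, so X is chosen more and more often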


On Sat, Jun 15, 2024, 8:34 PM John Rose  wrote:

>
> For those of us pursuing consciousness-based AGI, this is an interesting
> paper that gets more practical... LLM-agent-based but still very interesting:
>
> https://arxiv.org/abs/2403.20097
>
>
> I meant to say that this is an exceptionally well-written paper, teeming
> with insightful research on this subject. It's definitely worth a
> read.
>
>



Re: [agi] Can symbolic approach entirely replace NN approach?

2024-06-16 Thread John Rose
On Friday, June 14, 2024, at 3:43 PM, James Bowery wrote:
>> Etter: "Thing (n., singular): anything that can be distinguished from 
>> something else."

I simply use “thing” to mean anything that can be symbolized. A unique case is 
qualia, where from a first-person experiential viewpoint the experiential 
symbol equals the symbolized, but for transmission the qualia are fitted, or 
compressed, into symbol(s). So, for example, “nothing” is a thing simply 
because it can be symbolized. Is there anything that cannot be symbolized? If 
there were things that could not be symbolized, what would they be? 
Pre-qualia? But then they are already symbolized, since they are referenced… 
You could generalize this and say that all things are ultimately derivatives 
of qualia; I speculate that it is impossible to name one that is not. Note 
that in ML a perceptron, or a set of perceptrons, could be considered an 
artificial qualia symbol emitter, and perhaps that is why they are named as 
they are: percept -> tron. A basic binary classifier emits an experiential 
symbol as a bit, and more sophisticated perceptrons emit higher symbol 
complexity such as color codes or text characters.
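
A minimal sketch of a perceptron read this way (illustrative only; a single
linear unit emitting one bit per percept, with arbitrary weights):

import numpy as np

# One perceptron: percept in, one-bit "experiential symbol" out.
w = np.array([0.8, -0.4, 0.3])   # arbitrary weights, for illustration
b = -0.1

def emit_symbol(percept):
    # The emitted "symbol" is a single bit classifying the percept.
    return int(np.dot(w, percept) + b > 0)

print(emit_symbol(np.array([1.0, 0.2, 0.5])))   # -> 1
print(emit_symbol(np.array([0.0, 1.0, 0.0])))   # -> 0

A bank of such units emitting several bits at once would be the “higher symbol
complexity” case.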
