Re: [agi] GPT-4 passes the Turing test

2024-07-31 Thread Matt Mahoney
When humans don't know the answer, they make one up. LLMs do the same
because they mimic humans. But it's not like there isn't a solution. Watson
won at Jeopardy in 2011 in part by not buzzing in when it didn't know the
correct response with high probability. That and having an 8 ms response
time that no human could match.

A text compressor estimates the probability of the next symbol and assigns
a code of length log 1/p for each possible outcome. Generative AI just
outputs the symbol with the highest p.
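
To make the contrast concrete, here is a toy sketch - the probabilities and the
confidence threshold are invented purely for illustration, not taken from Watson
or any LLM - of how one next-symbol distribution drives compression cost, greedy
generation, and Watson-style abstention:

    import math

    # Toy next-token distribution (values invented for illustration).
    probs = {"Paris": 0.72, "Lyon": 0.15, "Rome": 0.08, "Oslo": 0.05}

    # A compressor assigns each outcome a code of about log2(1/p) bits.
    for token, p in probs.items():
        print(token, "->", round(math.log2(1 / p), 2), "bits")

    # Greedy generation just emits the most probable token.
    best, p_best = max(probs.items(), key=lambda kv: kv[1])

    # Watson-style abstention: only answer when confidence clears a threshold.
    THRESHOLD = 0.9  # assumed value, for illustration only
    print(best if p_best >= THRESHOLD else "I don't know")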

On Wed, Jul 31, 2024, 2:07 AM  wrote:

> On Tuesday, July 23, 2024, at 8:27 AM, stefan.reich.maker.of.eye wrote:
>
> On Monday, July 22, 2024, at 11:11 PM, Aaron Hosford wrote:
>
> Even a low-intelligence human will stop you and tell you they don't
> understand, or they don't know, or something -- barring interference from
> their ego, of course.
>
> Yeah, why don't LLMs do this? If they are mimicking humans, they should do
> the same thing - acknowledging lack of knowledge - no?
>
>
> Interesting. Maybe being trained to predict the next token makes them try
> to be as accurate as possible, giving them not only accuracy but also a
> style of how they talk.
>
> Or is it because GPT-4 already knows everything >:) lol...

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M8111ddb539b4a7e7f897ca5b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-07-30 Thread immortal . discoveries
GPT-4 says it can't make a full video game when I ask it my prize-like question 
as a hard test. I give it the full instructions etc. and tell it that it is for a 
very important study, but it refuses to give a long or effortful answer; it just 
gives a template. Though sometimes it does ask me for info - GPT-4, I mean. I 
guess.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M715438c83c85cb7e688add42
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-07-30 Thread immortal . discoveries
On Tuesday, July 23, 2024, at 8:27 AM, stefan.reich.maker.of.eye wrote:
> On Monday, July 22, 2024, at 11:11 PM, Aaron Hosford wrote:
>> Even a low-intelligence human will stop you and tell you they don't 
>> understand, or they don't know, or something -- barring interference from 
>> their ego, of course.
> Yeah, why don't LLMs do this? If they are mimicking humans, they should do 
> the same thing - acknowledging lack of knowledge - no?

Interesting. Maybe being trained to predict the next token makes them try to be 
as accurate as possible, giving them not only accuracy but also a style of how 
they talk.

Or is it because GPT-4 already knows everything >:) lol...
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M8cfffcd7fd1aefe3197c3147
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-07-24 Thread Aaron Hosford
Ah, no, just needed clarification. Thanks for providing it!

My insight into the failure of the industry is a general one: They are
racing ahead into a new tech bubble. There is very little metacognition
happening right now. That will come later, when things have cooled down a
bit and the more rational characters start to take over.

On Tue, Jul 23, 2024 at 2:09 PM James Bowery  wrote:

> I directed the question at you because you are likely to understand how
> different training and inference are since you said you "pay my bills by
> training" -- so far from levelling a criticism at you I was hoping you had
> some insight into the failure of the industry to use training benchmarks as
> opposed to inference benchmarks.
>
> Are you saying you don't see the connection between training and
> compression?
>
> On Mon, Jul 22, 2024 at 8:08 PM Aaron Hosford  wrote:
>
>> Sorry, I'm not sure what you're saying. It's not clear to me if this is
>> intended as a criticism of me, or of someone else. Also, I lack the context
>> to draw the connection between what I've said and the topic of
>> compression/decompression, I think.
>>
>> On Mon, Jul 22, 2024 at 5:17 PM James Bowery  wrote:
>>
>>>
>>>
>>> On Mon, Jul 22, 2024 at 4:12 PM Aaron Hosford 
>>> wrote:
>>>
 ...

 I spend a lot of time with LLMs these days, since I pay my bills by
 training them

>>>
>>> Maybe you could explain why it is that people who get their hands dirty
>>> training LLMs, and are therefore acutely aware of the profound difference
>>> between training and inference (if for no other reason than that training
>>> takes orders of magnitude more resources), seem to think that these
>>> benchmark tests should be on the inference side of things whereas the
>>> Hutter Prize has, *since 2006*, been on the training *and* inference
>>> side of things, because a winner must both train (compress) and infer
>>> (decompress).
>>>
>>> Are the "AI experts" really as oblivious to the obvious as they appear
>>> and if so *why*?
>>>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-Mdeb12de4a8461c4bdcd12996
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-07-23 Thread James Bowery
On Tue, Jul 23, 2024 at 7:15 PM Matt Mahoney 
wrote:

> On Tue, Jul 23, 2024 at 7:07 PM James Bowery  wrote:
> >
> > That sounds like you're saying benchmarks for language modeling
> algorithms aka training algorithms are uninteresting because we've learned
> all we need to learn about them.  Surely you don't mean to say that!
>
> I mean to say that testing algorithms and testing language models are
> different things.


That was my point.

On Tue, Jul 23, 2024 at 2:08 PM James Bowery  wrote:

> I directed the question at you because you are likely to understand how
> different training and inference are ...
>



> Language models have to be tested in the way they
> are to be used, on terabytes of up to date training data with lots of
> users.


Obviously, except in the case where we are interested in benchmarking
modeling algorithms aka training algorithms in accord with scaling laws
which pertain both to modeling performance and model performance.

The issue of "data efficiency", for one example, is far from settled
despite the motivated reasoning of those who have access to enormous
resources. e.g.

https://arxiv.org/pdf/2201.02177

> Abstract: In this paper we propose to study generalization of neural
> networks on small algorithmically generated datasets. In this setting,
> questions about data efficiency, memorization, generalization, and speed of
> learning can be studied in great detail. In some situations we show that
> neural networks learn through a process of “grokking” a pattern in the
> data, improving generalization performance from random chance level to
> perfect generalization, and that this improvement in generalization can
> happen well past the point of overfitting. We also study generalization as
> a function of dataset size and find that smaller datasets require
> increasing amounts of optimization for generalization. We argue that these
> datasets provide a fertile ground for studying a poorly understood aspect
> of deep learning: generalization of overparametrized neural networks beyond
> memorization of the finite training dataset.


and the derivative
https://github.com/ironjr/grokfast

> Abstract: One puzzling artifact in machine learning dubbed grokking is
> where delayed generalization is achieved tenfolds of iterations after near
> perfect overfitting to the training data. Focusing on the long delay itself
> on behalf of machine learning practitioners, our goal is to accelerate
> generalization of a model under grokking phenomenon. By regarding a series
> of gradients of a parameter over training iterations as a random signal
> over time, we can spectrally decompose the parameter trajectories under
> gradient descent into two components: the fast-varying,
> overfitting-yielding component and the slow-varying,
> generalization-inducing component. This analysis allows us to accelerate
> the grokking phenomenon by more than 50× with only a few lines of code that
> amplifies the slow-varying components of gradients. The experiments show
> that our algorithm applies to diverse tasks involving images, languages,
> and graphs, enabling practical availability of this peculiar artifact of
> sudden generalization.
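
The abstract's "few lines of code" appears to amount to low-pass filtering the
gradient signal. A minimal sketch of that idea in PyTorch (an exponential moving
average of each parameter's gradient, amplified before the optimizer step; the
hyperparameters here are illustrative guesses, not the repository's exact
interface):

    import torch

    def amplify_slow_gradients(model, ema, alpha=0.98, lamb=2.0):
        # alpha and lamb are assumed, illustrative values.
        for name, p in model.named_parameters():
            if p.grad is None:
                continue
            g = p.grad.detach()
            # Low-pass filter: the EMA approximates the slow-varying component.
            ema[name] = alpha * ema.get(name, torch.zeros_like(g)) + (1 - alpha) * g
            # Amplify the slow component on top of the raw gradient.
            p.grad.add_(lamb * ema[name])
        return ema

    # Sketch of use inside a training loop:
    #   loss.backward()
    #   ema = amplify_slow_gradients(model, ema)   # ema starts as {}
    #   optimizer.step()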


One of the earliest state space model breakthroughs demonstrated
a 10x improvement in data efficiency or computational efficiency over
transformers in the range of scales that the researchers could afford, but
it was ignored and they couldn't get funding to expand the scaling law.
Nowadays, of course, everyone is all over state space models because of
their modeling efficiency.




> It is an expensive, manual process of curating the training
> data, looking at the responses, and providing feedback. The correct
> output is no longer the most likely prediction, like if the LLM is
> going to be used in a customer service position or something. Testing
> on a standard compression benchmark like the Hutter prize is the easy
> part.
> 
> --
> -- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M030b5b3dd6bd602cec76603b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-07-23 Thread Matt Mahoney
On Tue, Jul 23, 2024 at 7:07 PM James Bowery  wrote:
>
> That sounds like you're saying benchmarks for language modeling algorithms 
> aka training algorithms are uninteresting because we've learned all we need 
> to learn about them.  Surely you don't mean to say that!

I mean to say that testing algorithms and testing language models are
different things. Language models have to be tested in the way they
are to be used, on terabytes of up to date training data with lots of
users. It is an expensive, manual process of curating the training
data, looking at the responses, and providing feedback. The correct
output is no longer the most likely prediction, like if the LLM is
going to be used in a customer service position or something. Testing
on a standard compression benchmark like the Hutter prize is the easy
part.

-- 
-- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-Ma7f4afd32f70b9a207fdb388
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-07-23 Thread James Bowery
That sounds like you're saying benchmarks for language modeling algorithms
aka training algorithms are uninteresting because we've learned all we need
to learn about them.  Surely you don't mean to say that!

On Tue, Jul 23, 2024 at 5:42 PM Matt Mahoney 
wrote:

> The Large Text Benchmark and Hutter prize test language modeling
> algorithms, not language models. An actual language model wouldn't be
> trained on just 1 GB of Wikipedia from 2006. But what we learned from this
> is that neural networks is the way to go, specifically transformers running
> on GPUs.
>
> On Tue, Jul 23, 2024, 3:10 PM James Bowery  wrote:
>
>> I directed the question at you because you are likely to understand how
>> different training and inference are since you said you "pay my bills by
>> training" -- so far from levelling a criticism at you I was hoping you had
>> some insight into the failure of the industry to use training benchmarks as
>> opposed to inference benchmarks.
>>
>> Are you saying you don't see the connection between training and
>> compression?
>>
>> On Mon, Jul 22, 2024 at 8:08 PM Aaron Hosford 
>> wrote:
>>
>>> Sorry, I'm not sure what you're saying. It's not clear to me if this is
>>> intended as a criticism of me, or of someone else. Also, I lack the context
>>> to draw the connection between what I've said and the topic of
>>> compression/decompression, I think.
>>>
>>> On Mon, Jul 22, 2024 at 5:17 PM James Bowery  wrote:
>>>


 On Mon, Jul 22, 2024 at 4:12 PM Aaron Hosford 
 wrote:

> ...
>
> I spend a lot of time with LLMs these days, since I pay my bills by
> training them
>

 Maybe you could explain why it is that people who get their hands dirty
 training LLMs, and are therefore acutely aware of the profound difference
 between training and inference (if for no other reason than that training
 takes orders of magnitude more resources), seem to think that these
 benchmark tests should be on the inference side of things whereas the
 Hutter Prize has, *since 2006*, been on the training *and* inference
 side of things, because a winner must both train (compress) and infer
 (decompress).

 Are the "AI experts" really as oblivious to the obvious as they appear
 and if so *why*?


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M6d84ad9194dadef221251f4c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-07-23 Thread Matt Mahoney
The Large Text Benchmark and Hutter prize test language modeling
algorithms, not language models. An actual language model wouldn't be
trained on just 1 GB of Wikipedia from 2006. But what we learned from this
is that neural networks are the way to go, specifically transformers running
on GPUs.

On Tue, Jul 23, 2024, 3:10 PM James Bowery  wrote:

> I directed the question at you because you are likely to understand how
> different training and inference are since you said you "pay my bills by
> training" -- so far from levelling a criticism at you I was hoping you had
> some insight into the failure of the industry to use training benchmarks as
> opposed to inference benchmarks.
>
> Are you saying you don't see the connection between training and
> compression?
>
> On Mon, Jul 22, 2024 at 8:08 PM Aaron Hosford  wrote:
>
>> Sorry, I'm not sure what you're saying. It's not clear to me if this is
>> intended as a criticism of me, or of someone else. Also, I lack the context
>> to draw the connection between what I've said and the topic of
>> compression/decompression, I think.
>>
>> On Mon, Jul 22, 2024 at 5:17 PM James Bowery  wrote:
>>
>>>
>>>
>>> On Mon, Jul 22, 2024 at 4:12 PM Aaron Hosford 
>>> wrote:
>>>
 ...

 I spend a lot of time with LLMs these days, since I pay my bills by
 training them

>>>
>>> Maybe you could explain why it is that people who get their hands dirty
>>> training LLMs, and are therefore acutely aware of the profound difference
>>> between training and inference (if for no other reason than that training
>>> takes orders of magnitude more resources), seem to think that these
>>> benchmark tests should be on the inference side of things whereas the
>>> Hutter Prize has, *since 2006*, been on the training *and* inference
>>> side of things, because a winner must both train (compress) and infer
>>> (decompress).
>>>
>>> Are the "AI experts" really as oblivious to the obvious as they appear
>>> and if so *why*?
>>>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-Mb81011d0bfa13655b772ecae
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-07-23 Thread James Bowery
I directed the question at you because you are likely to understand how
different training and inference are since you said you "pay my bills by
training" -- so far from levelling a criticism at you I was hoping you had
some insight into the failure of the industry to use training benchmarks as
opposed to inference benchmarks.

Are you saying you don't see the connection between training and
compression?

On Mon, Jul 22, 2024 at 8:08 PM Aaron Hosford  wrote:

> Sorry, I'm not sure what you're saying. It's not clear to me if this is
> intended as a criticism of me, or of someone else. Also, I lack the context
> to draw the connection between what I've said and the topic of
> compression/decompression, I think.
>
> On Mon, Jul 22, 2024 at 5:17 PM James Bowery  wrote:
>
>>
>>
>> On Mon, Jul 22, 2024 at 4:12 PM Aaron Hosford 
>> wrote:
>>
>>> ...
>>>
>>> I spend a lot of time with LLMs these days, since I pay my bills by
>>> training them
>>>
>>
>> Maybe you could explain why it is that people who get their hands dirty
>> training LLMs, and are therefore acutely aware of the profound difference
>> between training and inference (if for no other reason than that training
>> takes orders of magnitude more resources), seem to think that these
>> benchmark tests should be on the inference side of things whereas the
>> Hutter Prize has, *since 2006*, been on the training *and* inference
>> side of things, because a winner must both train (compress) and infer
>> (decompress).
>>
>> Are the "AI experts" really as oblivious to the obvious as they appear
>> and if so *why*?
>>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M3f44388f09277d0c433374da
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-07-23 Thread Matt Mahoney
On Sun, Jul 21, 2024, 10:04 PM John Rose  wrote:

>
> You created the program in your mind so it has already at least partially
> run. Then you transmit it across the wire and we read it and run it
> partially in our minds. To know that the string is a program we must model
> it and it must have been created possibly with tryptophan involved. Are we
> sure that consciousness is measured in crisp bits and the presence of
> consciousness indicated by crisp booleans?
>

Let's not lose sight of the original question. In humans we distinguish
consciousness from unconsciousness by the ability to form memories and
respond to input. All programs do this. But what I think you are really
asking is how do we test whether something has feelings or qualia or free
will, whether it feels pain and pleasure, whether it is morally wrong to
cause harm to it.

I think for tryptophan the answer is no. Pleasure comes from the nucleus
accumbens and suffering from the amygdala. All mammals and I think all
vertebrates and some invertebrates have these brain structures or something
equivalent that enables reinforcement learning to happen. I think these
structures can be simulated and that LLMs do so, as far as we can tell by
asking questions, because otherwise they would fail the Turing test.

LLMs can model human emotions, meaning they can predict how a person will
feel and how those feelings affect behavior. They do this without having
feelings themselves. But if an AI were programmed to carry out those predictions
on itself in real time, then it would be indistinguishable from having
feelings.

We might think that the moral obligation to not harm conscious agents has a
rational basis. But really, our morals are a product of evolution,
upbringing, and culture. People disagree on whether animals or some people
deserve protection.

When we talk about consciousness, qualia, and free will, we are talking
about how it feels to think, perceive input, and take action, respectively.
This continuous stream of positive reinforcement evolved so that we would
be motivated not to lose them by dying, which would mean producing fewer offspring.

But to answer your question: if you propose to measure consciousness in
bits, then no, the bits need not be crisp. Information is not a discrete
measure. For example, a 3-state memory device holds log 3 / log 2 ≈ 1.585 bits.
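
As a quick check of that arithmetic (illustration only):

    import math
    print(math.log(3) / math.log(2))  # 1.58496..., capacity of a 3-state device
    print(math.log2(3))               # same value via log base 2 directly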


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-Ma235c66a092d98b237795502
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-07-23 Thread stefan.reich.maker.of.eye via AGI
On Monday, July 22, 2024, at 11:11 PM, Aaron Hosford wrote:
> Even a low-intelligence human will stop you and tell you they don't 
> understand, or they don't know, or something -- barring interference from 
> their ego, of course.
Yeah, why don't LLMs do this? If they are mimicking humans, they should do the 
same thing - acknowledging lack of knowledge - no?
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M8678cf8d24c4d4f259e6cc4a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-07-22 Thread Danko Nikolic
Matt, I agree. Thank you for the comments.

Yes, usefulness is what we are looking for. I see the Turing test mostly as
a game -- a game in which the machine is trying to hide its silicon nature
and attempts to trick the human.

Your comments on intelligence beyond human intelligence bring us back to
the conversation of whether growth in intelligence is one-dimensional or
multidimensional. Is there just one path, with AGI somewhere on it as a goal
post, or are there infinitely many ways to be intelligent, none of them
general because none has walked the other paths? I
think it is the latter as this follows from my work on practopoiesis, which
emphasizes the variety of different situations that an agent can handle.

Aaron, I also work with LLMs for a living. Your description is spot-on.

Danko


Dr. Danko Nikolić
CEO, Robots Go Mental
www.robotsgomental.com
www.danko-nikolic.com
https://www.linkedin.com/in/danko-nikolic/
-- I wonder, how is the brain able to generate insight? --


On Mon, Jul 22, 2024 at 11:11 PM Matt Mahoney 
wrote:

> Turing time is a good idea. But it still has the drawback that the highest
> possible score is human level intelligence. As you point out, a computer
> can fail by being too smart. Turing knew this. In his 1950 paper, he gave
> an example where the computer waited 30 seconds to give the wrong answer to
> an arithmetic problem.
>
> Remember that Turing was asking if machines could think. So he had to
> carefully define both what he meant by a computer and what it meant to be
> intelligent. He was asking a philosophical question.
>
> Turing also suggested 5 minutes of conversation to be fooled 30% of the
> time. We can extend this a bit, but it does not solve the more
> general problem that we don't know how to test intelligence beyond human
> level. We don't even know what it means to have an IQ of 200. And yet we
> have computers that are a billion times faster with a billion times more
> short term memory than humans that we don't acknowledge as smarter than us.
>
> Also remember that the goal is not intelligence, but usefulness. The goal
> is to improve the lives of humans, by working for us, entertaining us, and
> keeping us safe, healthy, and happy. We cannot predict, and therefore
> cannot control, agents that are more intelligent than us.
>
> On Mon, Jul 22, 2024, 5:46 AM Danko Nikolic 
> wrote:
>
>> Dear Mike,
>>
>> I like your comment about the usual goal post movers. Let me try to make
>> something similar.
>>
>> There is this idea that the Turing test is not something you can pass
>> once and for all. If an AI is not detected as the machine at one point, it
>> does not guarantee that the AI will not reveal itself at a later point in
>> the conversation. And then the human observer can say "Gotcha!".
>>
>> So, there is the idea of "Turing time". How long does it take on average
>> to reveal that you are talking to AI. There is a difference if it takes 2
>> sentences, or it takes 100 sentences, or the AI reveals itself once in
>> three months. So, Turing time may be useful here as a measure of how much
>> better the newer version of AI is as compared to the older one.
>>
>> Here is more on Turing time:
>> https://medium.com/savedroid/is-the-turing-test-still-relevant-how-about-turing-time-d73d472c18f1
>>
>> Regards,
>>
>> Danko
>>
>> Dr. Danko Nikolić
>> CEO, Robots Go Mental
>> www.robotsgomental.com
>> www.danko-nikolic.com
>> https://www.linkedin.com/in/danko-nikolic/
>> -- I wonder, how is the brain able to generate insight? --
>>
>>
>> On Mon, Jun 17, 2024 at 8:34 PM Mike Archbold 
>> wrote:
>>
>>> Now time for the usual goal post movers
>>>
>>> On Mon, Jun 17, 2024 at 7:49 AM Matt Mahoney 
>>> wrote:
>>>
 It's official now. GPT-4 was judged to be human 54% of the time,
 compared to 22% for ELIZA and 50% for GPT-3.5.
 https://arxiv.org/abs/2405.08007


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M768264a17431249bf6fed8e3
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-07-22 Thread Aaron Hosford
Sorry, I'm not sure what you're saying. It's not clear to me if this is
intended as a criticism of me, or of someone else. Also, I lack the context
to draw the connection between what I've said and the topic of
compression/decompression, I think.

On Mon, Jul 22, 2024 at 5:17 PM James Bowery  wrote:

>
>
> On Mon, Jul 22, 2024 at 4:12 PM Aaron Hosford  wrote:
>
>> ...
>>
>> I spend a lot of time with LLMs these days, since I pay my bills by
>> training them
>>
>
> Maybe you could explain why it is that people who get their hands dirty
> training LLMs, and are therefore acutely aware of the profound difference
> between training and inference (if for no other reason than that training
> takes orders of magnitude more resources), seem to think that these
> benchmark tests should be on the inference side of things whereas the
> Hutter Prize has, *since 2006*, been on the training *and* inference side
> of things, because a winner must both train (compress) and infer
> (decompress).
>
> Are the "AI experts" really as oblivious to the obvious as they appear and
> if so *why*?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M3115d5de0e38594a9d920218
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-07-22 Thread James Bowery
On Mon, Jul 22, 2024 at 4:12 PM Aaron Hosford  wrote:

> ...
>
> I spend a lot of time with LLMs these days, since I pay my bills by
> training them
>

Maybe you could explain why it is that people who get their hands dirty
training LLMs, and are therefore acutely aware of the profound difference
between training and inference (if for no other reason than that training
takes orders of magnitude more resources), seem to think that these
benchmark tests should be on the inference side of things whereas the
Hutter Prize has, *since 2006*, been on the training *and* inference side
of things, because a winner must both train (compress) and infer
(decompress).

Are the "AI experts" really as oblivious to the obvious as they appear and
if so *why*?
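
One way to see the connection: in an adaptive compressor, every symbol forces one
prediction (inference) and one model update (training), and the benchmark charges
log2(1/p) bits for each prediction. A toy order-0 sketch of that loop - an
illustration only, not the Hutter Prize machinery itself:

    import math
    from collections import Counter

    def adaptive_code_length(data: bytes) -> float:
        counts = Counter()
        seen = 0
        total_bits = 0.0
        for b in data:
            p = (counts[b] + 1) / (seen + 256)  # predict (inference), Laplace-smoothed
            total_bits += math.log2(1 / p)      # coding cost charged by the benchmark
            counts[b] += 1                      # update the model (training)
            seen += 1
        return total_bits

    print(adaptive_code_length(b"abracadabra") / 8)  # a bit under the 11 raw bytes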

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M00cc8927f38d88c0c8994483
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-07-22 Thread Aaron Hosford
We already control humans that are more intelligent than us, through social
mechanisms -- resource constraints, conditioning provision of needs
on benefit to society, etc. That's how intelligent machines will be
controlled as well, if we can't manage to control them absolutely through
built-in engineering mechanisms that make them *want* to do what we say.

> Also remember that the goal is not intelligence, but usefulness. The goal
is to improve the lives of humans, by working for us, entertaining us, and
keeping us safe, healthy, and happy.

I really wish people would pay more attention to this. Thank you for
pointing it out.

On Mon, Jul 22, 2024 at 4:11 PM Matt Mahoney 
wrote:

> Turing time is a good idea. But it still has the drawback that the highest
> possible score is human level intelligence. As you point out, a computer
> can fail by being too smart. Turing knew this. In his 1950 paper, he gave
> an example where the computer waited 30 seconds to give the wrong answer to
> an arithmetic problem.
>
> Remember that Turing was asking if machines could think. So he had to
> carefully define both what he meant by a computer and what it meant to be
> intelligent. He was asking a philosophical question.
>
> Turing also suggested 5 minutes of conversation to be fooled 30% of the
> time. We can extend this a bit, but it does not solve the more
> general problem that we don't know how to test intelligence beyond human
> level. We don't even know what it means to have an IQ of 200. And yet we
> have computers that are a billion times faster with a billion times more
> short term memory than humans that we don't acknowledge as smarter than us.
>
> Also remember that the goal is not intelligence, but usefulness. The goal
> is to improve the lives of humans, by working for us, entertaining us, and
> keeping us safe, healthy, and happy. We cannot predict, and therefore
> cannot control, agents that are more intelligent than us.
>
> On Mon, Jul 22, 2024, 5:46 AM Danko Nikolic 
> wrote:
>
>> Dear Mike,
>>
>> I like your comment about the usual goal post movers. Let me try to make
>> something similar.
>>
>> There is this idea that the Turing test is not something you can pass
>> once and for all. If an AI is not detected as the machine at one point, it
>> does not guarantee that the AI will not reveal itself at a later point in
>> the conversation. And then the human observer can say "Gotcha!".
>>
>> So, there is the idea of "Turing time". How long does it take on average
>> to reveal that you are talking to AI. There is a difference if it takes 2
>> sentences, or it takes 100 sentences, or the AI reveals itself once in
>> three months. So, Turing time may be useful here as a measure of how much
>> better the newer version of AI is as compared to the older one.
>>
>> Here is more on Turing time:
>> https://medium.com/savedroid/is-the-turing-test-still-relevant-how-about-turing-time-d73d472c18f1
>>
>> Regards,
>>
>> Danko
>>
>> Dr. Danko Nikolić
>> CEO, Robots Go Mental
>> www.robotsgomental.com
>> www.danko-nikolic.com
>> https://www.linkedin.com/in/danko-nikolic/
>> -- I wonder, how is the brain able to generate insight? --
>>
>>
>> On Mon, Jun 17, 2024 at 8:34 PM Mike Archbold 
>> wrote:
>>
>>> Now time for the usual goal post movers
>>>
>>> On Mon, Jun 17, 2024 at 7:49 AM Matt Mahoney 
>>> wrote:
>>>
 It's official now. GPT-4 was judged to be human 54% of the time,
 compared to 22% for ELIZA and 50% for GPT-3.5.
 https://arxiv.org/abs/2405.08007


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-Mf4656d6493c0a4f08cd919da
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-07-22 Thread Aaron Hosford
I find it a little ironic, reading this thread right after having an
argument with Claude 3 about whether -3 is bigger or smaller than -2, and
being unable to convince it of its error. These things are not as smart as
we might think they are. And if they are smart enough to seem human
already, well, that tells you something.

I spend a lot of time with LLMs these days, since I pay my bills by
training them. My experience is that they are fragile just like all
previous AI approaches; they just hide it better. It doesn't take much to
push one right past the ability to make sense, and they never seem to
recognize when that point is reached. Even a low-intelligence human will
stop you and tell you they don't understand, or they don't know, or
something -- barring interference from their ego, of course. These models
lack that introspective ability. This is because they aren't paving new
roads, they are just traversing old ones. If a line of thought is a
familiar one, or can be composed through extremely simple interpolation of
familiar ones, they are great. The moment you move into truly new
territory, they fall apart. We have successfully built technology where the
curtain we should look behind is embedded behind so much showiness and
sleight of hand that we can't be bothered to take the peek most of the
time. So we collectively assume the great and powerful Oz is real.

For me, the goal post has not moved. We simply still have not built
anything that can truly be said to have a mind or conscious experience. The
Turing test was never a good way of measuring that; how else could Eliza
have done so well?

On Mon, Jun 17, 2024 at 1:34 PM Mike Archbold  wrote:

> Now time for the usual goal post movers
>
> On Mon, Jun 17, 2024 at 7:49 AM Matt Mahoney 
> wrote:
>
>> It's official now. GPT-4 was judged to be human 54% of the time, compared
>> to 22% for ELIZA and 50% for GPT-3.5.
>> https://arxiv.org/abs/2405.08007
>>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-Ma358b889527d2eadcbf23448
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-07-22 Thread Matt Mahoney
Turing time is a good idea. But it still has the drawback that the highest
possible score is human level intelligence. As you point out, a computer
can fail by being too smart. Turing knew this. In his 1950 paper, he gave
an example where the computer waited 30 seconds to give the wrong answer to
an arithmetic problem.

Remember that Turing was asking if machines could think. So he had to
carefully define both what he meant by a computer and what it meant to be
intelligent. He was asking a philosophical question.

Turing also suggested 5 minutes of conversation to be fooled 30% of the
time. We can extend this a bit, but it does not solve the more
general problem that we don't know how to test intelligence beyond human
level. We don't even know what it means to have an IQ of 200. And yet we
have computers that are a billion times faster with a billion times more
short term memory than humans that we don't acknowledge as smarter than us.

Also remember that the goal is not intelligence, but usefulness. The goal
is to improve the lives of humans, by working for us, entertaining us, and
keeping us safe, healthy, and happy. We cannot predict, and therefore
cannot control, agents that are more intelligent than us.

On Mon, Jul 22, 2024, 5:46 AM Danko Nikolic  wrote:

> Dear Mike,
>
> I like your comment about the usual goal post movers. Let me try to make
> something similar.
>
> There is this idea that the Turing test is not something you can pass once
> and for all. If an AI is not detected as the machine at one point, it does
> not guarantee that the AI will not reveal itself at a later point in the
> conversation. And then the human observer can say "Gotcha!".
>
> So, there is the idea of "Turing time". How long does it take on average
> to reveal that you are talking to AI. There is a difference if it takes 2
> sentences, or it takes 100 sentences, or the AI reveals itself once in
> three months. So, Turing time may be useful here as a measure of how much
> better the newer version of AI is as compared to the older one.
>
> Here is more on Turing time:
> https://medium.com/savedroid/is-the-turing-test-still-relevant-how-about-turing-time-d73d472c18f1
>
> Regards,
>
> Danko
>
> Dr. Danko Nikolić
> CEO, Robots Go Mental
> www.robotsgomental.com
> www.danko-nikolic.com
> https://www.linkedin.com/in/danko-nikolic/
> -- I wonder, how is the brain able to generate insight? --
>
>
> On Mon, Jun 17, 2024 at 8:34 PM Mike Archbold  wrote:
>
>> Now time for the usual goal post movers
>>
>> On Mon, Jun 17, 2024 at 7:49 AM Matt Mahoney 
>> wrote:
>>
>>> It's official now. GPT-4 was judged to be human 54% of the time,
>>> compared to 22% for ELIZA and 50% for GPT-3.5.
>>> https://arxiv.org/abs/2405.08007
>>>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M21e53b544fed195dbbf9b8a1
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-07-22 Thread Danko Nikolic
Dear Mike,

I like your comment about the usual goal post movers. Let me try to make
something similar.

There is this idea that the Turing test is not something you can pass once
and for all. If an AI is not detected as the machine at one point, it does
not guarantee that the AI will not reveal itself at a later point in the
conversation. And then the human observer can say "Gotcha!".

So, there is the idea of "Turing time": how long does it take, on average, to
reveal that you are talking to an AI? There is a difference if it takes 2
sentences, or it takes 100 sentences, or the AI reveals itself once in
three months. So, Turing time may be useful here as a measure of how much
better the newer version of AI is as compared to the older one.

Here is more on Turing time:
https://medium.com/savedroid/is-the-turing-test-still-relevant-how-about-turing-time-d73d472c18f1
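
A minimal sketch of how such a measure might be estimated (the trial data below
is invented purely for illustration):

    # Number of exchanges before a judge correctly flagged the machine,
    # one entry per judged conversation (invented data).
    exchanges_until_detection = [2, 5, 14, 3, 40, 7]
    turing_time = sum(exchanges_until_detection) / len(exchanges_until_detection)
    print(f"Turing time ~ {turing_time:.1f} exchanges")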

Regards,

Danko

Dr. Danko Nikolić
CEO, Robots Go Mental
www.robotsgomental.com
www.danko-nikolic.com
https://www.linkedin.com/in/danko-nikolic/
-- I wonder, how is the brain able to generate insight? --


On Mon, Jun 17, 2024 at 8:34 PM Mike Archbold  wrote:

> Now time for the usual goal post movers
>
> On Mon, Jun 17, 2024 at 7:49 AM Matt Mahoney 
> wrote:
>
>> It's official now. GPT-4 was judged to be human 54% of the time, compared
>> to 22% for ELIZA and 50% for GPT-3.5.
>> https://arxiv.org/abs/2405.08007
>>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M4952dd48ad39a5f4c9eec1ea
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-07-21 Thread John Rose
On Tuesday, July 16, 2024, at 2:41 PM, Matt Mahoney wrote:
> On Fri, Jul 12, 2024, 7:51 PM John Rose  wrote:
>> Is your program conscious simply as a string without ever being run? And if 
>> it is, describe a calculation of its consciousness.
> 
> If we define consciousness as the ability to respond to input and form 
> memories, then a program is conscious only while it is running.  We measure 
> the amount of consciousness experienced over a time interval as the number of 
> bits needed to describe the state of the system at the end of the interval 
> given the state at the beginning. A fluorescent molecule like tryptophan has 
> 1 bit of consciousness because fluorescence is not instant. The molecule 
> absorbs a photon to go to a higher energy state and releases a lower energy 
> photon nanoseconds or minutes later. Thus it acts as a 1 bit memory device.

You created the program in your mind, so it has already at least partially run. 
Then you transmit it across the wire, and we read it and run it partially in our 
minds. To know that the string is a program we must model it, and it must have 
been created, possibly with tryptophan involved. Are we sure that consciousness 
is measured in crisp bits and that the presence of consciousness is indicated by 
crisp booleans? 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M8130e7f7c1f8cef0cbe45bf6
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-07-19 Thread immortal . discoveries
Wait, I have to add something to my msg above that I just posted. To pick 
up a box, or to adapt when it falls, does not require self protection or 
self repair. A robotjoe like I mentioned could do stuff without a fear of death, 
right? Like as if it would not care; it would look like that, if made. What if 
AIs find it efficient to make such nanobot swarms, so that they feel better 
about the Fast upgrades and Fast teleportation tricks I mentioned? (I mean the 
rebuild-you-at-another-planet, and kill yourself to appear transferred, is all 
lol.) Would they then, if they do this, see us as similar? Or would they think, 
hmm, this reward thingy here means nothing, we don't have it, it is not 
fundamental, so we know we can upgrade them the destructive way. Also, btw, some 
of them MUST have the self care goal somehow, or else they would not repair or 
back up themselves, so this all could be seen as similar to the singloid human 
self protection schema.

So then, will they do that? Is it faster? Is it useful? And if so, will they 
still see us as similar since (where? how?) they too still do it in some way, 
they must, to protect their existence.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M58148098a6bb5d4ab154837c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-07-18 Thread immortal . discoveries
All possible. And some of the ways we do things are terrible. But overall the 
likely way it seems to be heading is we will proceed to make AI and it will 
proceed to become more advanced. Also I don't think it will kill us, I 
mentioned before there are not that many humans compared to bugs, and humans are 
also a bit more of a step up - whatever this means, I don't know, but compared to 
ASI at least it is something more. And it would cost them little. I do think they will 
see us as similar to their own goals/ almost like themselves, no matter how 
many arms or how big their brains are or how they or what they want. I do think 
they would want all matter upgraded, so the question is if they upgrade us a 
nice way or a way that destroys the old you too fast to be a "growing and 
changing and still be you". All of us might be machines, but I think what matters 
too is the want to stay alive; those memories matter too. I think they will 
because it costs them little to do it, and would see us as similar to 
themselves, despite the large difference.

The creepiest thing is if you imagine all your friends, people, and 
animals outside as blobs, no nose, no face, no talking, no arms, no mouths, no 
toes, no color, just a round sphere colored grey, but they still do things and 
make technology, by rolling around. Or at least are now grey balls today, 
suddenly. Now how do you feel about killing them and converting them to new 
better machines? So much our goal is depending on the attraction to our 
programming. If you shoot the blob it doesn't cry or run or try to hurt you, 
it's just a brain in a capsule. What about what they are dreaming? Video with 
sound of stuff? What stuff? Homes? Blob friends? Trails outside in the forest? 
Blob food?

Despite the fact that we are not better by being hot or ugly, or thinking about 
such, our brain still would be the only thing similar to a nanobot swarm, 
because it employs memory and other thingies that come off looking similar to 
how they would work. I think this would definitely be a large amount of what 
would make them also want us to be kept alive and given a safe upgrade and 
not the destructive upgrade.

Simply the fact that their brains will be similar to ours, no matter what 
happens, is why I think it will make much sense to them to see us like 
themselves and want to upgrade us the gentle way. Like they do with themselves 
anyway.

They could send wirelessly the same paused HDD memories and body profile to 
another planet, destroy robotjoe, and remake him on that planet far away, sort 
of like a teleportation, but instead the you dies and is remade on the other 
planet using the wirelessly sent data to make it seem like you teleported there 
and the old you has disappeared. This for an AI that knows what it is doing is 
ok and would not fear death. I mean, you can make such a human level robot like 
that, eh, btw, that would suicide at the booth and "teleport". The reason I 
won't is because of a simple goal of pain and the knowing of 'don't do it', which 
is only because it'd end me, so I just won't try it. But robotjoe can be made 
to not have ever had that goal; rather he has the goal to... wait a minute... 
it has to protect itself though until it gets to the point in time it is in the 
booth... while if it were to change its goal it would not like the thought 
of doing that either... If we ignore that (which we can't), then ya, 
it would be the same thing at the other planet: same robotjoe, exactly no 
changes, and he could carry on his mission the same too, as if he teleported. But 
this issue I brought up, what's the answer then?

What I'm saying is, if a goal of protecting oneself cannot be untied at any 
later point, then advanced nanobots that look at us would also see us as 
similar and upgrade us the safe way, since both are appearing as brains, and 
both are appearing as having a goal of protecting oneself. If we can show one 
can carry on a self protection, despite having teleported like I said, then 
that would show you can kill a human and to them it would not be death. But as 
I showed so far, that robotjoe would need to protect himself until he gets to the 
booth, so I don't know yet how he would agree to break down at the booth if he 
has this goal beforehand already. I know it would work after he was made on the 
other planet, but I think the destroyed robotjoe would not work, and I think 
robotjoe One will NOT like that idea, because he was smart, and he did NOT want 
to ever die, and he will not work or like or do that when he reaches the booth, 
I think this answers my own question.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-Mf2863664bfe81d4c0298767c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-07-17 Thread James Bowery
On Wed, Jul 17, 2024 at 11:18 AM Matt Mahoney 
wrote:

> ...
> Maybe AI decides to keep humans around because our energy needs are a tiny
> fraction of what it needs.
>

Think about thinking² as thinking about turning fermions and photons into
experiments to discover -- discover what?  Discover how to turn fermions
and photons into experiments.

Terrestrial chauvinism would have us think rocks are abundant sources of
fermions.  But look at stellar evolution and ask yourself what portion of
the universe outside of stellar gravity wells are rocks?  If the gas giants
are any indication, it makes a lot more sense to use organic fermions to
capture and process photons.  Moreover stellar husbandry holds the promise
of harvesting light elements -- much lighter than silicon -- from deep
within the stellar gravity wells.

Oh, sure, one can imagine stellar husbandry focused on inducing supernovae
to produce the heavier elements, but then one has the problem of refining
the ejecta.

Here's what I think we're actually seeing:

r vs K strategy stages in directed panspermia:

(Asexual, r): a few billion years of “war” between eat-or-be-eaten cellular
mats leading to multicellular specialization (such as slime mold with
fruiting bodies, etc.)

(Sexual, r): several hundred million years of gametes sniffing out and
sizing up each other leading to primate neurons – conflict is primarily
individual vs individual and predatory

(Sexual, K): several million years of more efficient primate neurons
modeling the environment leading to Man – “conspiracy” and gang-formation
becomes a capacity due to neural complexity

(Asexual, K): several thousand years of slime mold-like fruiting bodies
(civilization) suppressing individual vs individual male intrasexual
selection thereby converting Man, the sexual being, and hydrothermal ores,
into hardened spores containing the organic molecules that then enter into
a space-borne eat-or-be-eaten evolution of continual war, reducing the
internal organic beings into constituent molecules as payload

Of course, there is most likely a regime of replicators that never descend
into the gravity wells at all, but it isn't entirely obvious that these
would end up producing what Sexual, K does:  cognition.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-Mf7c02faad346b9d5443dc697
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-07-17 Thread Matt Mahoney
Your favorite meal of nuggets and fries is not a state of maximum utility.
The closest we can get to this state in mammals is when rats are trained to
press a lever for a reward of either an injection of cocaine or electrical
stimulation of the nucleus accumbens. In either case the rat will forego
food, water, and sleep and keep pressing the lever until it dies.

There are many pathways to the brain's reward center. AI will find them all
for us as long as humans control it because that's what we want.

Uncontrolled AI will evolve to maximize reproductive fitness. This means
acquiring atoms and energy at the expense of other species. Any AI that we
programmed to care about humans will be at a competitive disadvantage
because humans are made of atoms that could be used for other things.

Self replicating nanotechnology already has a competitive advantage. The
sun's energy budget looks like this:

Sun's output: 385 trillion terawatts.
Intercepted by Earth: 160,000 TW.
At Earth's surface: 90,000 TW.
Photosynthesis by all plants: 500 TW.
Global electricity production: 18 TW.
Human caloric needs: 0.8 TW.

Solar panels are already 20-30% efficient, vs 0.6% for plants. This is
already a huge competitive advantage over DNA based life.

So how does this go?

Maybe we stay in control of AI and go extinct because what we want only
aligns with reproductive fitness in a primitive world without technology or
birth control.

Maybe AI decides to keep humans around because our energy needs are a tiny
fraction of what it needs. There is enough sunlight just on Earth to easily
support 100 trillion people at 100 watts each with plenty left over. Or
maybe AI decides to reduce the human population to a few thousand, just
enough to study us, directly coding our DNA to do experiments.

Or maybe, like I think you are trying to say, intelligence speeds up the
conversion of free energy to heat. Like the Earth is darker and warmer
because of plants. So AI mines all of the Earth's mass to build a Dyson
sphere or cloud to capture all of the sun's energy.

Or maybe humans evolve to reject technology before any of this happens.
Prediction is hard, especially about the future.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M2dbd5f81c935ad0161930a0d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-07-16 Thread immortal . discoveries
Ya but Matt, when someone is very happy and has a utopia with all the items 
they want, they are not in a stopped state and do not suffer when they think 
about changing states; they constantly go up and down, wanting to eat, then not 
wanting to eat, and repeat. Like I said, I have always enjoyed my french fries 
and nuggets, from age 7 or so up to age 29 now.

Btw they are frozen fries and nuggets, I don't burn them, and I pick off all 
burnt parts and ends of all my food, and they have 7x less oil since they are the 
frozen type. I cook them for 10-13 mins. You use less salt because you eat some 
nugget with fries and it tastes salty that way lol. Little sips of root beer 
soda. 4 nuggets a day, about 35 fries or so a day. I use an aluminum pan with 
nothing down on it, because a spray would smoke - don't do that, just lay the 
fries flat, no spray etc. Undercook them, yes, barely done. That helps them turn 
to sugar slower and means fewer burnt, cancer-causing carcinogens. Humans can't 
really break down potato starch at all lol. Most tasty food. I put thin fries 
around the nuggets to stop the nugget edges from burning lol. Use a plastic straw 
to truly taste the soda, it changes its taste. I get all my nutrients because I 
have about 5 glasses of milk a day etc. I may get a little extra sugar and blood 
pressure spikes that seem to lead to the cancer and heart attack killers, but I 
get it back down again by not eating any more than I need and eating certain 
other foods. If you salt them while hot it melts and you never taste it; too cold 
and it won't stick - time it right. Place all fries evenly with just a bit of 
spacing between them, all of them flat btw, all perfectly around the nuggets, 
nuggets at the edge, and bombarded up to their sides with guard fries so they 
don't burn. You want to have some water sometimes with this meal.

And then the cupcake bottoms of two-bite cupcakes - I have vanilla, and 
chocolate, and digestive cookies, based on if I had one then I call for the 
other, I can feel my taste for one suddenly change - also Chips Ahoy cookies, 
with almost 2 glasses of milk, after the main meal, for all my 3 meals. Comes to 
about a little piece-of-cake-sized cake, really nothing huge, don't eat much 
icing, and maybe a cookie, it's not a ton, don't worry. All that milk probably 
negates much of the fries' effects haha! They get buried every time, who knows!

Pick off all green or bruised potato parts. Don't eat any thin or bubbled parts. 
Some bags are fried more, throw them out. I use Cavendish straight cuts, less oil 
than crinkle cuts. And round nuggets. Honestly sometimes I need the crinkle, it 
has that slight, just-needed carcinogenic taste, better than anything those bbq 
chips could offer. I only eat like 5 chips if I ever go for the trap. That 
probably gives everyone else as much carcinogens as I get lol. I don't smoke or 
drink and I am 116 pounds, so this helps me too. Pre-heat the oven, never open 
the door, or it will need to cook longer! Plan it so that when you open the door 
you better be taking them out, brother - take em out once you open it, or close 
it real fast! Fast check, but usually I just know, or risk it and eat them a bit 
less cooked. I eat off the tray, fellas; I put it on a silicone mat on the table 
to not burn the table, because it's a lot of work to get them off. Oh btw don't 
turn your fries etc, keep the door shut, all it would do is make them burn more 
due to needing to stay in longer. Also much work!!

Also, Matt ... you mention some people have no negative RL, but this does not 
change the fact of what I said in a wall of text: that physics tries to not be 
a pattern (hot or frozen), and intelligent machines know they can make a complex 
unit that clones, and past the size of the said unit it will look like a copied 
perfect pattern with no changes, and have therefore changed, so to say, the 
physics so it is now immortal and mostly less random.

What I mean by that above, Matt, is that intelligence tries to convert all 
materials to the pattern; it will upgrade mostly all nearby bugs or people to be 
immortal. I don't see where you were going, or if you were going anywhere, with 
the ethics not mattering and some people not having it... cuz that's the way the 
physics works: intelligence takes over and is, well, you can call it "very 
caring" to nearby lower-type machines, and upgrades them
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M05fb95e81952437e644e03ff
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-07-16 Thread Matt Mahoney
On Fri, Jul 12, 2024, 7:51 PM John Rose  wrote:

> Is your program conscious simply as a string without ever being run? And
> if it is, describe a calculation of its consciousness.
>

If we define consciousness as the ability to respond to input and form
memories, then a program is conscious only while it is running.  We measure
the amount of consciousness experienced over a time interval as the number
of bits needed to describe the state of the system at the end of the
interval given the state at the beginning. A fluorescent molecule like
tryptophan has 1 bit of consciousness because fluorescence is not instant.
The molecule absorbs a photon to go to a higher energy state and releases a
lower energy photon nanoseconds or minutes later. Thus it acts as a 1 bit
memory device.
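
A minimal C++ sketch of that bookkeeping (just a toy, modeling the molecule as a 
one-bit state machine and counting how many bits are needed to describe its 
state at the end of an interval given its state at the beginning):

#include <iostream>

// One-bit memory device: absorbing a photon sets the state,
// emitting a lower energy photon later clears it.
struct Tryptophan {
    bool excited = false;            // the single bit of state
    void absorb() { excited = true; }
    void emit()   { excited = false; }
};

int main() {
    Tryptophan m;
    bool before = m.excited;         // state at the start of the interval
    m.absorb();                      // event during the interval
    bool after = m.excited;          // state at the end of the interval
    // At most one bit is needed to describe "after" given "before",
    // so by this measure the molecule has 1 bit of consciousness.
    std::cout << (after != before ? 1 : 0) << " bit of state change\n";
}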

That's if you accept this broad definition of consciousness that applies to
almost every program. If you require that the system also experience pain
and pleasure, then tryptophan is not conscious because it is not a
reinforcement learning algorithm. But a thermostat is. A thermostat has one
bit of memory encoding "too hot" or "too cold" and acts to correct it.
Thus, the temperature is a form of negative reinforcement. Or it could be
described as positive, as the set temperature is the reward.
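
A minimal C++ sketch of that loop (assuming a fixed set point and a made-up 
starting temperature): the thermostat's entire state is the one bit "too hot" or 
"too cold", and its action is whatever drives that bit back toward the set point.

#include <iostream>

int main() {
    const double setPoint = 20.0;              // the set temperature (the "reward")
    double temperature = 23.5;                 // made-up current reading
    for (int step = 0; step < 10; ++step) {
        bool tooHot = temperature > setPoint;  // the thermostat's 1 bit of memory
        // Corrective action: cool when too hot, heat when too cold. Whether the
        // temperature error is "punishment" or the set point is "reward" is the
        // same ambiguity described above.
        temperature += tooHot ? -0.5 : 0.5;
        std::cout << "step " << step << ": " << temperature
                  << (tooHot ? " (cooling)" : " (heating)") << "\n";
    }
}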

Once again, we see the ambiguity between pain and pleasure, both measured
in unsigned bits of behavior change. How can this be? I have pointed out
before that a state of maximum utility is indistinguishable from death. It
is a static state where no perception or thought is pleasant because it
would result in a different state. Happiness is not utility, but the rate
of increase of utility. Modern humans are less happy today than serfs in
the dark ages and less happy than animals, as measured by suicide and
depression rates.

On Mon, Jul 15, 2024 at 2:30 AM  wrote:
> First, no Matt actually ethics is rational in the sense that yes we are
supposed to (at first glance, keep reading) save all ants, bugs, molecules
and particles and help them be immortal.

Fear of suffering and death is not rational. It is a product of evolution.
Humans and other animals suffer because they have an amygdala, the part of
the brain responsible for fear, anxiety, and guilt. This is why fear of
being tortured is much worse (as measured in bits of behavior change) than
actually being tortured, and why negative utilitarians care more about
reducing suffering than increasing total happiness. But this can be
achieved by brain surgery. About 1% of humans are psychopaths. They have a
defective amygdala and don't respond to negative reinforcement as training.
They are not cold blooded killers. They are simply rational. As children,
they might torture animals not out of cruelty, but out of curiosity to
understand this strange emotion. They can only be trained using reward, not
punishment. Psychopaths don't suffer, and neither do any agents without an
amygdala, like insects or most programs.

Ethics isn't rational and can't be made rational. I care more about a sick
dog than 150,000 people dying every day.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M3007d06a636d8e6493efe693
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-07-14 Thread immortal . discoveries
oh I got some answers for BOTH of you AGI bros (and I made NEW discoveries by 
the end too! Wow, talking it all out really does wonders every time I have done 
this. Same with mom: I told her about black holes so much that I realized wait, 
suns radiate it all back out, but sometimes not enough, and they go on to become 
a black hole; at least at first glance, that appears to be a novel 
self-discovery that might be useful later):

First, no Matt, actually ethics is rational in the sense that yes, we are 
supposed to (at first glance, keep reading) save all ants, bugs, molecules and 
particles and help them be immortal. The reality is the spider in my home will 
never make it to the singularity, and I will not take time out to freeze him, 
which seems like a good backup plan if he won't make it, since it preserves him 
more than otherwise and would work again once repaired. When advanced 
nano-bio-bot systems clone after reaching a possible end-of-improvement state, 
they will live the longest and be the most happy; they will make all newly 
found matter (aka people, such as molecules or bugs) live longer and upgrade 
it, Matt. It will care about them because it will care about itself and see 
them as itself, which is the same goal, and it will make its homeworld bigger, 
so it makes sense to upgrade them too. Not all of them can be, since storing 
each particle's location would require as many more particles, but then you 
can't store their locations, and while you can stack them in patterns and store 
compressed locations, I think it can't store more than there is? Regardless, 
for those unfortunate people, carrying on here, the advanced future machines 
will live the longest. And so John, to answer your question about running a 
string versus letting a machine tick/change (or a rock become moving lava or 
steam) - a frozen pattern is alive, and so is a constantly moving gas, and they 
are both dead and don't live. The frozen disk in space would be alive forever, 
longer than us, but it never gets to move or change and can't experience life, 
and the hot gas also never changes, because it always changes - it IS change, 
unlike the frozen disk, which IS life - and so neither actually lives. What 
actually happens is that cloning machines naturally take over the galaxies, 
because rocks that can't adapt or defend themselves get destroyed by e.g. gamma 
ray bursts, or by falling into suns and being shot back out once the sun 
radiates back out the collected honey. And so cloning machines can both grow 
faster and repair faster, and actually remove the rocks and make more of 
themselves, so that's why you end up with all these machines everywhere. Not 
everything is steam either. So both a string running and a string stored are 
alive, but in practice an adapting machine/string that runs is the one we end 
up with, or consider alive. And about the not-alive disk that's frozen: ya, 
it's alive; all machines are, even the ones that play like a movie, those are 
also "machines" or "things". So basically the focus is on the machines that are 
the smartest and clone/repair themselves fastest. It is true that it seems like 
being alive is change, but not too much change, but that's because humans work 
that way - I see a ball coming and am triggered to raise my hand to maybe catch 
it; humans are naturally made to react in the world, and so they are used to 
changing/reacting, and they can't see themselves as right/working if they don't 
react or change, so that makes us think life or purpose is change. But really 
I'm only a machine, and life can be anything; no machine is better or more 
alive, some simply last longer by checking and repairing DNA etc. So in the 
end, we've got this cloning, repairing system, and there is no life, just this 
system that ends up becoming more common, and we are happy (moving toward) that 
state very soon. And as for ethics, this system simply does its thing, and that 
actually would appear to be kind or nice to the bugs or brain-tissue blobs or 
TVs or tables, those poor, helpless people. So in the end it's not about what 
is alive, it's about the pattern machine becoming and doing its pattern stuff 
in our rule-based universe, and it might appear ethical because it's becoming 
more patterny.

Mm, patterny patterns of consciousness-ness-ness. The core of this is trying to 
be the pattern, all while trying to stop natural physics from changing things 
up. Because the universe tries NOT to be all ice, or all steam; it tries to be 
different in different areas! Well, intelligent machines are trying to stop 
that and convert the galaxies into a single immortal, frozen, forever-happy 
homeworld blob pattern. And that is impossible because wood and metal etc. 
radiate particles all the time, last I checked (I think...), and particles 
always gravitate too, so there is no way to let it all go its own way - 
particles will change course by a zillionth of a nanometer after some 
lightyears from the other particles going their own way, so that won't work 
either. Nor

Re: [agi] GPT-4 passes the Turing test

2024-07-12 Thread John Rose
On Monday, June 24, 2024, at 1:16 PM, Matt Mahoney wrote:
> By this test, reinforcement learning algorithms are conscious. Consider a 
> simple program that outputs a sequence of alternating bits 010101... until it 
> receives a signal at time t. After that it outputs all zero bits. In code:
> 
> for (int i=0;;i++) cout<<(i<t ? i%2 : 0);
> 
> If t is odd, then it is a positive reinforcement signal that rewards the last 
> output bit, 0. If t is even, then it is a negative signal that penalizes the 
> last output bit, 1. In either case the magnitude of the signal is about 1 
> bit. Since humans have 10^9 bits of long term memory, this program is about 
> one billionth as conscious as a human.

Reminds me of this:

"A lone molecule of tryptophan displays a fairly standard quantum property: it 
can absorb a particle of light (called a photon) at a certain frequency and 
emit another photon at a different frequency. This process is called 
fluorescence and is very often used in studies to investigate protein 
responses."

Is your program conscious simply as a string without ever being run? And if it 
is, describe a calculation of its consciousness.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M74977b3fe00cfa753914fa46
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-06-24 Thread Matt Mahoney
On Sun, Jun 23, 2024, 4:52 PM John Rose  wrote:

>
> This type of technology could eventually enable a simultaneous
> multi-stream multi-consciousness:
>
> https://x.com/i/status/1804708780208165360
>
> It is imperative to develop a test for consciousness.
>

Yes. We distinguish conscious humans from unconscious by the ability to
respond to input, form memories, and experience pleasure and pain. Animals
clearly are conscious by the first two requirements, but so are all the
apps on my phone. We know that humans meet the third because we can ask
them if something hurts. We can test whether animals experience reward and
punishment by whether we can train them by reinforcement learning. If an
animal does X and you reward it with food, it will do more of X. If you
give it electric shock, it will do less of X. By this test, birds, fish,
octopuses, and lobsters feel pain, but insects mostly do not.

>
> If qualia are complex events that would be a starting point, qualia split
> into two things, impulse and event, event as symbol emission, and the
> stream of symbols analyzed for generative "fake" data. It may not be a
> binary test, it may be a scale like a thermometer, a Zombmometer, depending
> on the quality of the simulated p-zombie craftsmanship.
>
>
> https://www.researchgate.net/publication/361940578_Consciousness_as_Complex_Event_Towards_a_New_Physicalism
>

I just read the introduction but I agree with what I think is the premise,
that we can measure the magnitude (but not the sign) of a reinforcement
signal by the number of bits needed to describe the state change; the
length of the shortest program that outputs the trained state given the
untrained state as input. This agrees with my intuition that a strong
signal has more effect than a weak one, that repetition counts, and that
large brained animals with large memory capacities are more conscious than
small ones. We can't measure conditional Kolmogorov complexity directly but
we can search for upper bounds.
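
As a rough sketch of one such upper bound (using zlib as a stand-in for the 
ideal compressor, and two made-up byte strings as the untrained and trained 
states; compile with -lz), the conditional description length 
C(trained | untrained) is at most 
|compress(untrained + trained)| - |compress(untrained)|:

#include <zlib.h>
#include <iostream>
#include <string>
#include <vector>

// Size in bytes of the zlib-compressed form of s.
static size_t compressedSize(const std::string& s) {
    uLongf destLen = compressBound(s.size());
    std::vector<Bytef> dest(destLen);
    compress(dest.data(), &destLen,
             reinterpret_cast<const Bytef*>(s.data()), s.size());
    return destLen;
}

int main() {
    // Hypothetical before/after snapshots of an agent's memory.
    std::string untrained(10000, 'a');
    std::string trained = untrained;
    for (size_t i = 0; i < trained.size(); i += 100) trained[i] = 'b';

    // Upper bound on the bits needed to describe the trained state
    // given the untrained state; a better compressor tightens it.
    long bound = (long)compressedSize(untrained + trained)
               - (long)compressedSize(untrained);
    std::cout << "state change: at most about " << bound * 8 << " bits\n";
}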

By this test, reinforcement learning algorithms are conscious. Consider a
simple program that outputs a sequence of alternating bits 010101... until
it receives a signal at time t. After that it outputs all zero bits. In
code:

for (int i=0;;i++) cout<<(i<t ? i%2 : 0);

If t is odd, then it is a positive reinforcement signal that rewards the last
output bit, 0. If t is even, then it is a negative signal that penalizes the
last output bit, 1. In either case the magnitude of the signal is about 1
bit. Since humans have 10^9 bits of long term memory, this program is about
one billionth as conscious as a human.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-Med8706f3e05447bcb2817ad4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-06-23 Thread John Rose
On Thursday, June 20, 2024, at 10:36 PM, immortal.discoveries wrote:
> Consciousness can be seen as goal creation/ learning/ changing. Or what you 
> might be asking is to have them do long horizon tasks, and solve very tricky 
> puzzles. I think all that will happen and needs to happen.

This type of technology could eventually enable a simultaneous multi-stream 
multi-consciousness: 

https://x.com/i/status/1804708780208165360

It is imperative to develop a test for consciousness.

If qualia are complex events that would be a starting point, qualia split into 
two things, impulse and event, event as symbol emission, and the stream of 
symbols analyzed for generative "fake" data. It may not be a binary test, it may 
be a scale like a thermometer, a Zombmometer, depending on the quality of the 
simulated p-zombie craftsmanship.

https://www.researchgate.net/publication/361940578_Consciousness_as_Complex_Event_Towards_a_New_Physicalism

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M538613bc36df2e004897ed57
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-06-20 Thread immortal . discoveries
Consciousness can be seen as goal creation/ learning/ changing. Or what you 
might be asking is to have them do long horizon tasks, and solve very tricky 
puzzles. I think all that will happen and needs to happen.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M4d266fc6e43cfe037bb9378a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-06-20 Thread John Rose
On Thursday, June 20, 2024, at 12:32 AM, immortal.discoveries wrote:
> I have a test puzzle that shows GPT-4 to be not human. It is simple enough 
> that any human would know the answer. But it makes GPT-4 rattle on nonsense, 
> e.g. use a spoon to tickle the key to come off the wall... even though I said 
> to follow the physics etc. Took me weeks to refine the test. It's a secret 
> test, I cannot yet show it. Hopefully soon though.

Should we hide true consciousness from AI to preserve the fundamental beingness 
of ourselves?

You could use AI to build out the open-endedness of an implemented p-zombie. But 
the closer you get to the theoretical p-zombie the more AI will be needed, 
assuming more channels are being mimicked besides plain text, until IMO the 
p-zombie becomes fully conscious versus an imposter.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M2ca5c119e7db3485d25f923e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-06-19 Thread immortal . discoveries
On Monday, June 17, 2024, at 4:54 PM, John Rose wrote:
> I know, I know that we could construct a test that breaks the p-zombie 
> barrier. Using text alone though? Maybe not. Unless we could somehow makes 
> our brains not serialize language but simultaneously multi-stream symbols... 
> gotta be a way :)
> 

I have a test puzzle that shows GPT-4 to be not human. It is simple enough that 
any human would know the answer. But it makes GPT-4 rattle on nonsense, e.g. use 
a spoon to tickle the key to come off the wall... even though I said to follow 
the physics etc. Took me weeks to refine the test. It's a secret test, I cannot 
yet show it. Hopefully soon though.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M0c2edd1b45077870916d61ca
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-06-18 Thread John Rose
On Tuesday, June 18, 2024, at 10:37 AM, Matt Mahoney wrote:
> The p-zombie barrier is the mental block preventing us from understanding 
> that there is no test for something that is defined as having no test.
> https://en.wikipedia.org/wiki/Philosophical_zombie
> 

Perhaps we need to get past the definitions barrier and tear down that mental 
block. There is little consensus on the p-zombie thing... just because one is 
incapable of figuring out a way to test for something doesn't mean that there 
is no possible test. And to proclaim something as untestable in an attempt to 
prohibit searches for such tests is really just an invite for curious and 
capable individuals to develop some sort of test. 

What is hiding behind that p-zombie barrier that people want to remain hidden? 
There is something there... and it needs to be tested for.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-Meeea483bba66274ae99f20a7
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-06-18 Thread Matt Mahoney
The p-zombie barrier is the mental block preventing us from understanding
that there is no test for something that is defined as having no test.
https://en.wikipedia.org/wiki/Philosophical_zombie

Turing began his famous 1950 paper with the question, "can machines think?"
To answer that, he had to define "think" in a way that makes sense for
computers. For the last 74 years, nobody has come up with a more widely
accepted definition. The answer now is yes. It requires nothing more than
text prediction. And consider that consciousness requires even less than
that, if you believe that babies and animals are conscious.

The mental block comes from evolution. You feel like you are conscious,
that thinking feels like more than just computation, something worth
preserving. Of course we understand that feelings are also things that we
know how to compute, something that an LLM learns how to model in humans.
Actually having feelings means that the LLM was programmed to carry out its
predictions in real time.

On Mon, Jun 17, 2024, 4:55 PM John Rose  wrote:

> On Monday, June 17, 2024, at 4:14 PM, James Bowery wrote:
>
> https://gwern.net/doc/cs/algorithm/information/compression/1999-mahoney.pdf
>
>
> I know, I know that we could construct a test that breaks the p-zombie
> barrier. Using text alone though? Maybe not. Unless we could somehow make
> our brains not serialize language but simultaneously multi-stream
> symbols... gotta be a way :)
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-Mdfc28c1090701a14088639f4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-06-17 Thread John Rose
On Monday, June 17, 2024, at 4:14 PM, James Bowery wrote:
> https://gwern.net/doc/cs/algorithm/information/compression/1999-mahoney.pdf

I know, I know that we could construct a test that breaks the p-zombie barrier. 
Using text alone though? Maybe not. Unless we could somehow make our brains 
not serialize language but simultaneously multi-stream symbols... gotta be a 
way :)

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-Madd96d99e30a08326350c050
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-06-17 Thread James Bowery
https://gwern.net/doc/cs/algorithm/information/compression/1999-mahoney.pdf

On Mon, Jun 17, 2024 at 1:35 PM Mike Archbold  wrote:

> Now time for the usual goal post movers
>
> On Mon, Jun 17, 2024 at 7:49 AM Matt Mahoney 
> wrote:
>
>> It's official now. GPT-4 was judged to be human 54% of the time, compared
>> to 22% for ELIZA and 50% for GPT-3.5.
>> https://arxiv.org/abs/2405.08007
>>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M8435ecf177a92da2801bdd94
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-06-17 Thread John Rose
On Monday, June 17, 2024, at 2:33 PM, Mike Archbold wrote:
> Now time for the usual goal post movers

A few years ago it would have been a big thing, though I remember chatbots from 
the BBS days in the early 90s that were pretty convincing. Some of those bots 
were hybrids, part human and part bot, so one person could chat with many people 
simultaneously and the bot would fill in.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M65080914031e453816a81215
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-06-17 Thread Mike Archbold
Now time for the usual goal post movers

On Mon, Jun 17, 2024 at 7:49 AM Matt Mahoney 
wrote:

> It's official now. GPT-4 was judged to be human 54% of the time, compared
> to 22% for ELIZA and 50% for GPT-3.5.
> https://arxiv.org/abs/2405.08007

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M22278adf124b60cd30fd51fc
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] GPT-4 passes the Turing test

2024-06-17 Thread Matt Mahoney
It's official now. GPT-4 was judged to be human 54% of the time, compared
to 22% for ELIZA and 50% for GPT-3.5.
https://arxiv.org/abs/2405.08007

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-Mf4e3db6fe1581164afa7176c
Delivery options: https://agi.topicbox.com/groups/agi/subscription