Re: [agi] Visualizing Quantum Circuit Probability: Estimating Quantum State Complexity for Quantum Program Synthesis

2023-07-25 Thread John Rose
On Tuesday, July 25, 2023, at 10:10 AM, Matt Mahoney wrote:
> Consciousness is what thinking feels like, the positive reinforcement that 
> you want to preserve by not dying.

I use another definition, but taking yours, we could say that the Universe is 
simulating itself. Through all the compression simultaneously occurring on 
Earth, for example, improving over time, the Universe is modelling itself by 
utilizing the renewing graph of human brain nodes. Why is it doing that? 
Because the Universe is thinking, it feels good, and it is self-reflecting. 
Though at times it gets angry, wants to know its own K (Kolmogorov) complexity, 
and throws rocks at Earth, so we need to appease it once in a while with better 
compression ratios.
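
A minimal sketch of the compression-ratio idea, assuming Python and the 
standard lzma module (the sample data and function names are mine, not from 
the post): a lossless compressor's output size is a computable upper bound on 
K complexity, and a better compression ratio means a tighter bound.

    import lzma

    def k_upper_bound_bits(data: bytes) -> int:
        # Compressed size in bits: an upper bound on K(data), up to the
        # constant size of a decompressor, since K itself is uncomputable.
        return 8 * len(lzma.compress(data, preset=9))

    def compression_ratio(data: bytes) -> float:
        # Original size over compressed size; higher means a better model.
        return len(data) / len(lzma.compress(data, preset=9))

    sample = b"the universe models itself " * 1000  # toy, redundant data
    print("upper bound on K, in bits:", k_upper_bound_bits(sample))
    print("compression ratio:", round(compression_ratio(sample), 2))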

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb2574dcac5560d73-M0474922c6ce34291249c2b52
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Visualizing Quantum Circuit Probability: Estimating Quantum State Complexity for Quantum Program Synthesis

2023-07-25 Thread Matt Mahoney
On Mon, Jul 24, 2023, 6:31 AM John Rose  wrote:

> On Sunday, July 23, 2023, at
> I’m not convinced though the brain isn’t performing quantum computation…
> we don’t know enough to say that do we? And I’m not in the camp that says
> one human brain is a general intelligence. The graph of human brains is IMO
>

Penrose seems to think so. He observed quantum superposition in the structure
of microtubule proteins in neurons (as in molecules throughout chemistry) and
makes the giant leap from there to explaining consciousness, because he has no
other good explanation for why the brain does things that he believes can never
be done by computers. It would help if he defined consciousness, but of course
you can't, because the definition of a p-zombie is logically inconsistent.

Consciousness is what thinking feels like, the positive reinforcement that
you want to preserve by not dying.

The evidence that the brain is not a quantum computer is the success of AI
using neural networks.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb2574dcac5560d73-Mc53ed4b5f7d84c2f9fed4a7b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: The extinction tournament

2023-07-25 Thread Matt Mahoney
The goal of evolution is to maximize population growth. Your goal is
immortality because animals that fear death, even though they die anyway,
leave more offspring.
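
A toy illustration of that selection argument (my own sketch, not something
from this thread; the population size and death rates are made up): a heritable
trait that makes individuals avoid death spreads through the population.

    import random

    random.seed(0)
    pop = [False] * 500 + [True] * 500   # True = fears death and avoids risks

    for generation in range(50):
        # Risk-averse individuals are less likely to die before reproducing.
        survivors = [fearful for fearful in pop
                     if random.random() > (0.10 if fearful else 0.30)]
        # Next generation: offspring resampled from survivors (constant size).
        pop = random.choices(survivors, k=1000)

    print("fraction that fears death:", sum(pop) / len(pop))  # approaches 1.0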

On Tue, Jul 25, 2023, 4:50 AM  wrote:

> On Monday, July 24, 2023, at 11:05 PM, Matt Mahoney wrote:
>
> 4. Reproduction is a requirement for life, not for intelligence. Not
> unless you want to measure reproductive fitness as intelligence like
> evolution does. Is gray goo our goal?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T91a503beb3f94ef7-M447545b70a1fe0ee2a319caf
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: The extinction tournament

2023-07-25 Thread immortal . discoveries
On Monday, July 24, 2023, at 11:05 PM, Matt Mahoney wrote:
> 4. Reproduction is a requirement for life, not for intelligence. Not unless 
> you want to measure reproductive fitness as intelligence like evolution does. 
> Is gray goo our goal?
It takes the longest, but it is the most accurate measure of how smart we are. 
That is because the end goal is immortality and the persistence of some future 
machine, so if we really are smart and succeed, this score should be high. 
However, if brains are smart enough to discover knowledge or functions that 
make them score better on prediction tests but don't improve immortality, then 
technically they can be smarter without increasing the immortality test's 
score. *Do we even "need" brains to spend energy on things that do not improve 
lifespan or fleet size? We would not want that, actually.*

GPT-4 right now can earn 500 USD for junior programming jobs. The money test 
for AI is one we can run today; until recently there was no AI like GPT-4 that 
could generate solutions for paid jobs, only very narrow programs. One 
well-known figure (I forget who; a Google CEO or something like that) predicts 
that within two years an AI will bring in a million dollars after devising a 
research plan and selling something all on its own. *I imagine GPT-5 could be 
asked to "make a new programming language easier than C++ but better than C++, 
and easier than Python," and from just that prompt it would carry out all the 
steps on its own (a ton of little problems it can solve itself), burn through 
nights and days (possibly just hours or even minutes in its fast brain) 
tirelessly, and hand back the whole piece of software.*

Generation of senses and actions. We already know this is a fun and useful 
measure, but it is only for humans comparing themselves to AIs; AIs comparing 
themselves to other AIs they made no longer need it, as it is too subjective.

Lastly, prediction score: not comparable to us, but to other AIs, and a very 
solid way to improve AIs. *However, it doesn't tell us in the long term whether 
the AI really worked, or whether it made things higher resolution or more 
coherent, etc. Other tests are needed for that.*



I tend to feel that as we go up the four paragraphs above, the measurement 
horizon gets longer, no? Money takes longer to measure: you need to do many, 
many steps and then sell the final product! So way 3 takes longer, but not as 
long as way 4 at the top, which is much longer to wait for. Prediction score is 
fast, but it doesn't seem to tell you much about what your AI actually 
produces. We currently use the first (bottom) way the most to make AI, I'd 
imagine.

Which test should we use to test for AGI? Way 1 can't, unless Matt is really 
sure that humans can compress text to 1 bpc, as he said. Ways 2 and 3 already 
show we are close-ish to human level.
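
For reference, a rough sketch of how a bits-per-character (bpc) figure is 
measured, assuming Python's lzma module and a hypothetical text file named 
sample.txt (Shannon's estimate for humans predicting English text is around 
1 bpc; a general-purpose compressor like LZMA usually lands nearer 2 bpc on 
large English text):

    import lzma

    def bits_per_character(text: str) -> float:
        # Compressed size in bits divided by the number of characters.
        compressed = lzma.compress(text.encode("utf-8"), preset=9)
        return 8 * len(compressed) / len(text)

    with open("sample.txt", encoding="utf-8") as f:  # any large English text
        print(round(bits_per_character(f.read()), 2))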

Have to think on this more, but for now: what should we really look for? What 
is the real thing we want? Not only do we want AI to work on AI, we also know 
that only smart humans can work on AI. So maybe the test should be whether we 
can make an AI that can work on AI, using the evaluations I listed above; if we 
can, it must be human-level AI, and it would also be what we want to see 
happen, too, hehe :).
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T91a503beb3f94ef7-Mb549e6c37e9761668fda9896
Delivery options: https://agi.topicbox.com/groups/agi/subscription