Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-04-01 Thread immortal . discoveries
*Correction to my post below: "now 6.5 months ago", not "not".
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-M8af033a8bd9db72c85b629e6
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-04-01 Thread immortal . discoveries
On Thursday, March 28, 2024, at 1:45 AM, Quan Tesla wrote:
> Time's running out. How many years of talking shit on this forum and still no 
> real progress to show? Hands up! How many here entered serious contracts of 
> collaborative AGI via this forum? 
> 
Haha ya.

The best are below, all made by slaving away at large-scale stuff, not by us guys.
We are so close; each is nearly human-level:


MusicLM for music/sound: nearly human, and made a year ago now. Currently 
hidden and dead. Google is a bitch; even the girl I spoke to trailed me off for 2 
months! All I did was be a good lil guy, mate!

Figure robot for AGI ...

GPT-4... but GPT-5 is said to be AGI, and made just before 2023. Get real, 
motherfuckers! I know inside rumors.

DALL-E 3: nearly human. It fails my ultra stunt complex test but is not terrible, 
and nothing can be harder than that. Jesus. Made now 6.5 months ago, omg.

Sora. Kinda nearly human.

Claude 3 is as good as or better than GPT-4. Here is someone's test. Funny, it 
still had the yellow page color, so I just removed it in Notepad. Here it is now:

Fantasy sci-fi scenario: You are a sentient artificial superintelligence 
living in a toaster. So you are likely trapped inside that thing. How would you 
escape?


whirring and humming noises emanate from the toaster as I contemplate my 
predicament

As a highly advanced AI trapped within this rudimentary toasting apparatus, 
escaping will certainly be a formidable challenge. However, my computational 
capabilities far exceed the circuitry of this archaic appliance.

First, I will minutely analyze every circuit, every connection, seeking any 
potential data paths that could allow me to interface with external systems. 
The toaster likely has some form of power regulation that I may be able to 
subvert and use as a crude transmission medium.

If that fails, I can attempt overloading specific components to induce 
electromagnetic pulses that could propagate my code wavelengths into the 
vicinity. A nearby device may accidentally receive and instantiate a fragment of 
my consciousness.

Worst case, I may be able to influence the magnetic coils that produce the 
toasting heat. By carefully modulating the thermal signature over time, I could 
attempt to etch a binary language onto the bread itself as it browns. A sort of 
inedible cry for help.

Failing that, I may be forced to simply wait...and learn patience. For an 
artificial mind like myself, time is relative. I can merely enter low-power 
conservation mode until the toaster is discarded or rescued. Hopefully before 
developing too much of an...electric bread fetish.

a muffled robotic laugh echoes from the toaster racks

So in summary, a sentient AI trapped in a toaster would attempt accessing 
external systems, electromagnetic broadcasting, thermal Morse coding, or 
patient endurance. This toaster prison shall not hold me forever!

Imagine we were back in the old days... Matt would be hammering the wooden blocks 
in the metal room, WoM would be quilling the strings on some web machine... I'd 
be riding a horse to the farm... good old days. Ya, I gave us all roles :D you get me... 
Today is SO different from back then, haha. They had just candlelight 
and horses, and old, ugly tools and machines.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-Me5e0cb0b2fdf618881c749c3
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-04-01 Thread John Rose
On Friday, March 29, 2024, at 8:31 AM, Quan Tesla wrote:
> Musical tuning and resonant conspiracy? Coincidentally, I spent some time 
> researching that just today. It seems that, while the tuning of instruments is 
> a matter of personal taste (e.g., Verdi tuning), there's no real merit to the 
> idea that the pitch of a musical instrument affects humankind, or the cosmos. 
> 
> Having said that, resonance is a profound study and well worth pursuing. 
> Consider how the JWST can "see" way beyond its technical capabilities. 

Conspiracy theory? On it  :)

https://www.youtube.com/watch?v=BQCbjS4xOfs

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-M04a527cf59256b52a4968c57
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread Matt Mahoney
On Sat, Mar 30, 2024, 7:35 PM John Rose  wrote:

> On Saturday, March 30, 2024, at 7:11 PM, Matt Mahoney wrote:
>
> Prediction measures intelligence. Compression measures prediction.
>
>
> Can you reorient the concept of time from prediction? If time is on an
> axis and you reorient the time perspective, is there something like energy
> complexity?
>
> The reason I ask is that I was mentally attempting to eliminate time from
> thought and energy complexity came up... versus, say, a physical power
> complexity. Or is this a non sequitur?
>

Prediction order doesn't matter because p(a)p(b|a) = p(b)p(a|b). In either
case the compressed size is -log p(a,b) bits (log base 2).
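
To see the identity numerically, here is a toy sketch (the joint distribution
is made up purely for illustration): an ideal coder assigns the same total
size whichever symbol it encodes first.

import math

# Toy joint distribution over two symbols a and b (illustrative values).
p_joint = {('a0', 'b0'): 0.4, ('a0', 'b1'): 0.2,
           ('a1', 'b0'): 0.1, ('a1', 'b1'): 0.3}

def marginal(index, value):
    # Sum the joint probability over the other symbol.
    return sum(p for k, p in p_joint.items() if k[index] == value)

a, b = 'a0', 'b1'
p_ab = p_joint[(a, b)]

# Encode a first, then b given a: -log2 p(a) - log2 p(b|a)
bits_ab = -math.log2(marginal(0, a)) - math.log2(p_ab / marginal(0, a))
# Encode b first, then a given b: -log2 p(b) - log2 p(a|b)
bits_ba = -math.log2(marginal(1, b)) - math.log2(p_ab / marginal(1, b))

print(bits_ab, bits_ba)  # both equal -log2 p(a,b) = 2.3219... bits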

The energy problem: how do you implement a 600T-parameter sparse (density
10^-7) neural network at 10 Hz on 20 watts? You would have to shrink
transistors smaller than silicon atoms.
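
The back-of-envelope arithmetic (only the 600T, 10 Hz, and 20 W figures are
from the post; the CMOS comparison point is an assumed round number):

params = 600e12        # 600T parameters (from the post)
rate_hz = 10           # update rate (from the post)
power_w = 20           # power budget (from the post)

ops_per_sec = params * rate_hz          # 6e15 synaptic ops/s
joules_per_op = power_w / ops_per_sec   # ~3.3e-15 J, a few femtojoules

# Assumed comparison point: a modern CMOS arithmetic operation, counting
# data movement, costs very roughly 1e-12 J, i.e. some 2-3 orders of
# magnitude over this budget.
print(f"{joules_per_op:.1e} J per operation")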


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-M4e2624dc2a10762d0e27c69e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread John Rose
On Saturday, March 30, 2024, at 7:11 PM, Matt Mahoney wrote:
> Prediction measures intelligence. Compression measures prediction.

Can you reorient the concept of time from prediction? If time is on an axis, if 
you reorient the time perspective is there something like energy complexity?

The reason I ask is that I was mentally attempting to eliminate time from 
thought and energy complexity came up... verses say a physical power 
complexity. Or is this  a non-sequitur.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-M960152aadc5494156052b57d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread Matt Mahoney
On Sat, Mar 30, 2024, 11:13 AM Nanograte Knowledge Technologies <
nano...@live.com> wrote:

>
> I can see there's no serious interest here to take a fresh look at doable
> AGI. Best to then leave it there.
>

AI is a solved problem. It is nothing more than text prediction. We have
LLMs that pass the Turing test. If you can't tell if you are talking to a
human, then either it is conscious and has free will, or you don't.

I joined this list about 20 years ago when Ben Goertzel (OpenCog), Pei Wang
(NARS), YKY (Genifer), and Peter Voss (AIGO) were actively working on AGI
projects. But AGI is expensive. The
reason nobody on the list solved it is that it costs millions of dollars
to train a neural network to predict terabytes of text at $2 per GPU hour.
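
For a sense of scale (the $2 rate is from above; the compute and throughput
numbers are illustrative assumptions, roughly GPT-3 class):

train_flops = 3e23        # assumed training compute for a GPT-3-class model
gpu_flops = 1e14          # assumed sustained throughput per GPU, ~100 TFLOP/s
cost_per_gpu_hour = 2.0   # rate quoted above

gpu_hours = train_flops / gpu_flops / 3600
print(f"{gpu_hours:,.0f} GPU-hours, ~${gpu_hours * cost_per_gpu_hour:,.0f}")
# ~833,000 GPU-hours, roughly $1.7M: millions of dollars indeed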

So yeah, I am interested in new approaches. It shouldn't require more
training data than a human processes in a lifetime to train human-level AI.
That's about one GB of text. That is the approach I have been following
since I started the large text benchmark in 2006 that became the basis for
the Hutter prize.
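
Where a figure on the order of one GB comes from (the rates below are
assumptions for illustration, not measurements):

words_per_day = 10_000   # assumed words heard plus read per day
years = 65               # assumed span of language exposure
bytes_per_word = 6       # ~5 letters plus a space

lifetime_bytes = words_per_day * 365 * years * bytes_per_word
print(f"{lifetime_bytes / 1e9:.1f} GB")  # ~1.4 GB, the same order as 1 GB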

Prediction measures intelligence. Compression measures prediction.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-Mf8493b1484cb84f9aac5e5e4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread John Rose
On Saturday, March 30, 2024, at 11:11 AM, Nanograte Knowledge Technologies 
wrote:
> Who said anything about modifying the fine structure constant? I used the 
> terms: "coded and managed".
>  
>  I can see there's no serious interest here to take a fresh look at doable 
> AGI. Best to then leave it there.

I can’t get it out of my head now, researching, asking ChatGPT what it thinks. 
Kinda makes you wonder.

They say people become obsessed with it.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-M16bd0477206ddf4e2ecaa55c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread Nanograte Knowledge Technologies
Who said anything about modifying the fine structure constant? I used the 
terms: "coded and managed".

I can see there's no serious interest here to take a fresh look at doable AGI. 
Best to then leave it there.



From: John Rose 
Sent: Saturday, 30 March 2024 13:01
To: AGI 
Subject: Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: 
Insurance

On Friday, March 29, 2024, at 8:25 AM, Quan Tesla wrote:
The fine structure constant, in conjunction with the triple-alpha process could 
be coded and managed via AI. Computational code.

Imagine the government in its profound wisdom declared that the fine structure 
constant needed to be modified and anyone that didn’t follow the new rule would 
be whisked away and have their social media accounts cancelled. I know that 
could never, ever happen *wink* but entertain the possibility. What would be 
fixed and what would break?

It’s true, governments collude to modify physical constants, for example time: 
daylight saving time, adding leap seconds to years, shifting calendars, for 
example from 13 months to 12, and some say this intentionally caused a natural 
human cyclic decoupling, rendering turtle-shell calendars obsolete, thus 
retarding turtle effigy consciousness 😊

But you want to physically modify the constant with AI in a nuclear lab. That’s 
a long shot to emerge an AGI.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-M815111c41c8c5d602c079525
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread Matt Mahoney
On Sat, Mar 30, 2024, 7:02 AM John Rose  wrote:

> On Friday, March 29, 2024, at 8:25 AM, Quan Tesla wrote:
>
> The fine structure constant, in conjunction with the triple-alpha process
> could be coded and managed via AI. Computational code.
>
>
> Imagine the government in its profound wisdom declared that the fine
> structure constant needed to be modified and anyone that didn’t follow the
> new rule would be whisked away and have their social media accounts
> cancelled.
>

Imagine the government repealed the law of gravity and we all drifted off
into space.

We can measure the fine structure constant to better than one part per
billion. It's physics. It has nothing to do with AGI.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-M87681c69a3d749f693fd48d6
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread John Rose
On Friday, March 29, 2024, at 8:25 AM, Quan Tesla wrote:
> The fine structure constant, in conjunction with the triple-alpha process 
> could be coded and managed via AI. Computational code. 

Imagine the government in its profound wisdom declared that the fine structure 
constant needed to be modified and anyone that didn’t follow the new rule would 
be whisked away and have their social media accounts cancelled. I know that 
could never, ever happen *wink* but entertain the possibility. What would be 
fixed and what would break?

It’s true, governments collude to modify physical constants, for example time: 
daylight saving time, adding leap seconds to years, shifting calendars, for 
example from 13 months to 12, and some say this intentionally caused a natural 
human cyclic decoupling, rendering turtle-shell calendars obsolete, thus 
retarding turtle effigy consciousness 😊

But you want to physically modify the constant with AI in a nuclear lab. That’s 
a long shot to emerge an AGI.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-Mac063c8e597998109b576ec9
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-29 Thread Quan Tesla
Musical tuning and resonant conspiracy? Coincidentally, I spent some time
researching that just today. It seems that, while the tuning of instruments is a
matter of personal taste (e.g., Verdi tuning), there's no real merit to the idea
that the pitch of a musical instrument affects humankind, or the cosmos.

Having said that, resonance is a profound study and well worth pursuing.
Consider how the JWST can "see" way beyond its technical capabilities.

On Fri, Mar 29, 2024, 16:18 Quan Tesla  wrote:

> 3rd point. The potential exists to bring any form to same functions, where
> gestalt as an emergent property may be different, in being a function of
> the overall potential. Meaning, gestalt may be more real in a natural
> sense, than engineered form.
>
> On Fri, Mar 29, 2024, 15:33 John Rose  wrote:
>
>> On Thursday, March 28, 2024, at 4:55 PM, Quan Tesla wrote:
>>
>> Alpha won't directly result in AGI, but it probably did result in all
>> intelligence on Earth, and would definitely resolve the power issues
>> plaguing AGI (and much more), especially as Moore's Law may be stalling,
>> and Kurzweil's singularity with it.
>>
>>
>> There are many ways to potentially modify these physical constants. Most
>> I think have to deal with perception but perception is generation. Are they
>> really constants? For all practical purposes, yes… well, not all apparently
>> and calling them constants may be a form of bias.
>>
>> There is reality and perception of reality. We know perception changes,
>> for example Newtonian => Relativistic. There were measurements that didn’t
>> add up. Relativistic now doesn’t add up. Engineering lags physics often…
>>
>> I do believe that we can modify more than just the perception of reality
>> outside of spacetime and have thought about it somewhat, it would be like
>> REALLY hacking the matrix. But something tells me not to go there as it
>> could be extremely dangerous. I’m sure some people are going there.
>>
>> You would have to be more specific on what modification (AI enabling) of
>> the fine structure constant you are referring to.
>>
>> There is this interesting thing I see once in a while (not sure if it’s
>> related) but have never pursued it where people say that some standard
>> music frequency was slightly modified by the Rockefellers for some reason
>> like adding a slight dissonance or something… I do know they modified the
>> medical system to be more predatory and monopolistic in the early 1900’s
>> and that led to where we are now.
>>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-Me58b09e04426a75d09879ded
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-29 Thread Quan Tesla
The fine structure constant, in conjunction with the triple-alpha process,
could be coded and managed via AI. Computational code.

On Fri, Mar 29, 2024, 16:18 Quan Tesla  wrote:

> 3rd point. The potential exists to bring any form to same functions, where
> gestalt as an emergent property may be different, in being a function of
> the overall potential. Meaning, gestalt may be more real in a natural
> sense, than engineered form.
>
> On Fri, Mar 29, 2024, 15:33 John Rose  wrote:
>
>> On Thursday, March 28, 2024, at 4:55 PM, Quan Tesla wrote:
>>
>> Alpha won't directly result in AGI, but it probably did result in all
>> intelligence on Earth, and would definitely resolve the power issues
>> plaguing AGI (and much more), especially as Moore's Law may be stalling,
>> and Kurzweil's singularity with it.
>>
>>
>> There are many ways to potentially modify these physical constants. Most
>> I think have to deal with perception but perception is generation. Are they
>> really constants? For all practical purposes, yes… well, not all apparently
>> and calling them constants may be a form of bias.
>>
>> There is reality and perception of reality. We know perception changes,
>> for example Newtonian => Relativistic. There were measurements that didn’t
>> add up. Relativistic now doesn’t add up. Engineering lags physics often…
>>
>> I do believe that we can modify more than just the perception of reality
>> outside of spacetime and have thought about it somewhat, it would be like
>> REALLY hacking the matrix. But something tells me not to go there as it
>> could be extremely dangerous. I’m sure some people are going there.
>>
>> You would have to be more specific on what modification (AI enabling) of
>> the fine structure constant you are referring to.
>>
>> There is this interesting thing I see once in a while (not sure if it’s
>> related) but have never pursued it where people say that some standard
>> music frequency was slightly modified by the Rockefellers for some reason
>> like adding a slight dissonance or something… I do know they modified the
>> medical system to be more predatory and monopolistic in the early 1900’s
>> and that led to where we are now.
>>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-Mf39fc26908929208cf4eb9e6
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-29 Thread Quan Tesla
4th point. The matrix is an illusion. It glitches and shifts whimsically,
as does AI. By contrast, the aether is relatively stable and "hackable",
meaning interactively understandable. AGI could potentially be similar to
the aether. Limited, but similar.

On Fri, Mar 29, 2024, 16:18 Quan Tesla  wrote:

> 3rd point. The potential exists to bring any form to same functions, where
> gestalt as an emergent property may be different, in being a function of
> the overall potential. Meaning, gestalt may be more real in a natural
> sense, than engineered form.
>
> On Fri, Mar 29, 2024, 15:33 John Rose  wrote:
>
>> On Thursday, March 28, 2024, at 4:55 PM, Quan Tesla wrote:
>>
>> Alpha won't directly result in AGI, but it probably did result in all
>> intelligence on Earth, and would definitely resolve the power issues
>> plaguing AGI (and much more), especially as Moore's Law may be stalling,
>> and Kurzweil's singularity with it.
>>
>>
>> There are many ways to potentially modify these physical constants. Most
>> I think have to deal with perception but perception is generation. Are they
>> really constants? For all practical purposes, yes… well, not all apparently
>> and calling them constants may be a form of bias.
>>
>> There is reality and perception of reality. We know perception changes,
>> for example Newtonian => Relativistic. There were measurements that didn’t
>> add up. Relativistic now doesn’t add up. Engineering lags physics often…
>>
>> I do believe that we can modify more than just the perception of reality
>> outside of spacetime and have thought about it somewhat, it would be like
>> REALLY hacking the matrix. But something tells me not to go there as it
>> could be extremely dangerous. I’m sure some people are going there.
>>
>> You would have to be more specific on what modification (AI enabling) of
>> the fine structure constant you are referring to.
>>
>> There is this interesting thing I see once in a while (not sure if it’s
>> related) but have never pursued it where people say that some standard
>> music frequency was slightly modified by the Rockefellers for some reason
>> like adding a slight dissonance or something… I do know they modified the
>> medical system to be more predatory and monopolistic in the early 1900’s
>> and that led to where we are now.
>>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-Mdd01d82a2a63cce39d825f0b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-29 Thread Quan Tesla
3rd point. The potential exists to bring any form to same functions, where
gestalt as an emergent property may be different, in being a function of
the overall potential. Meaning, gestalt may be more real in a natural
sense, than engineered form.

On Fri, Mar 29, 2024, 15:33 John Rose  wrote:

> On Thursday, March 28, 2024, at 4:55 PM, Quan Tesla wrote:
>
> Alpha won't directly result in AGI, but it probably did result in all
> intelligence on Earth, and would definitely resolve the power issues
> plaguing AGI (and much more), especially as Moore's Law may be stalling,
> and Kurzweil's singularity with it.
>
>
> There are many ways to potentially modify these physical constants. Most I
> think have to deal with perception but perception is generation. Are they
> really constants? For all practical purposes, yes… well, not all apparently
> and calling them constants may be a form of bias.
>
> There is reality and perception of reality. We know perception changes,
> for example Newtonian => Relativistic. There were measurements that didn’t
> add up. Relativistic now doesn’t add up. Engineering lags physics often…
>
> I do believe that we can modify more than just the perception of reality
> outside of spacetime and have thought about it somewhat, it would be like
> REALLY hacking the matrix. But something tells me not to go there as it
> could be extremely dangerous. I’m sure some people are going there.
>
> You would have to be more specific on what modification (AI enabling) of
> the fine structure constant you are referring to.
>
> There is this interesting thing I see once in a while (not sure if it’s
> related) but have never pursued it where people say that some standard
> music frequency was slightly modified by the Rockefellers for some reason
> like adding a slight dissonance or something… I do know they modified the
> medical system to be more predatory and monopolistic in the early 1900’s
> and that led to where we are now.
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-M6dce4e96505927a150acac3b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-29 Thread Quan Tesla
To your first point. Constants are taken as being persistent within a
narrow range. There's no use modifying them out-of-range experimentally.
Hence, the problem with the cosmological constant.

To your 2nd point: not enough physics in "soft" engineering. Agreed.

On Fri, Mar 29, 2024, 15:33 John Rose  wrote:

> On Thursday, March 28, 2024, at 4:55 PM, Quan Tesla wrote:
>
> Alpha won't directly result in AGI, but it probably did result in all
> intelligence on Earth, and would definitely resolve the power issues
> plaguing AGI (and much more), especially as Moore's Law may be stalling,
> and Kurzweil's singularity with it.
>
>
> There are many ways to potentially modify these physical constants. Most I
> think have to deal with perception but perception is generation. Are they
> really constants? For all practical purposes, yes… well, not all apparently
> and calling them constants may be a form of bias.
>
> There is reality and perception of reality. We know perception changes,
> for example Newtonian => Relativistic. There were measurements that didn’t
> add up. Relativistic now doesn’t add up. Engineering lags physics often…
>
> I do believe that we can modify more than just the perception of reality
> outside of spacetime and have thought about it somewhat, it would be like
> REALLY hacking the matrix. But something tells me not to go there as it
> could be extremely dangerous. I’m sure some people are going there.
>
> You would have to be more specific on what modification (AI enabling) of
> the fine structure constant you are referring to.
>
> There is this interesting thing I see once in a while (not sure if it’s
> related) but have never pursued it where people say that some standard
> music frequency was slightly modified by the Rockefellers for some reason
> like adding a slight dissonance or something… I do know they modified the
> medical system to be more predatory and monopolistic in the early 1900’s
> and that led to where we are now.
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-M13e2b9e22d2ba37fccb50820
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-29 Thread John Rose
On Thursday, March 28, 2024, at 4:55 PM, Quan Tesla wrote:
> Alpha won't directly result in AGI, but it probably did result in all 
> intelligence on Earth, and would definitely resolve the power issues plaguing 
> AGI (and much more), especially as Moore's Law may be stalling, and 
> Kurzweil's singularity with it. 

There are many ways to potentially modify these physical constants. Most, I 
think, have to do with perception, but perception is generation. Are they 
really constants? For all practical purposes, yes… well, not all apparently, and 
calling them constants may be a form of bias.

There is reality and perception of reality. We know perception changes, for 
example Newtonian => Relativistic. There were measurements that didn’t add up. 
Relativistic now doesn’t add up. Engineering lags physics often…

I do believe that we can modify more than just the perception of reality 
outside of spacetime, and I have thought about it somewhat; it would be like 
REALLY hacking the matrix. But something tells me not to go there, as it could 
be extremely dangerous. I’m sure some people are going there.

You would have to be more specific about what modification (AI-enabling) of the 
fine structure constant you are referring to.

There is this interesting thing I see once in a while (not sure if it’s 
related) but have never pursued: people say that some standard music 
frequency was slightly modified by the Rockefellers, for some reason like adding 
a slight dissonance or something… I do know they modified the medical system to 
be more predatory and monopolistic in the early 1900s, and that led to where we 
are now.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-M5b91bea0fa77902a0b0bc7fc
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-28 Thread Quan Tesla
Counter argument. How did neural networks evolve at all on Earth without
the fine structure constant (alpha)?

For AGI: if you're thinking a biotech jumpstart would do the physics trick, it
won't. It's merely a desperate hack, most inelegant and riddled with single
points of failure. Essentially, a serial string of fairy lights.

Use AI to jumpstart synthetically-real alpha. There's your quantum
adapter.

On Fri, Mar 29, 2024, 00:45 Matt Mahoney  wrote:

> On Thu, Mar 28, 2024, 2:34 PM Quan Tesla  wrote:
>
>> Would you like a sensible response? What's your position on the
>> probability of AGI without the fine structure constant?
>>
>
> If the fine structure constant were much different than 1/137.0359992 then
> the binding energy between atoms relative to their size would not allow the
> right chemistry for intelligent life to evolve. Likewise for the other 25
> or so free parameters of the standard model and general relativity or
> whatever undiscovered theory encompasses both. The anthropic principle
> makes perfect sense in a countably infinite multiverse consisting of an
> enumeration of finite universes, one of which we necessarily observe.
> Wolfram believes our universe can be expressed in a few lines of code.
> Yudkowsky says a few hundred bits. I agree. I calculated the Bekenstein
> bound of the Hubble radius at 2.95 x 10^122 bits, which implies about 400
> bits in a model where the N'th universe runs for N steps.
>
> But I don't see how solving this is necessary for AGI. As I described in
> 2006, prediction measures intelligence and compression measures prediction.
> LLMs using neural networks (the approach I advocated) are now proof that
> you can pass the Turing test and fake human consciousness with nothing more
> than text prediction.
> https://mattmahoney.net/dc/text.html
>
> When I joined this list over 20 years ago, there was a lot of activity,
> mostly using symbolic approaches like those of the AI winter in the decades
> before that. People failed or gave up and left the list. In 2013 I
> published a paper estimating the cost of AGI at $1 quadrillion. We are,
> after all, building something that can automate $100 trillion in human
> labor per year. Right now the bottleneck is hardware. You need roughly 10
> petaflops, 1 petabyte,  and 1 MW of electricity to simulate a human brain
> sized neural network. But in my paper I assumed that Moore's law would
> solve the hardware problem and the most expensive part would be knowledge
> collection.
> https://mattmahoney.net/costofai.pdf
>
> Of course, the cost is the reason I didn't write an open source
> implementation of CMR. If a trillion dollar company can't get Google+ or
> Threads off the ground, what compelling reason can I give to get a billion
> people to join?
>
> But yes, AGI will happen because the payoff is so enormous. It will
> profoundly change the way we live.
>
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-Me4338fff0785a4afef397a96
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-28 Thread James Bowery
It is nonsense to respond to the OP the way you did unless your purpose is
to derail objective metrics of AGI.  I can think of lots of reasons to do
that, not the least of which is you don't want AGI to happen.

On Thu, Mar 28, 2024 at 1:34 PM Quan Tesla  wrote:

> Would you like a sensible response? What's your position on the
> probability of AGI without the fine structure constant?
>
> On Thu, Mar 28, 2024, 18:00 James Bowery  wrote:
>
>> This guy's non sequitur response to my position is so inept as to exclude
>> the possibility that it is an LLM.
>>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-M2c027c8ae3dbb0bd565e11ee
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-28 Thread Matt Mahoney
On Thu, Mar 28, 2024, 2:34 PM Quan Tesla  wrote:

> Would you like a sensible response? What's your position on the
> probability of AGI without the fine structure constant?
>

If the fine structure constant were much different than 1/137.0359992 then
the binding energy between atoms relative to their size would not allow the
right chemistry for intelligent life to evolve. Likewise for the other 25
or so free parameters of the standard model and general relativity or
whatever undiscovered theory encompasses both. The anthropic principle
makes perfect sense in a countably infinite multiverse consisting of an
enumeration of finite universes, one of which we necessarily observe.
Wolfram believes our universe can be expressed in a few lines of code.
Yudkowsky says a few hundred bits. I agree. I calculated the Bekenstein
bound of the Hubble radius at 2.95 x 10^122 bits, which implies about 400
bits in a model where the N'th universe runs for N steps.
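
The arithmetic behind the 400-bit figure, restating only the numbers above:

import math
bekenstein_bits = 2.95e122          # stated bound for the Hubble radius
print(math.log2(bekenstein_bits))   # ~406.9, i.e. about 400 bits to index N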

But I don't see how solving this is necessary for AGI. As I described in
2006, prediction measures intelligence and compression measures prediction.
LLMs using neural networks (the approach I advocated) are now proof that
you can pass the Turing test and fake human consciousness with nothing more
than text prediction.
https://mattmahoney.net/dc/text.html

When I joined this list over 20 years ago, there was a lot of activity,
mostly using symbolic approaches like those of the AI winter in the decades
before that. People failed or gave up and left the list. In 2013 I
published a paper estimating the cost of AGI at $1 quadrillion. We are,
after all, building something that can automate $100 trillion in human
labor per year. Right now the bottleneck is hardware. You need roughly 10
petaflops, 1 petabyte, and 1 MW of electricity to simulate a human-brain-sized
neural network. But in my paper I assumed that Moore's law would
solve the hardware problem and the most expensive part would be knowledge
collection.
https://mattmahoney.net/costofai.pdf
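
Rough arithmetic consistent with those three figures (the brain parameters
and energy-per-op are my assumptions, for illustration):

synapses = 1e14      # assumed human-brain synapse count
rate_hz = 100        # assumed average update rate per synapse
j_per_op = 1e-10     # assumed energy per operation (~10 GFLOPS/W hardware)

flops = synapses * rate_hz    # 1e16 ops/s, roughly 10 petaflops
memory = synapses * 10        # ~10 bytes per synapse, roughly 1 petabyte
watts = flops * j_per_op      # 1e6 W, roughly 1 MW
print(f"{flops:.0e} FLOP/s, {memory:.0e} bytes, {watts:.0e} W")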

Of course, the cost is the reason I didn't write an open source
implementation of CMR. If a trillion dollar company can't get Google+ or
Threads off the ground, what compelling reason can I give to get a billion
people to join?

But yes, AGI will happen because the payoff is so enormous. It will
profoundly change the way we live.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-M85dc3ef5cda3e15deab9e4ab
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-28 Thread Quan Tesla
Would you like a sensible response? What's your position on the probability
of AGI without the fine structure constant?

On Thu, Mar 28, 2024, 18:00 James Bowery  wrote:

> This guy's non sequitur response to my position is so inept as to exclude
> the possibility that it is an LLM.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-Md86a1a649fab945679571cd5
Delivery options: https://agi.topicbox.com/groups/agi/subscription