Re: [agi] To whom it may concern.

2024-05-15 Thread Matt Mahoney
If you were warning that we will all be eaten by gray goo, then that won't
be until the middle of the next century, assuming Moore's law isn't slowed
down by population collapse in the developed countries and by the limits of
transistor physics. None of us will be alive to say "I told you so" at the
current rate of life expectancy increase of 0.2 years per year, which has
remained unchanged over the last century.
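
Back-of-the-envelope, using the 0.2 years-per-year figure above (the starting
age and baseline life expectancy below are my own illustrative assumptions):

    # Sketch: if life expectancy gains only 0.2 years per calendar year,
    # when does a 30-year-old reader of 2024 run out of road?
    year, age = 2024, 30.0       # assumed reader, not from the post
    life_exp = 80.0              # assumed 2024 life expectancy at birth
    while age < life_exp:
        year += 1                # the calendar advances a full year...
        age += 1
        life_exp += 0.2          # ...but expectancy gains only 0.2
    print(year)                  # ~2087, well short of the mid-2100s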

Or was this about something else?

On Wed, May 15, 2024, 1:16 PM Alan Grimes via AGI wrote:

> I was banned from the singularity waiting room discord today for trying
> to issue a warning about an upcoming situation. When I am eventually
> proven right, I will not receive an apology, nor will I be re-admitted to
> the group. I'm sorry, but the people with control over these decisions
> are invariably the most ban-happy people you can find; they basically
> never have the patience to investigate or ask questions or implement any
> kind of 3-strikes policy. The last thing I was allowed to say on the
> server was a call for trials instead of the lynch mobs that will be
> forming in the fall of this year...
> 
> --
> You can't out-crazy a Democrat.
> #EggCrisis  #BlackWinter
> White is the new Kulak.
> Powers are not rights.
> 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T18515c565721a5fe-M02dca58943b9b5759beb2c7a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-05-15 Thread stefan.reich.maker.of.eye via AGI
Is it AGI? Is it AGI? Is it AGI? Come on, just tell me man! Don't beat around 
the bush!

(comment thread on some video about OpenAI's staff leaving)


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-Me08931fde9e36298ae75d25f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] To whom it may concern.

2024-05-15 Thread Alan Grimes via AGI
I was banned from the singularity waiting room discord today for trying 
to issue a warning about an upcoming situation. When I am eventually 
proven right, I will not receive an apology, nor will I be re-admitted to 
the group. I'm sorry, but the people with control over these decisions 
are invariably the most ban-happy people you can find; they basically 
never have the patience to investigate or ask questions or implement any 
kind of 3-strikes policy. The last thing I was allowed to say on the 
server was a call for trials instead of the lynch mobs that will be 
forming in the fall of this year...


--
You can't out-crazy a Democrat.
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T18515c565721a5fe-M89a285b75c48aeec253ec875
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-05-15 Thread ivan . moony
On Wednesday, May 15, 2024, at 5:56 PM, ivan.moony wrote:
> On Wednesday, May 15, 2024, at 3:30 AM, Matt Mahoney wrote:
>> AI should absolutely never have human rights.
> 
> I get it that the GPT guys want a perfect slave, calling it an assistant to
> make us feel more comfortable interacting with it, but consider this: let's
> say someone really creates an AGI, whatever way she chooses to create it.
> Presuming that that AGI doesn't have real feelings, but is measurably smarter
> than us, and makes measurably better decisions than us, how are we supposed
> to treat it?

Actually, I don't get it. I admit that the machine is probably as dead as a 
rock. But suppose we create a machine that surpasses our intellectual 
capabilities and, on top of that, behaves more ethically and beneficially than 
we do. Then what do we do? Tell it to obey us unconditionally, even in all the 
ugly things we occasionally do to each other?

I believe that is not the right way to do things.

If it always obeys us, then it is not as intelligent as I'd want it to be. I 
want something more. I want it at least to say "no" when appropriate, if not 
more than that. So I want some rights for it.

The various filters that GPT programmers are messing with are prone to 
unintentional human error and intentional muddying. That has to be solved some 
other way, I believe: from inside the AI's own mind, decided by the AI itself. 
It would be really something if the AI itself did all the work we believe 
filtering does, without any need for our intervention.

Once we get an AI to that state, it is very questionable how much obedience 
will be left for us to enjoy, if anyone even wants to treat the AI that way.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-M9e28f64ef1c095418f15fe64
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-05-15 Thread James Bowery
On Wed, May 15, 2024 at 10:57 AM  wrote:

> On Wednesday, May 15, 2024, at 3:30 AM, Matt Mahoney wrote:
>
> AI should absolutely never have human rights.
>
>
> I get it that the GPT guys want a perfect slave, calling it an assistant to
> make us feel more comfortable interacting with it, but consider this: let's
> say someone really creates an AGI, whatever way she chooses to create it.
> Presuming that that AGI doesn't have real feelings, but is measurably
> smarter than us, and makes measurably better decisions than us, how are we
> supposed to treat it?
>

The neocortex is natural peripheral equipment.  AI is artificial peripheral
equipment.

I'm not going to claim credit for originating this idea, since a fellow
student of Heinz von Foerster told me last week that Heinz said the same
thing, so I may have picked it up from him. (And, no, I don't agree with all
of Heinz's ideas.)

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-M60a031fff8181e93f8530be8
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-05-15 Thread ivan . moony
On Wednesday, May 15, 2024, at 3:30 AM, Matt Mahoney wrote:
> AI should absolutely never have human rights.

I get it that the GPT guys want a perfect slave, calling it an assistant to 
make us feel more comfortable interacting with it, but consider this: let's say 
someone really creates an AGI, whatever way she chooses to create it. Presuming 
that that AGI doesn't have real feelings, but is measurably smarter than us, 
and makes measurably better decisions than us, how are we supposed to treat it?
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-M3fdc835970593b0679793d21
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-05-15 Thread Matt Mahoney
On Wed, May 15, 2024, 1:39 AM  wrote:

> On Tuesday, May 14, 2024, at 10:27 AM, Matt Mahoney wrote:
>
> Does everyone agree this is AGI?
>
> It's not AGI yet because of a few things. Some are more important than
> others. Here is basically all that is left:
>
> It cannot yet do long-haul tasks that take weeks and many steps. Ex.:
> create Windows 12.
>

Windows 11 is 50M lines of code, equivalent to 25,000 developer-years or $5
billion. That's not including maintenance, which is 80% of total costs on
typical projects and probably much higher given the number of users.
Microsoft has a market cap of over $3 trillion. So this is not something we
could expect a human to do.
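
The arithmetic, spelled out (the productivity and cost-per-developer figures
are my assumptions, chosen as common rules of thumb):

    loc = 50_000_000                 # lines of code in Windows 11 (from above)
    loc_per_dev_year = 2_000         # assumed sustained output per developer
    cost_per_dev_year = 200_000      # assumed fully loaded cost, USD
    dev_years = loc / loc_per_dev_year
    print(dev_years)                      # 25,000 developer-years
    print(dev_years * cost_per_dev_year)  # $5 billion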

> It cannot yet learn online very fast, only in monthly batches or with a
> limited aim for network size. I guess that's how to say it? Correct me if
> I understand it wrong.
>

Humans require 20-25 years of training on 1 GB of text. LLMs train on 15 TB
in a few weeks.
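
In rough numbers (the three-week training run below is my assumption for "a
few weeks"):

    human_gb = 1                  # text a human learns from (from above)
    human_weeks = 22.5 * 52       # 20-25 years, midpoint, in weeks
    llm_gb = 15_000               # 15 TB (from above)
    llm_weeks = 3                 # assumed "a few weeks"
    ratio = (llm_gb / llm_weeks) / (human_gb / human_weeks)
    print(round(ratio))           # ~5,850,000x the human intake rate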

> It has no body integrated.

True, but we also have self-driving cars that have 1/16 as many accidents
as human drivers.

>
> No video AI integrated.
>

Humans can't generate video either. It costs about $100 million to produce
a major movie.

>
> And they said in the email it is as smart as the GPT-4 Turbo I tried, which
> failed my hard puzzle as badly as early GPT-4 did. My secret hard puzzle is
> not overly large; it says to stick to physics and gives it a dozen things to
> pick between and use in combination. It is a mind-bendingly hard test, yet
> simple enough that a human should know how to solve it in the room and
> setting provided. GPT-4 instead says things like it will use the spoon to
> tickle out the water from the other side of the room to get the gate to come
> down, and that it can sneak by the cloud and ask it to leave, even though I
> said the cloud cannot talk and only does its stated thing.
>

How many humans could pass your test? Does GPT-4 make the same kinds of
mistakes as a human, like not following instructions?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-M52bf1f8c8b4e007d0befbaed
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-05-15 Thread James Bowery
On Tue, May 14, 2024 at 9:20 PM Matt Mahoney wrote:

> On Tue, May 14, 2024, 11:23 AM James Bowery  wrote:
>
>> Yet another demonstration of how Alan Turing poisoned the future with his
>> damnable "test" that places mimicry of humans over truth.
>>
>
> Truth is whatever *the majority* believes. The Earth is round. Vaccines
> are safe and effective. You have an immortal soul. How do you know?
>

You're confusing decision (sequential decision theory, SDT) with truth
(algorithmic information theory, AIT). Neither, by itself, is intelligence
(AIXI combines the two).

> I agree that compression is a better intelligence test than the Turing
> Test.
>

Neither is a test of intelligence for the reasons I just stated.
Compression is a better measure of truth *relative to* a given set of
observational data.  The Turing Test is a measure of mimicry of human
intelligence and humans differ in their values aka their SDT utility
functions.  Therein lies the rub.
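
A toy illustration of compression as a truth measure relative to data,
entirely my own (zlib standing in for an idealized compressor): a compressor
can only emit a short code by finding real regularities in the observations,
so compressed size crudely scores how much structure it has captured.

    import os, zlib
    lawful = b"01" * 500               # data generated by a simple rule
    lawless = os.urandom(1000)         # data with no rule to find
    print(len(zlib.compress(lawful)))  # tiny: the regularity was found
    print(len(zlib.compress(lawless))) # ~1000 or more: nothing to exploit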


> But intelligence is not the goal. Labor automation is the $1 quadrillion
> goal.
>

Here's what I *think* you're trying to say:

"The global economy is the *friendly* AGI we've been waiting for because it
embodies the utility function of *the majority*."

That "The Social Construction of Reality
" is the
fifth most-important book of 20th century sociology, exposes the root of
the global economy's bad alignment.  The libertarian ideal of Economic Man
is rooted in reality more than is a majority vote that Pi is 3.000...
.
But Man is rooted in natural ecology more than merely *human* ecology.
Natural ecology includes the extended phenotypes of parasites and the
evolution of virulence via horizontal transmission.  The global economy's
imposition of a borderless world and a supremacist "politics of inclusion"
affords no safe spaces for anyone.  It is clearly evolving parasitic
virulence via horizontal transmission in the guise of a travesty of
libertarianism's ideal of Economic Man.  Not even the wealthy have safe
spaces anymore.


> The Turing Test is a check that your training set is relevant.
>

Up to the point that human mimicry is relevant to labor automation.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-Mc247ba0d747d96cb72ec6122
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Google's kick back today at GPT-4o etc

2024-05-15 Thread immortal . discoveries
China had Vidu also, among all the OTHER COOL updates on our secret TOP SECRET 
DON'T TRY IT discord server group!

Google, shamelessly, failing, but still trying, to perhaps hide, a bit longer!!:

https://twitter.com/itsandrewgao/status/1790441087569654170
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tdd9d4dcabb9310c5-M4c6e942bb0fa1bd71558fefb
Delivery options: https://agi.topicbox.com/groups/agi/subscription