Re: [agi] Ruting Test of AGI

2024-05-14 Thread Keyvan M. Sadeghi
> where the slugs are called "riders".

Better ride than be ridden, especially when fuckers like Altman are driving
the world!

In the interview below, he outsources the worries, despite being the only
person in the world currently in possession of the resources to address
said worries:

https://www.businessinsider.com/sam-altman-says-ais-economic-impact-top-concern-2024-5

It doesn’t hurt to at least have a clue about how your product works when
you’re the CEO of a behemoth!

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T244a8630dc835f49-Mb24144659abbf8e689f64009
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Ruting Test of AGI

2024-05-14 Thread James Bowery
Currently reading "The Puppet Masters" where the slugs are called "riders".

On Tue, May 14, 2024 at 11:21 AM Keyvan M. Sadeghi <
keyvan.m.sade...@gmail.com> wrote:

>
>
>> That you find "tyranny for the good of their victims" "philosophical"
>> rather than "direct" indicates your ethical poverty.
>>
>
> More wise words from under the blanket ;)

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T244a8630dc835f49-M8bdbd808403c6e9eb4cbc2dd
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Ruting Test of AGI

2024-05-14 Thread Keyvan M. Sadeghi
> That you find "tyranny for the good of their victims" "philosophical"
> rather than "direct" indicates your ethical poverty.
>

More wise words from under the blanket ;)

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T244a8630dc835f49-M26f5902a39bef67b4e2fa191
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Ruting Test of AGI

2024-05-14 Thread James Bowery
That you find "tyranny for the good of their victims" "philosophical"
rather than "direct" indicates your ethical poverty.

On Tue, May 14, 2024 at 8:20 AM Keyvan M. Sadeghi <
keyvan.m.sade...@gmail.com> wrote:

>> The Sam Altmans of the world are bound and determined to exercise tyranny
>> for the good of their victims -- which amplifies any mistakes in choosing a
>> world model selection criterion (ie: loss function).
>>
>
> Too philosophical for my taste, I like being direct and express my
> feelings in real world:
>
> https://x.com/keyvanmsadeghi/status/1790369335153742081
> 
>
> > @bengoertzel is a hero who started his crusade against bigotry when my
> generation were infants. The world owes him the scientific foundation of
> #AGI, that for the time being is represented by capitalist zealots like
> @sama.
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T244a8630dc835f49-M7f4b7e5e743222663563be0a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Ruting Test of AGI

2024-05-14 Thread Keyvan M. Sadeghi
>
> The Sam Altmans of the world are bound and determined to exercise tyranny
> for the good of their victims -- which amplifies any mistakes in choosing a
> world model selection criterion (ie: loss function).
>

Too philosophical for my taste; I like being direct and expressing my
feelings in the real world:

https://x.com/keyvanmsadeghi/status/1790369335153742081


> @bengoertzel is a hero who started his crusade against bigotry when my
generation were infants. The world owes him the scientific foundation of
#AGI, that for the time being is represented by capitalist zealots like
@sama.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T244a8630dc835f49-M381e37dd5d82fe81c6cd0b29
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Ruting Test of AGI

2024-05-11 Thread James Bowery
“Of all tyrannies, a tyranny sincerely exercised for the good of its
victims may be the most oppressive. It would be better to live under robber
barons than under omnipotent moral busybodies. The robber baron's cruelty
may sometimes sleep, his cupidity may at some point be satiated; but those
who torment us for our own good will torment us without end for they do so
with the approval of their own conscience.”
― C. S. Lewis

"c'est pire qu'un crime; c'est une faute" (it's worse than a crime; it's a
mistake).
― Charles Maurice de Talleyrand-Périgord

The Sam Altmans of the world are bound and determined to exercise tyranny
for the good of their victims -- which amplifies any mistakes in choosing a
world model selection criterion (ie: loss function).

Now, I'm not saying it is preferable that they exercise tyranny (as
opposed to, say, taking down civilization and starting over again); I'm
just being realistic.

PS: Where's Ilya?


On Sat, May 11, 2024 at 3:37 PM Keyvan M. Sadeghi <
keyvan.m.sade...@gmail.com> wrote:

>> Anything other than lossless compression as Turing Test V2 is best called
>> a "Rutting Test" since it is all about suitors of capital displaying one's
>> prowess in a contest of bullshit.
>>
>
> If an email list on AGI that’s been going on for 20 years can’t devise a
> benchmark for AGI, wouldn’t history call its members useless wankers? Do
> you want Altman to achieve it without you having a say?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T244a8630dc835f49-Mdda4bbab8fcc9d8e55f5d587
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Ruting Test of AGI

2024-05-11 Thread Keyvan M. Sadeghi
>
> Anything other than lossless compression as Turing Test V2 is best called
> a "Rutting Test" since it is all about suitors of capital displaying one's
> prowess in a contest of bullshit.
>

If an email list on AGI that’s been going on for 20 years can’t devise a
benchmark for AGI, wouldn’t history call its members useless wankers? Do you
want Altman to achieve it without you having a say?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T244a8630dc835f49-Mf6489f9bf28785f036297bd2
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Ruting Test of AGI

2024-05-11 Thread Keyvan M. Sadeghi
>
> Your test is the opposite of objective and measurable. What if two high-IQ
> people disagree on whether a robot acts like a human?
>
> Which IQ test? There are plenty of high-IQ societies that will tell you
> your IQ is 180 as long as you pay the membership fee.
>
> What if I upload the same software to a Boston Dynamics robot dog or a
> humanoid robot like Atlas? Do you really think you will get the same answer?
>

Valid criticisms 👌

I wanted to start the conversation on a true benchmark; mission
accomplished! 😎

If a consensus forms in this community, could the results be published at
AGI25?

Here are some ideas for addressing the points Matt raised:

- Add a code postfix to ground the conditions (a sketch of this scheme
follows after the list)
- E.g. Ruting Binet100_humanoid_SH
- The above example would mean:
  - The IQ test taken by the observing person is the Stanford-Binet, 100
questions in 24 minutes
  - The robot is in humanoid form; the quality of the parts is not important
  - The robot has the “Sight” and “Hearing” of the five human senses; the
quality of the sensors is not important
- The necessary and sufficient condition for passing the test is that at
least one person validated by the IQ test confirms that the robot has
human-like behavior
- Someone could take a bribe and confirm a robot; that would be a fraudulent
pass and could be contested by the scientific community
- A committee of trusted test takers could administer the test annually on a
live stage!
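
Here is a minimal sketch, in Python, of how such a postfix might be parsed
and the pass condition checked. It assumes an underscore-separated format;
the RutingTestSpec class, its field names, and the single-letter sense codes
are illustrative placeholders, not an agreed standard.

from dataclasses import dataclass

# Assumed encoding: single-letter codes for the senses named in the example.
SENSE_CODES = {"S": "sight", "H": "hearing"}

@dataclass
class RutingTestSpec:
    iq_test: str    # e.g. "Binet100": Stanford-Binet, 100 questions in 24 min
    body_form: str  # e.g. "humanoid"; quality of the parts is not graded
    senses: list    # subset of the five senses the robot is equipped with

def parse_spec(postfix: str) -> RutingTestSpec:
    """Parse a postfix like 'Binet100_humanoid_SH' into its three conditions."""
    iq_test, body_form, sense_str = postfix.split("_")
    senses = [SENSE_CODES[c] for c in sense_str if c in SENSE_CODES]
    return RutingTestSpec(iq_test, body_form, senses)

def passes(confirmations_by_validated_observers: int) -> bool:
    """One IQ-validated observer confirming human-like behavior suffices."""
    return confirmations_by_validated_observers >= 1

spec = parse_spec("Binet100_humanoid_SH")
print(spec)       # senses -> ['sight', 'hearing']
print(passes(1))  # True

Fraud handling and the trusted committee would sit outside this core check,
as a governance layer on top of the recorded confirmations.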

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T244a8630dc835f49-M534a366eeb945fdb092a6a13
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Ruting Test of AGI

2024-05-11 Thread Matt Mahoney
Your test is the opposite of objective and measurable. What if two high-IQ
people disagree on whether a robot acts like a human?

Which IQ test? There are plenty of high-IQ societies that will tell you
your IQ is 180 as long as you pay the membership fee.

What if I upload the same software to a Boston Dynamics robot dog or a
humanoid robot like Atlas? Do you really think you will get the same answer?


On Sat, May 11, 2024, 7:59 AM Keyvan M. Sadeghi 
wrote:

> It’s different from the Turing Test in that it’s measurable and not subject
> to interpretation. But it follows the same principle: that an agent’s
> behavior is ultimately what matters. It’s Turing Test V2.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T244a8630dc835f49-M6a15dcd8d68f096880f8c3c8
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Ruting Test of AGI

2024-05-11 Thread James Bowery
Anything other than lossless compression as Turing Test V2 is best called a
"Rutting Test" since it is all about suitors of capital displaying one's
prowess in a contest of bullshit.
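
For readers who have not seen the compression framing before, here is a toy
sketch of what such a benchmark measures: how small a lossless codec can make
a fixed corpus, verified by exact reconstruction. The corpus choice and the
off-the-shelf compressors below are placeholders, not a protocol from this
thread; a serious benchmark (in the spirit of the Hutter Prize) would also
count the size of the decompressor itself.

import bz2
import zlib

def compression_score(corpus: bytes, compress, decompress) -> float:
    """Compressed size as a fraction of the original; smaller is better."""
    compressed = compress(corpus)
    assert decompress(compressed) == corpus  # must be lossless
    return len(compressed) / len(corpus)

# Stand-in corpus: this script's own source. A real test would fix a large
# reference text up front.
corpus = open(__file__, "rb").read()
for name, c, d in [("zlib", zlib.compress, zlib.decompress),
                   ("bz2", bz2.compress, bz2.decompress)]:
    print(f"{name}: {compression_score(corpus, c, d):.3f}")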

On Sat, May 11, 2024 at 6:59 AM Keyvan M. Sadeghi <
keyvan.m.sade...@gmail.com> wrote:

> It’s different from the Turing Test in that it’s measurable and not subject
> to interpretation. But it follows the same principle: that an agent’s
> behavior is ultimately what matters. It’s Turing Test V2.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T244a8630dc835f49-M8377f8b3f36a06f85afc3716
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Ruting Test of AGI

2024-05-11 Thread Keyvan M. Sadeghi
It’s different from the Turing Test in that it’s measurable and not subject
to interpretation. But it follows the same principle: that an agent’s
behavior is ultimately what matters. It’s Turing Test V2.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T244a8630dc835f49-M0841709f213990d0960f613c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Ruting Test of AGI

2024-05-11 Thread Keyvan M. Sadeghi
>
> An LLM has human-like behavior. Does it pass the Ruting test? How is this
> different from the Turing test?
>

The instructions are clear: one should upload the code into a robot body and
let it act in the real world. Then a high-IQ human observer can confirm
whether the behavior is human-like or not.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T244a8630dc835f49-Mb3f256d5dcb7288784e8b408
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Ruting Test of AGI

2024-05-10 Thread Matt Mahoney
An LLM has human-like behavior. Does it pass the Ruting test? How is this
different from the Turing test?

On Fri, May 10, 2024, 9:05 PM Keyvan M. Sadeghi 
wrote:

> The name is a joke, but the test itself is concise and simple, a true
> benchmark.
>
> > If you upload your code into a robot and one high-IQ person confirms it
> > has human-like behavior, you've passed the Ruting Test.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T244a8630dc835f49-M4e751bafce562cf6c3c4c330
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Ruting Test of AGI

2024-05-10 Thread Keyvan M. Sadeghi
High IQ is 145 to 159, according to Google.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T244a8630dc835f49-M3bce942aa67c46a4785c1df9
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Ruting Test of AGI

2024-05-10 Thread Keyvan M. Sadeghi
The name is a joke, but the test itself is concise and simple, a true
benchmark.

> If you upload your code into a robot and one high-IQ person confirms it has
> human-like behavior, you've passed the Ruting Test.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T244a8630dc835f49-M9b857534f4b763a39cfd9279
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Ruting Test of AGI

2024-05-10 Thread Keyvan M. Sadeghi
>
> Ruting is an anagram of Turing?
>

Yeah, too lame? I’ve recently become a father, so I’m generating dad jokes,
apparently 😂

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T244a8630dc835f49-M98a7abac4626191f2e5ad6ea
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Ruting Test of AGI

2024-05-10 Thread Matt Mahoney
Ruting is an anagram of Turing?

On Thu, May 9, 2024, 8:04 PM Keyvan M. Sadeghi 
wrote:

>
> https://www.linkedin.com/posts/keyvanmsadeghi_agi-activity-7194481824406908928-0ENT

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T244a8630dc835f49-M703209cabf3add52a3bef4b7
Delivery options: https://agi.topicbox.com/groups/agi/subscription