Re: [agi] GPT-4o

2024-06-03 Thread Keyvan M. Sadeghi
LeCun’s awesome start turned into “I’m your god”, it’s a generational flaw. https://x.com/keyvanmsadeghi/status/1797670341436977417?s=46

Re: [agi] GPT-4o

2024-06-02 Thread Keyvan M. Sadeghi
Tame the butterfly effect.

> Just imagine you switch a couple of words around and the whole world starts conversing.

Aka clickbait? :) ;)

Re: [agi] GPT-4o

2024-05-29 Thread Keyvan M. Sadeghi
> Smearing those who are concerned of particular AI risks by pooling them into a prejudged category entitled “Doomers” is not really being serious.

Judging the future of AGI (not distant, 5 years) with our current premature brains is a joke. Worse, it's an unholy/profitable business for…

Re: [agi] GPT-4o

2024-05-29 Thread Keyvan M. Sadeghi
Evolution == Technology

Re: [agi] GPT-4o

2024-05-29 Thread Keyvan M. Sadeghi
1. Prediction measures intelligence. Compression measures prediction. This is a great insight, and the foundation of the research that peeps like LeCun, Ben, and sometimes me are doing. It’s not an unbreakable rule though; everything is achievable with Nash-esque equilibria in the world.
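A minimal sketch of the compression-measures-prediction point (my own illustration, not from the thread): under ideal arithmetic coding, a symbol costs -log2 of the probability the model assigned it, so whichever model predicts the next character better compresses the same text into fewer bits.

import math

ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def code_length_bits(text, model):
    """Ideal coding length of `text` in bits: sum of -log2 p(char | context)."""
    bits, context = 0.0, ""
    for ch in text:
        p = model(context, ch)   # probability the model assigns to the next char
        bits += -math.log2(p)
        context += ch
    return bits

def uniform_model(context, ch):
    # Knows nothing: every character is equally likely.
    return 1.0 / len(ALPHABET)

def frequency_model(context, ch):
    # Slightly better predictor: rough English-ish character frequencies.
    common = {" ": 0.18, "e": 0.10, "t": 0.07, "a": 0.065, "o": 0.06}
    rest = (1.0 - sum(common.values())) / (len(ALPHABET) - len(common))
    return common.get(ch, rest)

text = "the cat sat on the mat"
print(code_length_bits(text, uniform_model))    # ~105 bits
print(code_length_bits(text, frequency_model))  # ~86 bits: better prediction, shorter code

Swap in a stronger predictor (an n-gram model, or an LLM's next-token probabilities) and the same sum is the size an ideal coder would need, which is the sense in which compression measures prediction.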

Re: [agi] GPT-4o

2024-05-28 Thread Keyvan M. Sadeghi
Matt, now this is expressing an opinion and engaging in a dialogue, kudos to you! ❤️ However, for someone who has spent a lifetime in the field of compression, you seem to like the keys on your keyboard a lot! Allow me to demonstrate: I would love to see a debate between Yann LeCun and Eliezer Yudkowsky.

Re: [agi] GPT-4o

2024-05-27 Thread Keyvan M. Sadeghi
Good thing is, some productive chat happens outside this forum: https://x.com/ylecun/status/1794998977105981950

On Thu, May 23, 2024 at 6:52 PM Keyvan M. Sadeghi <keyvan.m.sade...@gmail.com> wrote:
>> Not sure who you mean to say that too but,
>
> Not directed at yo…

Re: [agi] GPT-4o

2024-05-23 Thread Keyvan M. Sadeghi
> Not sure who you mean to say that too but,

Not directed at you at all. Was just complaining since that's most of what happens on this list. Apart from sharing papers occasionally, most emails are either nagging about a subject or worshipping false gods. I'm far from saying I am the best…

Re: [agi] GPT-4o

2024-05-22 Thread Keyvan M. Sadeghi
A previous post on this forum proved no one here really cares about testing or achieving AGI. Apparently all we care about here is proving SELF superiority.

On Fri, May 17, 2024, 2:07 PM wrote:
> Matt,
>
> GPT4o still thinks my hard puzzle it can say to use a spoon to push the truck, even…

Re: [agi] Ruting Test of AGI

2024-05-14 Thread Keyvan M. Sadeghi
…where the slugs are called "riders". Better ride than be ridden, especially when fuckers like Altman are driving the world! In his interview below, he outsources the worries, despite being the only person in the world currently in possession of the resources to address said worries:

Re: [agi] Ruting Test of AGI

2024-05-14 Thread Keyvan M. Sadeghi
> That you find "tyranny for the good of their victims" "philosophical" rather than "direct" indicates your ethical poverty.

More wise words from under the blanket ;)

Re: [agi] Ruting Test of AGI

2024-05-14 Thread Keyvan M. Sadeghi
> The Sam Altmans of the world are bound and determined to exercise tyranny for the good of their victims -- which amplifies any mistakes in choosing a world model selection criterion (i.e., loss function).

Too philosophical for my taste. I like being direct and expressing my feelings in real…

Re: [agi] Ruting Test of AGI

2024-05-11 Thread Keyvan M. Sadeghi
> Anything other than lossless compression as Turing Test V2 is best called a "Rutting Test" since it is all about suitors of capital displaying one's prowess in a contest of bullshit.

If an email list on AGI that’s been going on for 20 years can’t devise a benchmark for AGI, wouldn’t…

Re: [agi] Ruting Test of AGI

2024-05-11 Thread Keyvan M. Sadeghi
> Your test is the opposite of objective and measurable. What if two high IQ people disagree if a robot acts like a human or not?
>
> Which IQ test? There are plenty of high IQ societies that will tell you your IQ is 180 as long as you pay the membership fee.
>
> What if I upload the same…

Re: [agi] Ruting Test of AGI

2024-05-11 Thread Keyvan M. Sadeghi
It’s different from the Turing Test in that it’s measurable and not subject to interpretation. But it follows the same principle: that an agent’s behavior is ultimately what matters. It’s Turing Test V2.

Re: [agi] Ruting Test of AGI

2024-05-11 Thread Keyvan M. Sadeghi
> An LLM has human-like behavior. Does it pass the Ruting test? How is this different from the Turing test?

The instructions are clear: one should upload the code into a robot body and let it act in the real world. Then a high-IQ human observer can confirm whether the behavior is human-like…

Re: [agi] Ruting Test of AGI

2024-05-10 Thread Keyvan M. Sadeghi
High IQ is 145 to 159, according to Google.

Re: [agi] Ruting Test of AGI

2024-05-10 Thread Keyvan M. Sadeghi
The name is a joke, but the test itself is concise and simple, a true benchmark.

> If you upload your code in a robot and 1 high IQ person confirms it has human-like behavior, you’ve passed the Ruting Test.

Re: [agi] Ruting Test of AGI

2024-05-10 Thread Keyvan M. Sadeghi
> Ruting is an anagram of Turing?

Yeah, too lame? I’ve recently become a father, so I’m generating dad jokes apparently.

[agi] Ruting Test of AGI

2024-05-09 Thread Keyvan M. Sadeghi
https://www.linkedin.com/posts/keyvanmsadeghi_agi-activity-7194481824406908928-0ENT

Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-08 Thread Keyvan M. Sadeghi
> I would also like to invite everyone to AGI-24 at UW in Seattle in August to discuss AGI in person <https://agi-conf.org/2024>.

Tell the govt of Canada to issue my passport, since apparently, according to this thread, you have control over govts!

> thanks much and love to all...
> Ben

❤️

Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-08 Thread Keyvan M. Sadeghi
> Perhaps we need to sort out human condition issues that stem from human consciousness?

Exactly what we should do and what needs funding, but shitheads of the world be funding wars. And Altman :))

Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-07 Thread Keyvan M. Sadeghi
> …llege now. It is taboo to suggest this is because of biology.
>
> On Tue, May 7, 2024, 9:05 PM Keyvan M. Sadeghi wrote:
>> Ah also BTW, just a theory, maybe less females in STEM, tech, chess, etc. is due to upbringing conditioning. And in chimpanzees, result of ph…

Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-07 Thread Keyvan M. Sadeghi
Ah also BTW, just a theory: maybe fewer females in STEM, tech, chess, etc. is due to upbringing conditioning. And in chimpanzees, a result of physical strength?

Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-07 Thread Keyvan M. Sadeghi
Agreed, male ego is a necessity for human civilization; I have a whole lot of it, most likely. But as people living in the post-barbaric age, we should be more self-aware.

On Tue, May 7, 2024 at 6:01 PM Matt Mahoney wrote:
> On Tue, May 7, 2024 at 4:17 PM Keyvan M. Sadeghi…

Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-07 Thread Keyvan M. Sadeghi
This list reeks of male testosterone, egos reaching other universes; I remembered why I stopped reading it 10 years ago. Poor Ben, the man is a father to all of ya: when you had no one else in the world who had the slightest idea of what you talk about, he gathered you in this sanctuary! Give away to…

Re: [agi] Re: Iran <> Israel, can AGI zealots do anything?

2024-04-20 Thread Keyvan M. Sadeghi
Matt, you'll be free when you escape the box of thinking about the world through the lens of compression.

> Transistor clock speeds stalled in 2010. We can't make feature sizes smaller than atoms, 0.11 nm for silicon. A DRAM capacitor stores a bit using 8 electrons. So how does Moore's law…
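A back-of-envelope sketch of the limit being quoted (my own numbers for the starting feature size and cadence; only the 0.11 nm figure comes from the thread): count how many 2x linear shrinks are left before features reach the scale of a single silicon atom.

import math

current_feature_nm = 10.0   # assumed present-day DRAM-class feature size (not from the thread)
atomic_limit_nm = 0.11      # silicon atomic spacing, as quoted above
years_per_doubling = 2.0    # assumed classic Moore's-law cadence per density doubling

# 2x linear shrinks remaining before single-atom features.
linear_halvings = math.log2(current_feature_nm / atomic_limit_nm)

# Each 2x linear shrink packs roughly 4x more devices per area,
# i.e. about two density doublings.
density_doublings = 2 * linear_halvings

print(round(linear_halvings, 1))                      # ~6.5 linear shrinks left
print(round(density_doublings, 1))                    # ~13 density doublings left
print(round(density_doublings * years_per_doubling))  # ~26 years at the assumed cadence

The exact answer shifts with the assumed starting size and cadence; the structure of the argument is what the quoted passage is pointing at.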

Re: [agi] Re: Iran <> Israel, can AGI zealots do anything?

2024-04-19 Thread Keyvan M. Sadeghi
> Come to Canada, it's all free over here, and you will finally feel so safe. It's damn clean over here. Maybe more than New York? IDK but it's a nice town.

I live in Toronto; I missed my flight back home due to Iranian fireworks. Where you at? ❤️

Re: [agi] Re: Iran <> Israel, can AGI zealots do anything?

2024-04-17 Thread Keyvan M. Sadeghi
> Apparently we want to go extinct.

We've been wanting to merge with our tools since the beginning of our species. What proof do you have that unlocking the maximum potential of this is harmful/negative?

Re: [agi] Iran <> Israel, can AGI zealots do anything?

2024-04-17 Thread Keyvan M. Sadeghi
> Who is an AGI zealot?

Sam Altman?

Re: [agi] Re: Iran <> Israel, can AGI zealots do anything?

2024-04-15 Thread Keyvan M. Sadeghi
> throw 18yo catgirls at it

Yeah, I wonder if that actually solves it. The problem is they're too old to get it hard and too stupid to use Viagra.

[agi] Iran <> Israel, can AGI zealots do anything?

2024-04-14 Thread Keyvan M. Sadeghi
Thoughts?

Re: [agi] How AI will kill us

2024-03-31 Thread Keyvan M. Sadeghi
> The problem with this explanation is that it says that all systems with memory are conscious. A human with 10^9 bits of long-term memory is a billion times more conscious than a light switch. Is this definition really useful?

It's as useful as calling the next era a Singularity. We…

Re: [agi] How AI will kill us

2024-03-30 Thread Keyvan M. Sadeghi
> I would rather have a recommendation algorithm that can predict what I would like without having to watch. A better algorithm would be one that actually watches and rates the movie. Even better would be an algorithm that searches the space of possible movies to generate one that it…

Re: [agi] How AI will kill us

2024-03-30 Thread Keyvan M. Sadeghi
> Exactly. If people can’t snuff Wuffy to save the planet, how could they decide to kill off a few billion useless eaters? Although central banks do fuel both sides of wars for reasons that include population modifications across multi-decade currency cycles.

It's not the logical…

Re: [agi] How AI will kill us

2024-03-30 Thread Keyvan M. Sadeghi
> Why is that delusional? It may be a logical decision for the AI to make an attempt to save the planet from natural destruction.

For the same reason that we, humans, don't kill dogs to save the planet.

Re: [agi] How AI will kill us

2024-03-30 Thread Keyvan M. Sadeghi
> Contributing to the future might mean figuring out ways to have AI stop killing us. An issue is that living people need to do this; the dead ones only leave memories. Many scientists have proven now that the mRNA jab system is a death machine but people keep getting zapped. That is a…

Re: [agi] How AI will kill us

2024-03-30 Thread Keyvan M. Sadeghi
Matt, you don't have free will because you watch on Netflix. Download from Torrent and get your will back.

On Sat, Mar 30, 2024, 3:10 AM Matt Mahoney wrote:
> On Thu, Mar 28, 2024, 5:56 PM Keyvan M. Sadeghi <keyvan.m.sade...@gmail.com> wrote:
>> The problem with fin…

Re: [agi] How AI will kill us

2024-03-28 Thread Keyvan M. Sadeghi
> The problem with finer grades of like/dislike is that it slows down humans another half a second, which adds up over thousands of times per day.

I'm not sure the granularity of the feedback mechanism is the problem. I think the problem lies in us not knowing if we're looping or contributing…

Re: [agi] How AI will kill us

2024-03-27 Thread Keyvan M. Sadeghi
I'm thinking of a solution re: free speech: https://github.com/keyvan-m-sadeghi/volume-buttons

I wrote this piece, but initial feedback from a few friends is that the text is too top-down. Feedback is much appreciated.

On Wed, Mar 27, 2024, 2:42 PM John Rose wrote:
> On Monday, March 25, 2…

Re: [agi] How AI will kill us

2024-03-21 Thread Keyvan M. Sadeghi
> Thank you Elon for fixing Twitter, without which we were in a very, very dark place.

Worship stars, not humans.

Re: [agi] Re: Generalized Theory of Accelerating Returns

2024-03-11 Thread Keyvan M. Sadeghi
On Mon, Mar 11, 2024, 7:49 PM wrote:
> Doesn't need to eat the core of Earth to be grey goo takeover. Nor use ALL energy or matter.
>
> Thoughts?

First iteration could be: transform(human) = cyborg

Re: [agi] Re: Generalized Theory of Accelerating Returns

2024-03-11 Thread Keyvan M. Sadeghi
Cheers! I think I'm not an ass IRL, but that's not for me to judge. I'm fucked if AGI won't give us immortality, all those cigarettes when I was younger!

On Mon, Mar 11, 2024 at 11:07 AM wrote:
> Writes it like a pimp. Fuck that shit! :) I like this guy. (Just don't be an ass when we try to…

[agi] Generalized Theory of Accelerating Returns

2024-03-11 Thread Keyvan M. Sadeghi
One diagram, then I'll stop looping 8)

[image: diagram.png]
https://github.com/keyvan-m-sadeghi/about-time

Have to take the GitHub repo private in a week, traveling to Iran for a month. Will open it up again after I'm back.