LeCun’s awesome start turned into “I’m your god”; it’s a generational flaw.
https://x.com/keyvanmsadeghi/status/1797670341436977417?s=46
--
Artificial General Intelligence List: AGI
> Tame the butterfly effect. Just imagine you switch a couple words around
> and the whole world starts conversing.
>
Aka clickbait? :) ;)
--
>
> Smearing those who are concerned about particular AI risks by pooling them
> into a prejudged category entitled “Doomers” is not really being serious.
>
Judging the future of AGI (not distant, 5 years) with our current
immature brains is a joke. Worse, it's an unholy/profitable business for
Evolution == Technology
--
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-M85f0ad77d68bb0926bfb8db7
Delivery options: https://agi.topicbox.com/groups/agi/subscription
1. Prediction measures intelligence. Compression measures prediction.
This is a great insight, and the foundation of the research that peeps
like LeCun, Ben, and sometimes me, are doing. It’s not an unbreakable rule,
though; everything is achievable with Nash-esque equilibria in the world.
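A minimal sketch of point 1, using nothing beyond the Python standard library: the more predictable a byte stream is, the better it compresses, which is why compression ratio is commonly used as a proxy for predictive power. (The data and thresholds here are illustrative, not from the thread.)

```python
import random
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size over original size; lower means more predictable."""
    return len(zlib.compress(data)) / len(data)

# A trivially predictable stream vs. a pseudo-random one of equal length.
predictable = b"ab" * 500
random.seed(0)
unpredictable = bytes(random.randrange(256) for _ in range(1000))

print(compression_ratio(predictable))    # small: the pattern is easy to predict
print(compression_ratio(unpredictable))  # near or above 1.0: nothing to predict
```

A model that predicts the next byte well can be turned into a compressor, and vice versa, which is the sense in which compression measures prediction.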
Matt,
Now this is expressing opinion and engaging in a dialogue, kudos to you! ❤️
However, for someone who spent a lifetime in the field of compression, you
seem to like the keys on your keyboard a lot! Allow me to demonstrate:
I would love to see a debate between Yann LeCun and Eliezer Yudkowsky.
Good thing is some productive chat happens outside this forum:
https://x.com/ylecun/status/1794998977105981950
On Thu, May 23, 2024 at 6:52 PM Keyvan M. Sadeghi <
keyvan.m.sade...@gmail.com> wrote:
>
> Not sure who you mean to say that to, but,
>
Not directed at you at all. Was just complaining, since that's most of what
happens on this list. Apart from sharing papers occasionally, most emails
are either nagging about a subject or worshipping false gods.
I'm far from saying I am the best
A previous post on this forum proved no one here really cares about testing
or achieving AGI. Apparently all we care about here is proving SELF
superiority.
On Fri, May 17, 2024, 2:07 PM wrote:
> Matt,
>
> GPT4o still thinks, in my hard puzzle, it can say to use a spoon to push
> the truck, even where the slugs are called "riders".
Better ride than be ridden, especially when fuckers like Altman are driving
the world!
In his interview below, he outsources the worries, despite being the only
person in the world currently in possession of the resources to address
said worries:
That you find "tyranny for the good of their victims" "philosophical"
> rather than "direct" indicates your ethical poverty.
>
More wise words from under the blanket ;)
--
>
> The Sam Altmans of the world are bound and determined to exercise tyranny
> for the good of their victims -- which amplifies any mistakes in choosing a
> world model selection criterion (ie: loss function).
>
Too philosophical for my taste; I like being direct and expressing my
feelings in real
>
> Anything other than lossless compression as Turing Test V2 is best called
> a "Rutting Test" since it is all about suitors of capital displaying one's
> prowess in a contest of bullshit.
>
If an email list on AGI that’s been going on for 20 years can’t devise a
benchmark for AGI, wouldn’t
>
> Your test is the opposite of objective and measurable. What if two high IQ
> people disagree if a robot acts like a human or not?
>
> Which IQ test? There are plenty of high IQ societies that will tell you
> your IQ is 180 as long as you pay the membership fee.
>
> What if I upload the same
It’s different from the Turing Test in that it’s measurable and not subject
to interpretation. But it follows the same principle: an agent’s behavior
is ultimately what matters. It’s Turing Test V2.
--
>
> An LLM has human like behavior. Does it pass the Ruting test? How is this
> different from the Turing test?
>
The instructions are clear: one should upload the code into a robot body and
let it act in the real world. Then a high-IQ human observer can confirm
whether the behavior is human-like.
High IQ is 145 to 159, according to Google.
--
Permalink:
https://agi.topicbox.com/groups/agi/T244a8630dc835f49-M3bce942aa67c46a4785c1df9
The name is a joke, but the test itself is concise and simple, a true
benchmark.
> If you upload your code in a robot and 1 high-IQ person confirms it has
> human-like behavior, you’ve passed the Ruting Test.
--
>
> Ruting is an anagram of Turing?
>
Yeah, too lame? I’ve recently become a father, so I’m generating dad jokes,
apparently.
--
https://www.linkedin.com/posts/keyvanmsadeghi_agi-activity-7194481824406908928-0ENT
--
Permalink:
https://agi.topicbox.com/groups/agi/T244a8630dc835f49-Mb5b95fd1d7ea545b8b9a0f44
> I would also like to invite everyone to AGI-24 at UW in Seattle in
> August to discuss AGI in person:
>
> https://agi-conf.org/2024
Tell govt of Canada to issue my passport, since apparently according to
this thread you have control over govts!
thanks much and love to all...
> Ben
❤️
> Perhaps we need to sort out human condition issues that stem from human
> consciousness?
>
Exactly what we should do and what needs funding, but shitheads of the
world be funding wars. And Altman :))
--
> …llege now. It is taboo to
> suggest this is because of biology.
>
> On Tue, May 7, 2024, 9:05 PM Keyvan M. Sadeghi
> wrote:
>
>> Ah also BTW, just a theory, maybe less females in STEM, tech, chess, etc.
>> is due to upbringing conditioning. And in chimpanzees, result of ph
Ah also BTW, just a theory: maybe fewer females in STEM, tech, chess, etc.
is due to conditioning during upbringing. And in chimpanzees, a result of
physical strength?
--
Agreed, male ego is a necessity for human civilization; I have a whole
lot of it, most likely. But as people living in the post-barbaric age, we
should be more self-aware.
On Tue, May 7, 2024 at 6:01 PM Matt Mahoney wrote:
> On Tue, May 7, 2024 at 4:17 PM Keyvan M. Sadeghi
>
This list reeks of male testosterone, egos reaching other universes. I just
remembered why I stopped reading it 10 years ago.
Poor Ben is a father to all of ya; when you had no one else in the
world who had the slightest idea of what you talk about, he gathered you
in this sanctuary!
Give away to
Matt, you'll be free when you escape the box of thinking about the world
through the lens of compression.
> Transistor clock speeds stalled in 2010. We can't make feature sizes
> smaller than atoms, 0.11 nm for silicon. A DRAM capacitor stores a bit
> using 8 electrons. So how does Moore's law
>
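Matt's atomic-limit point can be made concrete with back-of-the-envelope arithmetic. The 0.11 nm figure is from his post; the 10 nm half-pitch is an assumed round number for modern DRAM, not actual process data:

```python
import math

silicon_atom_nm = 0.11  # atomic spacing for silicon, as quoted above
half_pitch_nm = 10.0    # assumed order of magnitude for current DRAM features

# Transistor density doubles each time linear dimensions shrink by sqrt(2),
# so count how many sqrt(2) shrinks fit before features reach atomic scale.
doublings = math.log(half_pitch_nm / silicon_atom_nm) / math.log(math.sqrt(2))
print(round(doublings))  # roughly a dozen density doublings left, at best
```

Under these assumptions, Moore's law has only on the order of a dozen doublings of headroom from feature shrinking alone, which is why further gains have to come from 3D stacking, packaging, and architecture instead.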
> Come to Canada, it's all free over here, and you will finally feel so
> safe. It's damn clean over here. Maybe more than New York? IDK but it's
> a nice town.
>
I live in Toronto, missed my flight back home due to Iranian fireworks.
Where you at? ❤️
--
> Apparently we want to go extinct.
>
We've been wanting to merge with our tools since the beginning of our
species; what proof do you have that unlocking the maximum potential of
this is harmful/negative?
--
>
> Who is an AGI zealot?
>
Sam Altman?
--
Permalink:
https://agi.topicbox.com/groups/agi/Ta9d6271053bbd3f3-M3c80d46974b27022b100f4f1
> throw 18yo catgirls at it
>
Yeah I wonder if that actually solves it. The problem is they're too old to
get it hard and too stupid to use Viagra.
--
Thoughts?
--
Permalink:
https://agi.topicbox.com/groups/agi/Ta9d6271053bbd3f3-M3825fef158d2d8e1176cde6c
> The problem with this explanation is that it says that all systems with
> memory are conscious. A human with 10^9 bits of long term memory is a
> billion times more conscious than a light switch. Is this definition really
> useful?
>
It's as useful as calling the next era a Singularity. We
>
> I would rather have a recommendation algorithm that can predict what I
> would like without having to watch. A better algorithm would be one that
> actually watches and rates the movie. Even better would be an algorithm
> that searches the space of possible movies to generate one that it
>
> Exactly. If people can’t snuff Wuffy to save the planet how could they
> decide to kill off a few billion useless eaters? Although central banks do
> fuel both sides of wars for reasons that include population modifications
> across multi-decade currency cycles.
>
It's not the logical
>
> Why is that delusional? It may be a logical decision for the AI to make an
> attempt to save the planet from natural destruction.
>
For the same reason that we, humans, don't kill dogs to save the planet.
--
> Contributing to the future might mean figuring out ways to have AI stop
> killing us. An issue is that living people need to do this, the dead ones
> only leave memories. Many scientists have proven now that the mRNA jab
> system is a death machine but people keep getting zapped. That is a
>
Matt, you don't have free will because you watch on Netflix; download from
Torrent and get your will back.
On Sat, Mar 30, 2024, 3:10 AM Matt Mahoney wrote:
> On Thu, Mar 28, 2024, 5:56 PM Keyvan M. Sadeghi <
> keyvan.m.sade...@gmail.com> wrote:
>
>
> The problem with finer grades of
> like/dislike is that it slows down humans another half a second, which
> adds up over thousands of times per day.
>
I'm not sure the granularity of the feedback mechanism is the problem. I
think the problem lies in us not knowing if we're looping or contributing.
I'm thinking of a solution Re: free speech
https://github.com/keyvan-m-sadeghi/volume-buttons
Wrote this piece but initial feedback from a few friends is that the text
is too top down.
Feedback is much appreciated!
On Wed, Mar 27, 2024, 2:42 PM John Rose wrote:
> On Monday, March 25, 2
>
> Thank you Elon for fixing Twitter without which we were in a very, very
> dark place.
>
Worship stars, not humans
--
Permalink:
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M6b4784a6adf7ed7e55b84995
On Mon, Mar 11, 2024, 7:49 PM wrote:
> Doesn't need to eat the core of Earth to be grey goo takeover. Nor use ALL
> energy or matter.
>
> Thoughts?
>
First iteration could be: transform(human) = cyborg
--
Cheers! I think I'm not an ass IRL, but that's not for me to judge. I'm
fucked if AGI doesn't give us immortality, all those cigarettes when I was
younger!
On Mon, Mar 11, 2024 at 11:07 AM wrote:
> Writes it like a pimp. Fuck that shit! :) I like this guy. (Just don't be
> an ass when we try to
One diagram then I'll stop looping 8)
[image: diagram.png]
https://github.com/keyvan-m-sadeghi/about-time
Have to take the GitHub repo private in a week, traveling to Iran for a
month. Will open up again after I'm back.