He's basically saying the real test for AGI is not just performing many diverse
tasks, whether answering questions in text or even video, or acting through
robots or bots (bots that surf the internet), but actually measuring its impact
in money.
---
How about: how many people the AI made happier?
--
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/T42db51de471cbcb9-Mc885eb26b089a5599f28a807
Delivery options: https://agi.topicbox.com/groups/agi/subscription
On Sun, Jul 2, 2023, 5:18 PM wrote:
> How about: how many people the AI made happier?
>
Happiness is hard to measure. We don't know if people are happier today
than 100 or 1000 years ago. We don't know if people are happier than
animals. We don't know if farm animals are happier than wild animals.
Exponential progress has been happening for thousands/millions of years,
yes. But only now is the scale of the output starting to reverse human
ageing... I'm not sure if that's what you meant, but I just wanted to say it.
On Tue, Jul 4, 2023, 9:22 AM wrote:
> Exponential progress has been happening for thousands/millions of years,
> yes. But only now is the scale of the output starting to reverse human
> ageing... I'm not sure if that's what you meant, but I just wanted to say it.
>
We are not close to reversing human aging.
Off topic, and I haven't followed this thread, but...
On Tue, Jul 4, 2023 at 10:21 PM Matt Mahoney wrote:
>...
>
> We are not close to reversing human aging. The global rate of increase in
> life expectancy has dropped slightly after peaking at 0.2 years per year in
> the 1990s. We have 0 drugs
Let's assume the best case: that the various drugs proposed to slow aging
by mimicking the metabolism-slowing effects of calorie restriction
(resveratrol, metformin, etc.) actually work in humans, have no
long-term side effects, and everyone starts taking them as a lifetime
regimen from birth.
On Wed, Jul 5, 2023 at 7:05 PM Matt Mahoney wrote:
>...
> LLMs do have something to say about consciousness. If a machine passes the
> Turing test, then it is conscious as far as you can tell.
I see no reason to accept the Turing test as a definition of
consciousness. Who ever suggested that? E
Exponential is nice; it's a line of a sort, steady, black and white.
But life isn't all 1+1. AGI will be robotic; it will suddenly exist. It will
have freezable memory. It will have a weird yellow dot on its pinky toe, who
knows. Life is weird. It isn't so neatly a line or a curve.
Humans may not
I am still on the Hutter prize committee and just recently helped evaluate
a submission. It uses 1 GB of text because that is how much a human can
process over a lifetime. We have much larger LLMs, of course. Their
knowledge is equivalent to thousands or millions of humans, which makes
them much mo
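For a sense of scale, the 1 GB figure above can be unpacked into a daily rate. A quick sketch (the 80-year lifespan and ~6 bytes per English word are my assumptions, not from the post):

```python
# Roughly what "1 GB of text over a lifetime" means per day.
GB = 10 ** 9
years = 80                            # assumed lifespan
bytes_per_day = GB / (years * 365)
words_per_day = bytes_per_day / 6     # ~6 bytes per word incl. space (assumption)
print(f"{bytes_per_day:.0f} bytes/day, roughly {words_per_day:.0f} words/day")
```

That works out to a few thousand words a day, which is at least in the right ballpark for lifelong language exposure.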
On Thu, Jul 6, 2023 at 3:51 AM Matt Mahoney wrote:
>
> I am still on the Hutter prize committee and just recently helped evaluate a
> submission. It uses 1 GB of text because that is how much a human can process
> over a lifetime. We have much larger LLMs, of course. Their knowledge is
> equiva
Just to be clear, I meant we won't die, thanks to repair etc.; when I said
transfer I did not mean upload. I certainly don't talk about uploading in this
context: I/humans don't want to be uploaded to avoid death. Uploads do,
however, still help back up data, in case we need to fetch some missing
memories or organs.
And, uh, yes, I am still giving it a new shot, why not? Just in case. I'm
currently narrowing down the code to be simpler and give a strong result that
matches Byron's a little closer than I used to, then I will add more of my
other functions next. Then I'm gonna try to run it on GPU to get all the matche
On Thu, Jul 6, 2023 at 11:30 AM wrote:
>...
> Hold on. The Lossless Compression evaluation tests not just compression, but
> expansion!
It's easy to get lost in word definitions.
It sounds like you're using "expansion" in a sense of recovering an
original from a compression.
I'm using "expansi
You have to predict "_" after the context ABC>_
GPT-4 does that great.
This makes the hurricane, AGI has to predict parts/bits.
Forget the LC benchmark; the AI is just learning representations, is all. It
doesn't store all the data in the brain, of course.
You said above large is the way, expand is the w
So yes you are right, in a way as shown by the end of my reply it seems.
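The compression-equals-prediction point above can be made concrete. Here is a toy sketch (my own illustration, not any actual prize entry): a bigram model assigns p(next char | context), and the ideal compressed size is the sum of -log2 p over the text. An arithmetic coder achieves this to within a couple of bits, and the decompressor "expands" by running the same model to reproduce the text exactly.

```python
import math
from collections import Counter, defaultdict

text = "abracadabra"

# Toy bigram model: p(next char | previous char), with add-one smoothing
# over the text's alphabet. A stand-in for a real predictor such as an LLM.
alphabet = sorted(set(text))
counts = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    counts[prev][nxt] += 1

def prob(prev, nxt):
    c = counts[prev]
    return (c[nxt] + 1) / (sum(c.values()) + len(alphabet))

# Ideal compressed size = sum of -log2 p(symbol | context) over the text.
# The decompressor runs the SAME model to expand the code back losslessly.
bits = sum(-math.log2(prob(prev, nxt)) for prev, nxt in zip(text, text[1:]))
print(round(bits, 2), "bits to encode", len(text) - 1, "symbols")
```

Swap in a stronger predictor and the bit count drops; better prediction and better lossless compression are the same quantity here.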
On Thu, Jul 6, 2023 at 1:09 AM Rob Freeman
wrote:
> ...
I just always believed the goal of compression was wrong.
You're really confused.
First, you're conflating lossless compression with lossy compression.
Lossless compression is simply Occam's Razor.
Lossy compression is Confirmation Bias o
On Thu, Jul 6, 2023, 2:09 AM Rob Freeman wrote:
>
> Did the Hutter Prize move the field? Well, I was drawn to it as a rare
> data based benchmark.
>
Not much. I disagreed with Hutter on the contest rules, but he was funding
the prize. (I once applied for an NSF grant but it was rejected like 90%
On Thu, Jul 6, 2023 at 12:00 PM Matt Mahoney
wrote:
> On Thu, Jul 6, 2023, 2:09 AM Rob Freeman
> wrote:
>
>>
>> Did the Hutter Prize move the field? Well, I was drawn to it as a rare
>> data based benchmark.
>>
>
> Not much. I disagreed with Hutter on the contest rules, but he was funding
> the
Since the Hutter prize was expanded 10x to 5000 € per 1% improvement in
2019, the historical rate of improvement has been 0.5% per year. Meanwhile
the top 5 entries on LTCB beat the Hutter prize, with the best at 6%
smaller.
I think that anyone with the technical skills to submit a winning entry i
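A quick back-of-envelope check on those figures (the numbers are from the post above; the extrapolation is mine):

```python
# Hutter Prize: 5000 EUR per 1% improvement; historical rate ~0.5%/year.
euros_per_percent = 5000
improvement_per_year = 0.5            # percent per year

payout_per_year = euros_per_percent * improvement_per_year
print(f"~{payout_per_year:.0f} EUR/year at the historical rate")

# The best LTCB entry is ~6% smaller than the prize baseline, i.e. roughly
# twelve years of progress at the prize's historical pace.
years_to_close_gap = 6 / improvement_per_year
print(f"~{years_to_close_gap:.0f} years of prize-rate progress in the gap")
```

At ~2500 EUR/year of expected payout, the point about technical skill being better rewarded elsewhere more or less makes itself.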
On Thu, Jul 6, 2023 at 2:03 PM Matt Mahoney wrote:
> Since the Hutter prize was expanded 10x to 5000 € per 1% improvement in
> 2019, the historical rate of improvement has been 0.5% per year. Meanwhile
> the top 5 entries on LTCB beat the Hutter prize, with the best at 6%
> smaller.
>
> I think t
On Thu, Jul 6, 2023 at 7:54 PM James Bowery wrote:
> On Thu, Jul 6, 2023 at 1:09 AM Rob Freeman wrote:
>>
>> I just always believed the goal of compression was wrong.
>
> You're really confused.
I'm confused? Maybe. But I have examples. You don't address my
examples. You just enumerate a list of
On Thu, Jul 6, 2023 at 7:58 PM Matt Mahoney wrote:
> ...
> The LTCB and Hutter prize entries model grammar and semantics to some extent
> but never developed to the point of constructing world models enabling them
> to reason about physics or psychology or solve novel math and coding
> problems
You don't need to define science. Occam's Razor works as the basis for
choosing theories in all branches of science because all possible
probability distributions over the countably infinite set of strings must
favor shorter strings.
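One way to see that claim concretely: here is a valid distribution over all binary strings that sums to 1 and necessarily pushes its mass toward shorter strings (the specific choice p(x) = 2^(-2|x|-1) is my illustration, not from the post):

```python
# p(x) = 2^(-2*len(x) - 1) for every binary string x. Each length class
# (2^k strings of length k) then receives total mass 2^(-k-1), so the
# class masses form the series 1/2 + 1/4 + 1/8 + ... = 1.
def p(length):
    """Probability of any single binary string of the given length."""
    return 2.0 ** (-2 * length - 1)

# Partial sum of the mass over length classes 0..59: converges to 1.
total = sum((2 ** k) * p(k) for k in range(60))
print(total)

# Individual shorter strings are strictly more probable than longer ones,
# and all but finitely many strings get less mass than any fixed string.
print(p(0), p(1), p(5))
```

Any distribution over a countably infinite set must behave this way in aggregate: probabilities go to 0, so only finitely many strings (and hence mostly short ones) can carry non-negligible mass.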
One needs to define the ability to predict consequences of various actions
in order to program an AGI. If you don't want to call that "science"
that's fine. I don't care what you call it. But Rob Freeman certainly
takes that as attempting to "define science".
On Sat, Jul 8, 2023 at 8:42 AM Matt
When I see the word "science," I think of something more specific than the
ability to predict the outcomes of actions.
I don't use the formal scientific method to determine how a friend will react
to a particular gift, for example.
But you do use a model of your environment to predict how your friend will
react. You *embody* that model as *learned*. The *learning algorithm* involves
encoding the knowledge in your personal (neurons) and phylogenetic (DNA)
*history of experience*.
We already have examples of combinations of
On Wednesday, July 05, 2023, at 6:05 PM, Matt Mahoney wrote:
> In reality, we are doing much worse. We still don't understand after 40 years
> why we have an obesity and diabetes epidemic. It took almost that long to
> learn that low fat diets make you fatter, but it is more than that. We are
>