[agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-06-20 Thread immortal . discoveries
He's basically saying the real test for AGI is not just doing a lot of diverse tasks like answering questions using text or even video, and not even doing diverse tasks robotically or as a bot (bots that surf the internet), but actually measuring it using money.

[agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-02 Thread ivan . moony
How about: how many people has the AI made happier? -- Artificial General Intelligence List: AGI Permalink: https://agi.topicbox.com/groups/agi/T42db51de471cbcb9-Mc885eb26b089a5599f28a807 Delivery options: https://agi.topicbox.com/groups/agi/subscription

Re: [agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-03 Thread Matt Mahoney
On Sun, Jul 2, 2023, 5:18 PM wrote: > How about: how much people the AI made happier? > Happiness is hard to measure. We don't know if people are happier today than 100 or 1000 years ago. We don't know if people are happier than animals. We don't know if farm animals are happier than wild animal

Re: [agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-04 Thread immortal . discoveries
Exponential progress has been happening since thousands/millions of years ago, yes. But only now is the size of the output starting to reverse human ageing... I'm not sure if that's what you meant, but I just wanted to say it.

Re: [agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-04 Thread Matt Mahoney
On Tue, Jul 4, 2023, 9:22 AM wrote: > Exponential progress has been happening since thousands/millions of years > ago, yes. But only now is the size of the output starting to reverse human > ageing... I'm not sure if that's what you meant, but I just wanted to say it. > We are not close to reversing human agi

Re: [agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-04 Thread Rob Freeman
Off topic, and I haven't followed this thread, but... On Tue, Jul 4, 2023 at 10:21 PM Matt Mahoney wrote: >... > > We are not close to reversing human aging. The global rate of increase in > life expectancy has dropped slightly after peaking at 0.2 years per year in > the 1990s. We have 0 drugs

Re: [agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-05 Thread Matt Mahoney
Let's assume the best case, that the various drugs that are proposed to slow aging by mimicking the metabolism-slowing effects of calorie restriction (resveratrol, metformin, etc.) actually work in humans, have no long term side effects, and everyone starts taking them as a lifetime regimen from birt

Re: [agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-05 Thread Rob Freeman
On Wed, Jul 5, 2023 at 7:05 PM Matt Mahoney wrote: >... > LLMs do have something to say about consciousness. If a machine passes the > Turing test, then it is conscious as far as you can tell. I see no reason to accept the Turing test as a definition of consciousness. Who ever suggested that? E

Re: [agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-05 Thread immortal . discoveries
Exponential is nice, it's a line of a sort, steady, black and white. But life isn't all 1+1. AGI will be robotic; it will suddenly exist. It will have freezable memory. It will have a weird yellow dot on its pinky toe, who knows. Life is weird. It isn't a line or curve so nicely. Humans may not

Re: [agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-05 Thread Matt Mahoney
I am still on the Hutter prize committee and just recently helped evaluate a submission. It uses 1 GB of text because that is how much a human can process over a lifetime. We have much larger LLMs, of course. Their knowledge is equivalent to thousands or millions of humans, which makes them much mo

Re: [agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-05 Thread Rob Freeman
On Thu, Jul 6, 2023 at 3:51 AM Matt Mahoney wrote: > > I am still on the Hutter prize committee and just recently helped evaluate a > submission. It uses 1 GB of text because that is how much a human can process > over a lifetime. We have much larger LLMs, of course. Their knowledge is > equiva

Re: [agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-06 Thread immortal . discoveries
Just to be clear, I meant we won't die, thanks to repair etc.; when I said transfer I did not mean upload. I certainly don't talk about uploading in the context: I/humans don't want to be uploaded to avoid death. Uploads do, however, still help back up data if we need to fetch some missing memories or organs.

Re: [agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-06 Thread immortal . discoveries
And, uh, yes I am still giving it a new shot, why not? Just in case. I'm currently narrowing down the code to be simpler and give a strong result that matches byron's a little closer than I used to; then I will add more of my other functions next. Then I'm gonna try to run it on GPU to get all the matche

Re: [agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-06 Thread Rob Freeman
On Thu, Jul 6, 2023 at 11:30 AM wrote: >... > Hold on. The Lossless Compression evaluation tests not just compression, but > expansion! It's easy to get lost in word definitions. It sounds like you're using "expansion" in a sense of recovering an original from a compression. I'm using "expansi

Re: [agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-06 Thread immortal . discoveries
You have to predict "_" after the context ABC>_. GPT-4 does that great. This makes the hurricane; AGI has to predict parts/bits. Forget the LC benchmark, the AI is learning representations is all. It doesn't store all the data in the brain of course. You said above large is the way, expand is the w
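The "predict the blank after a context" idea above can be sketched with a toy character bigram model (my own illustrative sketch, nothing like GPT-4's actual mechanism; the corpus string is made up):

```python
from collections import Counter, defaultdict

# Toy corpus (hypothetical): count which character follows each character.
corpus = "abcabcabd"
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(context: str) -> str:
    # Predict the most likely next character, conditioning only on the
    # last character of the context (a bigram model).
    return counts[context[-1]].most_common(1)[0][0]

print(predict("ab"))  # 'c' -- 'b' is followed by 'c' twice, 'd' once
```

The same principle, predicting the next symbol from context, scales up to LLMs, just with far longer contexts and learned representations instead of raw counts.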

Re: [agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-06 Thread immortal . discoveries
So yes, you are right, in a way, as shown by the end of my reply it seems.

Re: [agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-06 Thread James Bowery
On Thu, Jul 6, 2023 at 1:09 AM Rob Freeman wrote: > ... I just always believed the goal of compression was wrong. You're really confused. First, you're conflating lossless compression with lossy compression. Lossless compression is simply Occam's Razor. Lossy compression is Confirmation Bias o

Re: [agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-06 Thread Matt Mahoney
On Thu, Jul 6, 2023, 2:09 AM Rob Freeman wrote: > > Did the Hutter Prize move the field? Well, I was drawn to it as a rare > data based benchmark. > Not much. I disagreed with Hutter on the contest rules, but he was funding the prize. (I once applied for an NSF grant but it was rejected like 90%

Re: [agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-06 Thread James Bowery
On Thu, Jul 6, 2023 at 12:00 PM Matt Mahoney wrote: > On Thu, Jul 6, 2023, 2:09 AM Rob Freeman > wrote: > >> >> Did the Hutter Prize move the field? Well, I was drawn to it as a rare >> data based benchmark. >> > > Not much. I disagreed with Hutter on the contest rules, but he was funding > the

Re: [agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-06 Thread Matt Mahoney
Since the Hutter prize was expanded 10x to 5000 € per 1% improvement in 2019, the historical rate of improvement has been 0.5% per year. Meanwhile the top 5 entries on LTCB beat the Hutter prize, with the best at 6% smaller. I think that anyone with the technical skills to submit a winning entry i
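A back-of-envelope on the figures quoted above (my own arithmetic, not official prize rules; assumes payout scales linearly at 5000 € per 1% improvement):

```python
# Assumptions taken from the message above, not from the prize's fine print.
eur_per_percent = 5000        # euros paid per 1% compression improvement
improvement_per_year = 0.5    # observed historical rate, percent per year

# Implied average payout rate if that historical pace continued:
expected_payout_per_year = eur_per_percent * improvement_per_year
print(expected_payout_per_year)  # 2500.0 euros/year
```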

Re: [agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-06 Thread James Bowery
On Thu, Jul 6, 2023 at 2:03 PM Matt Mahoney wrote: > Since the Hutter prize was expanded 10x to 5000 € per 1% improvement in > 2019, the historical rate of improvement has been 0.5% per year. Meanwhile > the top 5 entries on LTCB beat the Hutter prize, with the best at 6% > smaller. > > I think t

Re: [agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-06 Thread Rob Freeman
On Thu, Jul 6, 2023 at 7:54 PM James Bowery wrote: > On Thu, Jul 6, 2023 at 1:09 AM Rob Freeman wrote: >> >> I just always believed the goal of compression was wrong. > > You're really confused. I'm confused? Maybe. But I have examples. You don't address my examples. You just enumerate a list of

Re: [agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-06 Thread Rob Freeman
On Thu, Jul 6, 2023 at 7:58 PM Matt Mahoney wrote: > ... > The LTCB and Hutter prize entries model grammar and semantics to some extent > but never developed to the point of constructing world models enabling them > to reason about physics or psychology or solve novel math and coding > problems

Re: [agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-08 Thread Matt Mahoney
You don't need to define science. Occam's Razor works as the basis for choosing theories in all branches of science because all possible probability distributions over the countably infinite set of strings must favor shorter strings.
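The point about distributions over countably infinite strings can be illustrated with one concrete prior (a sketch of my own, not anything from the thread): assign p(x) = 2^-(2|x|+1) to each binary string x. The total mass sums to 1, and the mass per length halves with each extra bit, so shorter strings are necessarily favored. More generally, since infinitely many probabilities must sum to 1, at most 1/eps strings can have probability above any eps > 0.

```python
from itertools import product

# One concrete prior over all binary strings: p(x) = 2^-(2*len(x) + 1).
def p(x: str) -> float:
    return 2.0 ** -(2 * len(x) + 1)

# Total mass at each length n: 2^n strings * 2^-(2n+1) = 2^-(n+1),
# so summing over all lengths gives 1, and mass strictly decreases with n.
mass_by_length = [sum(p("".join(bits)) for bits in product("01", repeat=n))
                  for n in range(10)]
total = sum(mass_by_length)

print(mass_by_length[:4])  # [0.5, 0.25, 0.125, 0.0625]
print(total)               # 0.9990234375, approaching 1 as lengths grow
```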

Re: [agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-08 Thread James Bowery
One needs to define the ability to predict the consequences of various actions in order to program an AGI. If you don't want to call that "science", that's fine. I don't care what you call it. But Rob Freeman certainly takes that as attempting to "define science". On Sat, Jul 8, 2023 at 8:42 AM Matt

Re: [agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-08 Thread WriterOfMinds
When I see the word "science," I think of something more specific than the ability to predict the outcomes of actions. I don't use the formal scientific method to determine how a friend will react to a particular gift, for example.

Re: [agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-08 Thread James Bowery
But you do use a model of your environment to predict how your friend will react. You *embody* that model as *learned*. The *learning algorithm* involves encoding the knowledge in your personal (neurons) and phylogenetic (DNA) *history of experience*. We already have examples of combinations of

Re: [agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-09 Thread stefan.reich.maker.of.eye via AGI
On Wednesday, July 05, 2023, at 6:05 PM, Matt Mahoney wrote: > In reality, we are doing much worse. We still don't understand after 40 years > why we have an obesity and diabetes epidemic. It took almost that long to > learn that low fat diets make you fatter, but it is more than that. We are >