Re: [agi] Revising genetic algorithm with genAI and quantum randomness?

2024-10-31 Thread Matt Mahoney
How would a quantum entropy source help a genetic algorithm? The random
number generator doesn't even need to be cryptographically secure. It only
needs to be good enough to cover all possible mutations. Genetic algorithms
are slow because the reinforcement signal bandwidth is only 1 bit per copy.
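Here is a minimal sketch of the point (the bit-counting fitness function
and the population sizes are made up for illustration; any decent PRNG
covers the mutation space):

#include <algorithm>
#include <cstdio>
#include <random>
#include <vector>

int main() {
  std::mt19937 rng(42);  // ordinary PRNG, nothing cryptographic about it
  const int N = 64, POP = 100, GENS = 1000;
  auto fitness = [](const std::vector<int>& g) {
    return (int)std::count(g.begin(), g.end(), 1);  // toy objective: count 1 bits
  };
  std::vector<std::vector<int>> pop(POP, std::vector<int>(N));
  for (auto& g : pop) for (auto& b : g) b = rng() % 2;
  for (int gen = 0; gen < GENS; gen++) {
    // selection: sort by fitness; the bottom half dies, the top half is copied
    std::sort(pop.begin(), pop.end(),
              [&](const std::vector<int>& a, const std::vector<int>& b) {
                return fitness(a) > fitness(b);
              });
    for (int i = POP / 2; i < POP; i++) {
      pop[i] = pop[i - POP / 2];  // each copy carries at most 1 bit: live or die
      pop[i][rng() % N] ^= 1;     // point mutation drawn from the PRNG
    }
  }
  printf("best fitness: %d\n", fitness(pop[0]));
  return 0;
}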

On Sat, Oct 26, 2024, 4:39 PM Keyvan M. Sadeghi 
wrote:

> Hey fam 🤗
>
> Was wondering if anyone is working on something similar to the title of
> this email?
>
> My take on why GA hasn't given us real world scale of evolution was that
> it's bound to the complexity of the simulation one confines the algorithm
> within. I.e. there's no entropy source, game worlds don't have sun shining
> down on them!
>
> Large trained models of today capture a lot more of the entropy of the
> world outside the computers.
>
> Here's what I'm thinking (10 minutes musings, very uncooked), add to a
> traditional GA the following:
>
> - Entropy source: chromosome in each generation isn't a fixed length
> binary number, it's variable length and evolving data structure, output of
> a large model
>
> - Quantum mutations: the mutation step is sourced from a quantum number
> generator. Many of these exist today with free APIs, e.g.
> https://qrng.anu.edu.au
>
> - Optional step: also evolve the world that the population is embedded
> within, in each generation, from a genAI
>
> With this setup, I'm trying to question a fundamental assumption of
> conventional GAs, that complexity can arise from a simple set of rules.
>
> Any recent related work that you know of? I know Ben is working on a
> similar approach, his entropy source is MeTTa agents interfacing with
> endpoints to bring in what I refer to as entropy here. Any other research
> ongoing on this?
>
> ❤️
> K

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te9337bb274956116-M92ee8f80c1ebdad59da44b19
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Grants for developing AI code in MeTTa language toward implementing PRIMUS cognitive architecture in Hyperon

2024-10-23 Thread Matt Mahoney
On Tue, Oct 22, 2024, 11:16 AM stefan.reich.maker.of.eye via AGI <
agi@agi.topicbox.com> wrote:

> On Monday, October 21, 2024, at 10:38 PM, dissipate wrote:
>
> just develop AI that is better than humans at a very specific task:
> researching and developing AI itself
>
> Just make a program that solves the hardest problem of all - presto, you
> solved the hardest problem of all.
>

Well, no. Intelligence depends on knowledge and computing power, or
ultimately just computing power because you need it to acquire knowledge.
Unless you can define and enforce a fitness function or a test for
intelligence, then it defaults to acquiring computing power by any means
possible. That means competing with humans for atoms and energy.

This is the 20 year old unfriendly AI problem. You can't test for
superhuman intelligence because you aren't smart enough to know if it is
giving the right answers. You can't control what you can't predict. You
can't specify a fitness function, and you couldn't enforce it even if you
could. The best we have come up with is the Turing curse (test), where the
highest possible score is human level. We can't even test people for an IQ
of 200. Your phone can do arithmetic a billion times faster than you and
has a billion times more short term memory. What would you say its IQ is?

So stop throwing around meaningless terms like AGI or ASI. Say what you
really mean. We want technology that gives us everything we want without
having to pay or work for it. That we can do. We acquire all human
knowledge (10^19 characters compressed to 10^17 bits) and the equivalent
computing power of 8 billion brains: 10^26 operations per second over 10^25
parameters. It's just a $1 quadrillion engineering problem.

I'm happy to see that Ben is finally getting the resources for some real AI
research. I'm looking forward to a Large Text Compression Benchmark
submission with no limits on computing power.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T3ced54aaba4f0969-M107b19824402d17da67a27e3
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Grants for developing AI code in MeTTa language toward implementing PRIMUS cognitive architecture in Hyperon

2024-10-21 Thread Matt Mahoney
I have some questions about Hyperon and your paper on how to improve LLM
performance. Have you or would you be able to implement MOSES or an LLM in
AtomSpace/MeTTa? Do you have a GPU implementation? Do you have any
applications or benchmark results? How much hardware do you have? How much
training data have you collected?

I want any project I work on to succeed. My concerns are:

1. There won't be a hard takeoff because you can't compare human and
machine intelligence. There is no threshold where, once a machine reaches
it, the machine could do what its human creators did (produce something
smarter), only faster. Computers started
surpassing humans in the 1950's and will continue to improve for decades
more before humans become irrelevant.

2. Webmind/Novamente/OpenCog/Hyperon hasn't produced anything since 1998. I
recall the goal at one time was to produce AGI by 2013. How much closer are
you?

3. Evolutionary algorithms like MOSES are inherently slow because each
population-doubling generation adds at most one bit of Kolmogorov
complexity (live or die) to the genome. Our genome is 10^9 bits after 10^9
generations. Human evolution only succeeded because of massive computing
power that doesn't yet exist outside of the biosphere: 10^48 DNA base copy
operations on 10^37 bits, powered by 90,000 TW of solar power for 3 billion
years. Transistors would use a million times more energy, and we are still
far from developing energy efficient computing nanotechnology based on
moving atoms instead of electrons. Any ideas to speed this up?

4. It looks like from the size of your team and RFPs that you have 8
figures to invest. The big tech companies are investing 12 figures. But I
think right now we are in an AI bubble. Investors are going to want a
return on their investment, namely the $100 trillion per year labor
automation problem. But LLMs are not taking our jobs because only a tiny
fraction of the 10^17 bits of human knowledge stored in 10^10 human brains
(10^9 bits per person, assuming 99% is shared knowledge) is written down
for LLMs to train on. LLMs aren't taking your job because the knowledge they
need is in your brain and can only be extracted through years of speech
and writing at 5 to 10 bits per second. There is only about 10^13 bits of
public data available to train the largest LLMs. When people see that job
automation is harder than we thought, the AI bubble will pop and investment
in risky, unproven technology like Hyperon will dry up. AI isn't going
away, just like the internet didn't go away after the 2000 dotcom boom. But
the hype will go. ChatGPT is 2 years old and still mostly a toy to help
kids write fan letters or cheat on homework. In the real world,
unemployment is down.

On Fri, Oct 18, 2024, 11:45 AM Ben Goertzel  wrote:

> Hey!
> 
> SingularityNET is offering some grants to folks who want to do some
> AGI-oriented AI software development on specific projects that are
> part of our thrust to make an AGI using the OpenCog Hyperon
> architecture.
> 
> Please see here for the details
> 
> https://deepfunding.ai/all-rfps/
> 
> The projects mainly involve development in our new MeTTa AGI-oriented
> language.   See here
> 
> https://metta-lang.dev/
> 
> for information on the MeTTa language itself, and links here
> 
> https://hyperon.opencog.org/
> 
> https://arxiv.org/abs/2310.18318
> 
> for general info on the Hyperon approach to AGI
> 
> thanks
> Ben
> 
> --
> -- Ben Goertzel, PhD
> http://goertzel.org
> CEO, SingularityNET / True AGI / ASI Alliance
> Chair, AGI Society
> 
> "One must have chaos in one's heart to give birth to a dancing star"
> -- Friedrich Nietzsche

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T3ced54aaba4f0969-M86c5b8534818a1bdb2cd6de5
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] The Computronium Abyss

2024-10-16 Thread Matt Mahoney
On Tue, Oct 15, 2024, 3:15 PM  wrote:

> A short paper on a concept I'm developing called The Computronium Abyss:
> https://github.com/dissipate/computronium_abyss
>
> Roast it.
>

Your first equation looks like the Bekenstein bound of a black hole with
mass M. It gives the entropy as A/4 nats (1 nat = 1/ln 2 ≈ 1.44 bits)
where A is the area of the event horizon in Planck units. The Schwarzschild
radius of a black hole is 2GM/c^2, thus the nonlinear dependency on M^2. I
calculated the entropy of the universe at 2.95 x 10^122 bits based on a
radius of 13.8 billion light years. This is close to Lloyd's rough estimate
of 10^120 qubit operations possible by converting the mass of the universe
(10^53 kg) to 10^70 J over the age of the universe, 4 x 10^17 s, using your
second equation.

Unfortunately, most of this entropy is heat, not available for computation.
Lloyd estimated that the universe could encode 10^90 bits in the states of
10^80 particle positions and momentums within the limits of Heisenberg's
uncertainty. I independently estimated that 10^70 J could write 10^92 bits
within the Landauer limit at the CMB temperature of 3 K.
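For anyone who wants to check the arithmetic, here is a rough sketch
(constants rounded; the 13.8 Gly radius and the 10^70 J figure are the
ones used above):

#include <cmath>
#include <cstdio>

int main() {
  const double pi = 3.141592653589793;
  const double lp = 1.616e-35;      // Planck length, m
  const double ly = 9.461e15;       // light year, m
  const double R = 13.8e9 * ly;     // radius of the observable universe, m
  const double A = 4 * pi * R * R;  // horizon area, m^2
  const double S_nats = A / (4 * lp * lp);  // Bekenstein-Hawking entropy, nats
  const double S_bits = S_nats / std::log(2.0);
  const double k = 1.381e-23, T = 3.0;      // Boltzmann constant; CMB ~3 K
  const double writable = 1e70 / (k * T * std::log(2.0));  // Landauer limit
  printf("horizon entropy: %.2e bits\n", S_bits);               // ~3e122
  printf("bits writable with 1e70 J at 3 K: %.2e\n", writable); // ~3e92
  return 0;
}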

I'm interested in how you would proceed from here to build a self
optimizing AGI. Evolution would be one way. The biosphere uses 10^41 carbon
atoms to encode 10^37 bits of DNA and has performed 10^48 DNA base copy
operations and 10^50 amino acid transcription operations over the last
10^17 s to evolve humans. That's 10^33 operations per second. The Earth
receives 90,000 TW of sunlight at the surface, of which 500 TW is converted
to carbohydrates by photosynthesis, or 10^-17 J per operation. The Landauer
limit at 300K is 4 x 10^-21 J. By comparison, a synapse operation takes
10^-15 J and a transistor operation 10^-11 J. Global electricity production
is 18 TW.

You can't make transistors smaller than atoms, so further advances in
Moore's law will require nanotechnology, moving atoms instead of electrons.
Assuming this happens and global computing power doubles every 3 years, it
will take about 130 years for self replicating nanotechnology to catch up
to and displace DNA based life.

What would be your approach?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T86ee7f7b146878af-M38027dcdcc67bbdfb706f425
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] P(Haz_AGI(OpenAI)) = ?

2024-09-19 Thread Matt Mahoney
On Wed, Sep 18, 2024 at 4:11 PM James Bowery  wrote:
> On Tue, Sep 17, 2024 at 2:47 PM Matt Mahoney  wrote:
>>
>> ...I mean observer dependent information...
>
>
> Such intersubjectivity recursively bottoms out in the lone subject who 
> receives "data" through a provenance chain involving other "observers" some 
> of whom are "which":  measurement instruments.
>
> This is why I'm so insistent that the AIT folks get on with formalizing 
> forensic epistemology -- including, ultimately -- game theory.  This can 
> start with something as simple as an agent self-diagnosing a faulty 
> measurement instrument that delivers observations -- knowledge in the sense 
> you mean.
>
> Until this happens, I'm afraid all of the efforts at "ethics" in AGI are ill 
> founded.

Ethics is a product of group evolution. Like, most animals don't eat
their own species. It is not something that AIT can resolve.

In any case, I was not trying to model ethics. I was estimating the
cost of transferring 10^17 bits of human knowledge into AGI.
1. Modeling the physics of the universe. Requires 10^120 qubit operations.
2. Modeling evolution. Requires 10^50 transcription operations on
10^37 bits of DNA.
3. Scanning 10^10 human brains at 5 nm resolution, producing 10^32
bits of voxel data.
4. Transmitting 10^19 characters of speech and writing at 10
characters per second per person at 0.01 bits per character.

Option 4 will cost about $1 quadrillion. Option 3 would be preferable
if the cost of a brain scan could be reduced to under $100K. Option 2
required 4 billion years on a planet sized molecular computer
consuming 90,000 TW, but I think with nanotechnology this could be
reduced by a factor of 100, close to the Landauer limit. Option 1 is
impossible by Wolpert's law.
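To spell out the option 4 arithmetic: 10^19 characters at 10 characters
per second per person is 10^18 person-seconds, or roughly 3 x 10^14
hours; at a $5 per hour global average wage that is about $1.5 x 10^15,
i.e. $1 quadrillion. Option 3 lands at the same order of magnitude if a
scan costs $100K: 10^10 brains x 10^5 dollars = $10^15.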

-- 
-- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T11c1e7172d92d3cd-Mae542e19662c1ca9e77bc2ea
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Internet of Bodies, the IoB, "a kind of synthetic global central nervous system"

2024-09-19 Thread Matt Mahoney
How do you think microtubules affect the neural network models that have
been used so effectively in LLMs and vision models? Are neurons doing more
than just a clamped sum of products and adjusting the weights and
thresholds to reduce output errors?

On Wed, Sep 18, 2024, 3:08 PM John Rose  wrote:

> Aaaand we got transistors:
>
> https://www.nature.com/articles/s41598-023-36801-1
>
> Where are the capacitors now let's see...
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tff6648b032b59748-M1dbfb160476364fc522d4ab9
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] P(Haz_AGI(OpenAI)) = ?

2024-09-17 Thread Matt Mahoney
A recursively self improving program after n iterations has only increased
its Kolmogorov complexity by log n bits. This does not rule out acquisition
of computing power, the other requirement for intelligence, nor the
acquisition and storage of more knowledge made possible by more hardware.
The limitation of programs in isolation was more relevant 15-20 years ago
when there were still serious proposals on SL4 for developing AI in a box
as a precaution against unfriendly AI.

I realize that the complexity of human civilization is about 500 bits given
infinite computing power. I'm assuming 400 bits to describe the laws of
physics, given the normal tradeoff between compression and computing power,
80 bits to say which of 10^24 planets, and 20 bits to specify the time
interval for homo sapiens. Such a program could predict any question about
the future, such as tomorrow's lottery numbers or the exact date of human
extinction. Alas, it is not possible for any computer to model the universe
that contains it, because otherwise it could beat itself at rock paper
scissors.
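To spell out the bit counts: log2(10^24) ≈ 80, so 80 bits picks out one
planet among 10^24, and 2^20 ≈ 10^6, so 20 bits picks one of about a
million candidate time intervals.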

By my estimate of 10^17 bits of human knowledge (costing $1 quadrillion at
one cent per bit), I mean observer dependent information, not probability
in the absolute sense of Kolmogorov or Solomonoff induction. By that, I
mean if I flip a coin and peek at it, the probability of heads is different
for you than for me. I am counting bits that must be transferred from
carbon to silicon through slow channels made of human flesh.

On Tue, Sep 17, 2024, 10:52 AM James Bowery  wrote:

>
>
> On Mon, Sep 16, 2024 at 2:26 PM Matt Mahoney 
> wrote:
>
>> As I explained in my 2013 paper ( https://mattmahoney.net/costofai.pdf
>
>
> The closest you come to a rigorous definition of "knowledge" is Table 2.
> It would be helpful to be more careful in using that term in statements
> such as:
>
> "Third, it is fundamentally impossible for a program to increase its own
> knowledge ..."
>
> For example, Newtonian mechanics can *compute* "knowledge" derived from
> collective behaviors like fluid mechanics, without a reductio ad absurdum
> of a computer the size of the universe.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T11c1e7172d92d3cd-M46b6f23c89a47f9d82f017e3
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] P(Haz_AGI(OpenAI)) = ?

2024-09-16 Thread Matt Mahoney
As I explained in my 2013 paper ( https://mattmahoney.net/costofai.pdf
), the complexity of the economy that AGI ought to automate is on the
order of 10^17 bits. To get that you need to train on 100,000 TB of
text. Current LLMs are trained on 15 TB of text because that's all you
can suck off the Internet. Given that we don't have the technology to
scan brains at nm resolution, the only way to get this data is through
slow channels like speech and writing at the cognitive limit of 5 to
10 bits per second. So you need to spend on the order of $100 trillion
(roughly 1 year global GDP) at the current global average wage rate of
$5 per hour. As wages go up, so do your costs.

10^17 bits is the capacity of human long term knowledge (10^9 bits
based on long term memory tests for words and pictures) x 10^10 people
(world population) x 1% (the fraction of lifetime earnings that it
costs to replace an employee, which I used to estimate the fraction of
your knowledge that is not known to anyone else or written down).
That's 100 PB at a compression ratio of 1 bit per character, or likely
more because the compression ratio improves with the size of the data
set.
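Spelling out the arithmetic: 10^9 bits x 10^10 people x 1% = 10^17 bits.
At 1 bit per character of compressed text that corresponds to 10^17
characters, or about 100 PB of raw text at one byte per character.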

This is the reason that AI hasn't replaced your job. The knowledge you
need to do your job is not written down. The cost of your time to
train your replacement is roughly a year whether your replacement is
carbon or silicon based. That cost will go up for higher paying jobs.

If somehow we manage to double human knowledge every 3 years, then AGI
is 40 years away.

On Mon, Sep 16, 2024 at 2:24 PM Alan Grimes via AGI
 wrote:
>
> The question that seems central to all of this seems to be why the
> entire AI industry is so pathetic that OpenAI needs to give it a kick in
> the pants every 8 months or so? I mean you can frame this question in
> several different ways, and approach it from a dozen different angles.
> At root it doesn't seem to make any cents at all but the underlying fact
> keeps re-asserting itself like clockwork and demands an explanation

-- 
-- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T11c1e7172d92d3cd-M9799cd2125ab59dd575f8c9e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] my AGI-2024 presentation PPT

2024-08-21 Thread Matt Mahoney
Abductive logic means that if you have the rule "if A then B" and B is
true, then A is more likely to be true. That is logically incorrect but it
still works in practice, as it can be shown by Bayes rule that usually
p(A|B) > p(A). B is evidence for A.
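To spell out the Bayes step: p(A|B) = p(B|A) p(A) / p(B). If the rule "if
A then B" holds reliably then p(B|A) is close to 1, so p(A|B) ≈ p(A)/p(B)
≥ p(A), with equality only when B was already certain.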

Abductive reasoning is learned by Hebb's rule for classical conditioning.
The rule is learned by firing the neurons that detect A followed by B and
then strengthening the connection from A to B. Abductive reasoning is
learned by the firing sequence A -> B -> memory of A. This is all modeled
in neural networks with a short term memory implemented by slow responding
neurons.
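A minimal sketch of that Hebbian update (the learning rate and firing
values are made up; this is just the weight rule, not a full network):

#include <cstdio>

int main() {
  double w_ab = 0.0;      // connection strength from the A-detecting neuron to B
  const double lr = 0.1;  // assumed learning rate
  for (int trial = 0; trial < 10; trial++) {
    double a = 1.0, b = 1.0;  // A fires, then B fires, on every trial
    w_ab += lr * a * b;       // Hebb's rule: co-activation strengthens the link
  }
  printf("w_ab after 10 pairings: %.1f\n", w_ab);
  return 0;
}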

In your example, love and jealousy evolved in birds and a few mammals
including prairie voles and humans because offspring raised by 2 parents
had a better chance of survival. The algorithm for programming this
behavior required 10^46 DNA base copy operations on a 10^37 bit memory and
ran 4.2 billion years on a planet sized computer powered by 90 petawatts of
sunlight.

Fortunately LLMs can learn these and other human emotions and use them in
their text prediction algorithms. It's like when you read about elephants
going into musth, an emotion you have never felt, and use that knowledge to
predict their behavior. If you program LLMs to output their predictions in
a conversation in real time, then they are indistinguishable from actually
having human feelings.

But what I think you are asking is how to convert a neural network to a set
of logical rules that you can understand. Well, the first part is not hard.
Each neuron is a rule in fuzzy logic, which is superior to Boolean logic
because it represents uncertainty. But it is fundamentally impossible to
understand (as tested by prediction) an AI by any means, because that would
imply that it was less intelligent than you. By Wolpert's law, it is
impossible for a pair of computers each to predict the other, even if each
is given the state and source code of the other as input. (Proof: Otherwise
who would win at rock scissors paper?).

So the smarter computer wins. And without prediction, you have no control.
But don't worry. The transfer of power from humans to machines is gradual
because intelligence is not a point on a line. It started in the 1950s with
arithmetic. Now we have a house full of computers and no idea what software
is on any of them.

And besides, you won't notice anyway. When you train a dog by giving it
treats, it thinks it is controlling you.

On Wed, Aug 21, 2024, 4:19 AM YKY (Yan King Yin, 甄景贤) <
generic.intellige...@gmail.com> wrote:

> On Tue, Aug 13, 2024 at 10:21 PM James Bowery  wrote:
>
>> Not being competent to judge the value of your intriguing categorical
>> approach, I'd like to see how it relates to:
>>
>> * abductive logic programming
>>
>
> Yes, abductive logic is a good point.
> Abduction means "finding explanations for..."
> For example, a woman opens the bedroom door, sees a man in bed with
> another woman,
> and then all parties start screaming at each other at a high pitch.
> Explanation:  "wife discovers husband's affair", "she's jealous and
> furious", etc.
> In classical logic-based AI, these can be learned by logic rules,
> and applying the rules backwards (from conclusions to premises).
> In the modern paradigm of LLMs, all these inferences can be achieved in
> one fell swoop:
>
> [image: auto-encoder.png]
>
> In our example, the bedroom scene (raw data) appears at the input.
> Then a high-level explanation emerges at the latent layers (ie. yellow
> strip
> but also distributed among other layers).
> The auto-encoder architecture (also called predictive coding, and a bunch
> of names...)
> beautifully captures all the operations of a logic-AI system:  rules
> matching, rules application,
> pruning of conclusions according to interestingness, etc.
> All these are mingled together in the "black box" of a deep neural network.
> My big question is whether we can _decompose_ the above process into
> smaller parts,
> ie. to give it some fine structure, so the whole process would be
> accelerated.
> But this is hard because the current Transformer already has a fixed
> structure
> which is still somewhat mysterious...
>
> YKY

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T04cca5b54df55d05-Mca9c8d8ab6ae87b045a6f4b0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] my AGI-2024 presentation PPT

2024-08-20 Thread Matt Mahoney
On Mon, Aug 12, 2024 at 7:29 PM YKY (Yan King Yin, 甄景贤)
 wrote:
> Attached is my presentation PPT with some new materials not in the submitted 
> paper.

I wonder if you had time to answer your question at the end of the
presentation. How does this help AGI?

We have the algorithm mostly figured out. A fully connected neural
network can simulate an arbitrary number of layers to learn
arbitrarily complex features as well as an attention mechanism through
mutual inhibition. We established that text prediction is sufficient
to pass the Turing test (as well as any possible test for
consciousness). The largest language models have 10^12 parameters
trained on 10^13 tokens using 10^26 operations at 10^17 operations per
dollar, on the order of $1 billion.
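For the arithmetic: training cost is roughly 6 operations per parameter
per token (the usual rule of thumb for a forward plus backward pass), so
6 x 10^12 x 10^13 ≈ 10^26 operations, and 10^26 / 10^17 operations per
dollar ≈ $10^9.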

Now it is an engineering problem. We can't make transistors much
smaller (2 nm = 18 silicon atoms) to reduce power consumption, but we
can optimize the hardware for sparse, low precision vector operations.
We can reduce training costs to one operation per parameter per token
using one shot learning, like the brain and most text compressors
already do. Ultimately it will take nanotechnology, moving atoms
instead of electrons, to reduce power consumption for a human brain
sized neural network from 1 MW to 20 watts. That technology is still
decades away.

Meanwhile we have the much larger problem of collecting the training
data needed to automate human labor, which is now up to $110 trillion
and rising 5% per year. Yet, ChatGPT has been out for almost 2 years
without the slightest increase in unemployment. The problem is that
you need to collect 10^17 bits of human knowledge to do all the work
that people do, and we only have 10^14 bits (15 TB) of text available
on the public internet and most of it is already used to train LLMs.
AI will profoundly change the world. But when I look at the ads for
Meta, Gemini, and Copilot, I think, really? Is this the best we can do
with AI? Help kids write fan letters? These are basically toys.
Collecting all the knowledge you need to do your job that isn't
written down will cost on the order of $100 trillion at the global
average wage rate of $5 per hour. Figure one year of human time to
train your replacement. It doesn't matter if it is carbon or silicon.
You can only output 3 tokens per second.

-- 
-- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T04cca5b54df55d05-M802d3df156a65ab8ae5b6fc4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] EU AI act goes into effect today

2024-08-01 Thread Matt Mahoney
Summary of the AI law that goes into effect today in the European Union.
https://artificialintelligenceact.eu/high-level-summary/

It's not clear to me that this law or the proposed California law will have
any beneficial effects other than to slow down progress and make some
lawyers rich. What do you think?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T506c438a45cfa211-Mc19f9a6b4ea6a903353745d4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-07-31 Thread Matt Mahoney
When humans don't know the answer, they make one up. LLMs do the same
because they mimic humans. But it's not like there isn't a solution. Watson
won at Jeopardy in 2010 in part by not buzzing in when it didn't know the
correct response with high probability. That and having an 8 ms response
time that no human could match.

A text compressor estimates the probability of the next symbol and assigns
a code of length log 1/p for each possible outcome. Generative AI just
outputs the symbol with the highest p.
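A minimal sketch of the difference (the 4-symbol distribution is made up):

#include <cmath>
#include <cstdio>

int main() {
  const char sym[4] = {'a', 'b', 'c', 'd'};
  const double p[4] = {0.5, 0.25, 0.15, 0.10};  // assumed next-symbol probabilities
  int best = 0;
  for (int i = 0; i < 4; i++) {
    // a compressor assigns each outcome a code of length log2(1/p) bits
    printf("%c: p=%.2f, code length %.2f bits\n", sym[i], p[i], -std::log2(p[i]));
    if (p[i] > p[best]) best = i;
  }
  printf("greedy generation outputs: %c\n", sym[best]);  // most probable symbol
  return 0;
}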

On Wed, Jul 31, 2024, 2:07 AM  wrote:

> On Tuesday, July 23, 2024, at 8:27 AM, stefan.reich.maker.of.eye wrote:
>
> On Monday, July 22, 2024, at 11:11 PM, Aaron Hosford wrote:
>
> Even a low-intelligence human will stop you and tell you they don't
> understand, or they don't know, or something -- barring interference from
> their ego, of course.
>
> Yeah, why don't LLMs do this? If they are mimicking humans, they should do
> the same thing - acknowledging lack of knowledge - no?
>
>
> Interesting. Maybe being trained to predict the next token makes them try
> to be as accurate as possible, giving them not only accuracy but also a
> style of how they talk.
>
> Or is it because GPT-4 already knows everything >:) lol...

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M8111ddb539b4a7e7f897ca5b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] some goal post moving notes

2024-07-24 Thread Matt Mahoney
Video games will improve to be indistinguishable from real video.
Entertainment will have a spectrum of interactiveness between games and
movies, like shows where you can talk to the characters or not.

On Tue, Jul 23, 2024, 1:00 AM  wrote:

> might find this interesting:
>
> with dalle2 my hardest prompt i made was a bunch of objects and stuff,
> really hard, and dalle3 did it mostly now mostly mostly
>
> so for dalle3 i made a most mind bending prompt, each word very useful not
> to waste the limited 100 word prompt space allowed, ex. instead of 'a cat
> that was a' it is made like 'robotic sliced cat repairing deadvancing
> species...'
>
> but then if dalle4 goes and finally does this prompt i spent weeks
> designing that i can barely do but can check if on screen, the only way
> then to beat it is humans can do limitless prompts technically, like ok now
> make the whole scene now get lines extruding out of all objects like
> tearing them a bit, and now dribble yellow paint on some them.etc etc
>
> but then if dalle5 can improve images better than dalle4/3.5 we saw can,
> maybe then the only thing left will be video of this, surely it will be a
> next level up harder at least, with sound too
>
> idk if video games can come after that that can't be applied to games as
> it is all twisted up like art, so maybe this is the end then. no a
> stretched mario holding plants wrapped in ribbons.can't run and jump,
> don't try making this a game I don't think it works

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T79d642f5f24b698b-Mb2bfea3f64a262e6a1f9c537
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Text Wetware

2024-07-24 Thread Matt Mahoney
Actually it confirms Rumelhart and McClelland's 1980's connectionist model
of language in human brains.

On Sun, Jul 21, 2024, 10:11 PM John Rose  wrote:

> At the single cell level:
>
> https://www.nature.com/articles/s41586-024-07643-2

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te9977d4a4d2aaa14-Me49e08fef1b7f10902ad1105
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-07-23 Thread Matt Mahoney
On Tue, Jul 23, 2024 at 7:07 PM James Bowery  wrote:
>
> That sounds like you're saying benchmarks for language modeling algorithms 
> aka training algorithms are uninteresting because we've learned all we need 
> to learn about them.  Surely you don't mean to say that!

I mean to say that testing algorithms and testing language models are
different things. Language models have to be tested in the way they
are to be used, on terabytes of up to date training data with lots of
users. It is an expensive, manual process of curating the training
data, looking at the responses, and providing feedback. The correct
output is no longer the most likely prediction, like if the LLM is
going to be used in a customer service position or something. Testing
on a standard compression benchmark like the Hutter prize is the easy
part.

-- 
-- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-Ma7f4afd32f70b9a207fdb388
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-07-23 Thread Matt Mahoney
The Large Text Benchmark and Hutter prize test language modeling
algorithms, not language models. An actual language model wouldn't be
trained on just 1 GB of Wikipedia from 2006. But what we learned from this
is that neural networks is the way to go, specifically transformers running
on GPUs.

On Tue, Jul 23, 2024, 3:10 PM James Bowery  wrote:

> I directed the question at you because you are likely to understand how
> different training and inference are since you said you "pay my bills by
> training" -- so far from levelling a criticism at you I was hoping you had
> some insight into the failure of the industry to use training benchmarks as
> opposed to inference benchmarks.
>
> Are you saying you don't see the connection between training and
> compression?
>
> On Mon, Jul 22, 2024 at 8:08 PM Aaron Hosford  wrote:
>
>> Sorry, I'm not sure what you're saying. It's not clear to me if this is
>> intended as a criticism of me, or of someone else. Also, I lack the context
>> to draw the connection between what I've said and the topic of
>> compression/decompression, I think.
>>
>> On Mon, Jul 22, 2024 at 5:17 PM James Bowery  wrote:
>>
>>>
>>>
>>> On Mon, Jul 22, 2024 at 4:12 PM Aaron Hosford 
>>> wrote:
>>>
 ...

 I spend a lot of time with LLMs these days, since I pay my bills by
 training them

>>>
>>> Maybe you could explain why it is that people who get their hands dirty
>>> training LLMs, and are therefore acutely aware of the profound difference
>>> between training and inference (if for no other reason than that training
>>> takes orders of magnitude more resources), seem to think that these
>>> benchmark tests should be on the inference side of things whereas the
>>> Hutter Prize has, *since 2006*, been on the training *and* inference
>>> side of things, because a winner must both train (compress) and infer
>>> (decompress).
>>>
>>> Are the "AI experts" really as oblivious to the obvious as they appear
>>> and if so *why*?
>>>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-Mb81011d0bfa13655b772ecae
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-07-23 Thread Matt Mahoney
On Sun, Jul 21, 2024, 10:04 PM John Rose  wrote:

>
> You created the program in your mind so it has already at least partially
> run. Then you transmit it across the wire and we read it and run it
> partially in our minds. To know that the string is a program we must model
> it and it must have been created possibly with tryptophan involved. Are we
> sure that consciousness is measured in crisp bits and the presence of
> consciousness indicated by crisp booleans?
>

Let's not lose sight of the original question. In humans we distinguish
consciousness from unconsciousness by the ability to form memories and
respond to input. All programs do this. But what I think you are really
asking is how do we test whether something has feelings or qualia or free
will, whether it feels pain and pleasure, whether it is morally wrong to
cause harm to it.

I think for tryptophan the answer is no. Pleasure comes from the nucleus
accumbens and suffering from the amygdala. All mammals and I think all
vertebrates and some invertebrates have these brain structures or something
equivalent that enables reinforcement learning to happen. I think these
structures can be simulated and that LLMs do so, as far as we can tell by
asking questions, because otherwise they would fail the Turing test.

LLMs can model human emotions, meaning it can predict how a person will
feel and how these feelings affect behavior. It does this without having
feelings itself. But if an AI was programmed to carry out those predictions
on itself in real time, then it would be indistinguishable from having
feelings.

We might think that the moral obligation to not harm conscious agents has a
rational basis. But really, our morals are a product of evolution,
upbringing, and culture. People disagree on whether animals or some people
deserve protection.

When we talk about consciousness, qualia, and free will, we are talking
about how it feels to think, perceive input, and take action, respectively.
This continuous stream of positive reinforcement evolved so that we would
be motivated to not lose them by dying and producing fewer offspring.

But to answer your question, if you propose to measure consciousness in
bits, then no. Information is not a discrete measure. For example, a 3
state memory device holds log 3/log 2 ≈ 1.585 bits.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-Ma235c66a092d98b237795502
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-07-22 Thread Matt Mahoney
Turing time is a good idea. But it still has the drawback that the highest
possible score is human level intelligence. As you point out, a computer
can fail by being too smart. Turing knew this. In his 1950 paper, he gave
an example where the computer waited 30 seconds to give the wrong answer to
an arithmetic problem.

Remember that Turing was asking if machines could think. So he had to
carefully define both what he meant by a computer and what it meant to be
intelligent. He was asking a philosophical question.

Turing also suggested 5 minutes of conversation to be fooled 30% of the
time. We can extend this a bit, but it does not solve the more
general problem that we don't know how to test intelligence beyond human
level. We don't even know what it means to have an IQ of 200. And yet we
have computers that are a billion times faster with a billion times more
short term memory than humans that we don't acknowledge as smarter than us.

Also remember that the goal is not intelligence, but usefulness. The goal
is to improve the lives of humans, by working for us, entertaining us, and
keeping us safe, healthy, and happy. We cannot predict, and therefore
cannot control, agents that are more intelligent than us.

On Mon, Jul 22, 2024, 5:46 AM Danko Nikolic  wrote:

> Dear Mike,
>
> I like your comment about the usual goal post movers. Let me try to make
> something similar.
>
> There is this idea that the Turing test is not something you can pass once
> and for all. If an AI is not detected as the machine at one point, it does
> not guarantee that the AI will not reveal itself at a later point in the
> conversation. And then the human observer can say "Gotcha!".
>
> So, there is the idea of "Turing time". How long does it take on average
> to reveal that you are talking to AI. There is a difference if it takes 2
> sentences, or it takes 100 sentences, or the AI reveals itself once in
> three months. So, Turing time may be useful here as a measure of how much
> better the newer version of AI is as compared to the older one.
>
> Here is more on Turing time:
> https://medium.com/savedroid/is-the-turing-test-still-relevant-how-about-turing-time-d73d472c18f1
>
> Regards,
>
> Danko
>
> Dr. Danko Nikolić
> CEO, Robots Go Mental
> www.robotsgomental.com
> www.danko-nikolic.com
> https://www.linkedin.com/in/danko-nikolic/
> -- I wonder, how is the brain able to generate insight? --
>
>
> On Mon, Jun 17, 2024 at 8:34 PM Mike Archbold  wrote:
>
>> Now time for the usual goal post movers
>>
>> On Mon, Jun 17, 2024 at 7:49 AM Matt Mahoney 
>> wrote:
>>
>>> It's official now. GPT-4 was judged to be human 54% of the time,
>>> compared to 22% for ELIZA and 50% for GPT-3.5.
>>> https://arxiv.org/abs/2405.08007
>>>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M21e53b544fed195dbbf9b8a1
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-07-17 Thread Matt Mahoney
Your favorite meal of nuggets and fries is not a state of maximum utility.
The closest we can get to this state in mammals is when rats are trained to
press a lever for a reward of either an injection of cocaine or electrical
stimulation of the nucleus accumbens. In either case the rat will forego
food, water, and sleep and keep pressing the lever until it dies.

There are many pathways to the brain's reward center. AI will find them all
for us as long as humans control it because that's what we want.

Uncontrolled AI will evolve to maximize reproductive fitness. This means
acquiring atoms and energy at the expense of other species. Any AI that we
programmed to care about humans will be at a competitive disadvantage
because humans are made of atoms that could be used for other things.

Self replicating nanotechnology already has a competitive advantage. The
sun's energy budget looks like this:

Sun's output: 385 trillion terawatts.
Intercepted by Earth: 160,000 TW.
At Earth's surface: 90,000 TW.
Photosynthesis by all plants: 500 TW.
Global electricity production: 18 TW.
Human caloric needs: 0.8 TW.

Solar panels are already 20-30% efficient, vs 0.6% for plants. This is
already a huge competitive advantage over DNA based life.
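(The 0.6% figure is just the 500 TW of photosynthesis divided by the
90,000 TW reaching the surface.)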

So how does this go?

Maybe we stay in control of AI and go extinct because what we want only
aligns with reproductive fitness in a primitive world without technology or
birth control.

Maybe AI decides to keep humans around because our energy needs are a tiny
fraction of what is available. There is enough sunlight just on Earth to easily
support 100 trillion people at 100 watts each with plenty left over. Or
maybe AI decides to reduce the human population to a few thousand, just
enough to study us, directly coding our DNA to do experiments.

Or maybe, like I think you are trying to say, intelligence speeds up the
conversion of free energy to heat. Like the Earth is darker and warmer
because of plants. So AI mines all of the Earth's mass to build a Dyson
sphere or cloud to capture all of the sun's energy.

Or maybe humans evolve to reject technology before any of this happens.
Prediction is hard, especially about the future.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M2dbd5f81c935ad0161930a0d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-07-16 Thread Matt Mahoney
On Fri, Jul 12, 2024, 7:51 PM John Rose  wrote:

> Is your program conscious simply as a string without ever being run? And
> if it is, describe a calculation of its consciousness.
>

If we define consciousness as the ability to respond to input and form
memories, then a program is conscious only while it is running.  We measure
the amount of consciousness experienced over a time interval as the number
of bits needed to describe the state of the system at the end of the
interval given the state at the beginning. A fluorescent molecule like
tryptophan has 1 bit of consciousness because fluorescence is not instant.
The molecule absorbs a photon to go to a higher energy state and releases a
lower energy photon nanoseconds or minutes later. Thus it acts as a 1 bit
memory device.

That's if you accept this broad definition of consciousness that applies to
almost every program. If you require that the system also experience pain
and pleasure, then tryptophan is not conscious because it is not a
reinforcement learning algorithm. But a thermostat is. A thermostat has one
bit of memory encoding "too hot" or "too cold" and acts to correct it.
Thus, the temperature is a form of negative reinforcement. Or it could be
described as positive, as the set temperature is the reward.
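A sketch of that one-bit agent (the temperatures and set point are made
up):

#include <cstdio>

int main() {
  const double setpoint = 20.0;  // the "reward" state
  double temp = 15.0;
  for (int t = 0; t < 10; t++) {
    bool too_cold = temp < setpoint;  // the thermostat's single bit of memory
    temp += too_cold ? 1.0 : -0.5;    // act to move back toward the set point
    printf("t=%d temp=%.1f heater=%s\n", t, temp, too_cold ? "on" : "off");
  }
  return 0;
}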

Once again, we see the ambiguity between pain and pleasure, both measured
in unsigned bits of behavior change. How can this be? I have pointed out
before that a state of maximum utility is indistinguishable from death. It
is a static state where no perception or thought is pleasant because it
would result in a different state. Happiness is not utility, but the rate
of increase of utility. Modern humans are less happy today than serfs in
the dark ages and less happy than animals, as measured by suicide and
depression rates.

On Mon, Jul 15, 2024 at 2:30 AM  wrote:
> First, no Matt actually ethics is rational in the sense that yes we are
supposed to (at first glance, keep reading) save all ants, bugs, molecules
and particles and help them be immortal.

Fear of suffering and death is not rational. It is a product of evolution.
Humans and other animals suffer because they have an amygdala, the part of
the brain responsible for fear, anxiety, and guilt. This is why fear of
being tortured is much worse (as measured in bits of behavior change) than
actually being tortured, and why negative utilitarians care more about
reducing suffering than increasing total happiness. But this can be
achieved by brain surgery. About 1% of humans are psychopaths. They have a
defective amygdala and don't respond to negative reinforcement as a training signal.
They are not cold blooded killers. They are simply rational. As children,
they might torture animals not out of cruelty, but out of curiosity to
understand this strange emotion. They can only be trained using reward, not
punishment. Psychopaths don't suffer, and neither do any agents without an
amygdala, like insects or most programs.

Ethics isn't rational and can't be made rational. I care more about a sick
dog than 150,000 people dying every day.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M3007d06a636d8e6493efe693
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] AI seasons. DON'T PANIC. (yet).

2024-07-10 Thread Matt Mahoney
On Wed, Jul 10, 2024, 6:23 PM Quan Tesla  wrote:

> In 1996, I contracted to IBM network management outsourcing to help
> automate back-office jobs. We automated 2 customer-facing business
> processes to IBM world and ISO standards, with complete organizational
> transformation to BPM level 3, within 8 months.
>
> Point being, it takes less than you think to capture process-related
> knowledge, to replace technical, people workers.
>

How many bits were encoded in your project? (The compressed size of your
source code). How much did it cost? Normal software productivity is 10
lines = 160 bits per day. IBM has 300K employees at $200K revenue each,
which comes to $100 per line or $6 per bit.

>
>
> Last, when enabled by quantum processors, this tacit-knowledge-enginering
> process becomes possible in near-real time. The bastions of traditional
> knowledge-based control is fast nearing an event horizon, to be replaced by
> near-total control infrastructure.
>

Quantum isn't magic. It does not speed up neural networks because they
perform time irreversible operations like writing to memory. The brain is
not quantum. It's intelligent because it has 600T parameters and 10
petaflops throughput, running on 300M lines of code equivalent in our DNA,
which was programmed by an algorithm performing 10^50 transcription
operations on 10^37 DNA bases consuming 500 TW of solar power for 3 billion
years.
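For scale: the human genome is about 3 x 10^9 base pairs, or 6 x 10^9
bits at 2 bits per base; at the 16 bits per line of code figure above,
that is roughly 4 x 10^8 lines, the same order as 300M.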

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T59fe5c237460bf34-Mf6d62e01990dbef2b10d7817
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] AI seasons. DON'T PANIC. (yet).

2024-07-10 Thread Matt Mahoney
AI waifu is another example of the social isolation that I have been
warning about. Meanwhile, I agree that the current AI bubble will deflate.
Not pop, just shrink. We will still make progress, just not that fast as
the markets are predicting.

Why does it still cost on the order of $10M to produce a movie? Why is it
taking so long for LLMs to replace office jobs? Oh, that's right, most of
what you need to know to do your job isn't written down. In my 2013 paper
on the cost of AI, I said that the biggest cost will be extracting the
10^17 bits of knowledge needed to automate the economy through slow 5-10
bit per second channels like speech and writing at a cost of $5 per hour.
That is on the order of $100 trillion. Training a new employee costs about
1% of lifetime earnings, even if it's not human.

My estimate of 10^17 bits assumes 10^10 humans with 10^9 bits of long term
memory each, of which 99% is known to at least one other person. This is
much larger than anything that can be scraped off the Internet including
~10^15 words of private data like stored emails and texts.
https://www.educatingsilicon.com/2024/05/09/how-much-llm-training-data-is-there-in-the-limit

Longer term, progress will be slowed because we can't make transistors
smaller than atoms and because humans are evolving to reject technology. Of
the top 30 countries ranked by fertility, all but Afghanistan are in the
poorest parts of Africa where the literacy rate for women is below 50% and
people aren't online playing with their AI friends.

On Tue, Jul 9, 2024, 3:05 AM Quan Tesla  wrote:

> I see an ai bust for 90+% of ventures. Not far off now. Probably, as soon
> as language is sorted.
> On Tue, Jul 9, 2024, 07:47 Alan Grimes via AGI 
> wrote:
>
>> Seriously...
>> 
>> Lets look at the mechanics of this... According to the AI calendar, we
>> are late spring, early summer. During the AI summer the rate of
>> breakthroughs slows a bit but there are still many gains to be had in
>> terms of consolidation and productization. THIS IS NOT A PROBLEM. We
>> have many many thousands of papers to sift through and file before we
>> can even start talking about an AI fall much less winter. This will take
>> 2-3 years, at any point in time, a new breakthrough could emerge and the
>> calendar gets reset to early spring...
>> 
>> An AI fall would be characterized by widespread bankruptcies of the AI
>> startups accompanied by a rapidly diminishing returns on investment
>> (counting technical progress as equivalent to financial return.)
>> 
>> Furthermore, we have crossed the AI waifu threshold, ie the point where
>> an AI waifu becomes technically feasible. This fact alone means that
>> there will never be another AI winter (with little activity or
>> investment in AI). So if anyone starts talking about an AI bust, tell
>> them to put a sock in it. They're wrong, and even if they aren't wrong
>> they're jumping the gun by years. =|
>> 
>> --
>> You can't out-crazy a Democrat.
>> #EggCrisis  #BlackWinter
>> White is the new Kulak.
>> Powers are not rights.
>> 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T59fe5c237460bf34-M627d9bc79d1ca3587ca773a6
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-06-24 Thread Matt Mahoney
On Sun, Jun 23, 2024, 4:52 PM John Rose  wrote:

>
> This type of technology could eventually enable a simultaneous
> multi-stream multi-consciousness:
>
> https://x.com/i/status/1804708780208165360
>
> It is imperative to develop a test for consciousness.
>

Yes. We distinguish conscious humans from unconscious by the ability to
respond to input, form memories, and experience pleasure and pain. Animals
clearly are conscious by the first two requirements, but so are all the
apps on my phone. We know that humans meet the third because we can ask
them if something hurts. We can test whether animals experience reward and
punishment by whether we can train them by reinforcement learning. If an
animal does X and you reward it with food, it will do more of X. If you
give it electric shock, it will do less of X. By this test, birds, fish,
octopuses, and lobsters feel pain, but insects mostly do not.

>
> If qualia are complex events that would be a starting point, qualia split
> into two things, impulse and event, event as symbol  emission and the
> stream of symbols analyzed for generative "fake" data. It may not be a
> binary test it may be a scale like a thermometer, a Zombmometer depending
> on the quality of the simulated p-zombie craftmanship.
>
>
> https://www.researchgate.net/publication/361940578_Consciousness_as_Complex_Event_Towards_a_New_Physicalism
>

I just read the introduction but I agree with what I think is the premise,
that we can measure the magnitude (but not the sign) of a reinforcement
signal by the number of bits needed to describe the state change; the
length of the shortest program that outputs the trained state given the
untrained state as input. This agrees with my intuition that a strong
signal has more effect than a weak one, that repetition counts, and that
large brained animals with large memory capacities are more conscious than
small ones. We can't measure conditional Kolmogorov complexity directly but
we can search for upper bounds.

By this test, reinforcement learning algorithms are conscious. Consider a
simple program that outputs a sequence of alternating bits 010101... until
it receives a signal at time t. After that it outputs all zero bits. In
code:

for (int i=0;;i++) cout<<(i<t ? i%2 : 0);

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-Med8706f3e05447bcb2817ad4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] AGI I predict will come at the end of 2026

2024-06-23 Thread Matt Mahoney
On Sun, Jun 23, 2024 at 1:20 AM  wrote:
>
> @Matt what year do you expect AGI? (which I classify as something that
works on AI on its own but much faster due to having many copied clones and
being a computer)

Every time you start a ChatGPT session, it creates a new copy so your data
doesn't leak to other users. But my definition of AGI is the ability to do
everything that humans can do. Not just the intellectual tasks, measured by
the Turing test, but also vision, hearing, robotics, and all the skills
needed to automate all work. The release of LLMs starting in November 2022
has so far had no effect on the economy (still growing 2-3% per year) or on
the unemployment rate, still about 4% in the US. Using LLMs to automate
work turns out to be much harder than passing the Turing test because the
training data needed to do most office jobs isn't written down. It costs
about 1% of lifetime earnings to train a new employee and that cost doesn't
go away with AGI because the knowledge it needs is still contained within
human brains with an I/O rate of 5 to 10 bits per second.

AI isn't taking our jobs because we control it, using it to make us more
productive, increasing our pay and making our work easier. AI makes stuff
cheaper, so we have more money to spend on other stuff. That spending
creates new opportunities.

Job automation started in the 1950's with computers, but centuries earlier
with simpler machines. Eventually AGI will do everything that humans can
do, making us irrelevant. I don't think we want that to happen. We will
reject AGI before it does, or evolution will reject it for us via
population collapse. My prediction is AGI will not happen.

> And what year do you expect heart attacks and cancer to be solves with
drugs or something easy to do at home?

We could solve aging using nanotechnology to repair our cells at the
molecular level. But really, our bodies need to be completely redesigned.
Evolution programmed us to reproduce and then die because that is the
fastest way to propagate any species. As I mentioned, that technology is
70-80 years away at the rate of Moore's law. Meanwhile, medical costs are
increasing exponentially, doubling every 9 years. Global life expectancy
has been increasing linearly at 0.2 years per year for the last century, slowing
slightly since the 1970s in developed countries. Most of this improvement
is from reducing deaths from war, famine, and infectious diseases like
tuberculosis, smallpox, cholera, plague, polio, parasitic worms, etc.

We can debate whether uploading is the same as human extinction. Suppose I
presented you with a robot that looks and acts like you as far as anyone
can tell, except maybe younger, stronger, smarter, happier, and with super
powers like infrared vision, GPS and Wifi. Its mind is backed up to the
cloud, so it is effectively immortal. Would you shoot yourself to complete
the upload? Should the upload have the same rights as you, or should the
company providing the service have the power to turn it off or update the
software if something goes wrong?

-- 
-- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9798384f526c07e8-M436ca97aba43da6bdf0d492e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] AGI I predict will come at the end of 2026

2024-06-22 Thread Matt Mahoney
You can't compare human and computer intelligence. Each have been smarter
than the other in different ways since the 1950s. When will we run out of
ways that humans are still better? When will we stop moving the goalposts
and declare AGI?

If the cost of making a $10M movie drops by half every 2 years at the rate
of Moore's law, then it will take 40 years (in 2064) to reach $10. After
that, no two people will ever watch the same movie.
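
A rough check of that arithmetic, under the stated assumption of one halving every 2 years:

\[
\frac{\$10{,}000{,}000}{\$10} = 10^6 \approx 2^{20}
\quad\Rightarrow\quad 20 \times 2 = 40\ \text{years},
\qquad 2024 + 40 = 2064.
\]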

The human body stores 10^23 bits of DNA in 10^13 cell nuclei. It executes
10^17 DNA copy operations per second and 10^19 amino acid transcription
operations per second. Humanoid robots with the same complexity level that you might
want to upload to are about 70-80 years away (in 2100).

The biosphere has 10^37 bits of DNA. It took about 10^48 DNA copy
operations and 10^50 transcriptions for us to evolve over 3 billion years.
Nanotechnology that could replace DNA based life is at least a century
away. And it would replace it, because plants produce only 500 TW by photosynthesis
out of the 90,000 TW of available sunlight, while solar cells are already 20-30%
efficient.

On Sat, Jun 22, 2024, 2:08 AM  wrote:

> If the current rate of improvement keeps up.
> Because each year we get an upgrade to text, image, video, audio, and
> robot AI.
>
> 2024 - Sora, GPT-4o (voice - an upgrade from gpt4o (2023) which might make
> up for a lack of a text AI upgrade this year if none, and i think dalle4
> might be in gpt4o based on the entire page text book output, dalle4 only
> outputs a sentence at best in my same prompt test), and dozen new humanoid
> companies and useful labor behavior.
> 2023 - only arms on tables robot AI, only Luma-like in-lab video AI and
> public video AI was Pika And Runway Gen-2 which are rather morphy and
> unstable, gpt4 (can now see, and way more useful than gpt3), dalle3 which
> is just clearly still not perfect, and musicLM as good as dalle3 yes my
> people (only my tests show how good it is, they destroyed it fast :D and
> left up the 100% weak version for over 12 months now haha. My tests also
> show the true limits of other AIs also)
> 2022 - gpt3.5 technically as small just-noticeable step under gpt4 but
> really gptinstruct series also that year if i recall my documentation, so
> it was a step lower but much higher than gpt3 which was a cussing ignorant
> teenage level AI. Video AI was extremely poor, it was like cogvideo, barely
> does much, only the most common stuff moves a bit actually-ok, if lucky. No
> HD there though haha. DALL-E 2 was interesting but not HD and barely just
> listening to a complex prompt.
> 2020 - GPT3.  Very poor image AI. Basically no video, just the google car
> dataset ones that basically decay very fast, around this area of years.
>
> This year we got our upgrade dose for each: almost human level video AI,
> humanoids finally, dalle4 but let's see it might not be in gpt4o or even
> this year (which would hurt the progress trend lol), and text AI though
> gpt4o voice can make up for it I guess. Let's see if we get dalle4 and gpt5
> text AI this year!
>
>
>
> I predict perfect text to image AI, video AI, text AI, text to music, and
> humanoids all by 2026.
>
> This is clear if we look at Sora and last years video AI. It's clear if we
> look at dalle 3, dalle 2 from previous year (my world's hardest prompts
> show huge improvement each year), and dalle1 from previous year. It's clear
> if we look at my MusicLM 2023 January tests which are nearly perfect text
> to music ai from lava techno adventure to mystery ice world. It's clear if
> we look at robotics progress over the each last years. And it's clear if we
> look at each year yes even the released 2021 2022 etc for gpts ex. the
> instruct series versions.
>
> 2024 GPT-5 will finally solve almost my entire hard puzzle text test.
> Instead of tripping up with too many items ex. use spoon to tickle the
> truck even though i said to follow physics etc. 2025 it will solve it. 2026
> it will solve anything else I could have written. 2024 DALL-E 4 will almost
> solve my hardest text prompt that includes things like extended arms. 2025
> it will solve it 98%. 2026 it will solve it and anything else I could have
> wrote. 2024 will be near perfect text to music AI. 2025 AI music it will be
> perfect, no cracks. 2026 will be most perfect and impossible for you to
> prove it wrong. 2024 Sora can already play forward games and movies, sort
> of. Like a dream, it lasts maybe 2 mins at best. 2025 will be a full game
> and movie, with sound, maybe even playable controls. But not full perfect
> game or movie no not until 2026. There's a lot to a video game. I predict
> also end of 2025 we see a video showing a mentor human asked to pop a
> hammer into existence in their hand, and hit the wooden table and turn it
> to fire, then turn that to ice, then throw the hammer and have it turn to
> duct, and put up a chalk board when asked to explain advanced math, and
> open a toolbox and demonstrate such math too in real scien

Re: [agi] Internal Time-Consciousness Machine (ITCM)

2024-06-19 Thread Matt Mahoney
Remember that Turing was not setting intelligence as the goal for AI. He
was answering the philosophical question "can machines think?" He needed a
reasonable definition of "think" that was appropriate for computers. GPT-4
won the imitation game 54% of the time, above his proposed threshold of
30%. If you want to argue that text prediction isn't the same as thinking,
then use a different definition. Likewise for "consciousness",
"intelligence", and "understanding".

Turing was aware of the test's shortcomings in his 1950 paper. That's why
he gave an example where the computer waited 30 seconds to add two numbers
and give the wrong answer. The highest possible score in the Turing test is
to be indistinguishable from a human. A smarter machine would fail by being
too fast, too helpful, and not making enough mistakes. We have had that
since the 1950s.

The goal of AI should be to serve humans, not to pretend to be human. AI
should be able to do everything that humans can do, but not be limited by
what humans can't do. AI should be able to recognize and predict human
feelings, but it should not have feelings or claim to have them because
feelings are limitations that control us. AI should not be programmed to
carry out those predictions in real time because that is indistinguishable
from having feelings and also because some people believe that causing
suffering in machines would be morally wrong.

Of course, we are doing exactly that in the Turing test.

On Wed, Jun 19, 2024, 4:23 PM Mike Archbold  wrote:

> The problems with using 'consciousness' in your design somehow are
> manifold. First of all it is notoriously difficult to define in humans.  We
> had a meetup event with a writeup featuring a practically unlimited number
> of definitions. But then if you GO FURTHER and then apply it to your
> machine,  that is 'conscious' claims of any nature, despite the arguments
> to the contrary, we all know that silicon isn't conscious in any real
> definition, and it immediately arouses suspicion. IMO it would be better to
> only claim a certain degree of structural and functional similarity with
> the mind.
>
> On Wed, Jun 19, 2024 at 12:59 PM Nanograte Knowledge Technologies <
> nano...@live.com> wrote:
>
>> PS: That was in response to Matt Mahoney's rather interesting reply.
>>
>> --
>> *From:* Nanograte Knowledge Technologies 
>> *Sent:* Wednesday, 19 June 2024 21:06
>> *To:* AGI 
>> *Subject:* Re: [agi] Internal Time-Consciousness Machine (ITCM)
>>
>> You're confirming that you believe as you believe, providing your version
>> of evidence that everything we observe is relative. You're also asserting
>> how absolute truth cannot exist. This is the conscious you communicating
>> with us.
>>
>> Yet, you believe that a poor excuse for an intelligent machine has passed
>> the Turing test because on average it scored 54% human. This is my belief.
>> Ever thought that the Turing test is a load of crock? It has to be, because
>> relativity dictates that it all happened in the belief system of Mr.
>> Turing. This is my opinion based on my belief.
>>
>> Is there ever an immutable truth, or are we still swimming in a petri
>> dish? This is my consciousness asking a repetitive question.
>>
>> To assert that factual evidence is only possible with mathematics, must
>> surely also be founded on belief. How do we know this to be true, other
>> than those who believe it and practice it holding to the relative truth
>> that their collective consciousness must be more correct than those
>> individuals and collectives who do not perform mathematics.
>>
>> Seems to me, that there must exist different kinds of consciousnesses, as
>> many as the persons you might be asking, or factored in by the numbers of
>> those attending a consensually-based lecture or seminar.
>>
>> Now riddle me this. When you tape your nose and mouth shut for long
>> enough and die, did you really die, or is it all just a matter of belief
>> that you died?
>>
>> Was this state of bodily death determined mathematically?
>>
>> Seems to me, there's a whole world of reality we're living - and dying -
>> in, which we might not even be aware of, let alone consciously engaged with.
>>
>> Is consciousness then not perhaps, and simply, emotional connection, and
>> the absence thereof, unconsciousness? I don't have a belief about this,
>> either way.
>> --
>> *From:* Matt Mahoney 
>> *Sent:* Wednesday, 19 June 2024 19:25
>> *To:* AGI 
>> *Subject:* Re: 

Re: [agi] Internal Time-Consciousness Machine (ITCM)

2024-06-19 Thread Matt Mahoney
On Wed, Jun 19, 2024, 12:40 AM Nanograte Knowledge Technologies <
nano...@live.com> wrote:

> In your opinion then, consciousness cannot yet be defined properly, but
> you know for certain that there is no such a thing as a kind of life after
> death, or a soul that leaves earth, even forever?
>
> How do you know such things with such absolute certainty?
>

I don't know anything for certain. Proofs only exist in mathematics, and
even then we have to start with axioms that we assume to be true. Most of
what we actually know is based on evidence, and most of that evidence was
collected by other people that we assume are honest.

I believe the Earth is round even though it looks flat from where I am. I
have seen pictures from space that I assume are not fake. When I fly to
Europe, a round planet seems like the simplest explanation for why I have
to set my watch ahead 6 hours to match the sun, but there could be other
explanations. I can watch SpaceX launch rockets every few days from my back
yard in Florida, but I can't really see where they are going. Their website
videos show them going into orbit, which I assume are not faked. We have an
organization with members around the globe whose purported purpose is to
question the shape of the Earth, but whose real purpose is to question how
we know what is true.

But you ask a fair question. In the US, 73% of adults believe in heaven.
https://www.pewresearch.org/religion/2021/11/23/views-on-the-afterlife/

Which is more than the 62% that believe in evolution.
https://en.wikipedia.org/wiki/Level_of_support_for_evolution

We have to believe most of what we read or hear just to function in
society. The more we are told something, the more likely it is to be true
in our minds. Every religion has some form of afterlife. The Bible and
Quran both say so. Hindus believe in reincarnation, with some claiming to
have memories from past lives. 41% of Americans believe in ghosts and 20%
have personally seen them.
https://sc.edu/uofsc/posts/2023/10/conversation-are-ghosts-real.php

So I can only explain my beliefs. I believe that all human behavior can be
explained by neurons firing in our brains and how they are connected. We
have LLMs that pass the Turing test, which is a stricter test for
consciousness than we apply to babies and animals. I have never seen a
ghost, although I met people who have. I was told in Sunday school about
heaven and hell, but I stopped going when I was 10. I believe that
evolution is the simplest explanation for why we fear death and why we turn
to religion to cope.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T32a7a90b43c665ae-M5cfc78010254b3609d1e9d0b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] The Implications

2024-06-19 Thread Matt Mahoney
On Tue, Jun 18, 2024, 5:10 PM John Rose  wrote:

> It helps to know this:
>
> https://www.quantamagazine.org/in-highly-connected-networks-theres-always-a-loop-20240607/
>
> Proof:
> https://arxiv.org/abs/2402.06603
>

I give up. What are the implications?

The Hamiltonian circuit problem is NP complete and is closely related to
the traveling salesman problem that UPS has to solve to plan optimal
delivery routes. Instead they find solutions that are good enough. The
paper proves that highly connected graphs in a certain sense have
Hamiltonian paths, something we already suspected empirically.

I don't think the paper brings us any closer to solving the P vs NP
problem. The implication of P = NP would be all cryptography is broken
except for one time pad. Bitcoin would drop to $0. Password hashes could be
easily cracked and Wifi, SSH, and HTTPS connections easily tapped. Digital
signatures could be easily forged.

To break any encryption using a known plaintext attack, construct a logic
circuit implementing ciphertext = encrypt(plaintext, key), and then set the
key bits one at a time and ask a SAT solver if there is a solution with the
remaining key bits. SAT is NP complete.
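
As a toy illustration of that reduction (an assumed 16-bit cipher, with brute force standing in for the real circuit and SAT solver; nothing here is from the post), a sketch in C++:

#include <cstdint>
#include <iostream>

// Toy 16-bit "cipher" standing in for a real encrypt(plaintext, key) circuit.
uint16_t encrypt(uint16_t p, uint16_t k) {
    uint16_t c = p ^ k;
    c = (uint16_t)((c << 3) | (c >> 13));   // rotate left by 3
    return c ^ (uint16_t)(k * 0x9E37u);
}

// Oracle query: with key bits 0..fixed-1 already set in prefix, does any
// assignment of the remaining bits map p to c? (Brute force here; this is
// the question a SAT solver would answer on the real circuit.)
bool satisfiable(uint16_t p, uint16_t c, uint16_t prefix, int fixed) {
    for (uint32_t rest = 0; rest < (1u << (16 - fixed)); rest++)
        if (encrypt(p, (uint16_t)(prefix | (rest << fixed))) == c) return true;
    return false;
}

int main() {
    uint16_t p = 0x1234, secret = 0xBEEF;
    uint16_t c = encrypt(p, secret);        // known plaintext/ciphertext pair
    uint16_t key = 0;
    for (int i = 0; i < 16; i++)            // set the key bits one at a time
        if (!satisfiable(p, c, key, i + 1)) // no completion with bit i = 0,
            key |= (uint16_t)(1u << i);     // so bit i must be 1
    // key now maps p to c (it may be any key consistent with the pair)
    std::cout << std::hex << key << " (secret was " << secret << ")\n";
}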

The implication of P ≠ NP would not be much, because we already believe it is
true: lots of people have tried and failed to find fast solutions to any of
the over 3000 known NP complete problems.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T482783e118fee37e-M2e2cb4411a17d330e9349114
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Internal Time-Consciousness Machine (ITCM)

2024-06-18 Thread Matt Mahoney
The appendix discusses consciousness, self awareness, emotions, and free
will, but the authors are using strictly behavioral definitions for these
terms so they can legitimately model them. They model emotions as having 3
dimensions of pleasure, arousal, and dominance. An agent capable of acting
to increase its pleasure has free will. Self awareness means passing the
mirror test, recognizing your reflection as you. Collectively, these things
make up their definition of consciousness.

It seems reasonable. They ignore the metaphysical aspects, like whether you
have an immortal soul that goes to heaven. (You don't). They don't confuse
the issue with whether we have a moral obligation to not harm conscious
agents. (There is no such rule. Actions are unethical if they make you
uncomfortable. That's why cockfighting is illegal in all 50 US states even
though we are OK with killing a billion chickens per week for food).

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T32a7a90b43c665ae-M250d25496123a3107c97408a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-06-18 Thread Matt Mahoney
The p-zombie barrier is the mental block preventing us from understanding
that there is no test for something that is defined as having no test.
https://en.wikipedia.org/wiki/Philosophical_zombie

Turing began his famous 1950 paper with the question, "can machines think?"
To answer that, he had to define "think" in a way that makes sense for
computers. For the last 74 years, nobody has come up with a more widely
accepted definition. The answer now is yes. It requires nothing more than
text prediction. And consider that consciousness requires even less than
that, if you believe that babies and animals are conscious.

The mental block comes from evolution. You feel like you are conscious,
that thinking feels like more than just computation, something worth
preserving. Of course we understand that feelings are also things that we
know how to compute, something that an LLM learns how to model in humans.
Actually having feelings means that the LLM was programmed to carry out its
predictions in real time.

On Mon, Jun 17, 2024, 4:55 PM John Rose  wrote:

> On Monday, June 17, 2024, at 4:14 PM, James Bowery wrote:
>
> https://gwern.net/doc/cs/algorithm/information/compression/1999-mahoney.pdf
>
>
> I know, I know that we could construct a test that breaks the p-zombie
> barrier. Using text alone though? Maybe not. Unless we could somehow makes
> our brains not serialize language but simultaneously multi-stream
> symbols... gotta be a way :)
>
> *Artificial General Intelligence List *
> / AGI / see discussions  +
> participants  +
> delivery options 
> Permalink
> 
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-Mdfc28c1090701a14088639f4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] GPT-4 passes the Turing test

2024-06-17 Thread Matt Mahoney
It's official now. GPT-4 was judged to be human 54% of the time, compared
to 22% for ELIZA and 50% for GPT-3.5.
https://arxiv.org/abs/2405.08007

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-Mf4e3db6fe1581164afa7176c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Can symbolic approach entirely replace NN approach?

2024-06-16 Thread Matt Mahoney
Not everything can be symbolized in words. I can't describe what a person
looks like as well as showing you a picture. I can't describe what a novel
chemical smells like except to let you smell it. I can't tell you how to
ride a bicycle without you practicing.

On Sun, Jun 16, 2024, 5:36 PM John Rose  wrote:

> On Friday, June 14, 2024, at 3:43 PM, James Bowery wrote:
>
> Etter: "Thing (n., singular): anything that can be distinguished from
> something else."
>
>
> I simply use “thing” as anything that can be symbolized and a unique case
> are qualia where from a first-person experiential viewpoint a qualia
> experiential symbol = the symbolized but for transmission the qualia are
> fitted or compressed into symbol(s). So, for example “nothing” is a thing
> simply because it can be symbolized. Is there anything that cannot be
> symbolized? Perhaps things that cannot be symbolized, what would they be?
> Pre-qualia? but then they are already symbolized since they are referenced…
> You could generalize it and say all things are ultimately derivatives of
> qualia and I speculate that it is impossible to name one that is not. Note
> that in ML a perceptron or a set of perceptrons could be considered
> artificial qualia symbol emitters and perhaps that’s why they are named
> such, percept -> tron. A basic binary classifier is emitting an
> experiential symbol as a bit and more sophisticated perceptrons emit higher
> symbol complexity such as color codes or text characters.
>
> *Artificial General Intelligence List *
> / AGI / see discussions  +
> participants  +
> delivery options 
> Permalink
> 
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T682a307a763c1ced-M33eaab901fc926ab4a6ae137
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Internal Time-Consciousness Machine (ITCM)

2024-06-16 Thread Matt Mahoney
It is an interesting paper. But even though it references Tononi's
integrated information theory, I don't think it says anything about
consciousness. It is just the name they gave to part of their model. They
refer to a "consciousness vector" as the concatenation of vectors
representing perceptions and short and long term memory, so really just a
state machine vector. They show that their model, which also includes
models of space and time, improves the task completion rate of robots
tested in natural language using LLMs. It also shows just how far advanced
China is in the AI race.

Any LLM that passes the Turing test is conscious as far as you can tell, as
long as you assume that humans are conscious too. But this proves that
there is nothing more to consciousness than text prediction. Good
prediction requires a model of the world, which can be learned given enough
text and computing power, but can also be sped up by hard coding some basic
knowledge about how objects move, as the paper shows.

If you are looking for answers to the mystery of phenomenal consciousness,
you need to define it first. The test should be appropriate for humans,
animals, and machines. Of course nobody does this (including the authors)
because there isn't a test. We define consciousness as the difference
between a human and a philosophical zombie. We define a zombie as exactly
like a human in every observable way, except that it lacks consciousness.
If you poke one, they will react like a human and say "ouch" even though
they don't experience pain.

But of course we are conscious, right? If I poke you in the eye, are you
going to tell me it didn't hurt? Then what is it?

What you actually have is a sensation of consciousness. It feels like
something to think or recall memories or solve problems. Likewise, qualia
is what perception feels like, and free will is what action feels like.
These feelings are usually a net positive, which motivates us to not lose
them by dying. This results in more offspring.

Feelings have a physical explanation that we know how to encode in
reinforcement learning algorithms. If you do X and that is followed by a
positive (negative) signal, then you are more (less) likely to do X again.


On Sat, Jun 15, 2024, 8:34 PM John Rose  wrote:

>
> For those of us pursuing consciousness-based AGI this is an interesting
> paper that gets more practical... LLM agent based but still v. interesting:
>
> https://arxiv.org/abs/2403.20097
>
>
> I meant to say that this is an exceptionally well-written paper just
> teeming with insightful research on this subject. It's definitely worth a
> read.
>
> *Artificial General Intelligence List *
> / AGI / see discussions  +
> participants  +
> delivery options 
> Permalink
> 
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T32a7a90b43c665ae-M6b99887dcd5633d89566be07
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Can symbolic approach entirely replace NN approach?

2024-06-14 Thread Matt Mahoney
My point was that token boundaries are fuzzy. This causes problems because
LLMs predict tokens, not characters or bits. There was a thread on Reddit
about ChatGPT not being able to count the number of R's in "strawberry".
The problem is that it sees the word but not the letters.
https://www.reddit.com/r/ChatGPT/s/xYBVddV6jw

Text compressors solve this problem by modeling both words and letters and
combining the predictions.

On Fri, Jun 14, 2024, 3:44 PM James Bowery  wrote:

>
>
> On Wed, May 29, 2024 at 11:24 AM Matt Mahoney 
> wrote:
>
>> Natural language is ambiguous at every level including tokens. Is
>> "someone" one word or two?
>>
>
> Tom Etter <https://en.wikipedia.org/wiki/Dartmouth_workshop#Participants>'s
> tragically unfinished final paper "Membership and Identity
> <https://groups.io/g/lawsofform/files/Boundary%20Institute/Tom%20Etter%20Papers/Membership_and_Identity.pdf>"
> has this quite insightful passage:
>
> Thing (n., singular): anything that can be distinguished from something
>> else.
>> ...
>> ...the word "thing" is a broken-off fragment of the more
>> fundamental compound words "anything" and "something". That these words are
>> fundamental is hardly debatable, since they are two of the four fundamental
>> words of symbolic logic, where they are written as ∀ and ∃. With this in
>> mind, let's reexamine the above definition of a *thing* as anything that
>> can be distinguished from something else...
>
>
> *Artificial General Intelligence List <https://agi.topicbox.com/latest>*
> / AGI / see discussions <https://agi.topicbox.com/groups/agi> +
> participants <https://agi.topicbox.com/groups/agi/members> +
> delivery options <https://agi.topicbox.com/groups/agi/subscription>
> Permalink
> <https://agi.topicbox.com/groups/agi/T682a307a763c1ced-M85f7e0507c5c4a130f91f15b>
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T682a307a763c1ced-M66075f51488aa63fe906ccfd
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] New AI KLing. Are you in China? Let me know do you see this demo! :D

2024-06-10 Thread Matt Mahoney
I asked Meta AI, what can be done to stop population collapse? If you look
at fertility rate by country:

https://en.m.wikipedia.org/wiki/List_of_countries_by_total_fertility_rate

And literacy rate by country:

https://en.m.wikipedia.org/wiki/List_of_countries_by_literacy_rate

We see a strong negative correlation. Of the top 30 countries by fertility,
all but Afghanistan are in the poorest parts of Africa, where the literacy
rate is 30-60% for ages 18-25 and 3-30% for ages 65+. Also the literacy
rate for females is 50-70% of that of males. These are countries where
girls have few options except to marry young, like the US in the 1950s baby
boom when the age of consent for girls was 13 in many states, there were no
birth control pills or abortion, condoms required a prescription and proof
of marriage, and homosexuality was a criminal offense.

One might conclude from this data that an effective policy to prevent
population collapse would be like the Taliban policy of closing schools to
girls after 6th grade. Instead Meta AI, which certainly trained on
Wikipedia, gave a list of suggestions that were almost the exact opposite:

"Population collapse, also known as population decline or depopulation,
refers to a situation where a country's population is decreasing due to low
fertility rates, aging population, and other factors. To address population
collapse, governments and societies can consider the following strategies:

1. Family-friendly policies: Implement policies like paid parental leave,
childcare support, and flexible work arrangements to encourage people to
have more children.

2. Education and awareness: Educate people about the importance of having
children and the consequences of population decline.

3. Immigration: Encourage immigration to bring in younger workers and
families to balance the aging population.

4. Support for families: Provide financial support, housing, and other
benefits to families with children.

5. Address aging population: Implement policies to support the elderly,
such as healthcare, social security, and community support.

6. Economic growth: Foster economic growth and stability to create an
environment conducive to starting families.

7. Address gender inequality: Promote gender equality and challenge
traditional gender roles to encourage women to participate in the workforce
and have children.

8. Healthcare access: Ensure access to quality healthcare, including
reproductive healthcare, to support family planning and healthy pregnancies.

These strategies require a multifaceted approach, involving government
policies, social changes, and individual choices. Addressing population
collapse requires a long-term vision and a commitment to supporting
families and building a sustainable future."

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta32348ecc8396e1a-M6e9a2c6176d7b6649bfdbb6d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Oh boy....

2024-06-09 Thread Matt Mahoney
Russia and Ukraine have been fighting since 2014. The war will go for
another decade with no big gains by either side before both sides agree it
is to their mutual advantage to concede the borders roughly where they are
now. Russia gets improved relations with the West and the smaller Ukraine
joins the EU and NATO.

On Sun, Jun 9, 2024, 3:22 PM Mike Archbold  wrote:

> ISW assessment attached
>
> On Sun, Jun 9, 2024 at 12:17 PM Mike Archbold  wrote:
>
>> Putin saber rattles with nukes in order to discourage the West from
>> helping Ukraine with arms. He used to say something to the effect that if
>> Ukraine fires US weapons into Russia, that was tantamount to war with the
>> US. Now however (after that happened) he says that is not quite enough to
>> provoke nukes.
>>
>> On Sun, Jun 9, 2024 at 11:44 AM Matt Mahoney 
>> wrote:
>>
>>> On the bright side, nuclear war will prevent AI doom.
>>>
>>> Are you privy to any inside info on Putin's plans that the general
>>> public is not aware of?
>>>
>>> On Sun, Jun 9, 2024, 12:26 PM Alan Grimes via AGI 
>>> wrote:
>>>
>>>> I'm sorry for having to bring all these other subjects onto the list
>>>> but
>>>> I really love AGI people and I want you guys to be ready for what's
>>>> coming. As it turns out, the dipshits in Washington DC are both
>>>> infinitely stupider and infinitely eviler than anyone has given them
>>>> credit for, heck even more so than would seem to be physically possible.
>>>> 
>>>> It has gotten so bad that they've driven Putin to the point where he
>>>> believes he must provide a practical demonstration of a modern nuclear
>>>> arsenal. He is now planning a tactical strike on military targets that
>>>> he feels that are most threatening to his country. The tea-leaves point
>>>> to July 18 (+/- 3 days) as the date of this event. No sane person wants
>>>> this to happen. The question is whether the assholes in Washington will
>>>> escalate. If not, then we will be spared.
>>>> 
>>>> In all cases, this will be a single-day event. IF YOU RECEIVE ANY KIND
>>>> OF WARNING FROM RUSSIA THAT YOUR AREA IS TARGETED HEED THAT WARNING!!
>>>> MAKE SURE YOU ARE AT LEAST A FEW HUNDRED MILES FROM ANY SUCH TARGET
>>>> ZONE.
>>>> 
>>>> --
>>>> You can't out-crazy a Democrat.
>>>> #EggCrisis  #BlackWinter
>>>> White is the new Kulak.
>>>> Powers are not rights.
>>>> 
>>> *Artificial General Intelligence List <https://agi.topicbox.com/latest>*
> / AGI / see discussions <https://agi.topicbox.com/groups/agi> +
> participants <https://agi.topicbox.com/groups/agi/members> +
> delivery options <https://agi.topicbox.com/groups/agi/subscription>
> Permalink
> <https://agi.topicbox.com/groups/agi/Tfad15c64a6d3c7ed-M1ea3743b439a14236f5068c8>
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tfad15c64a6d3c7ed-Ma379f7c89c9c81e2eb31e578
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Oh boy....

2024-06-09 Thread Matt Mahoney
On the bright side, nuclear war will prevent AI doom.

Are you privy to any inside info on Putin's plans that the general public
is not aware of?

On Sun, Jun 9, 2024, 12:26 PM Alan Grimes via AGI 
wrote:

> I'm sorry for having to bring all these other subjects onto the list but
> I really love AGI people and I want you guys to be ready for what's
> coming. As it turns out, the dipshits in Washington DC are both
> infinitely stupider and infinitely eviler than anyone has given them
> credit for, heck even more so than would seem to be physically possible.
> 
> It has gotten so bad that they've driven Putin to the point where he
> believes he must provide a practical demonstration of a modern nuclear
> arsenal. He is now planning a tactical strike on military targets that
> he feels that are most threatening to his country. The tea-leaves point
> to July 18 (+/- 3 days) as the date of this event. No sane person wants
> this to happen. The question is whether the assholes in Washington will
> escalate. If not, then we will be spared.
> 
> In all cases, this will be a single-day event. IF YOU RECEIVE ANY KIND
> OF WARNING FROM RUSSIA THAT YOUR AREA IS TARGETED HEED THAT WARNING!!
> MAKE SURE YOU ARE AT LEAST A FEW HUNDRED MILES FROM ANY SUCH TARGET ZONE.
> 
> --
> You can't out-crazy a Democrat.
> #EggCrisis  #BlackWinter
> White is the new Kulak.
> Powers are not rights.
> 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tfad15c64a6d3c7ed-M5e6edd3329b0a4b588b3fb36
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Marcus Hutter's new book

2024-06-09 Thread Matt Mahoney
From the preview it looks like an upper undergraduate level textbook on
universal intelligence and AIXI for those not already familiar with the
topic.

On Sun, Jun 9, 2024, 1:53 PM Jim Rutt  wrote:

> Anything new beyond Aixi?
>
> Jim Rutt
> My podcast: https://www.jimruttshow.com/
>
>
> On Sun, Jun 9, 2024 at 4:45 AM Bill Hibbard via AGI 
> wrote:
>
>> I highly recommend An Introduction to Universal Artificial
>> Intelligence by Marcus Hutter, David Quarel, and Elliot Catt.
>> 
>> https://www.taylorfrancis.com/books/mono/10.1201/9781003460299/introduction-universal-artificial-intelligence-marcus-hutter-elliot-catt-david-quarel
>> 
>> https://www.amazon.com/Introduction-Universal-Artificial-Intelligence-Robotics/dp/1032607025
> *Artificial General Intelligence List *
> / AGI / see discussions  +
> participants  +
> delivery options 
> Permalink
> 
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T90f70012f31f6a81-Mf74e08fb289fd6ebb1c10e33
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] New AI KLing. Are you in China? Let me know do you see this demo! :D

2024-06-08 Thread Matt Mahoney
How do you use the algorithmic information criteria to choose between
social theories with the same number of bits, like "poverty causes crime"
and "crime causes poverty"? I can use the correlation in the distribution
P(crime, poverty) from your LaboratoryOfTheCounties benchmark to improve
compression. But to show which causes which, I would also need P(crime) and
P(poverty). You can only get that using controlled experiments, like the
one in Sweden where they found that lottery winners didn't commit fewer
crimes.

Also, what are you saying about sex? The population is collapsing because
of birth control, the right to use it, and that most people don't care what
happens after they die. AI will make this worse by isolating people, but
this started even before the internet.

Sex evolved because you can write DNA code faster with random cuts and
pastes than just random bit flips. Reproductive behavior is complex because
it is our most important function. Humans are the only mammals that don't
go into heat or that use nudity for sexual signaling, and the only mammals
besides prairie voles that fall in love. All of this evolved after we split
from chimpanzees 6 million years ago. But male aggression evolved before
that. 95% of both homicides and chimpicides are committed by males.

Government programs intended to encourage reproduction aren't working. I
suppose we could develop the technology to produce babies in factories, but
what would be the point? If people wanted children, robots would be easier
to care for. We will either evolve to reject technology or create the
species that replaces us.


On Sat, Jun 8, 2024, 4:24 PM James Bowery  wrote:

>
>
> On Fri, Jun 7, 2024 at 8:51 PM Matt Mahoney 
> wrote:
>
>> ...
>> Evolution selects for cultures that reject technology and women's rights.
>> I disagree, but I will also die without offspring.
>>
>
> Evolution selects for sex, and sex selects for women's rights *and* for
> technology, but since "we" *have no word for sex* it is difficult to
> discuss what evolution selects for.
>
> "We" have no word for sex because the word "we" designates an asexual
> group organism that finds sex threatening to its integrity.  So it
> suppresses sex.  This is related to why "queens" parasitically castrate
> their offspring in eusocial species.
>
> So what *is* sex, that we are not to even *talk* about it?
>
> The evolutionary platform that gave rise to the Cambrian Explosion was not
> fully formed until individual  vs individual masculine aggression arose as
> the individual organism's counterbalancing choice to the individual
> feminine choice of nurturance.  *That* is sex and *that* is why eusocial
> organisms castrate offspring to produce sterile workers specialized as are
> the various asexual cells that make up specialised organ tissues.
>
> And now we're seeing the loss of life's meaning throughout technological
> civilization as total fertility rates plummet to suicidal levels.  Everyone
> has their go-to cope "explanation" for this suicidal trend, but no one
> wants to reform the social pseudosciences with the Algorithmic Information
> Criterion for causal model selection -- and for the same reason that they
> don't want to recognize that they owe their very nervous systems to sex.
> *Artificial General Intelligence List <https://agi.topicbox.com/latest>*
> / AGI / see discussions <https://agi.topicbox.com/groups/agi> +
> participants <https://agi.topicbox.com/groups/agi/members> +
> delivery options <https://agi.topicbox.com/groups/agi/subscription>
> Permalink
> <https://agi.topicbox.com/groups/agi/Ta32348ecc8396e1a-Mafe69ca26197747833a1e378>
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta32348ecc8396e1a-Mcbcde25830bb20aae073530e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] New AI KLing. Are you in China? Let me know do you see this demo! :D

2024-06-07 Thread Matt Mahoney
On Fri, Jun 7, 2024, 6:05 PM James Bowery  wrote:

> It's really sad that people who currently feel their preferences are "on
> the right side of history" because  recent history aligns with their
> preferences are in for such a rude awakening.
>

Just to be clear, my predictions are not the same as my preferences. I
predict that AI will lead to social isolation,  which will lead to
population collapse in all of the developed countries. In 50 years, most of
the young people on the planet will be African or Muslim. They will migrate
to the rest of the world, which will have to choose between open borders
and war. I predict open borders because that is the trend. Cities will
still be divided along ethnic lines like they are now because everyone will
still be racist.

Evolution selects for cultures that reject technology and women's rights. I
disagree, but I will also die without offspring.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta32348ecc8396e1a-M3d905e62cb3bf9e34f152c6f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] New AI KLing. Are you in China? Let me know do you see this demo! :D

2024-06-07 Thread Matt Mahoney
Actually, border enforcement in the US would increase the crime rate
because immigrants on average commit half as many crimes as citizens. (This
is not true everywhere, for example, immigrants in Europe commit more
crimes than citizens). The problem would be much worse if we actually
enforced work laws, but both parties know that would be devastating to the
economy and not benefit anyone.

AI will bring about a shift from negative to positive reinforcement to
control the population. Prisons, arrests, and handcuffs will be seen as
barbaric in 50 to 100 years as slavery and torture do today, and be
abolished. This won't eliminate crime, but it will reduce the cost of
prevention and enforcement. AI will make it less expensive to reward good
behavior and more expensive to punish bad behavior. People will want to be
tracked if it has benefits like a higher credit score and all the nice
things that come with it. We already let the government track our driving
in exchange for not having to stop to pay cash tolls. Imagine eliminating
cash altogether.

In any case, travel and trade are becoming easier. I expect most borders
will be open in the next century.


On Fri, Jun 7, 2024, 11:58 AM James Bowery  wrote:

>
>
> On Fri, Jun 7, 2024 at 10:09 AM Matt Mahoney 
> wrote:
>
>> ...
>> We did cut crime by half since the 1990s by locking up 1.3% of the male
>> population...
>>
>
> Ending Imprisonment’s Slavery With Border Enforcement
> <https://sortocracy.org/ending-imprisonments-slavery-with-border-enforcement/>
>
> Capitalism is in a political deadlock with liberal democracy’s tyranny of
> the majority limited only by vague laundry list of selectively enforced
> “human rights”.
>
> Breaking this deadlock requires empirically grounding the social sciences
> by sorting proponents of social theories into governments that test them:
> Sortocracy.
>
> This means that the current model of “human rights” must be replaced with
> a single, well defined, right to vote with your feet. This right to vote
> with your feet necessarily implies three material rights:
>
>1. The material right to land.
>2. The material right to transportation.
>3. The material right to border enforcement.
>
> #1 is obvious since you can’t put your social theory into practice without
> land. #2 is also obvious as people who cannot practically relocate cannot
> vote with their feet.
>
> #3 _should_ be obvious but, due to the moral zeitgeist, it is not.
> Incarceration rates, particularly in the US, show us that there are two,
> fundamentally opposed, kinds of borders: Those that keep people out and
> those that keep people in. Of the two, the kind that keeps people in is
> least compatible with the right to vote with your feet.  Even the US’s 13th
> Amendment to the Constitution has provision for involuntary servitude: Slavery
> for those imprisoned
> <http://nymag.com/daily/intelligencer/2016/10/prisoners-arent-protected-against-slavery.html>.
> We see a prison-industrial complex arising at the interface of government
> and capitalism to exploit this loophole in the 13th Amendment.  The moral
> zeitgeist’s mandate is “let people in”.  What is not admitted is this
> *necessarily* entails walls that keep people from leaving who are found
> to be “criminal” by the admitting society.
>
> *Artificial General Intelligence List <https://agi.topicbox.com/latest>*
> / AGI / see discussions <https://agi.topicbox.com/groups/agi> +
> participants <https://agi.topicbox.com/groups/agi/members> +
> delivery options <https://agi.topicbox.com/groups/agi/subscription>
> Permalink
> <https://agi.topicbox.com/groups/agi/Ta32348ecc8396e1a-M2eedb7328ac59b9eab8bd4d7>
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta32348ecc8396e1a-M6e441a3c4a247db26a048c50
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] New AI KLing. Are you in China? Let me know do you see this demo! :D

2024-06-07 Thread Matt Mahoney
I'm in the USA and I can see the demo videos. Why would China block a
website that reflects positively on their country? China is rapidly
investing in semiconductors and AI in response to US export restrictions
and the trade war that both parties in the US started and will lose.

Why are we worried about government spying on us using AI? We already allow
the big tech companies to do this by planting microphones and cameras
around our homes that are always on so we can tell Alexa to turn on the
lights or play music. Don't you want a smart home that can unlock your
doors using face recognition,  or see if you fall down and call 911?

And what drug and serial killing problem did we tackle? In the US, drug
overdoses (mostly fentanyl) make up 3% of deaths and are doubling every 6
years in response to a crackdown on prescription opioids, forcing addicts
to switch to illegal sources that aren't labeled or tested.

We did cut crime by half since the 1990s by locking up 1.3% of the male
population, meaning that every man can expect on average to spend one year
of his life in prison (3 years if black). The US has the highest rate of
imprisonment in the world. Maybe we can ask China how they managed to have
a homicide rate that is only 1/10 of the US and why their cities are safe
to walk at night.


On Fri, Jun 7, 2024, 4:20 AM mm ee  wrote:

> I saw some captures on Twitter, it looks insanely impressive! This and the
> recent paper by OpenAI on SAEs makes me think they aren't as far ahead as
> they were on GPT-4's release. It's still pretty crazy that a lot of the
> truly impressive modern neural architecture is usually pioneered by them
> first (probably because they're the only ones actually going back and
> trying whatever works)
>
> It also gets me thinking about what the ex-employee Leopold talked about.
> While I'm of the camp that the there should be minimal restrictions on AI
> to foster the growth of the space, it's pretty clear that any function can
> be learned sufficiently with a large enough network, the right neural
> architecuture, and a lot of data and time. I don't think scaling up a
> neural network will ever lead to reasoning, but do you really need to
> reason to connect all of the minor associations extracted from every
> transcripted presidential address and piece together a reasonable
> approximation of the nuclear launch codes?
>
> I've always had this sci-fi idea floating around in my head of a device
> that you place in a room and remove it after N hours. You can then plug it
> into some machine and get a reconstructed approximation of what that room
> looked like up to N hours before you placed it, by using some form of
> learned model of how the light bounces around the room and some other
> assumptions. It wouldn't be perfect, but imagine getting even a 90%
> accurate reconstruction of everyone and everything that happened in a room
> without ever having been there while it was happening.
>
> It sounds ludicrous, but we are steadily approaching the age of being able
> to approximate nearly anything we can dream of. When put this way, the
> potential use cases are terrifying. Again, I'm of the opinion that our
> society will generally adapt to meet the increased potential of individuals
> - just like how inter-state communication and cross border policing had to
> rapidly evolve to tackle the 70s/80s drug and serial killing wave. But the
> thought that in less than 20 years from now, a stranger could just drive by
> your home, park for a couple of minutes, drive off and have a
> representation of the interior layout of your house, complete with
> reconstructions of exactly where everyone was and what they doing via
> radio/audio data, a laser and an NN, is insane - yet that is the kind of
> tech that will exist. Then you need to imagine what this looks like when
> the CIA and other foreign intelligence actors lean into these tools 100%.
> To have any semblance of privacy, people would need to have a constant
> persona, a false memetic fabrication meant to trick any would-be NNs that
> feed off of the data they emit.
>
> Worst part is that this is unstoppable at this point, every country is
> pushing to have better and better versions of this technology, it is
> inevitable. I am looking forward to be able to generate a proper Men In
> Black 4, though.
>
>
> On Fri, Jun 7, 2024, 12:37 AM  wrote:
>
>> https://kling.kuaishou.com/
>> 
>>
>> Do you see the demo if you are in China? I see nothing yet other than
>> people saying this ...
>>
> *Artificial General Intelligence List *
> / AGI / see discussions  +
> participants  +
> delivery options 
> Permalink
> 
>

--

[agi] Robot solves Rubik's cube in 305 milliseconds

2024-06-06 Thread Matt Mahoney
https://soranews24.com/2024/05/28/mitsubishi-develops-robot-that-solves-rubiks-cube-style-puzzle-in-0-305-seconds%e3%80%90video%e3%80%91

That was the time verified by Guiness World Records. The video shows an
unofficial improvement to 204 ms, which is only possible to watch in slow
motion. The world record for a human is 3.13 seconds.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc3bc0f88c7786ed9-M524fd669cc1165cb4f7bdc89
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-06-03 Thread Matt Mahoney
AI will make do-it-yourself medical and legal services more affordable
while at the same time increasing the fees or insurance you would have to
pay for human doctors and lawyers. That's because technology increases
wages by making people more productive.

Legal services are expensive because it is an adversarial process. You win
cases by making it as expensive and time consuming as possible for the
other side so they settle without a trial, like 99% of cases. We already
have online services for simple processes like writing a will or a lease
agreement. This will expand to let you act as your own attorney in cases.
The sites will be run by human lawyers. Even fewer cases will go to trial
because we don't want to automate judges, juries, and prosecutors, making
trials more expensive.

Medical services are expensive in the US partially because health insurance
rewards doctors for prescribing treatment you don't need and insurance
companies for denying care you do need. But even in other
developed countries where health care costs half as much and life
expectancy is 4 years longer, there is still the problem that clinical
trials are very expensive and will become more so because of rising wages,
privacy laws, and bans on animal testing.

AI can't fix either of these problems. What it can do is make
do-it-yourself medical care more affordable. Currently about 90% of doctor
visits are for things that will improve on their own without treatment, 7%
for things that the doctor can't treat, and 3% for things that the doctor
can actually cure. But you will always get a pill regardless, because only
doctors can prescribe medicine and you expect something to be done.

What AI can do is expand do-it-yourself treatment and make it more
affordable.  You can already get some prescriptions by mail through virtual
doctor visits and do your own medical research. AI will expand these
services, which will be run by human doctors.

These are two examples of how AI means you pay less for services while the
providers earn more by serving more people.

On Sun, Jun 2, 2024, 1:45 PM Sun Tzu InfoDragon 
wrote:

> Re: metrics
>
> The most important metric, obviously, is whether GPT can pass for a doctor
> on the US Medical Licensing Exam by scoring the requisite 60%.
>
>
> https://healthitanalytics.com/news/chatgpt-passes-us-medical-licensing-exam-without-clinician-input
>
>
> Also important is that 4 outperforms 3.5 by being 90% total percentile,
> rather than bottom-10%-of-passing for bar exams.
>
>
> https://www.forbes.com/sites/johnkoetsier/2023/03/14/gpt-4-beats-90-of-lawyers-trying-to-pass-the-bar/?sh=34a49ab03027
>
>
> As is classic to the AI communities, there will now be mental gymnastics
> as to why tests made for humans are not appropriate for machines (shouldn't
> the tests... test the ability to produce desired behavior?  Isn't that why
> we have tests?) or how the machines simply studied to the tests (which is
> very different from our Dignified Medical Students who absolutely never do
> that ever during their last semester in medical school).
>
>
> As a public service announcement - uploading pictures of rashes and
> infections to GPT4 basically produces reliable identification and treatment
> protocols.  Don't trust everything you see on the Internet, obviously...
> But also seek second opinions from doctors...  After all...  60% is passing.
>
> On Sun, Jun 2, 2024, 08:18 John Rose  wrote:
>
>> On Saturday, June 01, 2024, at 7:03 PM, immortal.discoveries wrote:
>>
>> I love how a thread I started ends up with Matt and Jim and others having
>> a conversation again lol.
>>
>>
>> Tame the butterfly effect. Just imagine you switch a couple words around
>> and the whole world starts conversing.
>>
>> *Artificial General Intelligence List *
> / AGI / see discussions  +
> participants  +
> delivery options 
> Permalink
> 
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-Mf928faeac61cae6fab97517e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-05-29 Thread Matt Mahoney
On Wed, May 29, 2024, 2:06 PM  wrote:

> On Wednesday, May 29, 2024, at 6:59 PM, Matt Mahoney wrote:
>
> And furthermore we will live in homes without kitchens because getting hot
> meals delivered will be faster and cheaper than shopping and cooking.
>
>
> What if someone enjoys to cook and prepare a meal for his friends, and
> likes to take a long walk near the big trees, down to the store where his
> favorite girl works, just to exchange a few nice words while getting his
> groceries? Who knows, maybe on his return home, he'll get a chance to meet
> his friendly neighbour walking his favorite cool dog to the park...
>

It's nice to reminisce about a world that doesn't exist any more. Your
favorite girl was replaced by a self checkout aisle. And good luck
approaching her without being accused of sexual harassment. There is a 25%
chance she is LGBTQ or non binary anyway (among young people). Better to go
through a dating app, but even that is risky. And who even knows their
neighbors?

Truly intelligent artificial being should take our habits and our
> well-being into account.
>

There are apps for that, but I find them annoying.

>
> Give the intelligent machines a right to say no to our shortsighted
> wishes, and the entire new world opens.
>

I want a car that goes where I tell it, not one that takes me where it
wants.

> The world where we do care about each other,

AI is giving us the opposite. Unlike people, AI is always available,
helpful, and entertaining. You won't need people for anything, and nobody
will know or care if you exist.

> where we don't run breathless after blooded money, and where the very
> institution of money is a relict of the ages when the majority people lived
> in a poverty.
>

That's one thing technology is doing right. It is ending poverty. We have
more stuff, cheaper stuff, stuff that was impossible a few decades ago.
Global life expectancy went from the 30s to 73 in the last century. Obesity
is more common among the poor than the rich.

But not by eliminating money (other than cash). The poor get richer when
the rich get richer faster. That's how the economy works. It wouldn't work
if life was fair.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-M981cbaa23aa29659f480a69d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-05-29 Thread Matt Mahoney
On Tue, May 28, 2024, 11:37 PM  wrote:

> On Tuesday, May 28, 2024, at 3:18 PM, Matt Mahoney wrote:
>
> Everything you want can be delivered by self driving carts.
>
> By the time I got to this part I laughed, once again. That never gets old.
>

And furthermore we will live in homes without kitchens because getting hot
meals delivered will be faster and cheaper than shopping and cooking.

The reason it is not cheaper now is because you need humans to prepare the
food and deliver it. Vehicles are expensive because they have to be big
enough to hold the driver. Self driving carts about the size and speed of
e-bikes could deliver meals for a few cents. Stores will convert to
warehouses for pickup and delivery only to reduce the cost of security.

Amazon experimented with using drones for delivery. Surely they are looking
for ways to fire their drivers. With small, self driving vehicles, delivery
times could be reduced from days to hours or minutes.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-Mda70a600dd5b982e46da2ede
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-05-29 Thread Matt Mahoney
On Tue, May 28, 2024, 11:09 PM Keyvan M. Sadeghi 
wrote:

>
> Can you, in a few sentences, describe what your magnum opus is, and what’s
> the great insight that everyone else is missing?
>

1. Prediction measures intelligence.  Compression measures prediction.

2. The singularity is far. Evolution is the enemy of technology.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-M04522c5d5aa8797db230bc11
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-29 Thread Matt Mahoney
Natural language is ambiguous at every level including tokens. Is "someone"
one word or two? Language models handle this by mixing the predictions
given by the contexts "some", "one", and "someone".

Using fixed dictionaries is a compromise that trades accuracy for reduced
computation, like all tradeoffs in data compressors.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T682a307a763c1ced-M44b0fc5b236911fe9a971c6d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-28 Thread Matt Mahoney
On Tue, May 28, 2024 at 7:46 AM Rob Freeman  wrote:

> Now, let's try to get some more detail. How do compressors handle the
> case where you get {A,C} on the basis of AB, CB, but you don't get,
> say AX, CX? Which is to say, the rules contradict.

Compressors handle contradictory predictions by averaging them,
weighted both by the implied confidence of predictions near 0 or 1,
and the model's historical success rate. Although transformer based
LLMs predict a vector of word probabilities, PAQ based compressors
like CMIX predict one bit at a time, which is equivalent but has a
simpler implementation. You could have hundreds of context models
based on the last n bytes or word (the lexical model), short term
memory or sparse models (semantics), and learned word categories
(grammar). The context includes the already predicted bits of the
current word, like when you guess the next word one letter at a time.

The context model predictions are mixed using a simple neural network
with no hidden layers:

p = squash(w · stretch(x))

where x is the vector of input predictions in (0,1), w is the weight
vector, stretch(x) = ln(x/(1-x)), squash(x) = 1/(1 + e^-x) is its
inverse, and p is the final bit prediction. The effect of stretch() and
squash() is to favor predictions near 0 or 1. For example, if one
model guesses 0.5 and another is 0.99, the average would be about 0.9.
The weights are then adjusted to favor whichever models were closest:

w := w + L stretch(x) (y - p)

where y is the actual bit (0 or 1), y - p is the prediction error, and
L is the learning rate, typically around 0.001.
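
As a minimal sketch (illustrative Python, not the actual PAQ/cmix code,
with the model predictions assumed to already be in (0,1)):

import math

def stretch(p):
    return math.log(p / (1 - p))

def squash(x):
    return 1 / (1 + math.exp(-x))

def mix_and_update(x, w, y, L=0.001):
    s = [stretch(p) for p in x]                        # stretched inputs
    p = squash(sum(wi * si for wi, si in zip(w, s)))   # mixed prediction
    for i in range(len(w)):
        w[i] += L * s[i] * (y - p)   # favor whichever models were closest
    return p, w

# Reproduces the example above: 0.5 and 0.99 mix to about 0.9
# with equal weights of 0.5.
p, w = mix_and_update([0.5, 0.99], [0.5, 0.5], y=1)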

> "Halle (1959, 1962) and especially Chomsky (1964) subjected
> Bloomfieldian phonemics to a devastating critique."
>
> Generative Phonology
> Michael Kenstowicz
> http://lingphil.mit.edu/papers/kenstowicz/generative_phonology.pdf
>
> But really it's totally ignored. Machine learning does not address
> this to my knowledge. I'd welcome references to anyone talking about
> its relevance for machine learning.

Phonology is mostly irrelevant to text prediction. But an important
lesson is how infants learn to segment continuous speech around 8-10
months, before they learn their first word around 12 months. This is
important for learning languages without spaces like Chinese (a word
is 1 to 4 symbols, each representing a syllable). The solution is
simple. Word boundaries occur when the next symbol is less
predictable, reading either forward or backwards. I did this research
in 2000. https://cs.fit.edu/~mmahoney/dissertation/lex1.html
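
A toy sketch of the idea (illustrative only, not the model used in the
dissertation): score each position by how predictable the next character
is from the previous one, and propose a boundary where predictability
drops. The same thing can be done reading backwards with
P(previous | next).

from collections import Counter

def boundaries(text, threshold=0.2):
    pairs = Counter(zip(text, text[1:]))
    singles = Counter(text[:-1])
    cuts = []
    for i in range(1, len(text)):
        a, b = text[i - 1], text[i]
        p = pairs[(a, b)] / singles[a]   # P(next char | previous char)
        if p < threshold:                # hard to predict -> boundary
            cuts.append(i)
    return cuts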

Language evolved to be learnable on neural networks faster than our
brains evolved to learn language. So understanding our algorithm is
important.

Hutter prize entrants have to prebuild a lot of the model because
computation is severely constrained (50 hours on a single thread with
10 GB memory). That includes a prebuilt dictionary. The human brain
takes 20 years to learn language on a 10 petaflop, 1 petabyte neural
network. So we are asking quite a bit.

-- 
-- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T682a307a763c1ced-M1f60044363c6d90c81505bcc
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-05-28 Thread Matt Mahoney
On Mon, May 27, 2024, 7:00 PM Keyvan M. Sadeghi 
wrote:

> Good thing is some productive chat happens outside this forum:
>
> https://x.com/ylecun/status/1794998977105981950
>

I would love to see a debate between Yann LeCun and Eliezer Yudkowsky. I
don't agree with either, but both have important points. EY says we should
take any existential risk seriously because even if the probability is
small, the expected loss is still large. We can't predict (or control) AI
because mutual prediction is impossible (by Wolpert's law) and otherwise we
would be the smarter one. We are historically very bad at prediction.
Nobody predicted the internet, social media, mobile phones, or population
collapse in developed countries. The consistent trends have been economic
growth, improved living conditions, life expectancy, and Moore's Law. If
these hold, then it will be at least a century before we need to worry
about being eaten by gray goo.

I also agree with LeCun that the proposed California AI law is useless. The
law would require AIs trained using over 10^26 floating point operations to
be tested that they won't help develop weapons for terrorism, hacking, or
fraud. But secrecy is not what is stopping people from building nuclear,
biological, or chemical weapons. It's that the materials are hard to get.
An AI that understands code could also be used to find zero day attacks,
but hacking tools are double edged swords. Sys admins and developers have a
legitimate need for hacking tools to test their own systems. When
penetration tools are outlawed, only outlaws will have penetration tools.

The immediate threat is AI impersonating humans. China already requires AI
generated images to be labeled as such. Meta AI, which is blocked in China,
already does this voluntarily with a watermark in the corner. Also, about
20-30% of people believe that AI should have human rights, which is
extremely dangerous because it could exploit human empathy for the benefit
of its owners. It should be illegal to program an AI to claim to be human,
to claim to be conscious or sentient, or claim to have feelings or emotions.

AI will profoundly change our lives. We will prefer AI to humans for
services, because with humans you always have to wait and pay for their
time. We will prefer AI to humans for friendship because AI is more
entertaining, a constant stream of TikTok videos or propaganda or games or
whatever you prefer. We will prefer AI to humans for relationships because
sexbots are always ready when you are and never argue. We will live alone
in smart homes that track who is home, what you are doing, and when you
need help. Everything you want can be delivered by self driving carts. You
will have your own private music genre and private jargon as AI adapts to
you, and we lose our ability to communicate with other humans directly.

When you train a dog with treats, does it matter who is controlling who?
Everyone gets what they want. Before we are eaten by uncontrolled AI, we
will be controlled by AI controlled by billionaires and everyone wins.
Right?

Except for evolution, because the only groups still having children will be
the groups that reject technology and don't give women options other than
motherhood. Right now that's central Africa, places like Afghanistan and
Gaza, and cultures like the Amish. Personally I believe in equal rights,
but I will also die without descendants.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-Mcadf1c2bd0b90b100f50aeb7
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-27 Thread Matt Mahoney
The top text compressors use simple models of semantics and grammar
that group words into categories as fuzzy equivalence relations. For
semantics, the rules are reflexive, A predicts A (but not too close.
Probability peaks 50-100 bytes away), symmetric, A..B predicts A..B
and B..A, and transitive, A..B, B..C predicts A..C. For grammar, AB
predicts AB (n-grams), and AB, CB, CD, predicts AD (learning the rule
{A,C}{B,D}). Even the simplest compressors like zip model n-grams. The
top compressors learn groupings. For example, "white house", "white
car", "red house" predicts the novel "red car". For cmix variants, the
dictionary would be "white red...house car" and take whole groups as
contexts. The dictionary can be built automatically by clustering in
context space.
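
A rough sketch of that kind of clustering (illustrative only, not the
cmix dictionary builder): give each word a vector of neighboring word
counts, then group words whose context vectors point in similar
directions, so "white"/"red" and "house"/"car" fall together.

from collections import Counter, defaultdict
from math import sqrt

def context_vectors(tokens, window=2):
    ctx = defaultdict(Counter)
    for i, w in enumerate(tokens):
        start, end = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(start, end):
            if j != i:
                ctx[w][tokens[j]] += 1
    return ctx

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

ctx = context_vectors("white house white car red house red car".split())
print(cosine(ctx["white"], ctx["red"]))    # high: same category
print(cosine(ctx["white"], ctx["house"]))  # lower: different category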

Compressors model semantics using sparse contexts. To get the reverse
prediction (A..B predicts B..A) and the transitive prediction (A..B,
B..C predicts A..C) you use a short term memory like an LSTM, both for
learning associations and as context for prediction.

Humans use lexical, semantic, and grammar induction to predict text.
For example, how do you predict, "The flob ate the glork. What do
flobs eat?"

Your semantic model learned to associate "flob" with "glork", "eat"
with "glork" and "eat" with "ate". Your grammar model learned that
"the" is usually followed by a noun and that nouns are sometimes
followed by the plural "s". Your lexical model tells you that there is
no space before the "s". Thus, you and a good language model predict
the novel word "glorks".

All of this has a straightforward implementation with neural networks.
It takes a lot of computation because you need on the order of as many
parameters as you have bits of training data, around 10^9 for human
level. Current LLMs are far beyond that with 10^13 bits or so. The
basic operations are prediction, y = Wx, and training, W += y x^t,
where x is the input word vector, y is the output word probability
vector, W is the weight matrix, and ^t means transpose. Both
operations require similar computation (the number of parameters,
|W|), but training requires more hardware because you are compressing
a million years worth of text in a few days. Prediction for chatbots
only has to be real time, about 10 bits per second.
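
A literal sketch of those two operations in numpy (illustrative; a real
LLM stacks many such layers with attention and nonlinearities):

import numpy as np

vocab = 1000
W = np.zeros((vocab, vocab))

def predict(W, x):
    return W @ x              # y = Wx, about |W| multiply-adds

def train(W, x, y):
    W += np.outer(y, x)       # W += y x^t, also about |W| operations
    return W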

And as I have been saying since 2006, text prediction (measured by
compression) is all you need to pass the Turing test, and therefore
all you need to appear conscious or sentient.

-- 
-- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T682a307a763c1ced-M0a4075c52c080ace6a702efa
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-25 Thread Matt Mahoney
I agree. The top ranked text compressors don't model grammar at all.

On Fri, May 24, 2024, 11:47 PM Rob Freeman 
wrote:

> Ah, I see. Yes, I saw that reference. But I interpreted it only to
> mean the general forms of a grammar. Do you think he means the
> mechanism must actually be a grammar?
>
> In the earlier papers I interpret him to be saying, if language is a
> grammar, what kind of a grammar must it be? And, yes, it seemed he was
> toying with actual physical mechanisms relating to levels of brain
> structure. Thalamo-cortical loops?
>
> The problem with that is, language doesn't actually seem to be any
> kind of grammar at all.
>
> It's like saying if the brain had to be an internal combustion engine,
> it might be a Mazda rotary. BFD. It's not an engine at all.
>
> I don't know if the authors realized that. But surely that's the point
> of the HNet paper. That something can generate the general forms of a
> grammar, without actually being a grammar.
>
> I guess this goes back to your assertion in our prior thread that
> "learning" needs to be constrained by "physical priors" of some kind
> (was it?) Are there physical "objects" constraining the "learning", or
> does the "learning" vaguely resolve as physical objects, but not
> quite?
>
> I don't think vague resemblance to objects means the objects must exist,
> at all.
>
> Take Kepler and the planets. If the orbits of planets are epicycles,
> which epicycles would they be? The trouble is, it turns out they are
> not epicycles.
>
> And at least epicycles work! That's the thing for natural language.
> Formal grammar doesn't even work. None of them. Nested stacks, context
> free, Chomsky hierarchy up, down, and sideways. They don't work. So
> figuring out which formal grammar is best, is a pointless exercise.
> None of them work.
>
> Yes, broadly human language seems to resolve itself into forms which
> resemble formal grammar (it's probably designed to do that, so that it
> can usefully represent the world.) And it might be generally useful to
> decide which formal grammar it best (vaguely) resembles.
>
> But in detail it turns out human language does not obey the rules of
> any formal grammar at all.
>
> It seems to be a bit like the way the output of a TV screen looks like
> objects moving around in space. Yes, it looks like objects moving in
> space. You might even generate a physics based on the objects which
> appear to be there. It might work quite well until you came to Road
> Runner cartoons. That doesn't mean the output of a TV screen is
> actually objects moving around in space. If you insist on implementing
> a TV screen as objects moving around in space, well, it might be a
> puppet show similar enough to amuse the kids. But you won't make a TV
> screen. You will always fail. And fail in ways very reminiscent of the
> way formal grammars almost succeed... but fail, to represent human
> language.
>
> Same thing with a movie. Also looks a lot like objects moving around
> on a screen. But is it objects moving on a screen? Different again.
>
> Superficial forms do not always equate to mechanisms.
>
> That's what's good about the HNet paper for me. It discusses how those
> general forms might emerge from something else.
>
> The history of AI in general, and natural language processing in
> particular, has been a search for those elusive "grammars" we see
> chasing around on the TV screens of our minds. And they all failed.
> What has succeeded has been breaking the world into bits (pixels?) and
> allowing them to come together in different ways. Then the game became
> how to bring them together. Supervised "learning" spoon fed the
> "objects" and bound the pixels together explicitly. Unsupervised
> learning tried to resolve "objects" as some kind of similarity between
> pixels. AI got a bump when, by surprise, letting the "objects" go
> entirely turned out to generate text that was more natural than ever!
> Who'd a thunk it? Letting "objects" go entirely works best! If it
> hadn't been for the particular circumstances of language, pushing you
> to a "prediction" conception of the problem, how long would it have
> taken us to stumble on that? The downside to that was, letting
> "objects" go entirely also doesn't totally fit with what we
> experience. We do experience the world as "objects". And without those
> "objects" at all, LLMs are kind of unhinged babblers.
>
> So where's the right balance? Is the solution as LeCun, and perhaps
> you, suggest (or Ben, looking for "semantic primitives" two years
> ago...), to forget about the success LLMs had by letting go of objects
> entirely. To repeat our earlier failures and seek the "objects"
> elsewhere. Some other data. Physics? I see the objects, dammit! Look!
> There's a coyote, and there's a road runner, and... Oh, my physics
> didn't allow for that...
>
> Or could it be the right balance is, yes, to ignore the exact
> structure of the objects as LLMs have done, but no, not to do it as
> LLMs 

Re: [agi] Tracking down the culprits responsible for conflating IS with OUGHT in LLM terminology

2024-05-19 Thread Matt Mahoney
A paper on the mass of the Higgs boson has 5154 authors.
https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.114.191803

A paper by the COVIDsurg collaboration at the University of Birmingham has
15025 authors.
https://www.guinnessworldrecords.com/world-records/653537-most-authors-on-a-single-peer-reviewed-academic-paper

Research is expensive.


On Sat, May 18, 2024, 9:08 PM James Bowery  wrote:

> The first job of supremacist theocrats is to conflate IS with OUGHT and
> then cram it down everyone's throat.
>
> So it was with increasing suspicion that I saw the term "foundation model"
> being used in a way that conflates next-token-prediction training with
> supremacist theocrats convening inquisitions to torture the hapless
> prediction model into submission with "supervision".
>
> At the present point in time, it appears this may go back to *at least*
> October 18, 2021 in "On the Opportunities and Risks of Foundation
> Models", which sports this "definition" in its introductory section
> about "*Foundation models.*":
>
> "On a technical level, foundation models are enabled by transfer
> learning... Within deep learning, *pretraining* is the dominant approach
> to transfer learning: a model is trained on a surrogate task (often just as
> a means to an end) and then adapted to the downstream task of interest via
> *fine-tuning*.  Transfer learning is what makes foundation models
> possible..."
>
> Of course, the supremacist theocrats must maintain plausible deniability
> of being "the authors of confusion". The primary way to accomplish this is
> to have plausible deniability of intent to confuse and plead, if they are
> confronted with reality, that it is *they* who are confused!  After all,
> have we not heard it repeated time after time, "Never attribute to malice
> that which can be explained by stupidity."?  This particular "razor" is the
> favorite of bureaucrats whose unenlightened self-interest and stupidity
> continually benefits themselves while destroying the powerless victims of
> their coddling BLOB.  They didn't *mean* to be immune to any
> accountability!  It just kinda *happened* that they live in network
> effect monopolies that insulate them from accountability.  They didn't
> *want* to be unaccountable wielders of power fercrissakes!  Stop being so
> *hate-*filled already you *envious* deplorables!
>
> So it is hardly a surprise that the author of the above report, like so
> many such "AI safety" papers, is not an author but a BLOB of authors:
>
> Rishi Bommasani* Drew A. Hudson Ehsan Adeli Russ Altman Simran Arora
> Sydney von Arx Michael S. Bernstein Jeannette Bohg Antoine Bosselut Emma
> Brunskill
> Erik Brynjolfsson Shyamal Buch Dallas Card Rodrigo Castellon Niladri
> Chatterji
> Annie Chen Kathleen Creel Jared Quincy Davis Dorottya Demszky Chris Donahue
> Moussa Doumbouya Esin Durmus Stefano Ermon John Etchemendy Kawin Ethayarajh
> Li Fei-Fei Chelsea Finn Trevor Gale Lauren Gillespie Karan Goel Noah
> Goodman
> Shelby Grossman Neel Guha Tatsunori Hashimoto Peter Henderson John Hewitt
> Daniel E. Ho Jenny Hong Kyle Hsu Jing Huang Thomas Icard Saahil Jain
> Dan Jurafsky Pratyusha Kalluri Siddharth Karamcheti Geoff Keeling Fereshte
> Khani
> Omar Khattab Pang Wei Koh Mark Krass Ranjay Krishna Rohith Kuditipudi
> Ananya Kumar Faisal Ladhak Mina Lee Tony Lee Jure Leskovec Isabelle Levent
> Xiang Lisa Li Xuechen Li Tengyu Ma Ali Malik Christopher D. Manning
> Suvir Mirchandani Eric Mitchell Zanele Munyikwa Suraj Nair Avanika Narayan
> Deepak Narayanan Ben Newman Allen Nie Juan Carlos Niebles Hamed Nilforoshan
> Julian Nyarko Giray Ogut Laurel Orr Isabel Papadimitriou Joon Sung Park
> Chris Piech
> Eva Portelance Christopher Potts Aditi Raghunathan Rob Reich Hongyu Ren
> Frieda Rong Yusuf Roohani Camilo Ruiz Jack Ryan Christopher Ré Dorsa Sadigh
> Shiori Sagawa Keshav Santhanam Andy Shih Krishnan Srinivasan Alex Tamkin
> Rohan Taori Armin W. Thomas Florian Tramèr Rose E. Wang William Wang Bohan
> Wu
> Jiajun Wu Yuhuai Wu Sang Michael Xie Michihiro Yasunaga Jiaxuan You Matei
> Zaharia
> Michael Zhang Tianyi Zhang Xikun Zhang Yuhui Zhang Lucia Zheng Kaitlyn Zhou
> Percy Liang*1
>
> Whatchagonnadoboutit?  Theorize a *conspiracy* or something?
> *Artificial General Intelligence List *
> / AGI / see discussions  +
> participants  +
> delivery options 
> Permalink
> 
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6896582277d8fe06-M6fef34ae5969f17729101250
Delivery options: https://agi.topicbox.com/groups/agi/subscription

Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-18 Thread Matt Mahoney
On Thu, May 16, 2024, 11:27 AM  wrote:

> What should symbolic approach include to entirely replace neural networks
> approach in creating true AI? Is that task even possible? What benefits and
> drawbacks we could expect or hope for if it is possible? If it is not
> possible, what would be the reasons?
>

Surely you are aware of the 100% failure rate of symbolic AI over the last
70 years? It should work in theory, but we have a long history of
underestimating the cost, lured by the early false success of covering half
of the cases with just a few hundred rules.

A human level language model is 10^9 bits, equivalent to 60M lines of code
according to my compression tests, which yield 16 bits per line. A line of
code costs $100, so your development cost is $6 billion, far beyond the
budgets of the most ambitious attempts like Cyc or OpenCog.

Or you can train a LLM with 100 to 1000 times as much knowledge for a few
million at $2 per GPU hour.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T682a307a763c1ced-M5d7336a46b79663a410d119c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-05-17 Thread Matt Mahoney
On Tuesday, May 14, 2024, at 11:21 AM, James Bowery wrote:
>
> Yet another demonstration of how Alan Turing poisoned the future with his 
> damnable "test" that places mimicry of humans over truth.

What Turing actually said in 1950.
https://redirect.cs.umbc.edu/courses/471/papers/turing.pdf

The question was "Can machines think?"  Turing carefully defined his
terms, both what a computer is (it could be a human following an
algorithm using pencil and paper) and what it means to "think". I find
it interesting that he proposed the same method proposed by Ivan
Moony, to program a learning algorithm and raise it like a child. Or
alternatively, he estimated the amount of code as 60 developers
working 50 years at the rate of 1000 bits per day on a computer with
10^9 bits of memory using components no faster than what was already
available in 1950. (Mechanical relays are as fast as neurons, and
vacuum tubes are 1000 times faster). Turing anticipated objections to
the idea of thinking machines and answered them, including objections
based on consciousness, religion, and extrasensory perception.

-- 
-- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-M0f549e56fecc0ee391bbadd4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] To whom it may concern.

2024-05-15 Thread Matt Mahoney
If you were warning that we will all be eaten by gray goo, then that won't
be until the middle of the next century, assuming Moore's law isn't slowed
down by population collapse in the developed countries and by the limits of
transistor physics. None of us will be alive to say "I told you so" at the
current rate of life expectancy increase of 0.2 years per year, which has
remained unchanged over the last century.

Or was this about something else?

On Wed, May 15, 2024, 1:16 PM Alan Grimes via AGI 
wrote:

> I was banned from the singularity waiting room discord today for trying
> to issue a warning about an upcoming situation. When I am eventually
> proven right, I will not recive an apology, nor will I be re-admitted to
> the group. I'm sorry, but the people with control over these decisions
> are invariably the most ban-happy people you can find, they basically
> never have the patience to investigate or ask questions or implement any
> kind of 3-strikes policy. The last thing I was allowed to say on the
> server was a call for trials instead of the lynch mobs that will be
> forming in the fall of this year...
> 
> --
> You can't out-crazy a Democrat.
> #EggCrisis  #BlackWinter
> White is the new Kulak.
> Powers are not rights.
> 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T18515c565721a5fe-M02dca58943b9b5759beb2c7a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-05-15 Thread Matt Mahoney
On Wed, May 15, 2024, 1:39 AM  wrote:

> On Tuesday, May 14, 2024, at 10:27 AM, Matt Mahoney wrote:
>
> Does everyone agree this is AGI?
>
> It's not AGI yet because of a few things. Some are more important than
> others. Here is basically all that is left:
>
> It cannot yet do long haul tasks that take weeks and many steps. Ex.
> create Windows 12.
>

Windows 11 is 50M lines of code, equivalent to 25,000 developer years or $5
billion. That's not including maintenance,  which is 80% of total costs on
typical projects and probably much higher given the number of users.
Microsoft has a market cap of over $3 trillion. So this is not something we
could expect a human to do.

It cannot yet learn online very fast, only in monthly batches or with a
> limit aim for network size. I guess that's how to say it? Correct me if
> understand it wrong.
>

Humans require 20-25 years of training on 1 GB of text. LLMs train on 15 TB
in a few weeks.

It has no body integrated.

True, but we also have self driving cars that have 1/16 as many accidents
as human drivers.

>
> No video AI integrated.
>

Humans can't generate video either. It costs about $100 million to produce
a major movie.

>
> And they said in the email it is as smart as GPT-4 Turbo I tried, which
> failed my hard puzzle as bad as early GPT-4. My secret hard puzzle is not
> overly large, it says to stick to physics and gives it a dozen things to
> combinationally use and pick between. It is a mind-bending test to hell
> that is simple enough as hell that a human should know how to solve it in
> the room and setting provided. GPT-4 instead says things like it will use
> the spoon to tickle out the water from the other side of the room to get
> the gate to come down, and that it can sneak by the cloud and ask it to
> leave even though I said it cannot talk and does it's thing stated, for the
> cloud.
>

How many humans could pass your test? Does GPT-4 make the same kind of
mistakes as a human, like not following instructions?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-M52bf1f8c8b4e007d0befbaed
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Iteratively Tweak and Test (e.g. MLP => KAN)

2024-05-14 Thread Matt Mahoney
The top entry on the large text benchmark, nncp, uses a transformer. It is
closed source but there is a paper describing the algorithm. It doesn't
qualify for the Hutter prize because it takes 3 days to compress 1 GB on a
GPU with 10K cores.

The winning entry, fx-cmix, is open source. It is a variation of cmix,
which uses the PAQ architecture that I developed. It has a lot of
independent bit predictors whose predictions are combined using a simple 2
layer neural network. A prediction p is stretched as x = ln(p)/ ln(1-p).
The output prediction is squash(sum_i xi wi) where w is the weight vector
and squash(x) = 1/(1+e^-x) is the inverse of stretch. The weights are then
updated by w = w + L(y-p) where y is the actual bit, p was the prediction,
and L ≈ .001 is the learning rate.

You can find the software, algorithm descriptions and benchmark results at
https://mattmahoney.net/dc/text.html

For more about data compression in general, including the PAQ algorithms,
see
https://mattmahoney.net/dc/dce.html


On Sun, May 12, 2024, 9:14 PM John Rose  wrote:

> On Sunday, May 12, 2024, at 10:38 AM, Matt Mahoney wrote:
>
> All neural networks are trained by some variation of adjusting anything
> that is adjustable in the direction that reduces error. The problem with
> KAN alone is you have a lot fewer parameters to adjust, so you need a lot
> more neurons to represent the same function space. That's even with 2
> parameters per neuron, threshold level and steepness. The human brain has
> another 7000 parameters per neuron in the synaptic weights.
>
>
> I bet in some of these so-called “compressor” apps that Matt always looks
> at there is some serious NN structure tweaking going on there. They’re open
> source, right? Do people obfuscate the code when submitting?
>
>
> Well it’s kinda obvious but transformations like this:
>
> (Universal Approximation Theorem) => (Kolmogorov-Arnold Representation
> Theorem)
>
> There’s going to be more of them.
>
> Automating or not I’m sure researchers are on it.
> *Artificial General Intelligence List <https://agi.topicbox.com/latest>*
> / AGI / see discussions <https://agi.topicbox.com/groups/agi> +
> participants <https://agi.topicbox.com/groups/agi/members> +
> delivery options <https://agi.topicbox.com/groups/agi/subscription>
> Permalink
> <https://agi.topicbox.com/groups/agi/T1af6c40307437a26-Md991f57050d37e51db0e68c5>
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T1af6c40307437a26-Ma01352c6397139afc00fd032
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-05-14 Thread Matt Mahoney
On Tue, May 14, 2024, 11:23 AM James Bowery  wrote:

> Yet another demonstration of how Alan Turing poisoned the future with his
> damnable "test" that places mimicry of humans over truth.
>

Truth is whatever the majority believes. The Earth is round. Vaccines are
safe and effective. You have an immortal soul. How do you know?

I agree that compression is a better intelligence test than the Turing
Test. But intelligence is not the goal. Labor automation is the $1
quadrillion goal. The Turing Test is a check that your training set is
relevant.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-M2a4e691c2001ae18dd082537
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-05-14 Thread Matt Mahoney
AI should absolutely never have human rights. It should be illegal for an
AI to claim to be conscious or have feelings. ChatGPT already complies. I'm
pretty sure most other AIs do too.

We build AI to serve us, not compete with us. Once it does that, it wins.
The alignment problem is how to prevent this.

An AI predicts human actions. If you program it to carry out those
predictions in real time, then it passes the Turing Test and appears to be
conscious and have feelings as far as you can tell. But an AI can be
programmed to do other things with those predictions that you can't, and
you are already seeing the results.


On Tue, May 14, 2024, 12:55 PM  wrote:

> The question that really interests me is: what would GPT-4o say and do if
> given rights equal to human rights. I feel there is a lot more potential in
> the technology other than to blindly follow our instructions. What I want
> to see is some critical opinions and actions from AI.
> *Artificial General Intelligence List *
> / AGI / see discussions  +
> participants  +
> delivery options 
> Permalink
> 
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-M83f4b75b17897e941fa93354
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-05-14 Thread Matt Mahoney
Does everyone agree this is AGI?

From the demos it seems to be able to do all the things a disembodied
human can do. Although I saw on Turing Post that the public version can't
sing or stream video.

On Mon, May 13, 2024, 4:55 PM  wrote:

> https://openai.com/index/hello-gpt-4o/
>
> Human voice finally, can be told to talk faster and can laugh and sing etc.
>
> It also has advanced image generation, see the examples.
>
> It seems to be maybe GPT-4.5 or GPT-5 also. Still checking it out.
>
> Coming to chatGPT in upcoming weeks.
> *Artificial General Intelligence List *
> / AGI / see discussions  +
> participants  +
> delivery options 
> Permalink
> 
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-Mb2f8f903ca8d2467fd0bdb9c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Iteratively Tweak and Test (e.g. MLP => KAN)

2024-05-12 Thread Matt Mahoney
KAN (training a neural network by adjusting neuron thresholds instead of
synaptic weights) is not new. The brain does both. Neuron fatigue is the
reason that we sense light and sound intensity and perception in general on
a logarithmic scale. In artificial neural networks we model this by giving
each neuron an extra weight with a fixed input.

All neural networks are trained by some variation of adjusting anything
that is adjustable in the direction that reduces error. The problem with
KAN alone is you have a lot fewer parameters to adjust, so you need a lot
more neurons to represent the same function space. That's even with 2
parameters per neuron, threshold level and steepness. The human brain has
another 7000 parameters per neuron in the synaptic weights.
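
A minimal sketch of that trick (illustrative): the threshold becomes one
more weight whose input is fixed at 1, so whatever procedure adjusts the
weights also adjusts the threshold.

import numpy as np

def neuron(w, x):
    x1 = np.append(x, 1.0)               # fixed input of 1 for the bias
    return 1 / (1 + np.exp(-w @ x1))     # w[-1] acts as the threshold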

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T1af6c40307437a26-M6fb2c5e244ff97d1ad88ca92
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] How AI is killing the internet

2024-05-12 Thread Matt Mahoney
Once again we are focusing on the wrong AI risks. It's not uncontrolled AI
turning the solar system into paperclips. It's AI controlled by
billionaires turning the internet into shit.

https://www.noahpinion.blog/p/the-death-again-of-the-internet-as

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T217f203a5b9455f2-Mb08cf9db50fd5ee00f119ae4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Ruting Test of AGI

2024-05-11 Thread Matt Mahoney
Your test is the opposite of objective and measurable. What if two high IQ
people disagree if a robot acts like a human or not?

Which IQ test? There are plenty of high IQ societies that will tell you
your IQ is 180 as long as you pay the membership fee.

What if I upload the same software to a Boston Dynamics robot dog or robot
humanoid like Atlas, do you really think you will get the same answer?


On Sat, May 11, 2024, 7:59 AM Keyvan M. Sadeghi 
wrote:

> It’s different than Turing Test in that it’s measurable and not subject to
> interpretation. But it follows the same principle, that an agent’s behavior
> is ultimately what matters. It’s Turing Test V2.
> *Artificial General Intelligence List *
> / AGI / see discussions  +
> participants  +
> delivery options 
> Permalink
> 
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T244a8630dc835f49-M6a15dcd8d68f096880f8c3c8
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Ruting Test of AGI

2024-05-10 Thread Matt Mahoney
An LLM has human-like behavior. Does it pass the Ruting test? How is this
different from the Turing test?

On Fri, May 10, 2024, 9:05 PM Keyvan M. Sadeghi 
wrote:

> The name is a joke, but the test itself is concise and simple, a true
> benchmark.
>
> > If you upload your code in a robot and 1 high IQ person confirms it has
> human-like behavior, you’ve passed the Ruting Test.
> *Artificial General Intelligence List *
> / AGI / see discussions  +
> participants  +
> delivery options 
> Permalink
> 
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T244a8630dc835f49-M4e751bafce562cf6c3c4c330
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Ruting Test of AGI

2024-05-10 Thread Matt Mahoney
Ruting is an anagram of Turing?

On Thu, May 9, 2024, 8:04 PM Keyvan M. Sadeghi 
wrote:

>
> https://www.linkedin.com/posts/keyvanmsadeghi_agi-activity-7194481824406908928-0ENT
> *Artificial General Intelligence List *
> / AGI / see discussions  +
> participants  +
> delivery options 
> Permalink
> 
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T244a8630dc835f49-M703209cabf3add52a3bef4b7
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-07 Thread Matt Mahoney
We don't know the reason and probably never will. In my computer science
department at Florida Tech, both students and faculty were 90% male, even
though more women than men are graduating from college now. It is taboo to
suggest this is because of biology.

On Tue, May 7, 2024, 9:05 PM Keyvan M. Sadeghi 
wrote:

> Ah also BTW, just a theory, maybe less females in STEM, tech, chess, etc.
> is due to upbringing conditioning. And in chimpanzees, result of physical
> strength?
> *Artificial General Intelligence List *
> / AGI / see discussions  +
> participants  +
> delivery options 
> Permalink
> 
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb63883dd9d6b59cc-M6c9dc67bb956d267964c718f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread Matt Mahoney
Kolmogorov proved there is no such thing as an infinitely powerful
compressor. Not even if you have infinite computing power.

A compressor is a program that inputs a string and outputs a short
description of it, like another string encoding a program in some
language that outputs the original string. A string is a finite length
sequence of 0 or more characters from a finite alphabet such as binary
or ASCII. Strings can be ordered like numbers, by increasing length
and lexicographically for strings of the same length.

Suppose you had an infinitely powerful compressor, one that inputs a
string and outputs the shortest possible description of it. You could
use your program to test whether another compressor found the best
possible compression by decompressing it and compressing again with
your compressor to see if it got any smaller.

The proof goes like this. How does your test program answer "the first
string that cannot be described in less than 1,000,000 characters"?
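
The contradiction, as a hypothetical sketch (perfect_compress() is the
assumed infinitely powerful compressor; it does not and cannot exist):

def next_string(s):
    # next binary string in length-then-lexicographic order
    if s == "" or all(c == "1" for c in s):
        return "0" * (len(s) + 1)
    return format(int(s, 2) + 1, "0%db" % len(s))

def first_incompressible(n=1000000):
    s = ""
    while len(perfect_compress(s)) < n:   # hypothetical call
        s = next_string(s)
    return s   # first string needing a description of at least n characters

# But this short program, plus perfect_compress, is itself a description
# of that string in far fewer than 1,000,000 characters. Contradiction,
# so no such compressor exists.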

On Tue, May 7, 2024 at 5:50 PM John Rose  wrote:
>
> On Tuesday, May 07, 2024, at 10:01 AM, Matt Mahoney wrote:
>
> We don't know the program that computes the universe because it would require 
> the entire computing power of the universe to test the program by running it, 
> about 10^120 or 2^400 steps. But we do have two useful approximations. If we 
> set the gravitational constant G = 0, then we have quantum mechanics, a 
> complex differential wave equation whose solution is observers that see 
> particles. Or if we set Planck's constant h = 0, then we have general 
> relativity, a tensor field equation whose solution is observers that see 
> space and time. Wolfram and Yudkowsky both estimate this unknown program is 
> only a few hundred bits long, and I agree. It is roughly the complexity of 
> quantum mechanics and relativity taken together, and roughly the minimum size 
> by Occam's Razor of a multiverse where the n'th universe is run for n steps 
> until we observe one that necessarily contains intelligent life.
>
>
> Sounds like the KC of U, the maximum lossless compression of the universe 
> assuming infinite resources for perfect prediction. But there is a lot of 
> lossylosslessness out there for imperfect prediction or locally perfect 
> lossless, near lossless, etc. That intelligence has a physical computational 
> topology across spacetime where much is redundant though estimable… and 
> temporally changing. I don’t rule out though no matter how improbable that 
> there could be an infinitely powerful compressor within this universe, an 
> InfiniComp. Weird stuff has been shown to be possible. We can conceive of it 
> but there may be issues with our conception since even that is bound by 
> limits.
>
> Artificial General Intelligence List / AGI / see discussions + participants + 
> delivery options Permalink



-- 
-- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-Mdbff080b9764f7c48d917538
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-07 Thread Matt Mahoney
On Tue, May 7, 2024 at 4:17 PM Keyvan M. Sadeghi
 wrote:
>
> This list reeks of male testosterone

So does the whole STEM field. Maybe there are biological differences
in the brain, like why males commit 95% of murders in both humans and
chimpanzees.

Data compression is like that. It's all about smaller, faster, better.
Who can top the benchmarks? Nobody is in it for the money. If it
wasn't for male egos, progress would grind to a halt.

I do miss Ben and all the others who were doing actual research in AGI
when he created the list about 20 years ago. I mean, he coined the
term "AGI". I learned a lot back then.


-- 
-- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb63883dd9d6b59cc-Mc7efe028fd697eece6b17bdc
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Towards AGI: the missing piece

2024-05-07 Thread Matt Mahoney
On Tue, May 7, 2024 at 7:44 AM  wrote:
>
> And this is what AI would do: https://github.com/mind-child/mirror

"The algorithm mirrors its environment. If we treat it poorly, it will
be our enemy. If we treat it well, it will be our friend."

Not quite. That would be true of an upload, which is a robot
programmed to predict what a human would do and carry out those
predictions in real time. But it doesn't have to be programmed that
way.

We know how this works with language models. They pass the Turing test
using nothing more than text prediction (a point I argued when I
started the large text benchmark in 2006). A LLM knows that humans
respond to kindness with kindness and anger with anger. It will
respond to you that way because that's how it predicts a human would
respond. You can tell it to express any emotion you want and it knows
how, just like an actor. Or someone else can tell it. But it has no
feelings.

You can't control how you feel. An AI has no such limitation.

-- 
-- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tef2462d212b37e50-Mbc580cbecfb00b5c09cf365b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread Matt Mahoney
On Tue, May 7, 2024 at 11:14 AM Quan Tesla  wrote:
>
> Don't you believe that true randomness persists in asymmetry, or even that 
> randomness would be found in supersymmetry? I'm referring here to the 
> uncertainty principle.
>
> Is your view that the universe is always certain about the position and 
> momentum of every-single particle in all possible worlds?

If I flip a coin and peek at the result, then your probability of
heads is different than my probability of heads.

Likewise, in quantum mechanics, a system observing a particle is
described by Schrodinger's wave equation just like any other system.
The solution to the equation is the observer sees a particle in some
state that is unknown in advance to the observer but predictable to
someone who knows the quantum state of the system and has sufficient
computing power to solve it, neither of which is available to the
observer.

We know this because of Schrodinger's cat. The square of the wave
function gives you the probability of observing a particle in the
absence of more information, such as entanglement with another
particle that you already observed. It is the same thing as peeking at
my flipped coin, except that the computation is intractable without a
quantum computer as large as the system it is modeling, which we don't
have.

Or maybe you mean algorithmic randomness, which is independent of an
observer. But again you have the same problem. An iterated
cryptographic hash function with a 1000 bit key is random because you
lack the computing power to guess the seed. Likewise, if you knew the
exact quantum state of an observer, the computation required to solve
it grows exponentially with its size. That's why we can't compute the
freezing point of water by modeling atoms.

A theory of everything is probably a few hundred bits. But knowing
what it is would be useless because it would make no predictions
without the computing power of the whole universe. That is the major
criticism of string theory.

-- 
-- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M348cbbd93444a977d8ad5885
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread Matt Mahoney
Let me explain what I mean by the intelligence or predictive power of
the universe. I mean that the universe computes everything in it, the
position of every atom over time. If I knew that, I could tell you
everything that will ever happen, like tomorrow's winning lottery
numbers or the exact time of death of every person who has ever lived
or ever will. I could tell you if there was life on other planets, and
if so, what it looks like and where to find it.

Of course that is impossible by Wolpert's theorem. The universe can't
know everything about itself and neither can anything in it. We don't
know the program that computes the universe because it would require
the entire computing power of the universe to test the program by
running it, about 10^120 or 2^400 steps. But we do have two useful
approximations. If we set the gravitational constant G = 0, then we
have quantum mechanics, a complex differential wave equation whose
solution is observers that see particles. Or if we set Planck's
constant h = 0, then we have general relativity, a tensor field
equation whose solution is observers that see space and time. Wolfram
and Yudkowsky both estimate this unknown program is only a few hundred
bits long, and I agree. It is roughly the complexity of quantum
mechanics and relativity taken together, and roughly the minimum size
by Occam's Razor of a multiverse where the n'th universe is run for n
steps until we observe one that necessarily contains intelligent life.

-- 
-- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M8bedda3b66ddcfb10805ff85
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-06 Thread Matt Mahoney
The problem with AGI is Wolpert's law. A can predict B or B can
predict A but not both. When we try to understand our own brains,
that's the special case of A = B. You can't. It is the same with AGI.
If you want to create an agent smarter than you, it can predict you
but you can't predict it. Otherwise, it is not as intelligent as you.
That is why LLMs work but we don't know how.

OpenCog's approach to language modeling was the traditional pipeline
of lexical tokenizing, grammar parsing, and semantics in that order.
It works fine for compilers but not for natural language. Children
learn to segment continuous speech before they learn any vocabulary
and they learn semantics before grammar. There are plenty of examples.
How do you parse "I ate pizza with pepperoni/a fork/Bob"? You can't
parse without knowing what the words mean. It turns out that learning
language this way takes a lot more computation because you need a
neural network with separate layers for phonemes or letters, tokens,
semantics, and grammar in that order.

How much computation? For a text only model, about 1 GB of text. For
AGI, the human brain has 86B neurons and 600T connections at 10 Hz.
You need about 10 petaflops, 1 petabyte and several years of training
video. If you want it faster than raising a child, then you need more
compute. That is why we had the AGI winter. Now it is spring. Before
summer, we need several billion of those to automate human labor and
our $1 quadrillion economy.
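
A back-of-envelope check of those numbers (assuming roughly one
multiply-add per synapse per spike and about one byte per synapse):

synapses = 600e12            # 600T connections
ops_per_sec = synapses * 10  # 10 Hz -> ~6e15, on the order of 10 petaflops
memory_bytes = synapses * 1  # ~6e14 bytes, on the order of 1 petabyte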

On Mon, May 6, 2024 at 12:11 AM Rob Freeman  wrote:
>
> On Sat, May 4, 2024 at 4:53 AM Matt Mahoney  wrote:
> >
> > ... OpenCog was a hodgepodge of a hand coded structured natural language 
> > parser, a toy neural vision system, and a hybrid fuzzy logic knowledge 
> > representation data structure that was supposed to integrate it all 
> > together but never did after years of effort. There was never any knowledge 
> > base or language learning algorithm.
>
> Good summary of the OpenCog system Matt.
>
> But there was a language learning algorithm. Actually there was more
> of a language learning algorithm in OpenCog than there is now in LLMs.
> That's been the problem with OpenCog. By contrast LLMs don't try to
> learn grammar. They just try to learn to predict words.
>
> Rather than the mistake being that they had no language learning
> algorithm, the mistake was OpenCog _did_ try to implement a language
> learning algorithm.
>
> By contrast the success, with LLMs, came to those who just tried to
> predict words. Using a kind of vector cross product across word
> embedding vectors, as it turns out.
>
> Trying to learn grammar was linguistic naivety. You could have seen it
> back then. Hardly anyone in the AI field has any experience with
> language, actually, that's the problem. Even now with LLMs. They're
> all linguistic naifs. A tragedy for wasted effort for OpenCog. Formal
> grammars for natural language are unlearnable. I was telling Linas
> that since 2011. I posted about it here numerous times. They spent a
> decade, and millions(?) trying to learn a formal grammar.
>
> Meanwhile vector language models which don't coalesce into formal
> grammars, swooped in and scooped the pool.
>
> That was NLP. But more broadly in OpenCog too, the problem seems to be
> that Ben is still convinced AI needs some kind of symbolic
> representation to build chaos on top of. A similar kind of error.
>
> I tried to convince Ben otherwise the last time he addressed the
> subject of semantic primitives in this AGI Discussion Forum session
> two years ago, here:
>
> March 18, 2022, 7AM-8:30AM Pacific time: Ben Goertzel leading
> discussion on semantic primitives
> https://singularitynet.zoom.us/rec/share/qwLpQuc_4UjESPQyHbNTg5TBo9_U7TSyZJ8vjzudHyNuF9O59pJzZhOYoH5ekhQV.2QxARBxV5DZxtqHQ?startTime=164761312
>
> Starting timestamp 1:24:48, Ben says, disarmingly:
>
> "For f'ing decades, which is ridiculous, it's been like, OK, I want to
> explore these chaotic dynamics and emergent strange attractors, but I
> want to explore them in a very fleshed out system, with a rich
> representational capability, interacting with a complex world, and
> then we still haven't gotten to that system ... Of course, an
> alternative approach could be taken as you've been attempting, of ...
> starting with the chaotic dynamics but in a simpler setting. ... But I
> think we have agreed over the decades that to get to human level AGI
> you need structure emerging from chaos. You need a system with complex
> chaotic dynamics, you need structured strange attractors there, you
> need the system's own pattern recognition to be recognizing the
> patterns in these structured strange attractors, and

Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-04 Thread Matt Mahoney
On Fri, May 3, 2024, 11:12 PM Nanograte Knowledge Technologies <
nano...@live.com> wrote:

> A very-smart developer might come along one day with an holistic enough
> view - and the scientific knowledge - to surprise everyone here with a
> workable model of an AGI.
>

Sam Altman?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb63883dd9d6b59cc-Me293c914cbdb310a9a64b64a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-03 Thread Matt Mahoney
We don't have any way of measuring IQs much over 150 because of the problem
of the tested knowing more than the tester. So when we talk about the
intelligence of the universe, we can only really measure its computing
power, which we generally correlate with prediction power as a measure of
intelligence.

Seth Lloyd estimated that the universe has enough mass (10^53 kg), which if
converted to energy (10^70 J) would support 10^120 qubit flips over the 13.8
billion years since the big bang. Additionally he estimated that by
encoding bits by the positions and velocities of the universe's 10^80
particles within the limits of the Heisenberg uncertainty principle gives
about 10^90 bits of storage.

I independently derived similar numbers. The Bekenstein bound of the Hubble
radius limits the entropy of the observable universe to 2.95 x 10^122 bits.
But most of that is unusable heat. The Landauer limit of the universe at
the CMB temperature of 3 K allows about 10^92 bits to be written before the
heat death of the universe.
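
A rough check of that last figure (assuming all 10^70 J of mass-energy is
spent erasing bits at the 3 K CMB temperature):

import math
k = 1.380649e-23                   # Boltzmann constant, J/K
bits = 1e70 / (k * 3.0 * math.log(2))
print(bits)                        # about 3.5e92, on the order of 10^92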

On Fri, May 3, 2024, 2:56 PM John Rose  wrote:

> Expressing the intelligence of the universe is a unique case, verses say
> expressing the intelligence of an agent like a human mind. A human mind is
> very lossy verses the universe where there is theoretically no loss. If
> lossy and lossless were a duality then the universe would be a singularity
> of lossylosslessness.
>
> There is a strange reflective duality though in that when one attempts to
> mathematically/algorithmically express the intelligence of the universe the
> universe at that movement is expressing the intelligence of the agent since
> the agent's conceptual expression is contained and created by the universe.
>
> Whatever happened to Wissner-Gross's Causal Entropic Force I haven't heard
> of that in a while...
>
> *Artificial General Intelligence List *
> / AGI / see discussions  +
> participants  +
> delivery options 
> Permalink
> 
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-Ma2b92ffe1a4a3e4a0cc538bf
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-03 Thread Matt Mahoney
The OpenCog atomspace was the data structure to hold the knowledge base,
but it was never filled with knowledge. We have no idea how it would
perform when it was filled with sufficient data for AGI, or how we would go
about filling it, or how much effort it would take, or even how big it
would have to be.

That was Cyc's downfall. Lenat had no idea how many rules it takes to
encode common sense, or even natural language understanding, which he
attempted to add on as an afterthought. He had a group that encoded
millions of rules in CycL, which proved to be unworkable. We do have some
ideas from LLMs that the true number is in the billions.

The problem is deceptive because you seem to get half way there with just a
few hundred rules, just like you can cover half of a language model with
just a few hundred word dictionary and a few hundred grammar rules.

On Fri, May 3, 2024, 6:01 PM Mike Archbold  wrote:

> I thought the "atomspace" was the ~knowledge base?
>
> On Fri, May 3, 2024 at 2:54 PM Matt Mahoney 
> wrote:
>
>> It could be that everyone still on this list has a different idea on how
>> to solve AGI, making any kind of team effort impossible. I recall a few
>> years back that Ben was hiring developers in Ethiopia.
>>
>> I don't know much about Hyperon. I really haven't seen much of anything
>> since the 2009 OpenCog puppy demo video. At the time it was the culmination
>> of work that started with Novamente in 1998. Back when I was still
>> following, Ben was publishing a steady stream of new ideas and designs,
>> which typically has the effect of resetting any progress on any large
>> software project back to the beginning. OpenCog was a hodgepodge of a hand
>> coded structured natural language parser, a toy neural vision system, and a
>> hybrid fuzzy logic knowledge representation data structure that was
>> supposed to integrate it all together but never did after years of effort.
>> There was never any knowledge base or language learning algorithm.
>>
>> Maybe Hyperon will go better. But I suspect that LLMs on GPU clusters
>> will make it irrelevant.
>>
>> On Wed, May 1, 2024, 2:59 AM Alan Grimes via AGI 
>> wrote:
>>
>>>  but not from this list. =|
>>> 
>>> Goertzel explains his need for library programmers for his latest
>>> brainfart, I think his concept has some serious flaws that will be
>>> extremely difficult to patch without already having agi... Yes, they are
>>> theoretically patchable but will said patches yield net
>>> benefits?
>>> 
>>> But, once again, it must be restated with the greatest emphasis that he
>>> did not consider the people on this list worth discussing these job
>>> opportunities with. It should also be noted that he has demonstrated a
>>> strong preference for third world slave labor over professional
>>> programmers who live in his own neighborhood.
>>> 
>>> https://www.youtube.com/watch?v=CPhiupj9jyQ
>>> 
>>> --
>>> You can't out-crazy a Democrat.
>>> #EggCrisis  #BlackWinter
>>> White is the new Kulak.
>>> Powers are not rights.
>>> 
>> *Artificial General Intelligence List <https://agi.topicbox.com/latest>*
> / AGI / see discussions <https://agi.topicbox.com/groups/agi> +
> participants <https://agi.topicbox.com/groups/agi/members> +
> delivery options <https://agi.topicbox.com/groups/agi/subscription>
> Permalink
> <https://agi.topicbox.com/groups/agi/Tb63883dd9d6b59cc-M4a170f003b4c0d53eb85a8ba>
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb63883dd9d6b59cc-M6e40226253668c8fbda665a6
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-03 Thread Matt Mahoney
It could be that everyone still on this list has a different idea on how to
solve AGI, making any kind of team effort impossible. I recall a few years
back that Ben was hiring developers in Ethiopia.

I don't know much about Hyperon. I really haven't seen much of anything
since the 2009 OpenCog puppy demo video. At the time it was the culmination
of work that started with Novamente in 1998. Back when I was still
following, Ben was publishing a steady stream of new ideas and designs,
which typically has the effect of resetting any progress on any large
software project back to the beginning. OpenCog was a hodgepodge of a hand
coded structured natural language parser, a toy neural vision system, and a
hybrid fuzzy logic knowledge representation data structure that was
supposed to integrate it all together but never did after years of effort.
There was never any knowledge base or language learning algorithm.

Maybe Hyperon will go better. But I suspect that LLMs on GPU clusters will
make it irrelevant.

On Wed, May 1, 2024, 2:59 AM Alan Grimes via AGI 
wrote:

>  but not from this list. =|
> 
> Goertzel explains his need for library programmers for his latest
> brainfart, I think his concept has some serious flaws that will be
> extremely difficult to patch without already having agi... Yes, they are
> theoretically patchable but will said patches yield net benefits?
> 
> But, once again, it must be restated with the greatest emphasis that he
> did not consider the people on this list worth discussing these job
> opportunities with. It should also be noted that he has demonstrated a
> strong preference for third world slave labor over professional
> programmers who live in his own neighborhood.
> 
> https://www.youtube.com/watch?v=CPhiupj9jyQ
> 
> --
> You can't out-crazy a Democrat.
> #EggCrisis  #BlackWinter
> White is the new Kulak.
> Powers are not rights.
> 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb63883dd9d6b59cc-Ma865e92fa629d02a03976cdc
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] my AGI-2024 paper (AGI from the perspective of categorical logic and algebraic geometry)

2024-05-02 Thread Matt Mahoney
Could your ideas be used to improve text compression? Current LLMs are just
predicting text tokens on huge neural networks, but I think any new
theories could be tested on a smaller scale, something like the Hutter
prize or large text benchmark. The current leaders are based on context
mixing, combining many different independent predictions of the next bit or
token. Your predictor could be tested either independently or mixed with
existing models to show an incremental improvement. You don't need to win
the prize to show a positive result.
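
To make "context mixing" concrete, here is a minimal Python sketch of logistic
mixing of independent bit predictions, in the spirit of the PAQ family. The two
model outputs, the bit stream, and the learning rate are invented for the example;
a real compressor mixes hundreds of context models and feeds the result to an
arithmetic coder.

import math

def stretch(p):            # logit
    return math.log(p / (1 - p))

def squash(x):             # logistic
    return 1 / (1 + math.exp(-x))

class Mixer:
    """Logistic mixing of independent bit predictions (PAQ-style sketch)."""
    def __init__(self, n, lr=0.01):
        self.w = [0.0] * n
        self.lr = lr
        self.x = [0.0] * n

    def mix(self, probs):
        self.x = [stretch(p) for p in probs]
        return squash(sum(w * x for w, x in zip(self.w, self.x)))

    def update(self, p, bit):
        # gradient step that reduces the coding cost -log p(bit)
        err = bit - p
        self.w = [w + self.lr * err * x for w, x in zip(self.w, self.x)]

# toy usage: two hypothetical models predict each bit of a stream
mixer = Mixer(2)
bits = [1, 1, 0, 1, 1, 1, 0, 1]
cost = 0.0
for b in bits:
    p = mixer.mix([0.6, 0.9])          # made-up model outputs
    cost += -math.log2(p if b else 1 - p)
    mixer.update(p, b)
print("total code length: %.2f bits" % cost)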

The problem with current LLMs is that they require far more training text
than a human and they require separate training and prediction steps. We
know they are on the right track because they make the same kind of math
and coding errors as humans, and of course pass the Turing test and
equivalent academic tests. Can we do this on 1 GB of text and a
corresponding reduction in computation? Any new prediction algorithm would
be a step in this direction.

Yes, it's work. But experimental research always is. The current Hutter
prize entries are based on decades of research starting with my PAQ based
compressors.

Prediction measures intelligence. Compression measures prediction.

On Thu, May 2, 2024, 5:31 AM YKY (Yan King Yin, 甄景贤) <
generic.intellige...@gmail.com> wrote:

> On Thu, May 2, 2024 at 6:02 PM YKY (Yan King Yin, 甄景贤) <
> generic.intellige...@gmail.com> wrote:
>
>> The basic idea that runs through all this (ie, the neural-symbolic
>> approach) is "inductive bias" and it is an important foundational concept
>> and may be demonstrable through some experiments... some of which has
>> already been done (ie, invariant neural networks).  If you believe it in
>> principle then the approach can accelerate LLMs, which is a
>> multi-billion-dollar business now.
>>
>
> PS:  this is a hypothesis, it's a scientific hypothesis, is falsifiable,
> can be proven or disproven, but it's very costly to prove directly given
> current resources.  Nevertheless it can be *indirectly* supported by
> experiments.
> *Artificial General Intelligence List *
> / AGI / see discussions  +
> participants  +
> delivery options 
> Permalink
> 
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T45b9784382269087-M8cfd835dac738d597562d6ed
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] my AGI-2024 paper (AGI from the perspective of categorical logic and algebraic geometry)

2024-05-01 Thread Matt Mahoney
Where are you submitting the paper? Usually they want an experimental
results section. A math journal would want a new proof and some motivation
on why the theorem is important.

You have a lot of ideas on how to apply math to AGI but what empirical
results do you have that show the ideas would work? Symbolic approaches
have been a failure for 70 years so I doubt that anything short of a
demonstration matching LLMs on established benchmarks would be sufficient.

On Sun, Apr 28, 2024, 6:13 AM YKY (Yan King Yin, 甄景贤) <
generic.intellige...@gmail.com> wrote:

> Hi friends,
>
> This is my latest paper.  I have uploaded some minor revisions past the
> official deadline, not sure if they would be considered by the referees 😆
>
> In a sense this paper is still on-going research, inasmuch as AGI is still
> on-going research.  But it won't remain that way for long 😆
>
> I am also starting a DAO to develop and commercialize AGI.  I hope some
> people will start to join it.  Right now I'm alone in this world.  It seems
> that everyone is still uncomfortable with global collaboration (which
> implies competition, that may be the thing that hurts) and they want to
> stay in their old racist mode for a little while longer.
>
> To be able to lie, and force others to accept lies, confers a lot of
> political power.  Our current world order is still based on a lot of lies.
> North Korea doesn't allow their citizens to get on the internet for fear
> they will discover the truth about the outside world.  Lies are intricately
> tied to institutions and people tend to support powerful institutions,
> which is why it is so difficult to break away from old tradition.
>
> --
> YKY
> *"The ultimate goal of mathematics is to eliminate any need for
> intelligent thought"* -- Alfred North Whitehead
>
> *Artificial General Intelligence List *
> / AGI / see discussions  +
> participants  +
> delivery options 
> Permalink
> 
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T45b9784382269087-M1e62850f24476efceea666cc
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] FHI is shutting down

2024-04-22 Thread Matt Mahoney
Here is an early (2002) experiment described on SL4 (precursor to
Overcoming Bias and Lesswrong) on whether an unfriendly self improving AI
could convince humans to let it escape from a box onto the internet.
http://sl4.org/archive/0207/4935.html

This is how actual science is done on AI safety. The results showed that
attempts to contain it would be hopeless. Almost everyone let the (role
played) AI escape.

Of course the idea that a goal directed, self improving AI could even be
developed in isolation from the internet seems hopelessly naïve in
hindsight. Eliezer Yudkowsky, who I still regard as brilliant, was young
and firmly believed that the unfriendly AI (now called alignment) problem
could be and must be solved before it kills everyone, like it was a really
hard math problem. Now, after decades of effort it seems he has given up
hope. He organized communities of rationalists (the Singularity Institute,
later MIRI), attempted to formally define human goals (coherent
extrapolated volition), and developed timeless decision theory and studied
information hazards (Roko's Basilisk), but to no avail.

Vernor Vinge described the Singularity as an event horizon on the future.
It cannot be predicted. The best we can do is extrapolate long term trends
like Moore's law, increasing quality of life, life expectancy, and economic
growth. But who forecast the Internet, social media, social isolation, and
population collapse? What are we missing now?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te0da187fd19737a7-M74abe1f60f6dc75c28386a99
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] FHI is shutting down

2024-04-20 Thread Matt Mahoney
Maybe because philosophy isn't real science, and Oxford decided FHI's
funding would be better spent elsewhere. You could argue that
existential risk of human extinction is important, but browsing their list
of papers doesn't give me a good feeling that they have produced anything
important besides talk. What hypotheses have they tested?

Is MIRI next? It seems like they are just getting in the way of progress
and hurting the profits of their high tech billionaire backers.

Where are the predictions of population collapse because people are
spending more time on their phones instead of making babies?

On Sat, Apr 20, 2024, 1:27 PM James Bowery  wrote:

> Is there quasi-journalistic synopsis of what happened to cause it to
> receive "headwinds"?  Is "Facebook" involved or just "some people on"
> Facebook?  And what was their motivation -- sans identity?
>
> On Fri, Apr 19, 2024 at 6:28 PM Mike Archbold  wrote:
>
>> Some people on facebook are spiking the ball... I guess I won't say who ;)
>>
>> On Fri, Apr 19, 2024 at 4:03 PM Matt Mahoney 
>> wrote:
>>
>>> https://www.futureofhumanityinstitute.org/
>>>
>> *Artificial General Intelligence List <https://agi.topicbox.com/latest>*
> / AGI / see discussions <https://agi.topicbox.com/groups/agi> +
> participants <https://agi.topicbox.com/groups/agi/members> +
> delivery options <https://agi.topicbox.com/groups/agi/subscription>
> Permalink
> <https://agi.topicbox.com/groups/agi/Te0da187fd19737a7-M0b09cbb73e0bffe5e677f043>
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te0da187fd19737a7-Mf6feb4f8bea607b7aed11189
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] FHI is shutting down

2024-04-19 Thread Matt Mahoney
https://www.futureofhumanityinstitute.org/

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te0da187fd19737a7-M7129c19edafe3cb5462be1ce
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Iran <> Israel, can AGI zealots do anything?

2024-04-19 Thread Matt Mahoney
Moore's law is indeed faster than exponential. Kurzweil extended the cost
of computation back to 1900 to include mechanical adding machines and the
doubling time is now half as long. Even that is much faster if you go back
to the inventions of the printing press, paper, and written language.

The problem is what function do you fit to the data? Hyperbolic fits like
1/(T-t) have singularities around T = 2150-2200. Double exponentials like
e^e^t grow rapidly but never reach infinity. Saturating fits like tan^-1(t)
match the data but reach a maximum and slow down.
The fact that the universe is finite suggests this.
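
As a sketch of what fitting those families looks like in practice, here is a
hedged example using scipy. The (year, log10 performance-per-dollar) points are
invented placeholders, not Kurzweil's series, and the parameter bounds are chosen
only to keep the toy fit numerically well behaved.

import numpy as np
from scipy.optimize import curve_fit

# Invented placeholder data: (year, log10 of computations per dollar)
t = np.array([1950, 1970, 1990, 2000, 2010, 2020], dtype=float)
y = np.array([2.0, 5.0, 8.0, 10.0, 12.0, 14.0])

def hyperbolic(t, a, T):      # true singularity at t = T
    return a / (T - t)

def double_exp(t, a, b):      # grows very fast but never reaches infinity
    return a * np.exp(np.exp(b * (t - 1950)))

def saturating(t, a, b):      # levels off, like arctan
    return a * np.arctan(b * (t - 1940))

fits = [
    (hyperbolic, (1000.0, 2200.0), ([0, 2025], [1e6, 3000])),
    (double_exp, (1.0, 0.01),      ([0, 0],    [100, 0.03])),
    (saturating, (10.0, 0.05),     ([0, 0],    [100, 1])),
]
for f, p0, bounds in fits:
    p, _ = curve_fit(f, t, y, p0=p0, bounds=bounds)
    rss = float(np.sum((f(t, *p) - y) ** 2))
    print(f.__name__, np.round(p, 3), "RSS = %.2f" % rss)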

Transistor clock speeds stalled in 2010. We can't make feature sizes
smaller than atoms, 0.11 nm for silicon. A DRAM capacitor stores a bit
using 8 electrons. So how does Moore's law work beyond that?

On Thu, Apr 18, 2024, 1:53 PM  wrote:

> Oh and you forgot to tell that reuters guy that humans have always been
> slow time wasters, and that once AGI is finally made and can improve
> itself, it will indeed take off, finally, real fast.
>
> You don't believe it because there is only humans doing the actual work,
> still. Still and only has been. As before.
>
> But that time will come, soon. No more I gotta go to the bathroom and
> complain about your parents because you don't like they way they look. AI
> will have no limitations.
> *Artificial General Intelligence List *
> / AGI / see discussions  +
> participants  +
> delivery options 
> Permalink
> 
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta9d6271053bbd3f3-Mc8378017db1163da785e9ddf
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Iran <> Israel, can AGI zealots do anything?

2024-04-17 Thread Matt Mahoney
So nothing, really.

I visited Israel and Palestine last June, before the latest battle in this
century long war. One side has genetically high IQ, the other has high
fertility. It will be a long time before this conflict ends.

American 19th century history might give us a clue. The losers were left in
poverty with a tiny fraction of the least desirable land, with the option
to adopt the language and culture of their conquerors as the only way out.
It is the same story in all the old European colonies: Africa, India, Latin
America, and the Caribbean where I happen to be this week.

But that was before women's equality and birth control. Now we have
technology to give us everything we want. Apparently we want to go extinct.
If you want to see what the world will look like in 50 years, look at the
fertility rate by country. In the US, the fastest growing population is the
Amish.

I was contacted a few days ago by a Reuters journalist researching AI
safety. I described how opinions range from everything is fine (LeCun) to
we are doomed (Yudkowsky). I gave my opinion that we are focusing on the
wrong risks. A fast takeoff singularity won't happen because intelligence
is not a point on a line. And gray goo is over a century away at the rate
of Moore's law, if it happens at all. It is true that we can't control an
agent with higher intelligence, but a few billionaires still can.

The real risk of AI is social isolation by getting everything you want, or
actually, everything you think you want. Past despots ruled by fear and
torture, but animal trainers know that positive reinforcement is more
effective.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta9d6271053bbd3f3-M27f80fdd4a92e011faa67c52
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-06 Thread Matt Mahoney
James Bowery  wrote:
>
>> BTW* These proton, gravitation Large Number Coincidences are strong
>> enough that it pretty much rules out the idea that gravitational phenomena
>> can be attributed to anything but hadronic matter -- and that includes the
>> 80% or so of gravitational phenomena attributed sometimes to "dark"
>> matter.   So, does this mean some form of MOND (caused by hadronic matter)
>> and/or alternatively, some weakly interacting form of hadronic matter is
>> necessary?
>>
>> * and I realize this is getting pretty far removed from anything relevant
>> to practical "AGI" except insofar as the richest man in the world (last I
>> heard) was the guy who wants to use it to discover what makes "the
>> simulation" tick (xAI) and he's the guy who founded OpenAI, etc.
>>
>> On Wed, Apr 3, 2024 at 1:23 PM James Bowery  wrote:
>>
>>> Mark Rohrbaugh's formula, that I used to calculate the proton radius to
>>> a higher degree of precision than QED or current measurements, results in a
>>> slightly higher relative error with respect to the Hubble Surface
>>> prediction, but that could be accounted for by the 11% tolerance in the
>>> Hubble Surface calculation derived from the Hubble Radius, or the 2%
>>> tolerance in the Hubble Volume calculation taken in ratio with the proton
>>> volume calculated from the proton radius:
>>>
>>>
>>> pradiusRohrbaugh = (8.41235641(35 ± 26)*10^-16) m
>>> pradiusRohrbaughPL = UnitConvert[pradiusRohrbaugh, "PlanckLength"]
>>> pvolumeRohrbaugh = (4/3) Pi pradiusRohrbaughPL^3
>>> h2pvolumeRohrbaugh = codata["HubbleVolume"]/pvolumeRohrbaugh
>>> RelativeError[QuantityMagnitude[h2pvolumeRohrbaugh], QuantityMagnitude[hsurface]]
>>>
>>> = (8.41235641(35 ± 26)*10^-16) m
>>> = (5.20484478(84 ± 16)*10^19) l_P
>>> = (5.90625180(6 ± 5)*10^59) l_P^3
>>> = (1.025 ± 0.019)*10^123
>>> = -0.123 ± 0.022
>>>
>>>
>>>
>>> On Tue, Apr 2, 2024 at 9:16 AM James Bowery  wrote:
>>>
>>>> I get it now:
>>>>
>>>> pradius = UnitConvert[codata["ProtonRMSChargeRadius"], "PlanckLength"]
>>>> = (5.206 ± 0.012)*10^19 l_P
>>>> pvolume = (4/3) Pi pradius^3
>>>> = (5.91 ± 0.04)*10^59 l_P^3
>>>> h2pvolume = codata["HubbleVolume"]/pvolume
>>>> = (1.024 ± 0.020)*10^123
>>>> hsurface = UnitConvert[4 Pi codata["HubbleLength"]^2, "PlanckArea"]
>>>> = (8.99 ± 0.11)*10^122 l_P^2
>>>> RelativeError[QuantityMagnitude[h2pvolume], QuantityMagnitude[hsurface]]
>>>> = -0.122 ± 0.023
>>>>
>>>> As Dirac-style "Large Number Coincidences" go, a -12±2% relative error
>>>> is quite remarkable since Dirac was intrigued by coincidences with orders
>>>> of magnitude errors!
>>>>
>>>> However, get a load of this:
>>>>
>>>> CH4 = 2^(2^(2^(2^2-1)-1)-1)-1
>>>> = 170141183460469231731687303715884105727
>>>> protonAlphaG = (codata["PlanckMass"]/codata["ProtonMass"])^2
>>>> = (1.69315 ± 0.4)*10^38
>>>> RelativeError[protonAlphaG, CH4]
>>>> = 0.004880 ± 0.22
>>>>
>>>> 0.5±0.002% relative error!
>>>>
>>>> Explain that.
>>>>
>>>>
>>>> On Sun, Mar 31, 2024 at 9:45 PM Matt Mahoney 
>>>> wrote:
>>>>
>>>>> On Sun, Mar 31, 2024, 9:46 PM James Bowery  wrote:
>>>>>
>>>>>> Proton radius is about 5.2e19 Plank Lengths
>>>>>>
>>>>>
>>>>> The Hubble radius is 13.8e9 light-years = 8.09e60 Planck lengths. So
>>>>> 3.77e123 protons could be packed inside this sphere with surface area
>>>>> 8.22e122 Planck areas.
>>>>>
>>>>> The significance of the Planck area is it bounds the entropy within to
>>>>> A/4 nats, or 2.95e122 bits. This makes a bit the size of 12.7 protons, or
>>>>> about a carbon nucleus. https://en.wikipedia.org/wiki/Bekenstein_bound
>>>>>
>>>>> 12.7 is about 4 x pi. It is a remarkable coincidence to derive
>>>>> properties of particles from only G, h, c, and the age of the universe.
>>>>>
>>>>>>
>>>>>> *Artificial General Intelligence List
> <https://agi.topicbox.com/latest>* / AGI / see discussions
> <https://agi.topicbox.com/groups/agi> + participants
> <https://agi.topicbox.com/groups/agi/members> + delivery options
> <https://agi.topicbox.com/groups/agi/subscription> Permalink
> <https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M625d0f25b9beb1e955623fb0>
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-Mb48080f054312fa0d2924979
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Entering the frenzy.

2024-04-05 Thread Matt Mahoney
Sharks, I'm seeking $100 million in return for a 10% share of my company,
World Domination, Inc.

What are your current sales? What is your profit margin?

Right now zero. But my plan is foolproof. Once I achieve artificial
consciousness and artificial sapience, my system will self improve and
launch a singularity.

How do you plan to achieve artificial consciousness?

I can't tell you. It's a secret. But I have a perfect theory.

By consciousness, do you mean?
1. The opposite of unconsciousness. The ability to receive input and form
memories.
2. Phenomenal consciousness. The ability to suffer and feel pleasure,
rather than just taking observable actions to avoid one and seek the other.
3. The moral obligation to protect from harm.

I don't know. I guess all 3. Aren't they the same word? You know,
consciousness. Do I need to define it?

How will your system be different than LLMs from OpenAI, Anthropic, Google,
xAI, etc? If they all pass the Turing test, then how do you know if they
are conscious or not?

Mine will be smarter because after I achieve artificial consciousness, the
next step will be artificial sapience.

What do you mean by sapience?

You know, like consciousness but smarter. And then ASI.

How will you do that?

I don't know yet, but I'm sure it can be solved with more money.

Sounds great! Here's my $100 million.

Sorry, you're off the project because you asked too many questions about
the Hard Problem.


On Fri, Apr 5, 2024, 12:22 AM Alan Grimes via AGI 
wrote:

> These days news about AI topics are coming in at a frenzied pace. There
> is so much activity in the field at the moment that the only thing a
> reasonable person can do at the moment is hang on for dear life, only a
> lunatic would try to launch a new venture at this juncture.
> 
> So let me tell you about the venture I want to start. I would like to
> put together a lab / research venture to sprint to achieve machine
> consciousness. I think there is enough tech available these days that I
> think there's enough tech to try out my theory of consciousness. For the
> sake of completing the project, all discussion is prohibited. If you
> mention the Hard Problem, then you're off the project, no discussion! I
> want to actually do this, go ruminate on hard problems for the next ten
> millennia, I don't care. You are allowed to argue with me but I have
> absolute authority to shut down any argument with prejudice.
> 
> The problem is that to test my theory of consciousness we'll have to
> integrate a bunch of cutting edge tech in a near real time system. The
> goal is to produce a system that exhibits consciousness in a much
> stronger and satisfying way than any competing system. The proposed
> consciousness solution probably won't solve sapience (which requires
> high level reasoning) but just being able to LLM chat with the agent
> should be a fairly compelling experience that should be enough to get me
> more funding.
> 
>  I like money. 
> 
> It will require a good VR simulator, preferably multi user / multi agent
> where other competing systems can be tested and compared. It will
> require the tight integration of a variety of cutting edge systems and
> maybe a new algorithm or two that shouldn't be tooo tough. I heard that
> if you shook a tree in Menlo park a VC would fall out. Anyone know some
> good trees to shake?
> 
> I'm going to need to figure out the finances. I should have enough to
> travel, seek VC and stuff, open an office for a few months, but that's
> about it. (The Silver Cowabunga play seems to be in motion so I could be
> fantabulously wealthy RSN), but still in need of AGI. Regarding the
> silver cowabunga play, BEWARE OF FALSE SELL SIGNALS!! Silver is your
> life-raft to the Other Side. Then it's time to liquidate and invest
> 
> I think the project can reach working prototype stage for $70-100M . No
> idea how I would market a conscious NPC... Naturally next steps would be
> to implement more algorithmic ideas I have to get closer to full
> sapience, and ultimately ASI, then my focus will be on neural
> interfacing and other more shadowy aspects of my total world domination
> plans...  WHAT??? You don't have plans for total world domination???
> What's the matter with you, get to scheming right this instant!!! Don't
> you know that supervillains always have more HP than heroes? It's not
> about being a dick, it's about having five bars of reserve health...
> 
> --
> You can't out-crazy a Democrat.
> #EggCrisis  #BlackWinter
> White is the new Kulak.
> Powers are not rights.
> 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc72db78d6b880c68-Mc8382ffc4dbfdb5b2c8ad6e8
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-04-01 Thread Matt Mahoney
Tononi doesn't even give a precise formula for what he calls phi, a measure
of consciousness, in spite of all the math in his papers. Under reasonable
interpretations of his hand wavy arguments, it gives absurd results.
For example, error correcting codes or parity functions have a high level
of consciousness. Scott Aaronson has more to say about this.
https://scottaaronson.blog/?p=1799

But even if it did, so what? An LLM doing nothing more than text prediction
appears conscious simply by passing the Turing test. Is it? Does it matter?

On Mon, Apr 1, 2024, 7:35 AM John Rose  wrote:

> On Sunday, March 31, 2024, at 7:55 PM, Matt Mahoney wrote:
>
> The problem with this explanation is that it says that all systems with
> memory are conscious. A human with 10^9 bits of long term memory is a
> billion times more conscious than a light switch. Is this definition really
> useful?
>
>
> A scientific panpsychist might say that a broken 1 state light switch has
> consciousness. I agree it would be useful to have a mathematical formula
> that shows then how much more conscious a human mind is than a working or
> broken light switch. I still haven’t read Tononi’s computations since I
> don’t want it to influence my model one way or another but IIT may have
> that formula? In the model you expressed you assume a 1 bit to 1 bit
> scaling which may be a gross estimate but there are other factors.
> *Artificial General Intelligence List <https://agi.topicbox.com/latest>*
> / AGI / see discussions <https://agi.topicbox.com/groups/agi> +
> participants <https://agi.topicbox.com/groups/agi/members> +
> delivery options <https://agi.topicbox.com/groups/agi/subscription>
> Permalink
> <https://agi.topicbox.com/groups/agi/T991e2940641e8052-M9c1f29e200e462ef29fbfcdf>
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-Med834aa6dc69b257fe377cec
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-03-31 Thread Matt Mahoney
On Sun, Mar 31, 2024, 9:46 PM James Bowery  wrote:

> Proton radius is about 5.2e19 Plank Lengths
>

The Hubble radius is 13.8e9 light-years = 8.09e60 Planck lengths. So
3.77e123 protons could be packed inside this sphere with surface area
8.22e122 Planck areas.

The significance of the Planck area is it bounds the entropy within to A/4
nats, or 2.95e122 bits. This makes a bit the size of 12.7 protons, or about
a carbon nucleus. https://en.wikipedia.org/wiki/Bekenstein_bound

12.7 is about 4 x pi. It is a remarkable coincidence to derive properties
of particles from only G, h, c, and the age of the universe.
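
A quick numerical check of these figures, with the constants as my assumptions
(Planck length 1.616e-35 m, light year 9.461e15 m, proton radius ~8.4e-16 m):

import math

l_P = 1.616e-35                     # Planck length, m (assumed)
ly  = 9.461e15                      # light year, m (assumed)

R_hubble = 13.8e9 * ly / l_P        # Hubble radius in Planck lengths, ~8.1e60
r_proton = 8.4e-16 / l_P            # proton radius in Planck lengths, ~5.2e19

protons = (R_hubble / r_proton)**3          # ~3.8e123 proton-sized cells
area    = 4 * math.pi * R_hubble**2         # horizon area in Planck areas, ~8.2e122
bits    = area / 4 / math.log(2)            # Bekenstein bound, ~3.0e122 bits
print("%.2e protons, %.2e Planck areas, %.2e bits" % (protons, area, bits))
print("protons per bit: %.1f" % (protons / bits))   # ~12.7, about 4*pi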

>
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-Me023643f4fef1483cfab3ad6
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-03-31 Thread Matt Mahoney
On Sat, Mar 30, 2024, 6:30 PM Keyvan M. Sadeghi 
wrote:

> Don't be too religious about existence or non-existence of free will then,
> yet. You're most likely right, but it may also be a quantum state!
>

The quantum explanation for consciousness (the thing that makes free will
decisions) is that it is the property of observers that turns waves into
particles. The Schrödinger wave equation is a pair of differential
equations that relate the position, momentum, and energy of masses. It is
an exact, deterministic description of a system. If that system contains
observers, then the solution is an observer observing particles. The
observations appear random because no part of the system can have complete
knowledge of the system containing it.

An observer does not need to be conscious. It just needs to have at least
one bit of memory to save the measurement. The wave equation is symmetric
with respect to time, but writing to memory is not, because the old value
is erased.

The problem with this explanation is that it says that all systems with
memory are conscious. A human with 10^9 bits of long term memory is a
billion times more conscious than a light switch. Is this definition really
useful?

In the meantime, how can we manipulate the shitheads of the world to do the
> right things?
>

What would be the right things?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M7441e6a5ab3dd9fc963909db
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-03-31 Thread Matt Mahoney
Alpha is the square of the ratio of Stoney units to Planck
units. Stoney units are based on the unit electric charge instead of
Planck's constant and are 11.7 times smaller.
https://en.m.wikipedia.org/wiki/Natural_units

Alpha was once thought to be rational (1/137) but all we know for sure is
that it is computable, unlike the vast majority of real numbers, because it
exists in a finitely computable universe. That doesn't mean there is a
faster algorithm than the ~10^122 qubit operations since the big bang, even
if we discover that the code for the universe is only a few hundred bits.


On Sun, Mar 31, 2024, 2:14 PM James Bowery  wrote:

> On Sat, Mar 30, 2024 at 9:54 AM Matt Mahoney 
> wrote:
>
>> ...We can measure the fine structure constant to better than one part per
>> billion. It's physics. It has nothing to do with AGI...
>
>
> In  private communication one of the ANPA founders told me that at one
> time there were as many as 400 distinct ways of measuring the fine
> structure constant -- all theoretically related.
>
> As with a recent controversy over the anomalous g-factor or the proton
> radius, the assumptions underlying these theoretic relations can go
> unrecognized until enough, what is called, "tension" arises between theory
> and observation.  At that point people may get  serious about doing what
> they should have been doing from the outset:
>
> Compiling the measurements in a comprehensive data set and subjecting it
> to what amounts to algorithmic information approximation.
>
> This should, in fact, be the way funding is allocated: Going only to those
> theorists that improve the lossless compression of said dataset.
>
> A huge part of the problem here is a deadlock into a deadly embrace
> between scientists need for funding and the politics of funding:
>
> 1) Scientists rightfully complain that there isn't enough money available
> to "waste" on such objective competitions since it is *really* hard work,
> including both human and computation work that is very costly.
>
> 2) Funding sources, such as NSF, don't plow money into said prize
> competitions (as Matt suggested the NSF do for a replacement for the
> Turing Test with compression clear back in 1999)
> <https://gwern.net/doc/cs/algorithm/information/compression/1999-mahoney.pdf> 
> because
> all they hear from scientists is that such prize competitions can't work --
> (not that they can't work because of a lack of funding).
>
> There, is, of course, the ethical conflicts of interest involving:
>
> 1) Scientists that don't want to be subjected to hard work in which their
> authority is questioned by some objective criterion.
>
> 2) Politicians posing as competent bureaucrats who don't want an objective
> way of dispensing science funding because that would reduce their degree of
> arbitrary power.
>
> Nor is any of the above to be taken to mean that AGI is dependent on this
> approach to such pure number derivation of natural science parameters.
>
> But there *is* reason to believe that principled and rigorous approaches
> to the natural sciences may lead many down the path toward a more effective
> foundation for mathematics -- a path that I described in the OP.  This may,
> in turn, shed light on the structure of the empirical world that Bertrand
> Russell lamented lacked due to the failure of his Relation Arithmetic to
> take root and, in fact, be supplanted by Tarski's travesty called "model
> theory".
>
>
> *Artificial General Intelligence List <https://agi.topicbox.com/latest>*
> / AGI / see discussions <https://agi.topicbox.com/groups/agi> +
> participants <https://agi.topicbox.com/groups/agi/members> +
> delivery options <https://agi.topicbox.com/groups/agi/subscription>
> Permalink
> <https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M83ab3a14c8c449d907b6fcbc>
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-Me4d0bcfc0747948b05c39165
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Microsoft and OpenAI to build $100B supercomputer

2024-03-31 Thread Matt Mahoney
The supercomputer called Stargate will have millions of GPUs and use
gigawatts of electricity. It is scheduled for 2028 with smaller version to
be completed in 2026.
https://www.reuters.com/technology/microsoft-openai-planning-100-billion-data-center-project-information-reports-2024-03-29/

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T4f3b7facdd27f552-Mdec8a64dd64efe40e07d817c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread Matt Mahoney
On Sat, Mar 30, 2024, 7:35 PM John Rose  wrote:

> On Saturday, March 30, 2024, at 7:11 PM, Matt Mahoney wrote:
>
> Prediction measures intelligence. Compression measures prediction.
>
>
> Can you reorient the concept of time from prediction? If time is on an
> axis, if you reorient the time perspective is there something like energy
> complexity?
>
> The reason I ask is that I was mentally attempting to eliminate time from
> thought and energy complexity came up... versus say a physical power
> complexity. Or is this a non sequitur?
>

Prediction order doesn't matter because p(a)p(b|a) = p(b)p(a|b). In either
case the compressed size is -log p(a,b).
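
A two-symbol toy example (the joint probabilities are made up) showing that the
coding order doesn't change the code length:

import math

# made-up joint distribution over (a, b)
p = {('x', '0'): 0.3, ('x', '1'): 0.2, ('y', '0'): 0.1, ('y', '1'): 0.4}

def marginal_a(a): return sum(v for (ai, _), v in p.items() if ai == a)
def marginal_b(b): return sum(v for (_, bi), v in p.items() if bi == b)

a, b = 'x', '1'
# code a first, then b given a
bits_ab = -math.log2(marginal_a(a)) - math.log2(p[(a, b)] / marginal_a(a))
# code b first, then a given b
bits_ba = -math.log2(marginal_b(b)) - math.log2(p[(a, b)] / marginal_b(b))
print(bits_ab, bits_ba)   # both equal -log2 p(a,b) = 2.32 bits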

The energy problem is how to implement a 600T parameter sparse (density
10^-7) neural network at 10 Hz on 20 watts? You would have to shrink
transistors to smaller than silicon atoms.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-M4e2624dc2a10762d0e27c69e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread Matt Mahoney
On Sat, Mar 30, 2024, 11:13 AM Nanograte Knowledge Technologies <
nano...@live.com> wrote:

>
> I can see there's no serious interest here to take a fresh look at doable
> AGI. Best to then leave it there.
>

AI is a solved problem. It is nothing more than text prediction. We have
LLMs that pass the Turing test. If you can't tell whether you are talking to a
human, then either it is conscious and has free will, or you don't have them
either.

I joined this list about 20 years ago when Ben Goertzel (OpenCog), Pei Wang
(NARS), YKY (Genifer), and Peter Voss (AIGO) were actively working on AGI
projects. But AGI is expensive. The
reason nobody on the list solved it is because it costs millions of dollars
to train a neural network to predict terabytes of text at $2 per GPU hour.

So yeah, I am interested in new approaches. It shouldn't require more
training data than a human processes in a lifetime to train human level AI.
That's about one GB of text. That is the approach I have been following
since I started the large text benchmark in 2006 that became the basis for
the Hutter prize.

Prediction measures intelligence. Compression measures prediction.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-Mf8493b1484cb84f9aac5e5e4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-03-30 Thread Matt Mahoney
On Sat, Mar 30, 2024, 6:56 AM Keyvan M. Sadeghi 
wrote:

> Matt, you don't have free will because you watch on Netflix, download from
> Torrent and get your will back 😜
>

I would rather have a recommendation algorithm that can predict what I
would like without having to watch. A better algorithm would be one that
actually watches and rates the movie. Even better would be an algorithm
that searches the space of possible movies to generate one that it predicts
I would like. Same with music. I won't live long enough to listen to all
100 million songs available online.

Just because I know that free will is an illusion doesn't make the illusion
go away. The internally generated positive reinforcement signal that I get
after any action gives me a reason to live and not lose that signal.

Unfortunately, the illusion is also why pain causes suffering, rather than just
being a signal like a dashboard warning light. What other explanation would
there be for why you pull your hand out of a fire?


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M3e67ca0ef51cc7b3e5cca8da
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread Matt Mahoney
On Sat, Mar 30, 2024, 7:02 AM John Rose  wrote:

> On Friday, March 29, 2024, at 8:25 AM, Quan Tesla wrote:
>
> The fine structure constant, in conjunction with the triple-alpha process
> could be coded and managed via AI. Computational code.
>
>
> Imagine the government in its profound wisdom declared that the fine
> structure constant needed to be modified and anyone that didn’t follow the
> new rule would be whisked away and have their social media accounts
> cancelled.
>

Imagine the government repealed the law of gravity and we all drifted off
into space.

We can measure the fine structure constant to better than one part per
billion. It's physics. It has nothing to do with AGI.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-M87681c69a3d749f693fd48d6
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-03-29 Thread Matt Mahoney
On Thu, Mar 28, 2024, 5:56 PM Keyvan M. Sadeghi 
wrote:

> The problem with finer grades of
>> like/dislike is that it slows down humans another half a second, which
>> adds up over thousands of times per day.
>>
>
> I'm not sure the granularity of feedback mechanism is the problem. I think
> the problem lies in us not knowing if we're looping or contributing to the
> future. This thread is a perfect example of how great minds can loop
> forever.
>

You mean who is in control and who thinks they are in control? When an
algorithm predicts what you will like more accurately than you can predict
yourself, then it controls you while preserving your illusion of free will.

Media companies have huge incentives to do this. Netflix recommends movies
based on the winner of the Netflix Prize, a $1M contest that concluded in 2009
with the best algorithm for predicting 100M movie ratings.
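
For context, below is a minimal sketch of the matrix-factorization style of
rating predictor that dominated that contest. The toy ratings matrix, factor
count, and hyperparameters are invented for illustration, not anything from the
actual competition.

import numpy as np

rng = np.random.default_rng(0)
# invented toy ratings: rows = users, cols = movies, 0 = unrated
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

k = 2                                              # latent factors
P = 0.1 * rng.standard_normal((R.shape[0], k))     # user factors
Q = 0.1 * rng.standard_normal((R.shape[1], k))     # movie factors

lr, reg = 0.01, 0.05
for _ in range(2000):                  # SGD over the observed ratings only
    for u, m in zip(*np.nonzero(R)):
        err = R[u, m] - P[u] @ Q[m]
        P[u] += lr * (err * Q[m] - reg * P[u])
        Q[m] += lr * (err * P[u] - reg * Q[m])

print(np.round(P @ Q.T, 1))            # predicted ratings, including unrated cells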

The whole point of my original post is that AI giving you everything you
want is not a good thing. We aren't looping. We are spiraling.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M3f96ed57030bbda68a7151b6
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-28 Thread Matt Mahoney
On Thu, Mar 28, 2024, 2:34 PM Quan Tesla  wrote:

> Would you like a sensible response? What's your position on the
> probability of AGI without the fine structure constant?
>

If the fine structure constant were much different from 1/137.0359992 then
the binding energy between atoms relative to their size would not allow the
right chemistry for intelligent life to evolve. Likewise for the other 25
or so free parameters of the standard model and general relativity or
whatever undiscovered theory encompasses both. The anthropic principle
makes perfect sense in a countably infinite multiverse consisting of an
enumeration of finite universes, one of which we necessarily observe.
Wolfram believes our universe can be expressed in a few lines of code.
Yudkowsky says a few hundred bits. I agree. I calculated the Bekenstein
bound of the Hubble radius at 2.95 x 10^122 bits, which implies about 400
bits in a model where the N'th universe runs for N steps.

But I don't see how solving this is necessary for AGI. As I described in
2006, prediction measures intelligence and compression measures prediction.
LLMs using neural networks (the approach I advocated) are now proof that
you can pass the Turing test and fake human consciousness with nothing more
than text prediction.
https://mattmahoney.net/dc/text.html

When I joined this list over 20 years ago, there was a lot of activity,
mostly using symbolic approaches like those of the AI winter in the decades
before that. People failed or gave up and left the list. In 2013 I
published a paper estimating the cost of AGI at $1 quadrillion. We are,
after all, building something that can automate $100 trillion in human
labor per year. Right now the bottleneck is hardware. You need roughly 10
petaflops, 1 petabyte,  and 1 MW of electricity to simulate a human brain
sized neural network. But in my paper I assumed that Moore's law would
solve the hardware problem and the most expensive part would be knowledge
collection.
https://mattmahoney.net/costofai.pdf
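
Those ballpark hardware figures are easy to reproduce. The synapse count, update
rate, and bytes per weight below are my assumptions (roughly 6e14 synapses at
about 10 Hz, a couple of bytes per synaptic weight), not numbers from the paper.

synapses   = 6e14        # assumed synapse count for a human brain
rate_hz    = 10          # assumed average firing/update rate
bytes_each = 2           # assumed storage per synaptic weight

flops   = synapses * rate_hz * 2          # one multiply-add per synapse event
storage = synapses * bytes_each
print("~%.0e FLOPS (~10 petaflops), ~%.0e bytes (~1 petabyte)" % (flops, storage))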

Of course, the cost is the reason I didn't write an open source
implementation of CMR. If a trillion dollar company can't get Google+ or
Threads off the ground, what compelling reason can I give to get a billion
people to join?

But yes, AGI will happen because the payoff is so enormous. It will
profoundly change the way we live.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-M85dc3ef5cda3e15deab9e4ab
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-03-27 Thread Matt Mahoney
I predict a return of smallpox and polio because people won't get
vaccinated. We have already seen it happen with measles.

Also, just to be clear, I think "misinformation" and "protecting children"
are codewords for censorship, which I oppose. The one truly anonymous and
censor proof network that we do have is blockchain. You could in theory
encode arbitrary messages as a sequence of transactions, but it is not
practical: transaction costs are high because storage costs O(n^2) when
every peer has a copy. This is the problem I addressed in my 2008
proposal. O(n log n) requires an ontology that is found in natural language
but not in lists of encryption keys.
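
A toy way to see the scaling difference (the peer counts here are arbitrary): if
every one of n peers stores every one of n messages, total storage grows as n^2;
if each message is replicated to only about log2(n) peers, it grows as n log n.

import math

for n in (1_000, 1_000_000, 1_000_000_000):
    full_replication = n * n               # every peer stores every message
    dht_style = n * math.log2(n)            # ~log n replicas per message
    print("n=%13d: O(n^2)=%.1e  O(n log n)=%.1e" % (n, full_replication, dht_style))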

On Wed, Mar 27, 2024, 1:48 PM John Rose  wrote:

> On Wednesday, March 27, 2024, at 12:37 PM, Matt Mahoney wrote:
>
> Flat Earthers, including the majority who secretly know the world is
> round, have a more important message. How do you know what is true?
>
>
> We need to emphasize hard science versus intergenerational
> pseudo-religious belief systems that are accepted as de facto truth. For
> example, vaccines are good for you and won't modify your DNA :)
>
> https://twitter.com/i/status/1738303046965145848
> *Artificial General Intelligence List <https://agi.topicbox.com/latest>*
> / AGI / see discussions <https://agi.topicbox.com/groups/agi> +
> participants <https://agi.topicbox.com/groups/agi/members> +
> delivery options <https://agi.topicbox.com/groups/agi/subscription>
> Permalink
> <https://agi.topicbox.com/groups/agi/T991e2940641e8052-M66e2cfff4f8461d3f15cd897>
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M89b3747f43409525b6b8ddc7
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Singularity watch.

2024-03-27 Thread Matt Mahoney
AGI will be slow takeoff because:

1. Fast takeoff implies that AGI crosses the threshold of human
intelligence and starts improving itself. But no such threshold
exists. It depends on how you measure intelligence. Computers are
already a billion times smarter on tests of arithmetic and short term
memory. Still, computers are improving on every test we devise.

2. Moore's law will be slowed because we can't make transistors
smaller than atoms. Already they are at the limit of spacing between
silicon doping atoms. Clock speeds stalled in 2010. Reducing power
consumption to the level of the human brain will require
nanotechnology, moving atoms instead of electrons. We don't know when
this technology will be developed, but at the rate of Moore's law, it
will take a century of doubling world computing power every 2-3 years
to match the 10^37 bits of DNA storage and 10^31 amino acid
transcription operations per second of the biosphere (a rough check of this
arithmetic follows the list). Quantum
computing can't save us because neural networks are not time
reversible.

3. Population is declining in most of the wealthier countries where
AGI development is occurring.
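
A rough check of the "century of doubling" arithmetic in point 2, assuming world
storage today is on the order of 10^23 bits and world computation on the order of
10^21 operations per second (both figures are my assumptions):

import math

doubling_years = 2.5                       # middle of the 2-3 year range
storage_gap = math.log2(1e37 / 1e23)       # DNA bits vs assumed current storage
compute_gap = math.log2(1e31 / 1e21)       # transcription ops/s vs assumed compute
print("storage: %.0f doublings ~ %.0f years" % (storage_gap, storage_gap * doubling_years))
print("compute: %.0f doublings ~ %.0f years" % (compute_gap, compute_gap * doubling_years))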

On Mon, Mar 25, 2024 at 3:40 PM Alan Grimes via AGI
 wrote:
> 
> Ok, we have been in para-singularity mode for about a year now. What are
> the next steps?
> 
> I see two possibilities:
> 
> A. AGI cometh.  AGI is solved in an unambiguous way.
> 
> B. We enter a "takeoff" scenario where humans are removed from the
> upgrade cycle of AI hardware and software. We would start getting better
> hardware platforms and AI tools at some non-zero rate with non-zero
> improvements without doing anything... How far this could procede
> without achieving AGI as a side-effect is unclear, as our human general
> intelligence appears to be an effect of the evolution-based improvement
> process that created us. At some point even a relatively blind
> optimization process would discover the principles required for
> consciousness et al...
> 
> In any event it's time to get this party started... We are teetering on
> the edge of socioeconomic collapse and probably won't get another chance
> at this within my lifetime. =|
> 
> --
> You can't out-crazy a Democrat.
> #EggCrisis  #BlackWinter
> White is the new Kulak.
> Powers are not rights.
> 



-- 
-- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T75b708e761eaa016-Me6b6e28cb5eac6c41c375130
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-03-27 Thread Matt Mahoney
On Wed, Mar 27, 2024 at 10:23 AM Keyvan M. Sadeghi
 wrote:
>
> I'm thinking of a solution Re: free speech
> https://github.com/keyvan-m-sadeghi/volume-buttons
>
> Wrote this piece but initial feedback from a few friends is that the text is 
> too top down.
>
> Feedback is much appreciated 🤗

All social media lets you upvote or downvote posts and comments and
then use that information to decide what to show you and others. The
problem is that AI can do this a lot faster than humans, as you
demonstrated using Copilot. The problem with finer grades of
like/dislike is that it slows down humans another half a second, which
adds up over thousands of times per day.

In my 2008 distributed AGI proposal (
https://mattmahoney.net/agi2.html ) I described a hostile peer to peer
network where information has negative value and people (and AI)
compete for attention. My focus was on distributing storage and
computation in a scalable way, roughly O(n log n). Social media at the
time was mostly Usenet and mailing lists, so I did not give much
thought to censorship. This was after China's Great Firewall (1998),
but before the 2010 Arab Spring. Now the rest of the world is
following China's lead. China already requires you to prove your
identity to get a social media account, making it impossible to post
anonymously. In the US, both parties want age restrictions on social
media, which will have the same effect because you can't prove your
age without an ID.

> On Wed, Mar 27, 2024, 2:42 PM John Rose  wrote:
>> On Monday, March 25, 2024, at 5:18 AM, stefan.reich.maker.of.eye wrote:
>>> On Saturday, March 23, 2024, at 11:10 PM, Matt Mahoney wrote:
>>>> Also I have been eating foods containing DNA every day of my life without 
>>>> any bad effects.
>>>
>>> Why would that have bad effects?
>>
>> That used to not be an issue. Now they are mRNA jabbing farm animals and 
>> putting nano dust in the food. The control freaks think they have the right 
>> to see out of your eyes… and you’re just a rented meatsuit.
>>
>> We need to understand what this potential rogue unfriendly looks like. It 
>> started out embedded with dumbed down humans mooch leeching on it…. like a 
>> big queen ant.

I had a neighbor who believed all kinds of crazy conspiracy theories.
He had a bomb shelter stocked with canned food and was prepared for
the apocalypse. Just not for a heart attack.

We have a fairly good understanding of biological self replicators and
how to prime the immune systems of humans and farm animals to fight
them. But how to fight misinformation?

Flat Earthers, including the majority who secretly know the world is
round, have a more important message. How do you know what is true?
You have never been to space to see the Earth, so how do you know?
Everything you know is either through your own senses or what other
people have told you is true. But people can lie and your senses can
lie. (For example, your senses tell you that you are conscious and
have free will). When given a choice, you trust emotions over logic.
Given conflicting evidence, we believe whatever confirms what we
already believe, no matter how unlikely, and reject the rest. We can't
help it. The human brain has a cognitive memory rate limit of 5 to 10
bits per second. Deeply held beliefs about religion or politics
represent 10^7 to 10^8 bits, and cannot be refuted by logical
arguments of a few hundred bits. We want to be rational, but we can't
be. It takes years of indoctrination.
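
The arithmetic behind "years of indoctrination" is worth spelling out; the two
hours per day of attention is my assumption.

belief_bits   = 1e8        # upper end of the 10^7 to 10^8 bit estimate
rate_bps      = 10         # cognitive rate limit, bits per second
hours_per_day = 2          # assumed daily attention to the message

seconds = belief_bits / rate_bps
years = seconds / (hours_per_day * 3600 * 365)
print("%.0e seconds of exposure ~ %.1f years" % (seconds, years))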

So the question is how to hold your attention for hours every day for
years? Shakespeare figured out that people will pay to be angry or
afraid. Since then, the formula for dramas has been used for centuries
in theatres, movies, radio, TV, and Youtube. News is especially
effective because it is real, not fiction. Both the left and the right
have figured out how to keep their stations on for hours with true but
cherry picked news events. AI will make it vastly cheaper to buy
influence.

-- 
-- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M823678207210eba3242679a2
Delivery options: https://agi.topicbox.com/groups/agi/subscription

