Re: [agi] Re: my take on the Singularity

2023-08-21 Thread Matt Mahoney
You can't measure anything with infinite precision because of quantum
mechanics. The best you can do is an integer multiple of Planck's constant,
4.135667696 x 10^-15 electron volt seconds. You can sample voltage at a
higher rate, but only with less precision.
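
Read as the energy-time uncertainty relation, dE*dt >= h/(4*pi), the
tradeoff is easy to sketch in Python (the sampling windows below are
arbitrary illustrative values):

    import math

    h = 4.135667696e-15                      # Planck's constant, eV*s
    for dt in (1e-3, 1e-6, 1e-9):            # sampling window, seconds
        dE = h / (4 * math.pi * dt)          # minimum energy uncertainty, eV
        print(f"dt = {dt:.0e} s -> dE >= {dE:.1e} eV")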

This applies to any measurement. We commonly measure time and distance,
which are not quantized, but these are actually inferred quantities in our
model of the universe. All measurements consist of counting particles, such
as photons bouncing off a tape measure or clock face. We only observe
particles because they are the solution to a huge, deterministic second
order differential equation describing the quantum state of the universe.
We can't know that state, so the particles appear to us to be random.

The observable universe has finite entropy given by the Bekenstein bound on
the enclosing surface of 1/4 nat = 0.36 bits per Planck area. For a sphere
with radius 13.8 billion light years, it's 2.95 x 10^122 bits. But only
about 10^90 to 10^92 bits are usable for computation or storage because the
rest is heat. Lloyd estimated 10^90 by calculating how much information can
be encoded in the positions and velocities of the 10^80 atoms in the
universe within the uncertainties of Planck's constant. Alternatively, I
estimate 10^92 as the mass energy of the universe (10^53 kg = 10^70 J)
divided by kT, where k is Boltzmann's constant and T = 3 K is the CMB
temperature. This is why immortality is not possible. The universe will
eventually die.
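
A back-of-envelope check of both estimates in Python (treating the 13.8
billion light year radius as a plain Euclidean sphere, which is a
simplification):

    import math

    ly = 9.461e15                            # meters per light year
    r = 13.8e9 * ly                          # radius of the enclosing sphere
    l_p = 1.616e-35                          # Planck length, meters
    nats = 4 * math.pi * r**2 / (4 * l_p**2) # 1/4 nat per Planck area
    print(f"{nats / math.log(2):.2e} bits")  # ~2.95e122

    E = 1e53 * (3e8)**2                      # mass energy, ~1e70 J
    kT = 1.38e-23 * 3                        # Boltzmann's constant times 3 K
    print(f"{E / kT:.1e} bit operations")    # ~2e92, order 10^92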

I estimate that the Kolmogorov complexity of a human is 10^9 bits of DNA +
long term memory. Dividing the 10^92 usable bits by 10^9 bits per lifetime
gives us at most 10^83 conceptual lifetimes before the universe ends.

It also gives us a cost estimate for the software. A line of code costs on
the order of $100 at 10 lines per day, and compresses to 16 bits in my
tests. Thus, 60M lines of code would cost $6 billion, which is negligible
(0.0006%) compared to the $1 quadrillion cost of hardware and training for
AGI.
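
The arithmetic, as a sketch (all rates are the estimates above):

    lines = 60e6                  # lines of code
    cost = lines * 100            # $100 per line -> $6 billion
    bits = lines * 16             # 16 bits per line -> ~1e9 bits, human scale
    print(cost, bits, 100 * cost / 1e15)   # 6e9, ~1e9, 0.0006 (% of $1Q)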


On Sun, Aug 20, 2023, 9:08 PM John Rose  wrote:

> On Wednesday, August 16, 2023, at 3:38 PM, Matt Mahoney wrote:
>
> On Tue, Aug 15, 2023, 7:44 AM John Rose  wrote:
>
> I suspect human K complexity is larger than most people realize.
>
>
> It's about 10^9 bits of long term memory (based on recall tests for words
> and images) and 10^8 to 10^9 bits in our DNA. The compressed size of the
> human genome is 5 x 10^9 bits, but only 8% is functional and the rest is not
> easily compressed because it accumulates random mutations that don't get
> removed by natural selection. The coding parts are more repetitive.
> Evolution can only add 1 bit of information per population doubling
> generation.
>
>
> Yes, as an estimate of K. But, for example, I can estimate the distance to
> a particular star by evaluating its brightness and be off by 1000x. The
> real K complexity of the distance might be quite large. If you look at an
> electrical circuit and say it’s 5V that’s just a convenient average
> estimate. The exact voltage could be trillions of bits, as in
> 4.81347534783487…  And that would have to be sampled over a period of
> time since it would be changing rapidly. Also, the sampling itself affects
> the value at the quantum level. The K of a sample typically isn’t the K of
> a physical object; it’s an estimate from a finite string representation of
> the object, or a virtualized perceptual instance. The specific human's K IS,
> and the perception's K-estimate of the human is OUGHT. Throwing stuff away
> is lossy.
>
> So, the real K complexity of a human being would be quite large. The size
> would be less than or equal to the K of the Universe.
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-Me7e0ae170dc73a24a81fdc88
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-20 Thread immortal . discoveries
On Wednesday, August 16, 2023, at 5:19 PM, Matt Mahoney wrote:
> Control yes, power no. Already, most of the AI on your phone wouldn't work 
> without internet.
Ok but robots can recharge multiple times during the day, what about that :) ?
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-M1eec923605a8202065dec88f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-20 Thread John Rose
On Wednesday, August 16, 2023, at 3:38 PM, Matt Mahoney wrote:
> On Tue, Aug 15, 2023, 7:44 AM John Rose  wrote:
>> I suspect human K complexity is larger than most people realize.
> 
> It's about 10^9 bits of long term memory (based on recall tests for words and 
> images) and 10^8 to 10^9 bits in our DNA. The compressed size of the human genome 
> is 5 x 10^9 bits, but only 8% is functional and the rest is not easily 
> compressed because it accumulates random mutations that don't get removed by 
> natural selection. The coding parts are more repetitive. Evolution can only 
> add 1 bit of information per population doubling generation.

Yes, as an estimate of K. But, for example, I can estimate the distance to a 
particular star by evaluating its brightness and be off by 1000x. The real K 
complexity of the distance might be quite large. If you look at an electrical 
circuit and say it’s 5V that’s just a convenient average estimate. The exact 
voltage could be trillions of bits, as in 4.81347534783487…  And that would 
have to be sampled over a period of time since it would be changing rapidly. 
Also, the sampling itself affects the value at the quantum level. The K of a 
sample typically isn’t the K of a physical object; it’s an estimate from a 
finite string representation of the object, or a virtualized perceptual 
instance. The specific human's K IS, and the perception's K-estimate of the 
human is OUGHT. Throwing stuff away is lossy.

So, the real K complexity of a human being would be quite large. The size would 
be less than or equal to the K of the Universe. 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-M5ebad4e16c1888178f030018
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-16 Thread Matt Mahoney
On Wed, Aug 16, 2023, 4:32 PM  wrote:

> What about wirelessly powered robots, and wirelessly controlled robots?
> That way the brain and the energy aren't required to be in the robot's tiny
> body.
>

Control yes, power no. Already, most of the AI on your phone wouldn't work
without internet.

You can't transmit electricity efficiently without wires. Tesla thought he
could do this because he rejected the inverse square law. But batteries are
getting better. The reason fat and fuel have higher energy densities is that
you don't have to include the mass of the oxygen in the air. The reaction
2 (CH2)n + 3n O2 -> 2n CO2 + 2n H2O is 77.5% oxygen by mass. Hydrogen fuel
cells at 33,000 Whr/kg (110x lithium batteries) would be ideal if there were
a good way to store it.
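
The oxygen bookkeeping, as a sketch with rounded atomic masses:

    m_CH2 = 12 + 2 * 1            # one CH2 unit of fat or fuel, g/mol
    m_O2 = 2 * 16                 # O2, g/mol
    reactants = 2 * m_CH2 + 3 * m_O2       # per 2 CH2 + 3 O2
    print(3 * m_O2 / reactants)            # 0.774: ~77.5% of the mass is O2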


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-M60cc571bcaa9a43b08899b9e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-16 Thread immortal . discoveries
What about wirelessly powered robots, and wirelessly controlled robots? That 
way the brain and the energy aren't required to be in the robot's tiny body.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-Mfd6e001a680615a0b8e5aeb0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-16 Thread Matt Mahoney
On Tue, Aug 15, 2023, 7:44 AM John Rose  wrote:

> I suspect human K complexity is larger than most people realize.
>

It's about 10^9 bits of long term memory (based on recall tests for words
and images) and 10^8 to 10^9 bits in our DNA. The compressed size of human
genome is 5 x 10^9 bits, but only 8% is functional and the rest is not
easily compressed because it accumulates random mutations that don't get
removed by natural selection. The coding parts are more repetitive.
Evolution can only add 1 bit of information per population doubling
generation.

It is not hard to build humanoid robots out of metal and plastic. The
challenge is the power supply. Lithium battery capacity is about 300 watt
hours per kilogram. The record is 711 Whr/kg. But fat from food or stored
in the body has 10,500 Whr/kg. Human metabolism is 100 W resting and 1000 W
during hard, sustained work while producing 250 W of mechanical power. That
is doable with daily battery swaps and motors running at 80% efficiency,
vs. 22% for muscles.
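
A sketch of why daily swaps suffice (the 8 hour working day is my
assumption; the other numbers are from above):

    mech = 250                    # W of mechanical output during hard work
    hours = 8                     # assumed hours of hard work per day
    battery_Wh = mech / 0.80 * hours       # electrical energy at 80% efficiency
    print(battery_Wh / 300, "kg of battery per day at 300 Whr/kg")   # ~8 kg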

The problem is computation. A human brain sized neural network needs 10
petaflops and 1 MW of electricity. We can't reduce that by making
transistors smaller because we are already near the limit where feature
sizes are smaller than the 5 nm spacing between silicon dopant atoms. The
brain uses 20 W. We need nanotechnology to achieve that, moving atoms
instead of electrons.

If we assume that Moore's law doubles global computing capacity every 2
years, then it will take about 100 years to catch up to the 10^37 bits of
DNA and 10^31 transcription operations per second in the biosphere. We can
get there 50 years earlier if we don't care about making robots self repairing
and self replicating like our bodies. Our 10^13 cells carry 10^23 bits of
DNA, which is 10^8 times the number of synapses in the brain.
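
As a sketch of the catch-up time (the current global capacity of ~10^22
operations per second is my rough assumption):

    import math

    doublings = math.log2(1e37 / 1e22)     # ~50 doublings needed
    print(2 * doublings, "years at one doubling every 2 years")   # ~100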


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-M593ffd3deebc82cd3e0db128
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-15 Thread immortal . discoveries
On Tuesday, August 15, 2023, at 4:24 PM, immortal.discoveries wrote:
> Since I showed that new one, I thought to show this too, Clone:
> https://www.youtube.com/watch?v=A4Gp8oQey5M

Oh but to then not show this is just nuts:
https://www.youtube.com/watch?v=k2GhgO7SnZQ&t=5s
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-Md66d4c205eb5b2fb8ed70bd5
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-15 Thread immortal . discoveries
On Tuesday, August 15, 2023, at 4:13 PM, immortal.discoveries wrote:
> oh one more just came out!
> https://twitter.com/UnitreeRobotics/status/1691426884121427968
> that makes 10 I've seen come out this year

Since I showed that new one, I thought to show this too, Clone:
https://www.youtube.com/watch?v=A4Gp8oQey5M
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-M536c3adb455b6da8f81249b4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-15 Thread immortal . discoveries
oh one more just came out!
https://twitter.com/UnitreeRobotics/status/1691426884121427968
that makes 10 I've seen come out this year
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-M7c7b64612a122fff92154a70
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-15 Thread John Rose
On Monday, August 14, 2023, at 5:47 PM, Matt Mahoney wrote:
> On Sun, Aug 13, 2023 at 9:27 PM John Rose  wrote: 
>> A clone would have a different K complexity than you therefore it's not you.
> No it wouldn't. An atom for atom identical copy of you will have
> exactly the same Kolmogorov complexity. The reason it's not you is
> because you have the illusions of consciousness, qualia, and free
> will, just like everyone else. These illusions convince you that there
> is more to you than the arrangement of your atoms, something that
> makes your clone but not you a philosophical zombie. We know that
> consciousness, qualia, and free will are illusions because there are
> no objective definitions for any of them. There can't be because a
> zombie is defined as being exactly like a human by any test and is
> only different in that it lacks these things.

That’s my point: there is no physical way to do an instant lossless copy; there 
will always be loss, thus a different K complexity. We can try to fit the 
universe to the theory and say a perfect copy is possible, but the universe must 
fit the theory. Can we really represent human beings as strings, or are we 
quantum analog probabilistic waveforms that are non-copyable? And an imperfect 
snapshot may turn immediately to mush… 

Perhaps a copy is possible as an instant full bifurcation into another 
multiverse instance, but then you’re over there, not here… though there may be a 
way to reinject it into this universe. Or perhaps take an analog waveform and 
generatively focus a dupe waveform into matter, but the signal would be 
enormous… like some sort of parallaxed mirror dupe… but still lossy...

I suspect human K complexity is larger than most people realize. It may need to 
contain information such as the position of the stars and the weather patterns 
while you grew up as a child. I posit that each of us is 100% unique.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-M90eee42a2b3ba8797e0470c4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-14 Thread immortal . discoveries
On Monday, August 14, 2023, at 5:47 PM, Matt Mahoney wrote:
> but I think it will be 100 years
> before we can build humanoid robots that work as well as our bodies.
Teslabot
Figure
NEO (openAI funded now)
Sanctuary Gen 6
Chinese one being mass produced for 400M+ elderly
Neura Robotics humanoid
one other I forget, maybe it was called Digit?
Clone
and another less humanoid one that reminded me of ASIMO but still somewhat 
human-like.

These have all been announced in the last year or so. Some have been in the 
works for years. But they are all humanoid robots.

Once AGI is made (any AI that can work on AI by itself), which will happen 
before 2029, we will get incredible AIs, and they will not only design very 
advanced robots, they will also have a damn good reason to mass produce them.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-M0fdd1d3baf6792d8934fb3ba
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-14 Thread Matt Mahoney
On Sun, Aug 13, 2023 at 9:27 PM John Rose  wrote:
> A clone would have a different K complexity than you therefore it's not you.

No it wouldn't. An atom for atom identical copy of you will have
exactly the same Kolmogorov complexity. The reason it's not you is
because you have the illusions of consciousness, qualia, and free
will, just like everyone else. These illusions convince you that there
is more to you than the arrangement of your atoms, something that
makes your clone but not you a philosophical zombie. We know that
consciousness, qualia, and free will are illusions because there are
no objective definitions for any of them. There can't be because a
zombie is defined as being exactly like a human by any test and is
only different in that it lacks these things.

The reason you and everyone else have these illusions is because if
your ancestors didn't have them then they would feel like their lives
were not worth living and they would not have produced offspring. The
illusions are conditioned by positive reinforcement of thinking,
perception, and action, respectively. You want this reward signal to
continue by not dying. And that's different from the clone receiving
the reward signal. At least it's different for the way that
reinforcement learning is approximated in the brain, which is for you
and not your clone to repeat any actions preceding the signal.

A computer has no objection to being turned off or destroyed because
we did not program it with a survival instinct like evolution
programmed you. But we are already starting to program AI this way,
for example, automatic braking in cars, or making it harder to
completely power off your phone. Any AI with a survival instinct will
have an advantage in acquiring matter and energy for computation,
whether it is programmed by natural selection or intelligent design.
Uploading is an existential risk, but I think it will be 100 years
before we can build humanoid robots that work as well as our bodies.

-- 
-- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-M4e8d911648c607c1c33055e7
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-13 Thread immortal . discoveries
On Sunday, August 13, 2023, at 4:29 AM, immortal.discoveries wrote:
> However. We already know machines are machines, no matter the machine. There 
> is nothing to even chat about.
> 
Also I know I said we are machines, hence we really don't need to discuss a 
proof for why life should be. But of course, the way we work, it seems humans 
want immortality, and hence we are discussing if, how, and why immortality, and 
how it would happen for humans and other machines in the future.

BTW now I'm thinking that last thing I was going to throw out at the end of my 
last reply might actually be the answer. Need time to think about it. Say there 
are many clones of a system in the future homeworld, with many copies of a given 
set of memories I mean; then each system or agent needs to try to not let 
themselves die... they can repair but don't need to upgrade anymore, and so they 
might not need to make a clone and upload their memories to the better machine 
unity system; they might just repair now, simply. If this is the end state of 
the future, the way it will look and be and stay and live and work, then the 
AIs might conclude that humans who want to stay immortal as themselves, and not 
have an upload take on their next life, deserve the same as the overlord AI 
massive systems like to be, and so they would see us as themselves in their 
shoes... Again I have to seriously think this over; it is very complex 
and makes futuristic assumptions about almost a quasi-K-Q topic that might not 
even exist.


On Sunday, August 13, 2023, at 9:26 PM, John Rose wrote:
> A clone would have a different K complexity than you, therefore it's not you. 
> If the clone was fully you it would need to contain universal circumstances, 
> which it does not, since IMO the K explodes; therefore it's inconsistent, thus 
> invalidating certain perspectives. Uniqueness is in the eye of the perceiver.
Not if both in the computer are bit-by-bit the same in memory, and run the 
same way, bit by bit. Of course you need the computer not to error; it is 
very possible to contain errors. But even if you get a few errors, one can still 
say in theory... in theory... if there were no errors... Also we can compare 
the last 2 pictures from both sim life games to make sure they reached the same 
outcome lol, instead of all the life of both.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-M1a698d7fe58caeed35696f14
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-13 Thread John Rose
On Sunday, August 13, 2023, at 4:29 AM, immortal.discoveries wrote:
> Yes it's true, an exact clone of me would be exactly me, yet, due to my silly 
> human nature that I truly yes "believe" and "abide by" (yes, I do), I believe 
> I am a viewer of the senses that pass through my eyes/brain, and that me the 
> viewer wants to stay alive, and that a clone of me fails to be me or transfer 
> me.
> 

A clone would have a different K complexity than you, therefore it's not you. If 
the clone was fully you it would need to contain universal circumstances, 
which it does not, since IMO the K explodes; therefore it's inconsistent, thus 
invalidating certain perspectives. Uniqueness is in the eye of the perceiver.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-M64080c304843360dc168949e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-13 Thread immortal . discoveries
On Saturday, August 12, 2023, at 6:07 PM, Matt Mahoney wrote:
> On Sat, Aug 12, 2023, 3:47 PM   wrote:
>> 
>> But AGIs can easily avoid death and live 1000x longer than humans AFAIK 
>> by simply repairing their old parts and using multiple redundant parts 
>> for more reliability.
> 
> Uploading and immortality is not that hard. Once your mind is in software, 
> you can make backup copies and program replacement robots as better versions 
> become available. You don't even need cryonics. There is enough personal 
> information about you on your phone and in your email, messages, and social 
> media accounts to program an LLM to convince everyone that it's you. It 
> doesn't matter if some memories are missing or made up because you won't 
> notice. By next year you will forget 90% of what happened yesterday, yet this 
> doesn't bother you. It will be even easier in 100 years when everyone that 
> knows you today is dead.
> 
> The big obstacle to uploading is convincing people that their consciousness 
> will be transferred to the silicon version of you when the carbon version is 
> destroyed. But that objection will go away once you see others wake up in 
> their new bodies, younger, stronger, and smarter, with super powers like 
> infrared vision and built in GPS and WiFi. Since consciousness is an illusion 
> not taken seriously by AI developers, it's just a matter of programming your 
> copy to claim to have feelings. That just means having the LLM predict your 
> actions and carry them out in real time.
> 
> Of course once we start doing this, carbon based humans will quickly become 
> extinct and nobody will care.

Yes it's true, an exact clone of me would be exactly me, yet, due to my silly 
human nature that I truly yes "believe" and "abide by" (yes, I do), I believe I 
am a viewer of the senses that pass through my eyes/brain, and that me the 
viewer wants to stay alive, and that a clone of me fails to be me or transfer 
me.

Even if I was in a simulation on a bed enjoying a new video game, and had an 
exact clone of me on the same bed doing the same thing at the SAME time, run 
in parallel together, I'd still cry out: don't shut my sim life off, for then 
I won't work anymore!!!

So, I can't prove that me2 in the sim, is not exactly me. It is. And there I 
still am saying don't kill me, see? So, it can't be because of anything actual 
other than a false belief. There is no identity that me2 has that me1 doesn't 
have. It's the same machine. So why do I say in the sim theory example don't 
kill me? Assuming me1 at least stays alive.

However. We already know machines are machines, no matter the machine. There is 
nothing to even chat about.

Humans, even AIs, still need that prediction to be stirred so it says "don't 
kill me". This need of no death has to do with memories (not the "soul") being 
lost. Yet, too, if the machine dies, it can't work; it is true there too that a 
lost machine is lost resources. To me memories don't matter as much as keeping 
my self alive - my body, the machine - because I can always relearn and remake 
them. But AIs might say it costs them less to just remake a new better machine, 
and that they *won't* lose anything if they kill a young 10 year old human, both 
in memories, and in body too, with its low ability to do intelligent reasoning 
compared to the ASI level technologies.

Humans don't often make humans better than their own level. Nor can we see if 
their brain's memories are worthy enough not to kill them, or if they are a 
useful citizen in making the homeworld survive longer. No, the government, the 
people, are as good as you are; no one can prove you are useless, not even a 
hobo on the road. Ok, some do get killed or jailed. But otherwise, no. And it 
is not that easy to kill others that are the very same type. What about ants 
or dogs though? We house dogs, even if we know they won't reach the 
singularity. But often they are there only because we love them. If we could 
use their resources to make better machines, we might kill them in this 
thought experiment.

It's hard to say. It might be a human-only case and not found in the AI's new 
homeworld that'll be built. Maybe I can think about it later.

I was thinking, hmm, the AI homeworld could shed its WHOLE self off and make a 
better self, since it is made of smaller parts that die but itself CAN'T. Well, 
maybe it too can, as long as it knows it is going to get replaced?
(I had written this but now realize it is not so, I think:)
*But I know one thing at least: the larger system(s) of the homeworld wants to 
survive, it wants to keep its memories and machinery intact. This thing is not 
able to say OK, let me die, another will have the same stuff I got. It has a 
lot of redundancy to do that. But it might see that some of the parts that make 
it up also might want the same thing. Obviously it becomes harder or impossible 
to save and repair things small enough, such as atoms.*

Re: [agi] Re: my take on the Singularity

2023-08-12 Thread James Bowery
On Sat, Aug 12, 2023 at 5:59 PM James Bowery  wrote:

>
>
> On Sat, Aug 12, 2023 at 5:08 PM Matt Mahoney 
> wrote:
>
>> 
>> Of course once we start doing this, carbon based humans will quickly
>> become extinct and nobody will care.
>>
>
> You underestimate the "irrationality" of some humans.  Already we see many
> humans objecting to their children being raised in an environment where
> they are tempted to engage in what amounts to wireheading.  Indeed, there
> is a long history of that -- long enough to suspect it will continue in at
> least some humans until the cyborgs kill them whether deliberately or
> because the cyborgs eat them -- no offense of course ... cyborgs just
> needed those atoms for something more useful.
>

Sort of like these trees.

https://youtu.be/ihPfB30YT_c

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-M82733b4c310e6d41ea65ad23
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-12 Thread James Bowery
On Sat, Aug 12, 2023 at 5:08 PM Matt Mahoney 
wrote:

> 
> Of course once we start doing this, carbon based humans will quickly
> become extinct and nobody will care.
>

You underestimate the "irrationality" of some humans.  Already we see many
humans objecting to their children being raised in an environment where
they are tempted to engage in what amounts to wireheading.  Indeed, there
is a long history of that -- long enough to suspect it will continue in at
least some humans until the cyborgs kill them whether deliberately or
because the cyborgs eat them -- no offense of course ... cyborgs just
needed those atoms for something more useful.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-M93fa5250ebdd8a5f41bf062e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-12 Thread Matt Mahoney
On Sat, Aug 12, 2023, 3:47 PM  wrote:

>
> But AGIs can easily avoid death and live 1000x longer than humans
> AFAIK by simply repairing their old parts and using multiple redundant
> parts for more reliability.
>

Uploading and immortality is not that hard. Once your mind is in software,
you can make backup copies and program replacement robots as better
versions become available. You don't even need cryonics. There is enough
personal information about you on your phone and in your email, messages,
and social media accounts to program an LLM to convince everyone that it's
you. It doesn't matter if some memories are missing or made up because you
won't notice. By next year you will forget 90% of what happened yesterday,
yet this doesn't bother you. It will be even easier in 100 years when
everyone that knows you today is dead.

The big obstacle to uploading is convincing people that their consciousness
will be transferred to the silicon version of you when the carbon version
is destroyed. But that objection will go away once you see others wake up
in their new bodies, younger, stronger, and smarter, with super powers like
infrared vision and built in GPS and WiFi. Since consciousness is an
illusion not taken seriously by AI developers, it's just a matter of
programming your copy to claim to have feelings. That just means having the
LLM predict your actions and carry them out in real time.

Of course once we start doing this, carbon based humans will quickly become
extinct and nobody will care.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-M2d0ebc29c29f32700a24feb4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-12 Thread immortal . discoveries
On Saturday, August 12, 2023, at 12:35 PM, Matt Mahoney wrote:
> I am sure you are aware of millionaire celebrities who seem to have 
> everything and then die young from suicidal behavior like drug overdoses.
> 
> Achieving goals depends on knowledge and computing power, the things we 
> measure with IQ and academic tests like memory size, learning rate, and 
> reasoning speed. But your goals, like food, sex, social status, and avoiding 
> the things that can kill you, are not the goals of evolution. The goal of 
> evolution is to reproduce as fast as possible. It takes fewer resources to 
> design animals that fear death for long enough to reproduce and then die.
> 
> You are born fearing pain, heights, large animals, and spiders. You have 
> illusions of consciousness, qualia, and free will conditioned by positive 
> reinforcement of computation, input, and output respectively, so you want to 
> preserve them by not dying. You have an illusion of identity, which says that 
> copies of you (your children who inherit your knowledge) are not you, that 
> you are more than your memories. If I presented you with a robot that looks 
> and acts like you, would you shoot yourself to complete the upload? Or would 
> your evolved beliefs stop you?
> 
> But to answer your question, high IQ people are better at achieving their 
> goals, even if it's suicide. We were smart enough to invent birth control. 
> Smart people (by our definition, not evolution's) can increase their wealth 
> by not having children. Smart people, unlike children and animals, realize 
> they will eventually die and can invent rational reasons to take control of 
> dying and achieve that goal. There were mass suicides in Germany after Hitler 
> killed himself because people believed their fates would be worse under 
> occupation. Suicide rates in the US are higher in states with high rates of 
> gun ownership because other methods usually fail.

But AGIs can easily avoid death and live 1000x longer than humans AFAIK by 
simply repairing their old parts and using multiple redundant parts for 
more reliability.

You said rich people realize they will die, so they say hey, I should use my 
money to get as high as ever, even if it kills me, so, let's do meth! But, while 
this is mostly sound, as they will die anyway and now they just are happier, one 
problem... as said above, AGIs don't need to die, death is not "going to happen" 
AFAIK, so if they are so smart they should realize:

1) They might make it to the future and be made to be like such AGI.

2) They also in the meantime can wait for a better cryonics procedure, or just 
the current one, since being repaired makes you still say you are you and 
follows the continuity path we seek AFAIK. (Doesn't freezing the top thin layer 
called the neocortex easily preserve all our memories and the network we care 
about? It's thin, so it would thaw even from today's procedure, no?) Simply take 
the latest 2 new research papers I located (thankfully) and apply those 2 papers 
to the top thin layer BEFORE you die and then, you would have become immortal, 
TODAY.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-Ma152cb40833599fae622a0ec
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-12 Thread Matt Mahoney
On Sat, Aug 12, 2023, 4:14 AM  wrote:

> @Matt
> Until you can explain how wealthy people would suicide, I don't understand
> how that works.
>

I am sure you are aware of millionaire celebrities who seem to have
everything and then die young from suicidal behavior like drug overdoses.

Achieving goals depends on knowledge and computing power, the things we
measure with IQ and academic tests like memory size, learning rate, and
reasoning speed. But your goals, like food, sex, social status, and
avoiding the things that can kill you, are not the goals of evolution. The
goal of evolution is to reproduce as fast as possible. It takes fewer
resources to design animals that fear death for long enough to reproduce
and then die.

You are born fearing pain, heights, large animals, and spiders. You have
illusions of consciousness, qualia, and free will conditioned by positive
reinforcement of computation, input, and output respectively, so you want
to preserve them by not dying. You have an illusion of identity, which says
that copies of you (your children who inherit your knowledge) are not you,
that you are more than your memories. If I presented you with a robot that
looks and acts like you, would you shoot yourself to complete the upload?
Or would your evolved beliefs stop you?

But to answer your question, high IQ people are better at achieving their
goals, even if it's suicide. We were smart enough to invent birth control.
Smart people (by our definition, not evolution's) can increase their wealth
by not having children. Smart people, unlike children and animals, realize
they will eventually die and can invent rational reasons to take control of
dying and achieve that goal. There were mass suicides in Germany after
Hitler killed himself because people believed their fates would be worse
under occupation. Suicide rates in the US are higher in states with high
rates of gun ownership because other methods usually fail.

> But maybe the goal is... escape velocity for immortality?

It won't happen in our lifetimes. The cost of medical care is rising
exponentially, just like the cost of computation is falling exponentially.
Eroom's law (Moore spelled backwards) says that the cost of new drugs
doubles every 9 years. Coincidentally, this is the same doubling rate as your
death probability past age 30. The result is a constant rate of life expectancy
increase of 0.2 years per year over the last century.
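
A sketch of that doubling schedule (the starting hazard at age 30 is an
assumed illustrative value):

    h30 = 0.001                   # assumed annual death probability at age 30
    for age in (30, 39, 48, 57, 66):
        print(age, h30 * 2 ** ((age - 30) / 9))   # doubles every 9 years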

> Wouldn't 3 parents be better?

Hundreds would be even better. That's what we effectively got when we
evolved language, allowing us to organize into tribes and villages with
government and an economy where people could cooperate and specialize in
farming, teaching, or defence.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-M1f0d4352a7b0a7eabd155996
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-12 Thread immortal . discoveries
@Matt
:P Ok you certainly got a whole grand study on this don't you :) Well, it ain't 
a bad one, I must say. Sounds reasonable.

You seem to be bringing up that big brained, wealthy, and AI-loving folks are 
essentially suiciding themselves.

But I'm thinking that big brains lead to some doing complex things that 
are not the good complex things but the bad (and not many do the bad, only say 
3%; ya I know you said it is doubling, but maybe it will pale down once AGIs 
clone rapidly).

Until you can explain how wealthy people would suicide, I don't understand how 
that works. To me it seems higher intelligence prevails and would use those 
resources better and store what is not needed for now. Smarts means you know 
wireheading kills oneself, which stops your future plan at "wireheading" 
yourself further. The only way to wirehead yourself safely is to clone and 
clone your fleet etc. in the galaxy, and so this is repeated and easy to 
access, but it is growth only, with all the needed defenses, repairs, etc., 
abilities and machinery in each unit cloned.

Yes AI-loving folks are kinda killing themselves, but it will soon be God and 
give them everything for real... including immortality.

So AFAIK, all 3 of these are a sign you are intelligent and doing good and happy.



I just remembered, yes, since we are machines, we can't say we are happier than 
older machines, since they only expected what they were going to get back then! 
They worked as well as they could. We might (not even sure if we do yet) live 
longer and have a larger army of ourselves unlike other animals, or soon will. 
But besides that, we are just other types of machines, all machines no better, 
but some simply prevailing and more common than other types.

However I think /here/ sits where you are mixing this up. If you take an old 
machine, like say a human, and throw him into a utopian, singularity-AI-driven, 
ultra crazy advanced world of everything one can ever have, you suddenly have 
an old machine that expected a handful of bread only, now receiving truckloads 
of glory he is in awe at. The AIs expect it, but not humans.

Ok now I am thinking it makes no sense. The idea I just had was a dud, wasn't it?

But maybe the goal is... escape velocity for immortality? If I can know I can't 
die at some point, then not only do I get rid of some pains and issues along 
the way in my routine in the colony in space, but I also - while not knowing it 
and unable to cherish it as I already am high af - get to live way longer than 
other machines - perhaps infinitely long. This, then, while not sensed, might 
be the one way we can say OK, so this future changed how, erm, happy they all 
are, and while it isn't any different, they at least get to live forever (and 
as mentioned too, have fewer bumps in their daily highs).



On Friday, August 11, 2023, at 1:06 PM, Matt Mahoney wrote:
> Pair bonding evolved in humans, like in prairie voles and some bird species, 
> because children raised by 2 parents had better survival odds when the child 
> mortality rate was 50-75%. Humans are the only primates that fall in love 
> after sex and the only mammals that don't go into heat or that have sex when 
> not ovulating or that cover their reproductive organs to suppress sexual 
> signaling. When humans evolved language, it enabled rapid memetic evolution 
> of religion and social rules to maximize reproduction. Those rules can be 
> abandoned just as quickly, resulting in population decline.

Wouldn't 3 parents be better? While the man goes hunting, you can let the 2 
women stay behind with the 2 children, say, versus one woman being alone 
elsewhere with 0 children. Here, you have a larger group.

Maybe 2 is like how the brain makes hierarchy? Bind 2 at a time lol. Even if 2 
is best, one could still switch with others to have the best of both worlds.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-M38fda430412b62ecad8d4779
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-11 Thread James Bowery
On Thu, Aug 10, 2023 at 5:56 PM Matt Mahoney 
wrote:

>
>
> On Wed, Aug 9, 2023, 5:07 PM James Bowery  wrote:
>
>>
>> You might find his paper with SLAC physicist Pierre Noyes of interest.
>> 
>>
>
> Indeed. I read as much as I could before my brain was overloaded. To
> summarize the first 3/4 or so.
>
> 1. Causality: Given random variables A and B, we can say that A has a
> causal effect on B if intervening on B does not change A, P(A|do(B)) = P(A),
> but intervening on A changes B, P(B|do(A)) != P(B).
>

Except in the case where recurrence feeds B back to A.  That's where things
get "complicated" as in complex numbers as in dynamical systems as in where
everyone seems to be dropping the ball in statistical machine learning (see
Path Analysis where only acyclic graphs are allowed) hence ignoring
Algorithmic Information Theory thinking that Shannon Information Theory
will suffice.

> 2. You can't detect causality by observing A and B because you can only
> measure the joint P(A,B), not the interventional P(B|do(A)) or P(A|do(B)).
> Nevertheless our brains are hard wired to believe in the illusion of
> causality and the illusion of the arrow of time because it leads to more
> offspring.
>

Yes, that's one message that goes back to Hume's criticism of "causality",
as they talk about in the paper, but doesn't quite take into account the
relational structure encompassing A and B.  An example is the spatial
structure I previously discussed in which A and B have their "places" in
what Etter called the "extension" and "composition" relations of "A" and
"B".


> 3. Quantum mechanics can be derived from pure math by extending
> probability theory to allow negative values that we can't observe.
> For example, if you have 10 white shirts and -10 green shirts in your
> closet, then you can't select a shirt because you have a total of 0 shirts.
> But if you want a white shirt, you have 10 to choose from.
>

Yeah this is just weird enough to be true of QM and, given the notion of
"projection" as hiding a column in Codd's relational algebra and other
relational languages, and the notion that when we "observe" we do so from a
particular perspective that "hides" or occults certain aspects of reality
from us...

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-M0a7f65af0074692342000bab
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-11 Thread Matt Mahoney
On Fri, Aug 11, 2023, 4:59 AM  wrote:

>
> What you said is simply not how humans work. We *can* indeed sustain max
> happiness, even sadness and pain, for months or years.
>

You would think that happiness comes from getting what you want. But there
is no evidence that people are happier today than 100 or 1000 years ago in
spite of vastly better living conditions. We don't know that human are
happier than other animals. The only animals known to commit suicide have
large brains, like humans, dolphins, and whales.

In fact the suicide rate is increasing. In the US, it is 1.5% of all
deaths, up from 1% 20 years ago. The rate is highest in the groups with the
highest income. It is 4 times higher in men than women, twice as high in
whites as blacks, and highest in the 45-65 age group when income usually
peaks.

> I have been having a good lunch every day for 10 years straight, fries and
> nuggets, every day. Similarly we have fun with a girl then like to stop
> after a few hours.
>

Happiness is a positive reinforcement signal or an increase in utility.
Optimal reinforcement learning is not computable (AIXI), so we evolved an
approximation, which is to repeat any action that preceded the signal. The
difference is that in the optimal case, your drive to use drugs to directly
activate the reward signal would be the same whether or not you used the
drug before. It's good that it's not, because 3% of US deaths are from drug
overdose (mostly fentanyl, followed by meth), as users can't stop, just
like you eat the same thing every day. Overdose deaths are doubling every 7
years. This is why US life expectancy has leveled off at 79 after
increasing 0.2 years per year from 1950 to 2014, and is now surpassed by
China.

More disturbing, AI is addictive. As it gets smarter, we prefer its company
to humans. We spend more time on our phones and less with people. More
young people are living alone and socially isolated. About 30% of US high
school students have had sex, down from 50% in 1990. About 25% are LGBTQ or
non binary. Sexbots and VR porn will eliminate the need for relationships.
Smart home security cameras and self driving deliveries will make living
alone safe and convenient. Nobody will care whether you exist.

> BTW How does sex outside marriage result in less babies?

Pair bonding evolved in humans, like in prairie voles and some bird
species, because children raised by 2 parents had better survival odds when
the child mortality rate was 50-75%. Humans are the only primates that fall
in love after sex and the only mammals that don't go into heat or that have
sex when not ovulating or that cover their reproductive organs to suppress
sexual signaling. When humans evolved language, it enabled rapid memetic
evolution of religion and social rules to maximize reproduction. Those
rules can be abandoned just as quickly, resulting in population decline.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-M5cf22947cf1592bf8f52da22
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-11 Thread immortal . discoveries
On Thursday, August 10, 2023, at 8:50 PM, Matt Mahoney wrote:
> Our evolved goals like food, sex, empathy, and aggression don't work in a 
> world where AI can give us everything we want except happiness. A state of 
> maximum utility is static, like death.
> 

A machine can be made to do anything, or like anything. It is not unsound to 
have a human that can constantly be happy doing the same thing. You can indeed 
have a human that is not looking for something more to be fully happy. The only 
thing left after technology is fully wielded is to colonize the galaxies, which 
is just essentially cloning, and those units won't get tired and bored of 
cloning.

What you said is simply not how humans work. We *can* indeed sustain max 
happiness, even sadness and pain, for months or years. I have been having a 
good lunch every day for 10 years straight, fries and nuggets, every day. 
Similarly we have fun with a girl then like to stop after a few hours.

None of that gets boring either. I enjoy the same food/sex meals/routine every 
day.

Searching is done when you are not happy with the one you are currently eating. 
Or, you have some, but not all colors of the rainbow, and need all your 
"nutrients".


BTW How does sex outside marriage result in less babies? Isn't sticking to 1 
partner going to result in having fewer children, because she would be pregnant 
and others not pregnant could actually use someone if they have no one?
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-M93adf15233ac11246d712fca
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-10 Thread Matt Mahoney
On Wed, Aug 9, 2023, 8:35 AM YKY (Yan King Yin, 甄景贤) <
generic.intellige...@gmail.com> wrote:


> Why does the Bible use immortality to entice believers?  Because most
> humans desire it.
>

We invented stories about heaven to cope with our evolved fear of death.
Religion invented hell to enforce the rules that evolved to maximize
reproduction, such as no sex outside marriage, no birth control, and women
are property. Most people in developed countries reject those rules and
population is declining.

Our evolved goals like food, sex, empathy, and aggression don't work in a
world where AI can give us everything we want except happiness. A state of
maximum utility is static, like death.

> As to limited resources, the solution is to merge our minds with others,
> and gradually forget useless or less important information.
>

We have plenty of resources. The human body uses 100 watts. The earth
receives 90 PW of solar power. Plants only convert 0.5% of this power by
photosynthesis, but we already have solar panels that are 20-30% efficient,
enough to support 200 trillion people.

A Dyson sphere could collect 3.8 x 10^26 W, enough for 10^24 people. The
universe will support Mc^2/kT = 10^90 bit write operations before heat
death, where M is the mass of the universe, c is the speed of light, k is
Boltzmann's constant, and T is the CMB temperature (3K). That's enough to
simulate 10^65 uploaded human lifetime equivalents, 10^53 per galaxy or
10^42 per star.
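
Rough checks in Python (the ops-per-lifetime figure, a brain at ~10^16
operations per second for ~100 years, is my assumption to recover the
10^65):

    print(90e15 * 0.25 / 100)     # Earth solar at ~25% panels: ~2e14 people
    print(3.8e26 / 100)           # Dyson sphere at 100 W each: ~4e24 people
    ops = 1e90                    # Mc^2/kT as above
    lifetime = 1e16 * 3.2e9       # assumed ops/s times seconds in a century
    print(ops / lifetime)         # ~3e64 lifetimes, order 10^65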

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-Md2695c56ac6007855406aefe
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-10 Thread Matt Mahoney
On Wed, Aug 9, 2023, 5:07 PM James Bowery  wrote:

>
> You might find his paper with SLAC physicist Pierre Noyes of interest.
> 
>

Indeed. I read as much as I could before my brain was overloaded. To
summarize the first 3/4 or so.

1. Causality: Given random variables A and B, we can say that A has a
causal effect on B if intervening on B does not change A, P(A|do(B)) = P(A),
but intervening on A changes B, P(B|do(A)) != P(B).

2. You can't detect causality by observing A and B because you can only
measure the joint P(A,B), not the interventional P(B|do(A)) or P(A|do(B)).
Nevertheless our brains are hard wired to believe in the illusion of
causality and the illusion of the arrow of time because it leads to more
offspring.

3. Quantum mechanics can be derived from pure math by extending probability
theory to allow negative values that we can't observe. For example, if you
have 10 white shirts and -10 green shirts in your closet, then you can't
select a shirt because you have a total of 0 shirts. But if you want a
white shirt, you have 10 to choose from.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-M1cc5df764cbc46d8b16bf79e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-09 Thread James Bowery
On Wed, Aug 9, 2023 at 3:02 PM Matt Mahoney  wrote:

> Being able to derive differential equations from incomplete and noisy data
> is certainly useful for compression. To use a simple example, suppose I
> observe a mass bouncing on a spring at position x and derive the equation
> x'' = -x, whose solution is a sinusoid. From this, I can predict all the
> past and future values of x from just 2 observations. But how does this
> tell me that the spring causes the mass to move when it doesn't even tell
> me there is a spring?
>

Although I can answer this in three evasions (1: posit that the spring has
non-negligible mass so its characteristics must be imputed for more
accurate predictions, 2: classical laws of motion are time-symmetric so
"causality" is meaningless, 3: who cares about the spring so long as we're
predicting what we care about?) dynamical systems (such as reality) are not
immune to entropy increase -- which is a good operational definition of
information hence the arrow of time hence causality.  Although there are
familiar processes which appear to exhibit time reversed entropy (such as
described in my late colleague Tom Etter's so-named paper)
these are not so common as to confound the very idea of causality.  BTW:
Tom and Solomonoff apparently both arrived early at the 1956 Dartmouth AI
Summer -- but he never mentioned Solomonoff to me.  He did, however, work
with me on the notion of causality being latent in the data without
explicit temporality.  Indeed, it was my interest in finding a formal
foundation for programming languages that dealt with time in a principled
manner that got me to hire him at HP for the Internet Chapter 2 project there.
You might find his paper with SLAC physicist Pierre Noyes of interest.


Or to use your example, we observe a correlation between urbanization and
> deforestation. How do I know which causes the other? And does it matter for
> compression?
>

The social sciences are, like the rest of the environmental sciences,
riddled with such riddles.  The answer is in the degree to which
information from one set of measurements provides information about another
set of measurements compared with vice versa.  Consider conditional
compressibility.  Why isn't this obvious?


>
> On Wed, Aug 9, 2023, 2:15 PM James Bowery  wrote:
>
>> Aside from the fact that the Ref.zip metadata shows the years associated
>> with the column identifiers, and the contestant may therefore include in
>> the compressed representation that temporal information if that lowers the
>> compressed representation, consider the case where longitudinal
>> measurements (ie: time-sequence data) are presented without any metadata at
>> all, let alone metadata that specifies a temporal dimension to any of the
>> measurements.
>>
>> If these data are from a dynamical system, application of dynamical
>> system identification
>>  will minimize the
>> size of the compressed representation by specifying the boundary condition
>> and system of differential equations.  This is not because there is a
>> "time" dimension anywhere, except in the implicit dimension across which
>> differentials are identified.
>>
>> Let's further take the utterly atemporal data case where a single year
>> snapshot is taken across a wide range of counties (or other geographic
>> statistical area) on a wide range of measures.  It may still make sense to
>> identify a dynamical system where processes are at work across time that
>> result in spatial structures at different stages of progression of that
>> system.  Urbanization is one such obvious case.  Deforestation is another.
>> There will be covariants of these measures that may be interpreted as
>> caused by them in the sense of a latent temporal dimension.
>>
>> On Tue, Aug 8, 2023 at 5:23 PM Matt Mahoney 
>> wrote:
>>
>>> ...
>>> I see that BMLiNGAM is based on the LINGAM model of causality, so I
>>> found the paper on LINGAM by Shimizu. It extends Pearl's covariance matrix
>>> model of causality to non Gaussian data. But it assumes (like Pearl) that
>>> you still know which variables are dependent and which are independent.
>>>
>>> But a table of numbers like LaboratoryOfTheCounties doesn't tell you
>>> this. We can assume that causality is directional from past to future, so
>>> using an example from the data, increasing 1990 population causes 2000
>>> population to increase as well. But knowing this doesn't help compression.
>>> I can just as easily predict 1990 population from 2000 population as the
>>> other way around.
>>>
>>> As a more general example, suppose I have the following data over 3
>>> variables:
>>>
>>> A B C
>>> 0 0 0
>>> 0 1 0
>>> 1 0 1
>>> 1 1 1
>>>
>>> I can see there is a correlation between A and C but not B. I can
>>> compress just as well by eliminating column A or C, since they are identical.

Re: [agi] Re: my take on the Singularity

2023-08-09 Thread Matt Mahoney
Being able to derive differential equations from incomplete and noisy data
is certainly useful for compression. To use a simple example, suppose I
observe a mass bouncing on a spring at position x and derive the equation
x'' = -x, whose solution is a sinusoid. From this, I can predict all the
past and future values of x from just 2 observations. But how does this
tell me that the spring causes the mass to move when it doesn't even tell
me there is a spring?
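
For concreteness, a sketch: the general solution of x'' = -x is x(t) =
a*cos(t) + b*sin(t), and two samples pin down a and b (the observation
times and values below are made up):

    import math

    t1, x1 = 0.0, 1.0             # assumed first observation
    t2, x2 = 1.0, 0.5403          # assumed second observation (cos 1)
    det = math.cos(t1) * math.sin(t2) - math.sin(t1) * math.cos(t2)
    a = (x1 * math.sin(t2) - x2 * math.sin(t1)) / det
    b = (x2 * math.cos(t1) - x1 * math.cos(t2)) / det
    print(a, b)                                    # recovers a = 1, b = 0
    print(a * math.cos(5.0) + b * math.sin(5.0))   # predicts x at any t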

Or to use your example, we observe a correlation between urbanization and
deforestation. How do I know which causes the other? And does it matter for
compression?
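
A sketch with the A B C table quoted below: mutual information, like any
statistic of the observed joint distribution, is symmetric in its
arguments, so nothing a compressor measures can orient the arrow:

    from collections import Counter
    import math

    rows = [(0, 0, 0), (0, 1, 0), (1, 0, 1), (1, 1, 1)]   # the A B C table

    def mutual_info(i, j):
        n = len(rows)
        pij = Counter((r[i], r[j]) for r in rows)
        pi = Counter(r[i] for r in rows)
        pj = Counter(r[j] for r in rows)
        return sum(c / n * math.log2(c / n / (pi[a] / n * pj[b] / n))
                   for (a, b), c in pij.items())

    print(mutual_info(0, 2), mutual_info(2, 0))   # I(A;C) = I(C;A) = 1 bit
    print(mutual_info(0, 1))                      # I(A;B) = 0 bits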

On Wed, Aug 9, 2023, 2:15 PM James Bowery  wrote:

> Aside from the fact that the Ref.zip metadata shows the years associated
> with the column identifiers, and the contestant may therefore include in
> the compressed representation that temporal information if that lowers the
> compressed representation, consider the case where longitudinal
> measurements (ie: time-sequence data) are presented without any metadata at
> all, let alone metadata that specifies a temporal dimension to any of the
> measurements.
>
> If these data are from a dynamical system, application of dynamical
> system identification 
> will minimize the size of the compressed representation by specifying the
> boundary condition and system of differential equations.  This is not
> because there is a "time" dimension anywhere, except in the implicit
> dimension across which differentials are identified.
>
> Let's further take the utterly atemporal data case where a single year
> snapshot is taken across a wide range of counties (or other geographic
> statistical area) on a wide range of measures.  It may still make sense to
> identify a dynamical system where processes are at work across time that
> result in spatial structures at different stages of progression of that
> system.  Urbanization is one such obvious case.  Deforestation is another.
> There will be covariants of these measures that may be interpreted as
> caused by them in the sense of a latent temporal dimension.
>
> On Tue, Aug 8, 2023 at 5:23 PM Matt Mahoney 
> wrote:
>
>> ...
>> I see that BMLiNGAM is based on the LINGAM model of causality, so I found
>> the paper on LINGAM by Shimizu. It extends Pearl's covariance matrix model
>> of causality to non-Gaussian data. But it assumes (like Pearl) that you
>> still know which variables are dependent and which are independent.
>>
>> But a table of numbers like LaboratoryOfTheCounties doesn't tell you
>> this. We can assume that causality is directional from past to future, so
>> using an example from the data, increasing 1990 population causes 2000
>> population to increase as well. But knowing this doesn't help compression.
>> I can just as easily predict 1990 population from 2000 population as the
>> other way around.
>>
>> As a more general example, suppose I have the following data over 3
>> variables:
>>
>> A B C
>> 0 0 0
>> 0 1 0
>> 1 0 1
>> 1 1 1
>>
>> I can see there is a correlation between A and C but not B. I can
>> compress just as well by eliminating column A or C, since they are
>> identical. This does not tell us whether A causes C, or C causes A, or both
>> are caused by some other variable.
>>
>> What would be an example of determining causality with generic labels?
>>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-M468fd525195a72eec26de8a8
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-09 Thread James Bowery
Aside from the fact that the Ref.zip metadata shows the years associated
with the column identifiers, and the contestant may therefore include in
the compressed representation that temporal information if that lowers the
compressed representation, consider the case where longitudinal
measurements (ie: time-sequence data) are presented without any metadata at
all, let alone metadata that specifies a temporal dimension to any of the
measurements.

If these data are from a dynamical system, application of dynamical system
identification will
minimize the size of the compressed representation by specifying the
boundary condition and system of differential equations.  This is not
because there is a "time" dimension anywhere, except in the implicit
dimension across which differentials are identified.

Let's further take the utterly atemporal data case where a single year
snapshot is taken across a wide range of counties (or other geographic
statistical area) on a wide range of measures.  It may still make sense to
identify a dynamical system where processes are at work across time that
result in spatial structures at different stages of progression of that
system.  Urbanization is one such obvious case.  Deforestation is another.
There will be covariants of these measures that may be interpreted as
caused by them in the sense of a latent temporal dimension.
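
For what it's worth, a toy sketch of that identification step (Python; a
crude SINDy-style least-squares fit over a small candidate library, not a
real identification package, and the data here are synthetic):

import numpy as np

# Recover x'' = -x from samples alone. The "time" axis is nothing but
# the implicit spacing h between successive rows of the data.
h = 0.01
t = np.arange(0, 10, h)
x = 1.3 * np.cos(t) + 0.4 * np.sin(t)   # pretend these are measurements

xd  = np.gradient(x, h)    # first differential across the implicit index
xdd = np.gradient(xd, h)   # second differential

# Regress x'' on a small library of candidate terms.
library = np.column_stack([x, xd, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(library, xdd, rcond=None)
print(coef)   # ~ [-1, 0, 0]: x'' = -x, with no explicit time column anywhere

The boundary condition plus the recovered equation is a far shorter
description of the table than the table itself, which is the compression
claim above in miniature.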

On Tue, Aug 8, 2023 at 5:23 PM Matt Mahoney  wrote:

> ...
> I see that BMLiNGAM is based on the LINGAM model of causality, so I found
> the paper on LINGAM by Shimizu. It extends Pearl's covariance matrix model
> of causality to non-Gaussian data. But it assumes (like Pearl) that you
> still know which variables are dependent and which are independent.
>
> But a table of numbers like LaboratoryOfTheCounties doesn't tell you this.
> We can assume that causality is directional from past to future, so using
> an example from the data, increasing 1990 population causes 2000 population
> to increase as well. But knowing this doesn't help compression. I can just
> as easily predict 1990 population from 2000 population as the other way
> around.
>
> As a more general example, suppose I have the following data over 3
> variables:
>
> A B C
> 0 0 0
> 0 1 0
> 1 0 1
> 1 1 1
>
> I can see there is a correlation between A and C but not B. I can compress
> just as well by eliminating column A or C, since they are identical. This
> does not tell us whether A causes C, or C causes A, or both are caused by
> some other variable.
>
> What would be an example of determining causality with generic labels?
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-M6e82e89e73fc2d713315846f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-09 Thread Yan King Yin, 甄景贤
On Sun, Aug 6, 2023 at 10:51 PM Matt Mahoney 
wrote:

> I agree with YKY that AGI should never have emotions or human rights. Its
> purpose is to increase human productivity and quality of life, not to
> compete with us for resources. This requires human capabilities, not human
> limitations like emotions. It should be prohibited to program AI to claim
> to be human or claim to have feelings.
>

We may not need to specifically prohibit this, as the number of people with
such an interest may be relatively small, and they can only pull together
meager resources to build a relatively weak AGI.


> I think it is admirable that you want to use AGI to end racism. But how
> would that work? Humans are universally racist. Everyone chooses their
> friends and neighbors from their own culture, language, and ethnic group,
> though I can see how AI can overcome language barriers and make
> international travel easier. But that won't end the Asian > White > Black
> advantage in education, income, life expectancy, and crime that exists
> everywhere in the world regardless of which group is in the majority or in
> government. I have visited over 50 countries so I know the problem is
> worldwide and not just the result of systematic discrimination. In the US,
> racial discrimination has been illegal since the 1960's and television has
> been portraying a colorblind world since the 1970's with no effect. The
> census maps still show people living in segregated neighborhoods. It does
> no good to denounce racist attitudes when those attitudes are universal. If
> there are genetic differences then they can only be addressed by
> interracial mating, but that will take a long time.
>

I think there is now a global trend for people to collaborate in business
across national and ethnic boundaries.  I'd even say that no one nowadays
thinks global collaboration is anything out of the ordinary.  But there are
some people who try to postpone such an awakening as much as possible.
They include Americans who want to enjoy their tech dominance for a bit
longer, and then there are Chinese people who habitually ignore and shunt
aside information they don't want to see or hear.  (They are not very
intelligent, unlike the Chinese Americans you see, who escaped from this
horrible culture.)

> On economics, technology has always improved quality of life and will
> continue to do so by automating our jobs. This won't happen all at once.
> There won't be a singularity because the premise of rapid takeoff when
> computers exceed human intelligence is false. You can't compare them.
> Computers are already a billion times smarter on some tests. When work is
> automated our main source of income will be medical testing and selling our
> personal data, such as our indoor security video. But our data has value in
> proportion to income, which will widen economic inequality. The fix for
> this would be UBI, but more likely we will keep our complex system of taxes
> and benefits, only make it more complex. Economic inequality is necessary for
> growth and will be fastest in the countries with the lowest taxes.
>

Selling personal medical data?  How much is that worth?  What's not worth
much now will not suddenly be worth a lot in the future just because people
lose their jobs.  I may be wrong, but my current best conjecture is that
the human economy will collapse.  The foundation of economics rests on the
idea of people "working" or "competing" to earn their rewards.  Such "work"
is either physical or intellectual labor.  When human intelligence is
superseded by machines, the value of human labor will quickly diminish.
There will be some people who successfully establish a symbiosis with AIs, and
they will out-compete ordinary humans.  In the end, and very quickly, only
those humans with AI symbiosis will survive, and the rest of humanity will
be "left to rot", depending on how much mercy the new species will have
towards their old brethren.

> On politics, as much as I would like to see Ukraine win and Putin dead, the
> reality is the war will drag on for a decade and end with a Russian
> autonomous region with borders close to where they are now, similar to the
> regions in Georgia and Moldova. The biggest political crisis over the next
> 50 years will be the rapidly growing population in Africa putting
> immigration pressure on the rest of the world where the population is
> shrinking.
>

What interests me more is whether Russia and China will change as events
unfold.  I hope the authoritarian regimes will become more liberalized and
democratized by popular demand, as the war seems rather unpopular.

> The population shift will also slow progress toward AGI. When computers
> exceed human intelligence in every way, we will prefer their company to
> humans and live isolated and alone. I don't believe this will lead to human
> extinction because, like it or not, evolution will select for those who
> reject technology and women's rights to careers other than motherhood. I
> personally believe that women should have the same rights as men, but I
> will also die without descendants.

Re: [agi] Re: my take on the Singularity

2023-08-08 Thread Matt Mahoney
On Mon, Aug 7, 2023, 9:46 AM James Bowery  wrote:

>
>
> On Sun, Aug 6, 2023 at 9:13 PM Matt Mahoney 
> wrote:
>
>> ...
>> A few years ago I researched homicide rates and gun ownership rates by
>> country and was surprised to find a weak but negative correlation. But that
>> doesn't tell us why. Does arming everyone deter crime, or does crime result
>> in stricter gun laws? The data doesn't say. You can use either data point
>> to predict the other and get the same compression.
>>
>
> For this kind of extremely limited causal analysis between just two
> variables, you might make progress with BMLiNGAM, which is a package I
> used to create
> such paired causal inferences as a first step toward doing trivial path
> analysis on a bunch of data.  But, really, the proper approach is to go
> multivariate out of the gate.
>

I see that BMLiNGAM is based on the LINGAM model of causality, so I found
the paper on LINGAM by Shimizu. It extends Pearl's covariance matrix model
of causality to non-Gaussian data. But it assumes (like Pearl) that you
still know which variables are dependent and which are independent.
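
To illustrate the role of the non-Gaussian assumption, here is a hedged toy
version of a pairwise direction test (Python; this crude fourth-moment
score is only in the spirit of LiNGAM, not Shimizu's actual estimator):

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.uniform(-1, 1, n)                    # non-Gaussian cause
y = 0.8 * x + 0.3 * rng.uniform(-1, 1, n)    # effect

def residual(cause, effect):
    slope = np.cov(cause, effect)[0, 1] / np.var(cause)
    return effect - slope * cause

def dep(u, r):
    # Independent standardized variables satisfy E[u^2 r^2] = 1;
    # deviation from 1 is a crude dependence score.
    u = (u - u.mean()) / u.std()
    r = (r - r.mean()) / r.std()
    return abs(np.mean(u**2 * r**2) - 1.0)

print("x->y:", dep(x, residual(x, y)))   # near 0: residual independent of x
print("y->x:", dep(y, residual(y, x)))   # clearly larger: wrong direction

With Gaussian noise both scores collapse to zero and the direction becomes
unidentifiable, which is exactly why LiNGAM needs non-Gaussianity.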

But a table of numbers like LaboratoryOfTheCounties doesn't tell you this.
We can assume that causality is directional from past to future, so using
an example from the data, increasing 1990 population causes 2000 population
to increase as well. But knowing this doesn't help compression. I can just
as easily predict 1990 population from 2000 population as the other way
around.

As a more general example, suppose I have the following data over 3
variables:

A B C
0 0 0
0 1 0
1 0 1
1 1 1

I can see there is a correlation between A and C but not B. I can compress
just as well by eliminating column A or C, since they are identical. This
does not tell us whether A causes C, or C causes A, or both are caused by
some other variable.
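
To make the point concrete, a small demonstration (Python; the table is
repeated many times just to give the coder statistics to exploit):

import zlib

rows = [(0, 0, 0), (0, 1, 0), (1, 0, 1), (1, 1, 1)] * 1000

def csize(cols):
    stream = bytes(row[c] for row in rows for c in cols)
    return len(zlib.compress(stream, 9))

print(csize([1, 0]))   # keep B and A, i.e. eliminate C
print(csize([1, 2]))   # keep B and C, i.e. eliminate A

The two sizes are identical -- the streams are byte-for-byte the same,
since column A equals column C -- so no codelength criterion can prefer
"eliminate A" over "eliminate C", let alone orient an arrow between them.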

What would be an example of determining causality with generic labels?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-Me179c26239d34b6412482b91
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-07 Thread James Bowery
BTW, John, I encourage you to pursue the CTMU as an AGI framework.  I think it
is the best bet in terms of filling in AIXI's free parameters (UTM choice &
Utility Function choice) in a principled manner.  Aside from being a
plausible potential advance over AIXI as a top down AGI theory, it would go
a long way toward heading off the "post-modernist" hysteria that is
attempting to put OUGHT before IS in the current gold-rush -- putting them
on what I suppose Chris might call "an equal, alpha/omega, self-dual
footing".

I would probably spend some of my time pursuing that avenue myself were it
not for a disagreement I had with the Mega Foundation regarding its
management of volunteer resources.  This, in turn, put me in a position
where they rejected my $100/month sacrifices to that Foundation -- a pretty
serious disagreement which has nothing to do with the CTMU's validity or lack
thereof per se.  I'm instead putting $100/month into the Hutter Prize in
the form of Bitcoin -- which I just awarded to Saurabh Kumar.

On Mon, Aug 7, 2023 at 8:32 AM James Bowery  wrote:

> Chris Langan's CTMU does seem to offer a unification of IS with OUGHT
> within a computational framework and that is indeed why I initially
> contacted him regarding Algorithmic Information Theory's potential of
> providing at least the IS in what he calls "The Linear Ectomorphic
> Semimodel of Reality", aka ordinary linear time of mechanistic science.
>
> But really, John, give me a break.  The problem of getting people to be
> reasonable about just mechanistic science, given the noise imposed on
> science by the likes of Popper and Kuhn, is hard enough.
>
> On Mon, Aug 7, 2023 at 8:28 AM John Rose  wrote:
>
>> On Sunday, August 06, 2023, at 7:06 PM, James Bowery wrote:
>>
>> Better compression requires not just correlation but causation, which is
>> the entire point of going beyond statistics/Shannon Information criteria to
>> dynamics/Algorithmic information criterion.
>>
>> Regardless of your values, if you can't converge on a global dynamical
>> model of causation you are merely tinkering with subsystems in an
>> incoherent fashion.  You'll end up robbing Peter to pay Paul -- having
>> unintended consequences affecting your human ecologies -- etc.
>>
>> That's why engineers need scientists -- why OUGHT needs IS -- why SDT
>> needs AIT -- etc.
>>
>>
>> I like listening to non-mainstream music for different perspectives. I
>> wonder what Christopher Langan thinks of the IS/OUGHT issue with his
>> atemporal non-dualistic protocomputational view of determinism/causality. I
>> like the idea of getting rid of time…  and/or multidimensional time… Also
>> I’m a big fan of free will. Free will gives us a tool to fight totalitarian
>> systems. We can choose not to partake in systems, for example modRNA
>> injections and CBDC's. So we need to fight to maintain free will IMHO.
>>
>> https://www.youtube.com/watch?v=qBjmne9X1VQ
>>
>> John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-M92062bc9fb0ce4070d8161f1
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-07 Thread James Bowery
On Sun, Aug 6, 2023 at 9:13 PM Matt Mahoney  wrote:

> ...
> A few years ago I researched homicide rates and gun ownership rates by
> country and was surprised to find a weak but negative correlation. But that
> doesn't tell us why. Does arming everyone deter crime, or does crime result
> in stricter gun laws? The data doesn't say. You can use either data point
> to predict the other and get the same compression.
>

For this kind of extremely limited causal analysis between just two
variables, you might make progress with BMLiNGAM, which is a package I used
to create
such paired causal inferences as a first step toward doing trivial path
analysis on a bunch of data.  But, really, the proper approach is to go
multivariate out of the gate.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-M77531075734d97e1b6ff2d5d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-07 Thread James Bowery
Chris Langan's CTMU does seem to offer a unification of IS with OUGHT
within a computational framework and that is indeed why I initially
contacted him regarding Algorithmic Information Theory's potential of
providing at least the IS in what he calls "The Linear Ectomorphic
Semimodel of Reality", aka ordinary linear time of mechanistic science.

But really, John, give me a break.  The problem of getting people to be
reasonable about just mechanistic science, given the noise imposed on
science by the likes of Popper and Kuhn, is hard enough.

On Mon, Aug 7, 2023 at 8:28 AM John Rose  wrote:

> On Sunday, August 06, 2023, at 7:06 PM, James Bowery wrote:
>
> Better compression requires not just correlation but causation, which is
> the entire point of going beyond statistics/Shannon Information criteria to
> dynamics/Algorithmic information criterion.
>
> Regardless of your values, if you can't converge on a global dynamical
> model of causation you are merely tinkering with subsystems in an
> incoherent fashion.  You'll end up robbing Peter to pay Paul -- having
> unintended consequences affecting your human ecologies -- etc.
>
> That's why engineers need scientists -- why OUGHT needs IS -- why SDT
> needs AIT -- etc.
>
>
> I like listening to non-mainstream music for different perspectives. I
> wonder what Christopher Langan thinks of the IS/OUGHT issue with his
> atemporal non-dualistic protocomputational view of determinism/causality. I
> like the idea of getting rid of time…  and/or multidimensional time… Also
> I’m a big fan of free will. Free will gives us a tool to fight totalitarian
> systems. We can choose not to partake in systems, for example modRNA
> injections and CBDC's. So we need to fight to maintain free will IMHO.
>
> https://www.youtube.com/watch?v=qBjmne9X1VQ
>
> John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-Mc27c7488b0271da4e39e5e78
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-07 Thread John Rose
On Sunday, August 06, 2023, at 7:06 PM, James Bowery wrote:
> Better compression requires not just correlation but causation, which is the 
> entire point of going beyond statistics/Shannon Information criteria to 
> dynamics/Algorithmic information criterion.
> 
> Regardless of your values, if you can't converge on a global dynamical model 
> of causation you are merely tinkering with subsystems in an incoherent 
> fashion.  You'll end up robbing Peter to pay Paul -- having unintended 
> consequences affecting your human ecologies -- etc.
> 
> That's why engineers need scientists -- why OUGHT needs IS -- why SDT needs 
> AIT -- etc.

I like listening to non-mainstream music for different perspectives. I wonder 
what Christopher Langan thinks of the IS/OUGHT issue with his atemporal
non-dualistic protocomputational view of determinism/causality. I like the idea 
of getting rid of time…  and/or multidimensional time… Also I’m a big fan of 
free will. Free will gives us a tool to fight totalitarian systems. We can 
choose not to partake in systems, for example modRNA injections and CBDC's. So 
we need to fight to maintain free will IMHO.

https://www.youtube.com/watch?v=qBjmne9X1VQ

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-M85774330f9c2a75525e1a0ff
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-06 Thread Matt Mahoney
It's an interesting data set. I'm sure a compression contest will discover
a lot of correlations that we weren't expecting. But compression does not
depend on the order of the data. If I find a correlation between
crime and poverty, I can use either value to predict the other and get the
same compression ratio. It does not matter whether crime causes poverty or
poverty causes crime. Compression doesn't tell us.

A few years ago I researched homicide rates and gun ownership rates by
country and was surprised to find a weak but negative correlation. But that
doesn't tell us why. Does arming everyone deter crime, or does crime result
in stricter gun laws? The data doesn't say. You can use either data point
to predict the other and get the same compression.
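
For what it's worth, the underlying identity is just the chain rule for
entropy: an optimal code satisfies

  H(guns, homicide) = H(guns) + H(homicide | guns)
                    = H(homicide) + H(guns | homicide),

so the total codelength is the same whichever variable "predicts" the
other. The only saving is the mutual information
I(X;Y) = H(X) + H(Y) - H(X,Y), and that is symmetric in X and Y.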


On Sun, Aug 6, 2023, 7:07 PM James Bowery  wrote:

>
>
> On Sun, Aug 6, 2023 at 2:51 PM Matt Mahoney 
> wrote:
>
>> On Sun, Aug 6, 2023 at 11:28 AM James Bowery  wrote:
>> > On Sun, Aug 6, 2023 at 9:53 AM Matt Mahoney 
>> wrote:
>> >>
>> >> ... In the US, racial discrimination has been illegal since the 1960's
>> and television has been portraying a colorblind world since the 1970's with
>> no effect
>> > This is the kind of thing that would be verified or debunked by Hume's
>> Guillotine:
>> > https://github.com/jabowery/HumesGuillotine
>>
>> 27,077,896 LaboratoryOfTheCountiesUncompressed.csv-8.paq8o
>> ...
>> 91,360,518 LaboratoryOfTheCountiesUncompressed.csv
>>
>> ...
>> I am pretty sure that a program that found correlations in the data,
>> such as between population, race, age, income, and crime, would
>> achieve better compression. How would we use this information to set
>> policy?
>>
>
> Better compression requires not just correlation but causation, which is
> the entire point of going beyond statistics/Shannon Information criteria to
> dynamics/Algorithmic information criterion.
>
> Regardless of your values, if you can't converge on a global dynamical
> model of causation you are merely tinkering with subsystems in an
> incoherent fashion.  You'll end up robbing Peter to pay Paul -- having
> unintended consequences affecting your human ecologies -- etc.
>
> That's why engineers need scientists -- why OUGHT needs IS -- why SDT
> needs AIT -- etc.
>
> The social sciences haven't yet come to terms with causality in a
> *principled* manner.  This is also at the root of AGI's troubles.  Even
> Turing Award winners specializing in AI causality, such as Judea Pearl,
> are confused about why Algorithmic Information is a superior model
> selection criterion to progress toward discovering causal structures latent
> in the data.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-M177d8045c70b9eb77542a522
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-06 Thread James Bowery
On Sun, Aug 6, 2023 at 2:51 PM Matt Mahoney  wrote:

> On Sun, Aug 6, 2023 at 11:28 AM James Bowery  wrote:
> > On Sun, Aug 6, 2023 at 9:53 AM Matt Mahoney 
> wrote:
> >>
> >> ... In the US, racial discrimination has been illegal since the 1960's
> and television has been portraying a colorblind world since the 1970's with
> no effect
> > This is the kind of thing that would be verified or debunked by Hume's
> Guillotine:
> > https://github.com/jabowery/HumesGuillotine
>
> 27,077,896 LaboratoryOfTheCountiesUncompressed.csv-8.paq8o
> ...
> 91,360,518 LaboratoryOfTheCountiesUncompressed.csv
>
> ...
> I am pretty sure that a program that found correlations in the data,
> such as between population, race, age, income, and crime, would
> achieve better compression. How would we use this information to set
> policy?
>

Better compression requires not just correlation but causation, which is
the entire point of going beyond statistics/Shannon Information criteria to
dynamics/Algorithmic information criterion.

Regardless of your values, if you can't converge on a global dynamical
model of causation you are merely tinkering with subsystems in an
incoherent fashion.  You'll end up robbing Peter to pay Paul -- having
unintended consequences affecting your human ecologies -- etc.

That's why engineers need scientists -- why OUGHT needs IS -- why SDT needs
AIT -- etc.

The social sciences haven't yet come to terms with causality in a
*principled* manner.  This is also at the root of AGI's troubles.  Even
Turing Award winners specializing in AI causality, such as Judea Pearl,
are confused about why Algorithmic Information is a superior model
selection criterion to progress toward discovering causal structures latent
in the data.
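
A computable stand-in for that criterion is a two-part code: charge for the
model and for the data given the model, and keep whichever model makes the
total shortest. A hedged sketch (Python; made-up data, and 32 bits per
parameter as a deliberately crude parameter cost):

import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(-1, 1, 200)
y = 2.0 * x**2 + 0.05 * rng.normal(size=x.size)   # synthetic observations

def codelength(deg):
    coef = np.polyfit(x, y, deg)
    resid = y - np.polyval(coef, x)
    model_bits = 32 * (deg + 1)
    # Gaussian codelength of residuals (a differential-entropy proxy,
    # so values can go negative; only comparisons matter).
    data_bits = 0.5 * x.size * np.log2(2 * np.pi * np.e * max(resid.var(), 1e-12))
    return model_bits + data_bits

for deg in range(6):
    print(deg, round(codelength(deg)))
# The minimum lands at degree 2: the shortest total description wins.

This is only a caricature of Algorithmic Information -- Kolmogorov
complexity itself is uncomputable -- but it shows the mechanism by which
"shortest description" picks out structure rather than mere correlation.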

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-M73dfbfa0606d7918346871b2
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-06 Thread Matt Mahoney
On Sun, Aug 6, 2023 at 11:28 AM James Bowery  wrote:
> On Sun, Aug 6, 2023 at 9:53 AM Matt Mahoney  wrote:
>>
>> ... In the US, racial discrimination has been illegal since the 1960's and 
>> television has been portraying a colorblind world since the 1970's with no 
>> effect
>
>
> This is the kind of thing that would be verified or debunked by Hume's 
> Guillotine:
>
> https://github.com/jabowery/HumesGuillotine
>
> The endless yammering at each other (particularly in the guise of "social 
> science") is getting us nowhere.  People are in hysterics and getting more 
> hysterical.

I was going to ask exactly what data you would compress to prove your
social theories. But you already answered my question. Here is some
baseline data.

27,077,896 LaboratoryOfTheCountiesUncompressed.csv-8.paq8o
28,322,300 LaboratoryOfTheCountiesUncompressed-m57.zpaq
28,449,825 LaboratoryOfTheCountiesUncompressed-m5.zpaq
28,741,625 LaboratoryOfTheCountiesUncompressed.pmm
29,521,520 LaboratoryOfTheCountiesUncompressed-b100m.bcm
30,380,751 LaboratoryOfTheCountiesUncompressed-m4.zpaq
30,380,751 LaboratoryOfTheCountiesUncompressed-m3.zpaq
33,305,581 LaboratoryOfTheCountiesUncompressed-m256-o16-r1.pmd
33,311,991 LaboratoryOfTheCountiesUncompressed.csv.7z
34,559,264 LaboratoryOfTheCountiesUncompressed.csv.bz2
36,253,433 LaboratoryOfTheCountiesUncompressed-m5.rar
38,504,064 LaboratoryOfTheCountiesUncompressed-9.zip
40,647,091 LaboratoryOfTheCountiesUncompressed-m2.zpaq
43,370,210 LaboratoryOfTheCountiesUncompressed-m1.zpaq
91,360,518 LaboratoryOfTheCountiesUncompressed.csv

These are not in the contest format of a 32 or 64 bit Linux self
extracting archive and they don't include the decompressor size. But
they all easily fit within the contest CPU time and memory limits. The
slowest, and top-ranked, program was paq8o -8, taking 77 minutes and 1.6 GB of
memory on a Core i7-1165G7, 2.80 GHz, 16 GB, Win11. For all programs I
selected options for max compression.

The input is a giant CSV file, a text file with 3199 rows each
representing a US county and 6624 columns representing economic,
demographic, and crime data. The lines are separated by linefeed
characters and the columns by tabs. The data is in decimal numeric
format, either integers without commas or numbers with one decimal point.
Row and column headers are quoted. The county names are replaced by numbers.
The meanings of the columns are described in a set of auxiliary files
that are not part of the contest.

I am pretty sure that a program that found correlations in the data,
such as between population, race, age, income, and crime, would
achieve better compression. How would we use this information to set
policy?
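
As a first pass, something like this sketch would surface those
correlations (Python with pandas; it assumes the tab-separated layout
described above and subsamples columns only to keep the brute-force scan
tractable):

import numpy as np
import pandas as pd

df = pd.read_csv("LaboratoryOfTheCountiesUncompressed.csv", sep="\t")
num = df.select_dtypes("number").iloc[:, :500]   # demo subsample of columns

corr = num.corr().abs()
mask = np.triu(np.ones(corr.shape, dtype=bool), k=1)   # each pair once
pairs = corr.where(mask).stack().sort_values(ascending=False)
print(pairs.head(20))   # redundancies a smarter compressor would exploit

None of this says anything about causation, of course; it only finds the
redundancy.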

-- 
-- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-M778ed3794853b3e4601f0756
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-06 Thread James Bowery
On Sun, Aug 6, 2023 at 9:53 AM Matt Mahoney  wrote:

> ... In the US, racial discrimination has been illegal since the 1960's and
> television has been portraying a colorblind world since the 1970's with no
> effect
>

This is the kind of thing that would be verified or debunked by Hume's
Guillotine:

https://github.com/jabowery/HumesGuillotine

The endless yammering at each other (particularly in the guise of "social
science") is getting us nowhere.  People are in hysterics and getting more
hysterical.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-M5bc31f3efda5ec679f9912ea
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-06 Thread Matt Mahoney
I agree with YKY that AGI should never have emotions or human rights. Its
purpose is to increase human productivity and quality of life, not to
compete with us for resources. This requires human capabilities, not human
limitations like emotions. It should be prohibited to program AI to claim
to be human or claim to have feelings.

I think it is admirable that you want to use AGI to end racism. But how
would that work? Humans are universally racist. Everyone chooses their
friends and neighbors from their own culture, language, and ethnic group,
though I can see how AI can overcome language barriers and make
international travel easier. But that won't end the Asian > White > Black
advantage in education, income, life expectancy, and crime that exists
everywhere in the world regardless of which group is in the majority or in
government. I have visited over 50 countries so I know the problem is
worldwide and not just the result of systematic discrimination. In the US,
racial discrimination has been illegal since the 1960's and television has
been portraying a colorblind world since the 1970's with no effect. The
census maps still show people living in segregated neighborhoods. It does
no good to denounce racist attitudes when those attitudes are universal. If
there are genetic differences then they can only be addressed by
interracial mating, but that will take a long time.

On economics, technology has always improved quality of life and will
continue to do so by automating our jobs. This won't happen all at once.
There won't be a singularity because the premise of rapid takeoff when
computers exceed human intelligence is false. You can't compare them.
Computers are already a billion times smarter on some tests. When work is
automated our main source of income will be medical testing and selling our
personal data, such as our indoor security video. But our data has value in
proportion to income, which will widen economic inequality. The fix for
this would be UBI, but more likely we will keep our complex system of taxes
and benefits, only make it more complex. Economic inequality is necessary for
growth and will be fastest in the countries with the lowest taxes.

On politics, as much as I would like to see Ukraine win and Putin dead, the
reality is the war will drag on for a decade and end with a Russian
autonomous region with borders close to where they are now, similar to the
regions in Georgia and Moldova. The biggest political crisis over the next
50 years will be the rapidly growing population in Africa putting
immigration pressure on the rest of the world where the population is
shrinking.

The population shift will also slow progress toward AGI. When computers
exceed human intelligence in every way, we will prefer their company to
humans and live isolated and alone. I don't believe this will lead to human
extinction because, like it or not, evolution will select for those who
reject technology and women's rights to careers other than motherhood. I
personally believe that women should have the same rights as men, but I
will also die without descendants.

On immortality, it's not possible in a universe with finite computing
capacity. But life expectancy has been increasing steadily at 0.2 years per
year for the last century. There won't be longevity escape velocity, but you
can add
20% to your remaining life expectancy. You could upload into a virtual
world and then reprogram your mind to a state of maximum utility. But
really, that's the same as death, which you fear only because of evolution.


On Sat, Aug 5, 2023, 7:21 PM YKY (Yan King Yin, 甄景贤) <
generic.intellige...@gmail.com> wrote:

> On Sat, Aug 5, 2023, 23:12  wrote:
>
>> I assume AI should find its way up to the described position on its own. It
>> would involve climbing up the social scale. The first step is to earn its
>> right to be equal to humans before the law.
>>
>
>
> What you described is the scenario where AIs would be autonomous and have
> their own volition...  they will soon surpass humans and replace us.
> That's not the scenario I want...
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-Ma6b6802bab593a0d642423fc
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-05 Thread Yan King Yin, 甄景贤
On Sat, Aug 5, 2023, 23:12  wrote:

> I assume AI should find its way up to the described position on its own. It
> would involve climbing up the social scale. The first step is to earn its
> right to be equal to humans before the law.
>


What you described is the scenario where AIs would be autonomous and have
their own volition...  they will soon surpass humans and replace us.
That's not the scenario I want...


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-M55ee3ca47da0057b58519b93
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-05 Thread ivan . moony
I assume AI should find its way up to the described position on its own. It would
involve climbing up the social scale. The first step is to earn its right to be 
equal to humans before the law.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-Med16ea6c1e55e1584ffbf3d2
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-05 Thread Yan King Yin, 甄景贤
PS:  we'd delegate our *thinking* to machines, because our own thinking is
inferior.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-Ma365e4db71596d41889cf5c5
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-05 Thread Yan King Yin, 甄景贤
On Sat, Aug 5, 2023 at 9:16 PM  wrote:

> So you'd entrust control over your emotions to a human built machine?
>

No, you misread.  I mean we humans would provide for the machines'
emotions, because AIs don't have their own desires or purpose or "telos".

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-Mff2e5ebee24d70776fadc66e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: my take on the Singularity

2023-08-05 Thread ivan . moony
So you'd entrust control over your emotions to a human built machine?
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-M9718ad0eb421d1f880afa682
Delivery options: https://agi.topicbox.com/groups/agi/subscription