Re: [agi] Re: Understanding Compression

2020-02-04 Thread John Rose
On Monday, February 03, 2020, at 3:32 PM, Matt Mahoney wrote:
> Less than 8.7 x 10^244 bits. That's the square of the Bekenstein bound of a 
> black hole with a Schwarzschild radius equal to the Hubble radius, 13.8 
> billion light years. Edit distance, expressed as the shortest program that 
> inputs one string and outputs the other, is equal to the size of the output 
> for the vast majority of random string pairs.

I would think it's bigger than that? This would be the set of all the ordered 
combinatorial subsets, permutations without repetition, summing the edit 
distances from every element to every other element. So one particular bit's 
edit distance to the full universe string would be something like 10^124 bits, 
but do this for every bit in the universe, and for every n-bit ordered sequence 
to every other n-bit ordered sequence. It's immense. Why does it matter? It's a 
measurement of the magnitude (and there might be better ones) of the edit 
"separation" of everything. Most of the stuff is physically non-computable, so 
you have to take the max. Effectively, though, it could be compressed way down 
to the KC of the universe, I would guess... well, that is if KC is some sort of 
minimization of edit separation or Levenshtein separation, which it has to be?

Speaking colloquially here, just speculating :)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M2483f5f88d7bbc1ea4729af9
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-02-03 Thread Matt Mahoney
On Mon, Feb 3, 2020, 7:52 AM John Rose  wrote:

> Talking big numbers, what is an expression of the sum of ALL edit
> distances in the universe? Not just Hamming but say Levenshtein distance,
> for sequences of unequal length. The distance from one sequence to ALL
> other sequences, summed for all sequences.
>

Less than 8.7 x 10^244 bits. That's the square of the Bekenstein bound of a
black hole with a Schwarzschild radius equal to the Hubble radius, 13.8
billion light years. Edit distance, expressed as the shortest program that
inputs one string and outputs the other, is equal to the size of the output
for the vast majority of random string pairs.
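For scale, 8.7 x 10^244 is the square of the ~2.95 x 10^122-bit Bekenstein
bound derived in the Jan 29 message further down this thread; a quick Python
check (the constant comes from that message, nothing else is assumed):

bekenstein_bits = 2.95e122            # bits within the Hubble radius (below)
print(f"{bekenstein_bits**2:.1e}")    # 8.7e+244, the figure quoted above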

> Gotta be yuge! Something on the order of 10^(N!) or even bigger. Though it
> could be reduced I suppose, deduplicated.
>

Not really. It's much smaller than 3↑↑↑3, Graham's number, or iterated
Ackermann functions, which are too large to correspond to anything
physically imaginable.

> And then how does that change over time….
>

The Bekenstein bound is 1/ln 16 times surface area in Planck units, so it
grows as the universe expands.

> Kinda gets ya wondering. And does information have mass?

Yes, I answered this on Quora. Reading and writing bits takes energy
equivalent to a mass of kT ln 2/c^2, where k is Boltzmann's constant, T is
the microwave background temperature (3 K), and c is the speed of light.
It's a tiny number, but what you might notice is that the Bekenstein bound
gives a value 10^30 times the mass of the universe. That's because most of
the entropy of the universe is not accessible. You can only encode and read
back 10^90 to 10^92 bits using the universe's 10^80 atoms as a giant storage
device. There's not enough mass/energy to read the rest.
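A quick numeric check of that 10^30 ratio (a sketch in Python; the constants
are standard values, and the 1.5 x 10^53 kg universe mass and the 2.95 x
10^122-bit bound come from the Jan 29 message below):

import math
k, T, c = 1.380649e-23, 3.0, 2.998e8      # Boltzmann const, CMB temp (K), c
bit_mass = k * T * math.log(2) / c**2     # mass-equivalent of one bit
print(f"{bit_mass:.1e}")                  # ~3.2e-40 kg per bit
ratio = bit_mass * 2.95e122 / 1.5e53      # Bekenstein bits vs universe mass
print(f"{ratio:.0e}")                     # ~6e+29, i.e. roughly 10^30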

> Is dark matter actually information?

No, it has all the characteristics of ordinary matter in objects smaller
than stars, like planets or comets, but free floating and not orbiting
stars. That's what makes it dark. We don't know for sure, but to me it
seems the most plausible explanation.

Not to be confused with dark energy, which is what ordinary gravity looks
like near a black hole.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M53bb40814f3b38e950fe53fc
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-02-03 Thread John Rose
Talking big numbers, what is an expression of the sum of ALL edit distances in 
the universe? Not just Hamming but say Levenshtein distance, for sequences of 
unequal length. The distance from one sequence to ALL other sequences, summed 
for all sequences.

Gotta be yuge! Something on the order of 10^(N!) or even bigger. Though it 
could be reduced I suppose, deduplicated.

And then how does that change over time…. Kinda gets ya wondering. And does 
information have mass? Is dark matter actually information?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M2d3bd95dea3ee96708ad8ce3
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-29 Thread rouncer81
Look at how dangerous viruses are, and it will show how dangerous a nanobot 
is. Why do they do X? Look in your microscope.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Ma5e0c0dd02c7bf20eaba8c77
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-29 Thread immortal . discoveries
So you know, we KNOW nanobots and metal men are coming; they will be in the lab 
with tentacles and 50 eyes on their heads, etc. But what's in the skull? What's 
in his hand exactly? What's in THAT? And what's between these views? I.e., what 
is in the hallway near the lab? Where do the nanobots return to? Why do they do 
X? So we do entailment/transform (many to one, one to many) to 
summarize/elaborate and move around the future.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M788649e6545a0e297fa9fa0e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-29 Thread rouncer81
zoom in,  in what way?
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M5115799d3cc91f0fdee0b24a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-29 Thread immortal . discoveries
On Wednesday, January 29, 2020, at 4:32 PM, Matt Mahoney wrote:
> We aren't smart enough to look ahead more than one advance in intelligence, 
> or else we could just skip to that step.
Hehe. "me thinks replicating nanobots will take over one day, but can't make 
them"

We are getting smarter on a weekly basis now. We can approximately see the 
future; we are sorta there, although there will still be a big jump from 'us'. 
We can see the future but not the details of it. That's interesting. We simply 
need to zoom in.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Md3c219df2e3ba83c7d20f66a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-29 Thread Matt Mahoney
I think Vernor Vinge meant a singularity in the mathematical sense. At
least that was my interpretation of his paper. If each doubling or n-fold
increase of progress takes half the time, then that's exactly what you get.
We can't say it won't happen because a singularity is an event horizon on
our view of the future. We aren't smart enough to look ahead more than one
advance in intelligence, or else we could just skip to that step. Our
understanding of physics could be wrong. Historically it always has been.

But I don't believe infinite progress will happen, and neither do a lot of
people. So we co-opt the word singularity to mean something weaker. We did
the same with AI, which is why we need a new term (AGI) to mean what AI
originally meant.

On Wed, Jan 29, 2020, 2:42 PM WriterOfMinds 
wrote:

> "In either case, the numbers are finite, so there will be no singularity."
>
> Does the average person (or indeed any person) who uses the term
> "singularity" genuinely expect that any physical quantity will go to
> infinity?  That was not my impression.  I take "technological singularity"
> as a metaphor that means a dramatic leap in capacity, beyond which life as
> we now know it will be obsolete.  Arguing against the singularity because
> it can't literally be a mathematical singularity seems like a straw man.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M463184d0a078d736804f9a73
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-29 Thread immortal . discoveries
Well, with my example above, the volume does keep growing non-linearly lol. So 
in a sense, yes, the singularity is sorta real.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Mb12e4e9e3b86b64b5ed26689
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-29 Thread rouncer81
https://www.youtube.com/watch?v=c3I2zeoUbzg <-look free nrg.  =)  infinito.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M395ee3feb8a165376643928f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-29 Thread WriterOfMinds
"In either case, the numbers are finite, so there will be no singularity."

Does the average person (or indeed any person) who uses the term "singularity" 
genuinely expect that any physical quantity will go to infinity?  That was not 
my impression.  I take "technological singularity" as a metaphor that means a 
dramatic leap in capacity, beyond which life as we now know it will be 
obsolete.  Arguing against the singularity because it can't literally be a 
mathematical singularity seems like a straw man.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M845c3a184c9e942a92ff0d1f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-29 Thread rouncer81
A 1024-qubit computer is already 10^308 states; a small exponential-qubit 
quantum computer would be something like 2^(1 billion), which is even more. I 
don't think putting natural amplitude limits on a quantum computer's power is 
actually what you do... think more of permutations of space; a chess board has 
heaps.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Ma033b1009f696129bd9b77d3
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-29 Thread Matt Mahoney
On Wed, Jan 29, 2020, 1:25 PM  wrote:

> what if a quantum computer isn't a finite amount of qubits, it's actually
> an exponential amount.
>

Lloyd calculated the computing capacity of the universe to be 10^120
quantum operations and 10^90 bits. https://arxiv.org/abs/quant-ph/0110141

A qubit flip in time t requires borrowing h/2t energy, where h is Planck's
constant (6.626 x 10^-34 J-s). If you converted all the 1.5 x 10^53 kg of
mass in the universe to energy (E = mc^2 = 1.3 x 10^70 J) then you get
about 10^120 operations over the age of the universe, 4 x 10^17 s.
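Reproducing that arithmetic (a sketch; it uses the post's h/2t energy per
flip, so the result lands within an order of magnitude of Lloyd's 10^120):

h = 6.626e-34                     # Planck's constant, J-s
E = 1.3e70                        # all mass converted to energy, J
age = 4e17                        # age of the universe, s
print(f"{2 * E / h * age:.1e}")   # ~1.6e+121 qubit flips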

Memory can be encoded in the positions and momentums of the universe's
10^80 particles with resolution h (by Heisenberg's uncertainty principle)
and bounded by available energy. This gives you 10^90 bits.

I get similar numbers using a different calculation. The Bekenstein bound (
https://en.m.wikipedia.org/wiki/Bekenstein_bound )  of the entropy
contained in the Hubble radius (13.8 billion light years) is 1/ln 16 bits
per Planck unit area of the enclosing sphere. This is 2.95 x 10^122 bits.
(Coincidentally this gives you roughly the size of a proton as the volume
of a bit, independent of the properties of any particles). This is the
upper bound for a black hole, which would be a little more than the actual
mass of the universe, so the actual information content is smaller.

But most of these bits are not usable. Reading and writing memory,
including reading the output of a quantum computer, are not time-reversible
operations, and therefore not quantum. By the Landauer principle, these
operations require kT ln 2 free energy each, where k is Boltzmann's constant,
1.38 x 10^-23 J/K, and T is the cosmic background radiation temperature of
the universe, 3 K. This gives you about 10^92 memory bit operations.
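Both figures check out numerically; a sketch (standard constants; 9.461 x
10^15 m per light year):

import math
l_p, R = 1.616e-35, 13.8e9 * 9.461e15              # Planck length; Hubble radius, m
bits = 4 * math.pi * R**2 / l_p**2 / math.log(16)  # 1/ln 16 bits per Planck area
print(f"{bits:.2e}")                               # ~2.96e+122, as above

k, T, E = 1.38e-23, 3.0, 1.3e70                    # Boltzmann const; CMB temp; total energy
print(f"{E / (k * T * math.log(2)):.0e}")          # ~5e+92 Landauer bit operations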

In either case, the numbers are finite, so there will be no singularity.




--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Mfbc2c5ef279e93cb0d6d9a51
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-29 Thread immortal . discoveries
Events are exponential. I think once the group settles and runs out of 
updates/resources they hang/wait until another system finishes its S curve. So 
it's many S curves happening at different times, slowly combining, and faster 
later on.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Me3b0a5660e9a18568fa6d55a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-29 Thread immortal . discoveries
Remember I drew a pic a few months ago showing how Earth radiating replicators 
like a growing sphere means it can double. Earth can touch/eat 6 planets 
around itself, then can touch 2456..170. The larger the volume, the larger the 
volume it can gain.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Mc48529e5da61430d7ab63b9f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-29 Thread rouncer81
What if a quantum computer isn't a finite amount of qubits, but actually an 
exponential amount?
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M4bad2c7418529afac77ac215
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-29 Thread Matt Mahoney
On Wed, Jan 29, 2020, 1:14 AM  wrote:

> If we ignore all this detail, we can see Evolution of Earth has been
> exponential.
>

Evolution is chaotic, not exponential. It has long periods where nothing
happens, punctuated by mass proliferation and mass extinction when a new
species evolves a major survival advantage. Examples include the transition
from RNA to DNA, protein synthesis, photosynthesis, oxidizing metabolism,
multicellular organisms (the Cambrian explosion) with muscles, brains, and
sensory organs, sexual reproduction, and human language allowing us to work
as groups and develop technology. Each burst starts off with
exponential growth until it exhausts resources and establishes its place as
the new dominant lifeform.

We are in the exponential phase of human proliferation and mass extinction
of other species now. But population growth peaked in 1970 and is slowing
now. The rate of increase of life expectancy peaked at 0.2 years per year
in 1990. Computer clock speeds leveled off at 2-3 GHz in 2010. By 2030 we
will not be able to shrink transistors any more.

There is still room for other species to dominate humans, species that we
create ourselves using nanotechnology instead of DNA. Plants produce only
250 TW (terawatts) of carbohydrates from the 90,000 TW of sunlight
available at the Earth's surface. (Global energy production is 15 TW). We
already have solar panels that are 20-30% efficient. We already know how to
make tiny wheels and electric motors out of metal and plastic. We can make
mechanical computing elements out of molecules that are a billion times
more efficient than transistors and a thousand times more efficient than
neurons. We can build a Dyson sphere to capture all of the sun's 3.84 x
10^26 watts. We can seed planets throughout the galaxy with self
replicating nanotechnology using conventional rockets over millions of
years.

But this is not a singularity. The observable universe has finite computing
capacity, 10^120 quantum operations and 10^90 bits of memory. Progress must
eventually stop.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M5f9eec39edf2b5a42b38d5aa
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-29 Thread rouncer81
 "once the passcode is broken, all the jail inmates can run past the door".    
thats only if its not kept a secret, by the person that works it out.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Me1614d14203428368631
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-28 Thread immortal . discoveries
I get you though, it may take us 60 more years to escape ageing...it's up to 
us, including you.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M72c12ea3615cf8c5021eca29
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-28 Thread immortal . discoveries
Perhaps it'd be better I said "once the passcode is broken, all the jail 
inmates can run past the door".
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M3744fef7827a04db184fdb83
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-28 Thread immortal . discoveries
If we ignore all this detail, we can see Evolution of Earth has been 
exponential.

So not linear, no slowing down. Only in local times.

It's sorta strange to think that as soon as we sort some particles' positions 
in a bedroom (invent AGI), Earth changes so much so fast. Is it hopeful 
wishing? Yet when the first cell appeared it probably multiplied 1, 2, 4, 8, 
16, 32, 64, 128. When a threshold is met, sudden change happens fast. Once 
enough bricks are removed, all the robbers escape the jail. Suddenly.

Once AGIs are made/improved, they will have many capabilities we don't have, 
and will be able to sort/work with a lot of biology data in more ways and 
faster than we can. Like I said, they will all be immortalists, not slackers, 
so they will all work on biology. They'll build replicating nanobots and 
easy-to-build computers and data fetchers, so the system can improve the 
nanobots. It's self-recursive. They'll be able to control the nanobots, unlike 
us; we can't. Basically, because they can modify their program/brain/body and 
knowledge much faster and much more, they have a lot of capabilities to jump 
into new paths and make greater change, at scale.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M37294e81c5e5a6e5a5bd3112
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-28 Thread Matt Mahoney
Logarithmic is the inverse of exponential. So another way to state that
intelligence increases with the log of computing power is that while global
computing power doubled every 1.5 years since 1950  (exponential),
intelligence (measured by GDP) increased almost linearly, about 3% per
year. (And 40% of this growth is due to population growth rather than
technology).
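Numerically (a sketch; only the two growth rates above go in, the scales are
relative):

for year in range(1950, 2021, 10):
    compute = 2 ** ((year - 1950) / 1.5)    # doubling every 1.5 years
    gdp = 1.03 ** (year - 1950)             # ~3% growth per year
    print(year, f"compute x{compute:.1e}", f"GDP x{gdp:.1f}")
# Compute rises ~14 orders of magnitude over 70 years while GDP rises ~8x:
# GDP is roughly linear in time, i.e. roughly the log of computing power.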

Whether this growth speeds up, leading to a singularity, or slows down due
to resource limitations, is a matter of debate that can only be resolved by
waiting for the future to happen. Personally I think it will slow.

On Mon, Jan 27, 2020, 7:10 PM  wrote:

> Isn't an exponential curve just the reverse of a logarithmic curve? Are
> you saying cost is falling and slowing down falling? But evolution is an S
> curve made of S curves... I'm confused about this word 'logarithmic' in
> your context.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Mf94d4bc72ee56fada7d344f1
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-28 Thread rouncer81
it's cause logic is log compressible. =D
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Mb79a3e682dd4fd34de9b7c8a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-27 Thread immortal . discoveries
a logistic curve is just the true form of the exponential curve
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Md462f4f9876ffa81b73a5dc5
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-27 Thread rouncer81
And then, why the hell does logarithmic have "log" at the start and logic has 
it at the start as well!
Why is that?
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M2c20dd9153bcc8e3cfd9c0ed
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-27 Thread immortal . discoveries
Isn't an exponential curve just the reverse of a logarithmic curve? Are you 
saying cost is falling and slowing down falling? But evolution is an S curve 
made of S curves... I'm confused about this word 'logarithmic' in your context.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Mb17b3bbcdf58dd33cf3561fe
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-27 Thread Matt Mahoney
On Mon, Jan 27, 2020, 5:50 PM  wrote:

> Yes intelligence/evolution grows exponentially faster (hence
> exponentially more powerful) the more data, compute, and arms (ex.
> nanobots) you have.
>

No, it grows logarithmically, whether you measure intelligence using
prediction accuracy (compression) or in dollars per hour to approximate
Legg and Hutter's universal intelligence (expected reward over a universal
distribution of environments). While global computing power grows
exponentially by Moore's Law, world GDP grows only linearly. Just like
doubling the processing power or memory of your phone or computer doesn't
double the number of things you can do with it or the amount you can earn
with it.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Me2441281067d152990e0e04a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-27 Thread rouncer81
Making brains massive is not my solution; I'm going to finish my bot with 
under a meg of random access memory. How do I plan on doing that, you wonder.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Maeea30bc84ff776617c7eee1
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-27 Thread immortal . discoveries
On Monday, January 27, 2020, at 5:49 PM, immortal.discoveries wrote:
> I was just thinking this 4 days ago. Perhaps I read it somewhere directly.
Lossless Compression, to be clear here.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M0923aad39e44c7b8b0236d2c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-27 Thread immortal . discoveries
On Monday, January 27, 2020, at 5:02 PM, James Bowery wrote:
> Unfortunately, measures inferior to self-extracting archive size, such as 
> "perplexity" or *worse* are now dominating SOTA publications.
I was just thinking this 4 days ago. Perhaps I read it somewhere directly.

On Monday, January 27, 2020, at 4:03 PM, Matt Mahoney wrote:
> The first main result of my 12 years of testing 1000+ versions of 200 
> compressors is that compression (as a measure of prediction accuracy or 
> intelligence) increases with the log of computing time and the log of memory 
> (and probably the log of code complexity, which I didn't measure). The best 
> way to establish this relationship is to test over as wide a range as 
> possible by removing time and hardware restrictions. The top ranked program 
> (cmix) requires 32 GB of RAM and takes a week, which is about a million times 
> more time and memory than the fastest programs. But it is still a billion 
> times faster and uses 100,000 times less memory than a human brain sized 
> neural network.
> 
Yes, intelligence/evolution grows exponentially faster (hence exponentially 
more powerful) the more data, compute, and arms (ex. nanobots) you have. It has 
better predictions, eats what it regurgitates, and can recursively settle into 
the future faster than using poor-but-fast answers. So if you max out your 
computer's RAM and tolerable wait time with the simplest idea, you get better 
predictions/compressibility. Of course, too-precise predictions can be too slow 
and take too much RAM to even use, or you may need only a quick make-do 
solution for some small problem. So sometimes you can go deeper but don't, and 
sometimes you know the answer, and sometimes you need to do some thinking, 
sometimes years of thinking. But it's better to stay in bounds even when you 
don't have the answer. First the AGI identifies the best question for the cost, 
then recursively takes cost-effective baby steps, evolving better 
answers(questions)/related knowledge in that domain.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Ma8adbfe7b685bc644ebcdb4b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-27 Thread James Bowery
And, even worse, I suggested the entire *change log* of Wikipedia as the
corpus so as to expose latent identities of information sabotage in the
language models.

Google DeepMind can, and should, finance compression prizes with such
expanded corpora, based on the lessons learned with enwik8 and enwik9.

Unfortunately, measures inferior to self-extracting archive size, such as
"perplexity" or *worse* are now dominating SOTA publications.

For example, one recent publication claimed 0.99 bits per character on
enwik8 but when I went looking for the size of their model, here's what I
found:

transformer-xl/tf/models/pretrained_xl/tf_enwik8/model$ ls -alt
total 3251968
drwxrwxrwx 1 jabowery jabowery       4096 Jan 14 12:53 .
drwxrwxrwx 1 jabowery jabowery       4096 Jan 14 12:51 ..
-rwxrwxrwx 1 jabowery jabowery        171 Dec 25  2018 checkpoint
-rwxrwxrwx 1 jabowery jabowery 3326781856 Dec 25  2018 model.ckpt-0.data-0-of-1
-rwxrwxrwx 1 jabowery jabowery      30159 Dec 25  2018 model.ckpt-0.index
-rwxrwxrwx 1 jabowery jabowery    3195458 Dec 25  2018 model.ckpt-0.meta

A more recent paper purports 0.97 bpc and, although its authors do admit
the problematic nature of measuring model complexity, they justify
excluding it on the basis that they used the same "model setup" as the
TransformerXL 0.99 -- purportedly the prior "SOTA".
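To see what ignoring the model hides, a rough MDL-style tally (a sketch; it
charges the whole checkpoint from the listing above against the
10^8-character corpus, the extreme the self-extracting-archive metric exists
to prevent):

chars = 10**8                         # enwik8 size in characters
payload = 0.99 * chars / 8            # claimed 0.99 bpc -> ~12.4 MB of output
model = 3_326_781_856                 # model.ckpt-0.data file size, bytes
print((payload + model) * 8 / chars)  # ~267 bits per character, all-in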

Here's my LinkedIn post on the decay of rigor in language model SOTA
metrics compared to size of self-extracting archive:

The so-called "SOTA" (State Of The Art) in the language modeling world has
wandered so far from the MDL (minimum description length) approximation of
Kolmogorov Complexity as to render papers purporting "SOTA" results highly
suspect.

An example is Table 4 provided by the most recent paper purporting a SOTA
result with the enwik8 corpus.

https://lnkd.in/ejJSNPC

The judging* criterion for the Hutter Prize is the size of a self-extracting
archive of the enwik8 corpus, to standardize on the algorithmic resources
available to the archive. This is essential for MDL commensurability.
Dividing the corpus into training and testing sets is neither necessary nor
desirable under this metric.

Controlling for the same "model setup" is a big step in the right direction
-- as it increases the commensurability with TransformerXL -- particularly
as compared to the other items in Table 4.  Model ablation can produce even
more commensurable measures, but it would be helpful for SOTA comparisons
to be more rigorous in defining the algorithmic resources assumed in their
measurements.

This improved rigor would expose just how important purported improvements,
such as .99 to .97, can be.

*I'm on the Hutter Prize judging committee.


On Mon, Jan 27, 2020 at 3:04 PM Matt Mahoney 
wrote:

>
>
> On Mon, Jan 27, 2020, 12:04 PM  wrote:
>
>> I see the Hutter Prize is a separate contest from Matt's contest/rules:
>> http://mattmahoney.net/dc/textrules.html
>>
>
> Marcus Hutter and I couldn't agree on the details of the contest, which is
> why there are two almost identical contests.
>
> He is offering prize money, so I understand the need for strict hardware
> restrictions (1 MB RAM and 8 hours x 2.2 GHz to extract 100 MB of text) to
> make the contest fair and accessible. But I think this is unrealistic for
> AGI. The human brain takes 20 years to process 1 GB of language, which is
> 10^25 operations on 6 x 10^14 synapses.
>
> The first main result of my 12 years of testing 1000+ versions of 200
> compressors is that compression (as a measure of prediction accuracy or
> intelligence) increases with the log of computing time and the log of
> memory (and probably the log of code complexity, which I didn't measure).
> The best way to establish this relationship is to test over as wide a range
> as possible by removing time and hardware restrictions. The top ranked
> program (cmix) requires 32 GB of RAM and takes a week, which is about a
> million times more time and memory than the fastest programs. But it is
> still a billion times faster and uses 100,000 times less memory than a
> human brain sized neural network.
>
> The other main result is that the most effective text compression
> algorithms are based on neural networks that model human language learning
> (lexical, semantics, and grammar in that order). But the grammatical
> modeling is rudimentary and probably requires a lot more hardware to model
> properly.
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M2534be684467494d8ecbf677
Delivery options: https://agi.topicbox.com/groups/agi/subscription

Re: [agi] Re: Understanding Compression

2020-01-27 Thread Matt Mahoney
On Mon, Jan 27, 2020, 12:04 PM  wrote:

> I see the Hutter Prize is a separate contest from Matt's contest/rules:
> http://mattmahoney.net/dc/textrules.html
>

Marcus Hutter and I couldn't agree on the details of the contest, which is
why there are two almost identical contests.

He is offering prize money, so I understand the need for strict hardware
restrictions (1 MB RAM and 8 hours x 2.2 GHz to extract 100 MB of text) to
make the contest fair and accessible. But I think this is unrealistic for
AGI. The human brain takes 20 years to process 1 GB of language, which is
10^25 operations on 6 x 10^14 synapses.

The first main result of my 12 years of testing 1000+ versions of 200
compressors is that compression (as a measure of prediction accuracy or
intelligence) increases with the log of computing time and the log of
memory (and probably the log of code complexity, which I didn't measure).
The best way to establish this relationship is to test over as wide a range
as possible by removing time and hardware restrictions. The top ranked
program (cmix) requires 32 GB of RAM and takes a week, which is about a
million times more time and memory than the fastest programs. But it is
still a billion times faster and uses 100,000 times less memory than a
human brain sized neural network.

The other main result is that the most effective text compression
algorithms are based on neural networks that model human language learning
(lexical, semantics, and grammar in that order). But the grammatical
modeling is rudimentary and probably requires a lot more hardware to model
properly.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M5e6922e62911859156b660fd
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-27 Thread immortal . discoveries
So basically, in all practicality and utility: for the computer you have 
access to, or that we can share with peers, we want to aim for the best 
compression, aka the quality of speech the AGI talks, while the RAM/speed it 
talks at 'works' on the computer well enough not to annoy you. And you can opt 
for lower RAM working memory so you can max it out with better gain for 
compression. As for speed, if you can make it 10 times faster but twice as 
dumb, or even just a tad dumber, I don't think it'd be much more useful; maybe 
it'd be more useful if it talks faster at lower quality, though an idiot can 
talk all week and it adds up to 0. So quality beats speed, sorta. So what I'd 
personally do is make the time just bearable enough that I get better 
compression/smarts while still being able to finish testing it in time, 
practically, and not take over 10 days. Actually, I'd say ~20 days would be my 
breaking limit on tolerability.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Me7d2d869c4f0cec5dc38aed9
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-27 Thread immortal . discoveries
It'd make sense to me that if one entry can get 100MB down to 20MB using 32GB, 
and another alg can get it to 21MB using 1GB, the second is better per resource 
but 1MB off, and so unless it can be pumped up to 32GB of RAM and get it down 
to 19MB, it's not as smart. It's all about how smart it talks and the 
feasibility of getting the words onto your home monitor in time... If the 
fast/low-RAM guy gets worse compression and can't use 32GB to improve it rawly 
with more data, then it's more useful to see the smarter words pop up on your 
monitor in time. Both are 'fast/feasible' if you don't personally notice the 
drag, and so you end up just focusing on which talks smarter. Time/RAM don't 
matter as much. Smarts can do a lot more for you...
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M09b90d1e7278991423ccf9ec
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-27 Thread immortal . discoveries
I see the Hutter Prize is a separate contest from Matt's contest/rules:
http://mattmahoney.net/dc/textrules.html

Time and working memory have no hard limit, just the compressed result. This 
makes sense because the compression/decompression time for outputting 1 
letter/word is OK on modern computers even if it takes 3 seconds; an AGI can 
still talk to us! And it's OK if it takes 64GB... the AGI can still operate. 
Yes, the lower the better, but on modern computers it's feasible (at the edge 
in my example, at least for home PCs). So bloating those is OK; it lets us get 
closer to AGI! You'll 'feel' it hurt when you bloat the time/memory too far in 
your project, so the limit is the participant. So the only rule needed is the 
compression result, plus the instinctive judgement of "hmm, the entry algorithm 
outputs 1 word per second, just fast enough, and 64GB RAM, bearable enough".

I'll give this more thought and post here, but otherwise, if I don't post, then 
it seems right.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Md30b8640f361d6a998c985aa
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-27 Thread rouncer81
If you want to get the closest distance of two 3D lines in 3D space, you can 
do a 2D line intersect, then interpolate the depth ratio and get the difference 
of the two interpolations.

Or, if you just do a subtract for every point along the line, there's a lot 
more to do, but that's the only command it has.
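For comparison, the usual closed-form route (a sketch, not the poster's
method): for skew lines the distance is |(p2-p1)·(d1×d2)| / |d1×d2|.

import numpy as np

def line_distance(p1, d1, p2, d2):
    """Closest distance between two 3D lines (point + direction each)."""
    n = np.cross(d1, d2)                 # common perpendicular direction
    if np.allclose(n, 0):                # parallel: project the gap off d1
        w = p2 - p1
        return np.linalg.norm(w - (w @ d1) / (d1 @ d1) * d1)
    return abs((p2 - p1) @ n) / np.linalg.norm(n)

# x-axis vs a line through (0,1,1) along y: closest approach is 1.0
print(line_distance(np.array([0., 0., 0.]), np.array([1., 0., 0.]),
                    np.array([0., 1., 1.]), np.array([0., 1., 0.])))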
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M6261f625f5309d8628f59288
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-27 Thread rouncer81
beautiful picture mate.  loved it :)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Mda62a396ce20fb982ede34d1
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-27 Thread immortal . discoveries
like:
https://ibb.co/vZP57vC
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M64a956d38c79104c86878a41
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-26 Thread immortal . discoveries
When I first read about "context mixing" on Matt's pages, I thought it was 
nuts; I mean, I didn't bother looking into the algorithms because I imagined it 
was using thousands of "models", like giants. But it turned out to be small 
unicorns. It's really simple hehe. And it's /the way/ to view AGI.
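The small core is easy to show; a toy sketch of PAQ-style context mixing,
where two model predictions are combined in the logistic domain and the
mixing weights learn online (the two fixed model probabilities, the learning
rate, and the bit stream here are made up for illustration):

import math

def stretch(p): return math.log(p / (1 - p))
def squash(x):  return 1 / (1 + math.exp(-x))

w = [0.0, 0.0]                            # one weight per model
def mix(p_models, bit, lr=0.02):
    st = [stretch(p) for p in p_models]
    p = squash(sum(wi * si for wi, si in zip(w, st)))
    for i, si in enumerate(st):           # online update toward observed bit
        w[i] += lr * (bit - p) * si
    return p

for bit in [1, 1, 0, 1, 1, 1]:            # hypothetical bit history
    print(round(mix([0.9, 0.3], bit), 3))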
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M79e2a0a190ade62c175416a3
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-25 Thread immortal . discoveries
Me:
"Ya BF is a simpler idea and easy to code up but the wait time and memory size 
isn't simple to deal with now is it. So BF is not the simplest way to get the 
target. It ends up being the WORST. However for small narrow tasks, it can be 
best."

Matt:
"There is a 4 way trade-off between compression ratio, speed, memory, and code 
complexity. Evolution is simple, but only because it uses 10^46 DNA copy 
operations on 10^37 bits."

AI and beyond is Evolution, too; it's not 'simple'. There's no 'artificial 
evolution'. It just gets 'more complex', aka faster, because there's more 
context/sorting done better. What happens is that 'today' tech moves 
exponentially faster, aka our survival/re-generation (immortality) is becoming 
exponentially better/longer-living (as a whole - not just a single human or ape 
but the city even more) because we have exponentially more data. The reason for 
faster development these days, or in big cities like Toronto, is that there is 
more context. That's it. More humans, more data/communication 
sharing/exchange. Better predictions and more growth, faster. Big cities add 
new skyscrapers every year; 3rd-world hick towns don't. So just the right 
combination of speed, memory compression learning, working memory size, and 
code complexity is best. It does become a larger system (well, you can't 
delete/create bits/particles, only sort particles differently) / more complex, 
but it actually becomes less complex, and Earth will be a fractal of patterns - 
nanobot modules made of nanobot modules, so everyone knows where/when/what 
everything is, for best resource/speed use. So the sorting is done faster the 
more sorting from chaos we have; less complexity/entropy. Focus also on the 
size of the program, as in how small it compresses AND how big the yield is 
(extracting insights/nanobots). Too-large/impossible atoms or planets burst 
radiation; they are unstable. Humans are agents, and we compress data/extract 
insights and try to stabilize how much we 'lose' against how much we 'gain', 
but it is really just sorting particles around differently as
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Mbd7507c4dac1e5850fab99c5
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-25 Thread Matt Mahoney
There is a 4 way trade-off between compression ratio, speed, memory, and
code complexity. Evolution is simple, but only because it uses 10^46 DNA
copy operations on 10^37 bits.

On Sat, Jan 25, 2020, 1:39 PM  wrote:

> Ya BF is a simpler idea and easy to code up but the wait time and memory
> size isn't simple to deal with now is it. So BF is not the simplest way to
> get the target. It ends up being the WORST. However for small narrow tasks,
> it can be best.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M29f286364f00b23ec11ceaa1
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-25 Thread immortal . discoveries
Ya BF is a simpler idea and easy to code up but the wait time and memory size 
isn't simple to deal with now is it. So BF is not the simplest way to get the 
target. It ends up being the WORST. However for small narrow tasks, it can be 
best.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Mfc8aad30a320c08098e6a5ba
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-25 Thread rouncer81
Brute force algorithms actually have fewer commands in them than non brute 
force ones.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M4e21b4bec625f095160d9307
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-24 Thread immortal . discoveries
For anyone still wondering why we need to compress data if we are making an AI 
brain, let me explain again. Lossless compression lets you compress 100MB to 
14.8MB (this has been done using a Wikipedia dataset), and it decompresses it 
back using a predictor for the next bit/letter/word/phrase to re-generate it. 
There are patterns in the 100MB of data - both dogs and cats eat, drink, sleep, 
etc. - so it can group words and compress better. This lets you re-generate not 
just the full 100MB, but other, related data. So it enables AI to understand 
the data/patterns and generate future discoveries that actually entail its 
questions. Also translation (first it recognizes the context before it predicts 
the next letter, so that same process is used for prediction; translation, ex. 
my cats eat = my dogs ?_?).
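That prediction-equals-compression link is small enough to sketch; here an
adaptive order-1 character model is scored by its ideal code length (an
arithmetic coder would realize this size to within a couple of bits; the
sample text is made up):

import math
from collections import defaultdict

def model_bits(text):
    counts = defaultdict(lambda: defaultdict(int))  # prev char -> next-char counts
    bits, prev = 0.0, ''
    for ch in text:
        total = sum(counts[prev].values())
        p = (counts[prev][ch] + 1) / (total + 256)  # smoothed prediction
        bits += -math.log2(p)                       # better prediction, fewer bits
        counts[prev][ch] += 1
        prev = ch
    return bits

text = "my cats eat, my dogs eat, my cats drink, my dogs drink"
print(model_bits(text), len(text) * 8)  # repeated patterns cost fewer bits than raw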
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Mcbc1fc9f4a03d15e6d80419d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-22 Thread immortal . discoveries
If you have seen a 13MB entry of wiki8 compressed, I assume someone compressed 
it in under a year, no brute force.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Mee8ecda7f62ecc9f1cc2cbbc
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-22 Thread immortal . discoveries
True, the smaller code is the better algorithm, as long as it is not slow 
brute force. Code can become longer if you do heuristics, but it is faster. You 
can create a code only a few pages long that is much faster than brute force 
and almost gives the best compression (for wiki8 and related data). So in 
essence, the speed of the algorithm and the size of memory it uses/ends up at 
is based on the code; we want a small code, just enough longer than brute 
force's to get both lower memory usage and more speed than brute force. So yes, 
the length of the code is the target - simple ideas win - so my wondering about 
the lowest compression and/or fastest speed refers directly to the shorter 
algorithm. I mean, do you want me to ask you instead what is the 
shortest/simplest code you've seen? But it had better compress well and be 
fast, hence I asked you that, and I actually asked which one you have seen in 
your life so far with the best compression on wiki8 regardless of speed.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M59a3d48038c8a03fc8f6464d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-22 Thread James Bowery
A new corpus suggestion:  Google's One Billion Word Benchmark.

The idea would be to get people to stop using the misleading model
selection criterion of perplexity and start to realize the principled
generality of lossless compression.

I'm really surprised and even dismayed at how much of an uphill battle this
has been.  It's like people KNOW that they don't want to know the unbiased
truth when it is being handed to them on a silver platter.  Yes, I know
people don't want to know the truth but what surprised me is the degree to
which they exhibit intent at a high enough level that they must invest
cognitive resources to suppress self-knowledge of their meta-mendacity.

On Wed, Jan 22, 2020 at 12:11 PM Matt Mahoney 
wrote:

>
>
> On Tue, Jan 21, 2020, 12:45 PM  wrote:
>
>> On Tuesday, January 21, 2020, at 2:38 PM, Matt Mahoney wrote:
>>
>> create all possible archives starting with the smallest
>>
>> Brute Force? Makes no sense but you get 1st place for trying!
>>
>
> I get the prize for simplest description, not for size or speed.
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Mbd5b91fe71ec2647e7624a31
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-22 Thread Matt Mahoney
On Tue, Jan 21, 2020, 12:45 PM  wrote:

> On Tuesday, January 21, 2020, at 2:38 PM, Matt Mahoney wrote:
>
> create all possible archives starting with the smallest
>
> Brute Force? Makes no sense but you get 1st place for trying!
>

I get the prize for simplest description, not for size or speed.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M7e16507923773f1395188332
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-22 Thread immortal . discoveries
Matt, is 14.8MB the lowest entry for the 100MB wiki8 dataset? Or have you seen 
in your life, e.g., a 14.4MB entry? It could be really slow, but I want to know 
the lowest size seen so far.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M8b20d655b532c0dac16ef3e2
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-21 Thread immortal . discoveries
Depending on how happy I feel, I will provide cash to contestants. Up to 
1,000 USD, but it varies.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Mf057da72f2bfbf29a8a48755
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-21 Thread immortal . discoveries
On Tuesday, January 21, 2020, at 2:38 PM, Matt Mahoney wrote:
> create all possible archives starting with the smallest
Brute Force? Makes no sense but you get 1st place for trying!
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Mfe9c9f75723f1f1ae6c8646a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-21 Thread Matt Mahoney
On Sat, Jan 18, 2020, 4:32 PM  wrote:

> I almost feel we should open a 2nd Hutter Prize contest that awards cash
> and ranking to those who can explain their algorithm in the least amount of
> words and takes the least amount of time to understand it. AGI deserves it.
> You could check it works by coding up your own.
>

To decompress, read the archive header and execute it, taking the rest of
the archive as input. To compress, create all possible archives starting
with the smallest, and run the n'th one for at most n steps until you find
one that reproduces the input.

I basically described zpaq decompression. It is ranked 9th on enwik9.
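A toy rendering of that scheme (a sketch over a deliberately tiny
two-instruction machine so the shortest-first search terminates quickly; for
a real Turing-complete language you need exactly the
run-the-n'th-candidate-for-n-steps dovetailing described above, since
candidates may not halt):

from itertools import product

def run(prog, max_steps):
    # Toy machine: '+' increments a counter, '.' emits the counter.
    counter, out = 0, []
    for step, op in enumerate(prog):
        if step >= max_steps:
            break
        if op == '+': counter += 1
        else:         out.append(counter)
    return out

def compress(target, max_len=12):
    # Shortest-first enumeration: the first program reproducing the target
    # is the archive; decompression is just run(prog).
    for length in range(1, max_len + 1):
        for prog in product('+.', repeat=length):
            if run(prog, length) == target:
                return ''.join(prog)

print(compress([3]))     # '+++.'
print(compress([2, 2]))  # '++..'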


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Ma573e3301730935412cdfae1
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-18 Thread immortal . discoveries
I almost feel we should open a 2nd Hutter Prize contest that awards cash and 
ranking to those who can explain their algorithm in the least amount of words 
and whose algorithm takes the least amount of time to understand. AGI deserves 
it. You could check it works by coding up your own.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M021b631adc593950eb366b1b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-15 Thread immortal . discoveries
The universe can't create new information from nowhere. So Earth, and the 
universe, is lossless. But more tangibly is the thought that we don't actually 
have a lossy world to regenerate as I said actually. We have a lossless system 
and are simply sorting particles around until equilibrium. Only particle 
positions are distorted. So while we think we shrink wiki8 in bit size ... we 
can only ever actually re-arrange particles, not delete! So AI is a 
compression/insight extraction problem, and is actually a particle sort 
problem. If you sort them the right way it gives survival. More patterns are 
made exponentially near the end. Nanobot megaworld will know where and when 
everything is, no need to look for food, friends, keys, because their fabric is 
all grid patterns, a fractal.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Mef1fc82c31f457208b05d86b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-12 Thread immortal . discoveries
Focus on lossless compression to achieve the chatting/discovery connections 
ability, and hence survival.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M86563b74e77e4675cca41f9f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-12 Thread immortal . discoveries
Tests for AI include the Turing test, survival, making you understand its 
discoveries, and lossless compression.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Md5e7fd11176494f456488a70
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-12 Thread rouncer81
Not if the dog has a robot other people couldn't make, even if they stole his 
manuscripts.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Mc36b45ea0a38d0f5e095d7cb
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-12 Thread immortal . discoveries
Your problem, Ranch, is you don't like Asians. I don't see the difference. We 
all can defend against a miniature dog. We are human-level machines.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M655a1e2e15b8313b35150032
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-12 Thread rouncer81
How could a person making monkeys of their children involve anything 
successful at all?
It's more like a natural disaster.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Mcbfa536f5ef4b92c88b7a068
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-12 Thread immortal . discoveries
That'd be considered evolution, my good friend. Change is evolution too.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Mda3a4a9bc92f915400023804
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-12 Thread rouncer81
more like the future devolves into utter crap,  and nothing successful survives 
except stinky fetched shit.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M0ab591382f5409c38385574b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-11 Thread immortal . discoveries
Breaking News: Alien life has been found! And it wasn't what you expected at 
all! They are everywhere. Check your closet, check your bed, check your food, 
take a look at Mars. Everything is just particles. All of Mars is an alien. 
Different machines/systems churn/evolve/move/react differently. Some do 
'better' (by the reactive definition our conversation radiates of what is 
what). What's it matter if they divide cells or cling on to cold icicles or 
are magnets? Doesn't matter.

Evolution's evaluation is which systems out-survive other, weaker ones/ideas. 
Immortality is the goal/evaluation method. And it may paperclip-effect us, like 
when a sun finds TOO much food/energy.

Similarly, lossless compression is a good evaluation method to prune out weaker 
algorithms and find better ones, but it is just based on the immortality 
evaluation, you see... a good lil trick we humans discovered.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Md4c2a618331f2da8fd59822c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-11 Thread John Rose
I would think there would be libraries of circuits that can be wired together...

It's nice to be able to get a sense of the complexity of an observable. With 
that Wick rotations can be used for mapping between statics and dynamics and 
converting between quantum fields and statistical mechanics.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M8d54efe5c089148df1cd6dde
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-10 Thread rouncer81
They say Grover search doesn't work, but who knows for real...
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M428af76d95632cf701609678
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-10 Thread John Rose
The circuit is actually on Wikipedia. Wonder if it will go into Qiskit:


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M19282ca200bd11951c4a088f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-10 Thread rouncer81
If you put all the logic in hardware for a program, then you get rid of the 
multiple cycles required.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-Mb12d2994d541048ee904bef0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Understanding Compression

2020-01-10 Thread Matt Mahoney
On Fri, Jan 10, 2020, 7:26 AM John Rose  wrote:

> And then is there a quantum compression system that uses a many paths
> simultaneity to seek KC?
>
> ... seems viable to me but not sure.  Matt would know :)
>

I suppose you could use Grover's algorithm to speed up a search for
programs that output the string to be compressed. You could test n programs
in O(sqrt(n)) time. Currently there is no hardware capable of implementing
it and I don't know of any research in this area.
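The advertised gain is the usual quadratic one; a back-of-the-envelope sketch
(the 40-bit program length is a made-up example):

import math
n = 2 ** 40                      # candidate programs of some fixed length
print(f"{n:.1e}")                # ~1.1e+12 classical evaluations
print(f"{math.isqrt(n):.1e}")    # ~1.0e+06 Grover oracle queries, O(sqrt(n))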


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T65747f0622d5047f-M22a5b0a22f1375fb8b652d25
Delivery options: https://agi.topicbox.com/groups/agi/subscription