Re: [agi] The Singularity is not near.

2019-11-19 Thread rouncer81
It depends on whether the Pakistanis can beat us to it, if it happens at all.

So... I guess not.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T94ee05730c7d4074-M87c0179b3d867fa2a27f8135
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] The Singularity is not near.

2019-11-19 Thread James Bowery
I think all this "singularity" nonsense can be traced back to Heinz von Foerster's impish sense of humor -- at least that's how he struck me when I took a 1974 "second order cybernetics" summer class from him, about the time it was obvious the world population was departing from the asymptotic curve.



On Tue, Nov 19, 2019 at 7:21 PM TimTyler  wrote:

> On 2019-11-18 12:45:PM, Matt Mahoney wrote:
> > The premise of the Singularity is that if humans can create smarter
> > than human intelligence (meaning faster or more successful at
> > achieving goals), then so can it, only faster. That will lead to an
> > intelligence explosion because each iteration will be faster. [...]
> > The future may be fantastic and unimaginable. But we already know that
> > physics doesn't allow a singularity.
> 
> Yes. I drew similar conclusions long ago in:
> 
> https://alife.co.uk/essays/the_singularity_is_nonsense/
> 
> --
> __
> |im |yler http://timtyler.org/
> 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T94ee05730c7d4074-Mbc61761ba5ec69906217caf7
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Ecophagy

2019-11-19 Thread rouncer81
I agree with that one, T.T. Engineers don't make nature; they make an abomination of nature. :)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6f18c199e828a5b6-Mecd1ef6a52f71b5006ca1607
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Ecophagy

2019-11-19 Thread immortal . discoveries
What faster way is there to up your compute, data intake, and experimentation/manipulation, and to resist death, than to start the nanobot Earth? Most of Earth will be made of nanoGeneral cells: highly flexible small modules that make up larger modules, and so on. There will be metal beams, nuclear chambers, etc., but you get me...
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6f18c199e828a5b6-M0d4a490f671a38c0a95e501e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Who wants free cash for the support of AGI creation?

2019-11-19 Thread rouncer81
Keghn, you sound almost on the mark, except everything is repetition. Once you take the difference from one time step to the next, the universe is only doing the one spatial task over and over.

I don't know if it's true, but it is a good theory that it could be true, depressing or not.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T26a5f8008aa0b4f8-Ma5f9dadd19f361007b3af0a5
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Who wants free cash for the support of AGI creation?

2019-11-19 Thread immortal . discoveries
It's possible that the beginning was a massive clump of randomly or symmetrically placed particles that big-banged, or that the expanding wall is what spawns particles as it expands... there was no bitstream... but what matters is the laws of particles. There are only a few, and they're in every particle in your room: the metal, the keyboard, etc.

With so few laws, so many particles, and a random starting placement (even if symmetrical), many results come about along the way to the finish. You have laws that define an ending, but along the way you get extras.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T26a5f8008aa0b4f8-M53073b27a112f8c07f948812
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Ecophagy

2019-11-19 Thread TimTyler

On 2019-11-19 22:20:PM, immortal.discover...@gmail.com wrote:
> The human body is made of little cells. Our skin, organs, brain,
> nerves. Bones are really hard though.
>
> Is the most optimal form of Earth going to be a nanobot blob if they
> can't be hard as bone? Yes. They can connect by extending clip-poles
> into each other. They can be part of a huge organism that includes
> 'bones' like metal beams, etc, things that can't be nanobots.

Animals are made out of cells due to historical constraints. Engineers don't have the same set of constraints. They often don't make things out of lots of tiny self-reproducing pieces. Generally, engineered products are made in factories. The factories don't have to be mobile, or look like the things they are making. It's a new way of building things. I wouldn't bet on angels being made of huge numbers of tiny, self-reproducing cell-like structures.

--

__
 |im |yler http://timtyler.org/
 



--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6f18c199e828a5b6-Ma82835ffe1d35a48111bf251
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Who wants free cash for the support of AGI creation?

2019-11-19 Thread immortal . discoveries
You need not compress that big movie; you only need to store the first frame plus the laws of physics. It makes you wonder: what was the starting condition...
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T26a5f8008aa0b4f8-M85ab52e713fb84fd5c6556d2
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Who wants free cash for the support of AGI creation?

2019-11-19 Thread immortal . discoveries
Yes keghn, I said that two weeks back: it's one big movie roll, lossless compression; it emerges from nothing, to here. We came from nothing.

Yes, I said this too:
"Compression of static data is biasing perception with the illusion, cooking the books, lossy data no matter what."
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T26a5f8008aa0b4f8-M0087221eb8a6e1481aadbe50
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Who wants free cash for the support of AGI creation?

2019-11-19 Thread keghnfeem
Just like the failing of chat bots, compressor heads fail to believe you can take the logic to a more detailed level. There is the perceptron that detects the letter or the data.

The universe is one bit stream from the beginning of time, or when an AGI becomes conscious, to the end of time, or when the AGI deactivates. Nothing really repeats, except the amount of change from one state to another.

Compression of static data is biasing perception with the illusion, cooking the books, lossy data no matter what.



--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T26a5f8008aa0b4f8-M2902fb656e88b7083cde14bc
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Who wants free cash for the support of AGI creation?

2019-11-19 Thread immortal . discoveries
On Tuesday, November 19, 2019, at 9:44 PM, Matt Mahoney wrote:
> What results does your compressor have on some benchmarks?
I didn't make one. I was only giving my understanding of the current best by Alex, or of what may likely be done by him if not already.

Since he didn't open-source it, he must think there's a good chance he'll top his record :)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T26a5f8008aa0b4f8-M88ed542c548256809ab9e34a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: Ecophagy

2019-11-19 Thread rouncer81
I never thought of that: that you could make a whole mass of nanobots be one big one, just like us!

Yes...
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6f18c199e828a5b6-Mb96f29e105c2552c248d337f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: Ecophagy

2019-11-19 Thread immortal . discoveries
The human body is made of little cells: our skin, organs, brain, nerves. Bones are really hard though.

Is the most optimal form of Earth going to be a nanobot blob if they can't be as hard as bone? Yes. They can connect by extending clip-poles into each other. They can be part of a huge organism that includes 'bones' like metal beams, etc., things that can't be nanobots.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6f18c199e828a5b6-M645df2afc0f14ff45ca4b81a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Who wants free cash for the support of AGI creation?

2019-11-19 Thread rouncer81
Compression does lead to better survival!!! The whole thing runs better with it, Keghn; it's very important.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T26a5f8008aa0b4f8-M1ee6fe026ac6a19c43a61b6b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Who wants free cash for the support of AGI creation?

2019-11-19 Thread keghnfeem
I cannot be a compressor head. AGI science is about patterns. Repeating patterns and repeating functions are nice, but if the compressing does not lead to better survival, then it gets kicked out.

Like in the continuous activation of a pain code: if the pain value is active for 256 cycles, then the pain code is multiplied by the pain code value 256 times. That is a big ouch. Saved space is not what one wants there.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T26a5f8008aa0b4f8-M24f9e5b2f5bec291d4c8724a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Ecophagy

2019-11-19 Thread TimTyler

On 2019-11-08 19:34:PM, Matt Mahoney wrote:
> self replicating nanotechnology, has the potential to out compete DNA
> based life. This requires great care because once the technology is
> cheap, anyone could produce malicious replicators the same way that
> anyone with a computer can write a virus or worm. Fortunately Freitas
> analyzed the physics of replicators and concluded they could out
> compete bacteria only marginally in size, speed, and energy usage.
> https://foresight.org/nano/Ecophagy.php

The "ecophagy" document assesses the potential for a "gray goo" accident. IMO, it's an unlikely scenario. When engineering overtakes the biosphere, it probably won't be in the form of "gray goo". For example, today's machine photosynthesizing technology is not deployed as nanobots. Instead it has a form more like a plant, using rigid structures to rise above grass and weeds. If what you are concerned about is engineered structures replacing and displacing DNA-based lifeforms, the "ecophagy" document seems of low relevance, because it considers an unlikely range of scenarios. Nor do its comments about the limits on the energetic potential of these nanobots offer much reassurance. Macroscopic robots - not nanobots - are more obvious competition for humans, plants and animals. They can do things like harness nuclear power, and have a range of options and strategies that nanobots do not.


--
__
 |im |yler http://timtyler.org/


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6f18c199e828a5b6-Mfeeb1878e341aea4650734c1
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Who wants free cash for the support of AGI creation?

2019-11-19 Thread Matt Mahoney
What results does your compressor have on some benchmarks?

On Tue, Nov 19, 2019, 4:19 PM  wrote:

> On Tuesday, November 19, 2019, at 11:42 AM, Matt Mahoney wrote:
>
> The best compressors are very complex. They use hundreds or thousands of
> independent context models and adaptively combine their bit predictions and
> encodes the prediction error. The decompressor uses an exact copy of the
> model trained on previous output to reconstruct the original data.
>
>
> THAT Doesn't sound very complex :) You literally just told us it: *combines
> models into 1 model & adaptively predicts next bit.*
>
> Can you add "*details*" to that? :)
>
> *My understanding* is it does Huffman coding to eliminate *totally
> useless* up-scaling ignorant humans subjected it to. Then it combines
> many randomly-initiated web heterarchies like w2v/seq2seq of
> word-part/word/phrase code meanings, and combines many randomly-initiated
> models of the text for entailment purposes that used modern Transformer
> BERT Attention to know next word-part/word/phrase candidates plus frequency
> based on prior words and related meaning words from heterarchy. When it
> predicts the next bit it basically knows what word-parts/words/phrases are
> around (ya, bi-direction BERT tech) it including related meanings and based
> on frequency and scores it will decide the range candidate.
>
> *Why does it work?* Because patterns are in words and word parts and
> phrases, like 7zip recognizes. There's frequency as well. *When the next
> bit or bits* are predicted it knows what candidates there is to place
> next (or to refine one already added) and it *looks at codes around it*
> for context and sees multiple bit codes around it like boy/girl or ism/ing
> and knows the frequency of these codes and of what entails them and pays
> attention to related meanings around it as well to add to the score. Repeat
> recursively bi-directionally.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T26a5f8008aa0b4f8-Md914531f70edaf678ee84724
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: ReLU is a literal switch

2019-11-19 Thread keghnfeem
 Building An AI (Neural Networks | What Is Deep Learning | Deep Learning 
Basics):
https://www.youtube.com/watch?v=PKN_Cc-GyCY

ReLU is co-dependent on other squashing functions, as stated in this video.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T894f73971549b2ee-M61c555f715dfec1f5aefbeb0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: ReLU is a literal switch

2019-11-19 Thread rouncer81
But Sean, make sure that your system doesn't store squared-many more patterns when you're getting the root matches, or I'll knock you off at the outset, via the pigeonhole theorem!!
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T894f73971549b2ee-Mbb8b560c0a42e3f2a4846598
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] The Singularity is not near.

2019-11-19 Thread TimTyler

On 2019-11-18 12:45:PM, Matt Mahoney wrote:
> The premise of the Singularity is that if humans can create smarter
> than human intelligence (meaning faster or more successful at
> achieving goals), then so can it, only faster. That will lead to an
> intelligence explosion because each iteration will be faster. [...]
> The future may be fantastic and unimaginable. But we already know that
> physics doesn't allow a singularity.


Yes. I drew similar conclusions long ago in:

https://alife.co.uk/essays/the_singularity_is_nonsense/

--
__
 |im |yler http://timtyler.org/


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T94ee05730c7d4074-M8d6ed6b7bde03c415fb805d7
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: ReLU is a literal switch

2019-11-19 Thread rouncer81
I.D., this is probably not what he means, but it comes from what I know.

The dot product is x*a + y*b + z*c -> on and on...

x, y, z are the weights on the synapses/connections (and these are just 1 or 0). You can imagine them being 1 when the pixel exists in the picture stored in the neural network, and 0 if it's not.

And a, b, c would be the image coming in (this is where it might differ from what Sean is doing), and it's a match when you get a larger sum. If you had a separate synapse for white and black, it would just be a pick-max of which neuron was the nearest match.

The problem is, you'd have to compute a dot product for every single neuron in the net, and you get a square cost. I don't know what Sean is doing to get rid of the square cost, but that's what he says he's got. A minimal sketch of the brute-force version is below.
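
Here's that brute-force matching as runnable Python (toy sizes; the binary templates and the incoming image are random placeholders, just for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    templates = (rng.random((100, 64)) > 0.5).astype(float)  # 100 neurons, binary 0/1 weights
    image = rng.random(64)                                   # incoming image, 64 pixels

    # one dot product per neuron: this is the square cost mentioned above
    scores = templates @ image
    best = int(np.argmax(scores))    # pick-max: the nearest-matching neuron
    print("best match:", best, "score:", scores[best])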


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T894f73971549b2ee-M810891dc9284ea741a6a078b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: ReLU is a literal switch

2019-11-19 Thread immortal . discoveries
So are you saying ReLU-based nets are optimizable by removing all need for ReLU activation functions?

Are you saying in your other post that we can optimize each layer?

An input has an output in a trained net. If we randomly sample many times, maybe we can build a heterarchy and throw the net in the trash? It would let us see the relationships between various inputs, outputs, and inputs to outputs.

Better optimizations could lower cost, or keep the same cost and speed while being less RAM-hungry.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T894f73971549b2ee-M6a1948e1ff99a817b659930a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: ReLU is a literal switch

2019-11-19 Thread rouncer81
Thanks I.D.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T894f73971549b2ee-M22a373e5ec1e5ec4d66bf06a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: ReLU is a literal switch

2019-11-19 Thread immortal . discoveries
The email is shown if you hover over the name :)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T894f73971549b2ee-Mf8832ee7ce9936b04bb4eca5
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] The Singularity is not near.

2019-11-19 Thread immortal . discoveries
This is a big discussion; we need a forum, a workplace where we can put on our work boots and hats and get real dirty.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T94ee05730c7d4074-M9ea792188b2b126a560d1776
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: ReLU is a literal switch

2019-11-19 Thread rouncer81
Hey Sean, have you got an email address so I can talk to you?

This is a root optimization for similarity matching?
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T894f73971549b2ee-Md243c4cfa98cf7b67785aa5d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] The Singularity is not near.

2019-11-19 Thread immortal . discoveries
I did
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T94ee05730c7d4074-Md36fda268e25ac524251df56
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] The Singularity is not near.

2019-11-19 Thread rouncer81
Hey WOM and ID, that thing Sean just posted could be fricken amazing!!! Do you guys understand how good it is?
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T94ee05730c7d4074-M40781d84d92c448fb81eb541
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] The Singularity is not near.

2019-11-19 Thread WriterOfMinds
There's no edit function because it's a mailing list, not a forum.  Once you 
make your post, it goes to people's e-mail inboxes and you can't call it back.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T94ee05730c7d4074-Mb85d97ad255cc3bba08bb051
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] The Singularity is not near.

2019-11-19 Thread immortal . discoveries
Interesting post sean.

Btw, I wanted the edit function on this forum too, but you know what, we've gotta have one place that stores what happened, so it's actually just lovely. I already adapted. If you see a few extra posts, well, that's PART of the adaptation...
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T94ee05730c7d4074-Mc4a98931409ed8a384f0d36a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] The Singularity is not near.

2019-11-19 Thread rouncer81
Hey Sean, that is an optimization for nearest-neighbour matching??? That's amazing. Where did you learn it - or is it original?
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T94ee05730c7d4074-M1c70d6d6fa531ce79ebddd62
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] The Singularity is not near.

2019-11-19 Thread sean_c4s via AGI
.. current neural networks lack.
Is there no edit option on this forum! Me no like.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T94ee05730c7d4074-Mc61f1a9b88b75b75871cd9be
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] The Singularity is not near.

2019-11-19 Thread sean_c4s via AGI
Current neural networks require n squared numerical operations for a network layer of width n.
You can reduce that to n log(n):
https://discourse.processing.org/t/fixed-filter-bank-neural-networks/13424
That is the same efficiency difference as between bubble sort and quicksort.
There is also the fact that current networks lack memory, except for a limited amount of internal state.
There have been some attempts to link neural networks to external memory, e.g. Neural Turing Machines. They have used very crude attempts at creating soft memory. However, there are very effective associative memory algorithms available:
http://gamespace.eu5.org/associativememory/index.html
Improvements in algorithms are possible and can lead to sudden massive improvements in capability.
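
For concreteness, here is a minimal sketch of one such layer: learned per-element scales feeding a fixed Walsh-Hadamard transform, then ReLU. This is only a toy reading of the linked idea, with toy sizes, not the full scheme:

    import numpy as np

    def fwht(x):
        # fast Walsh-Hadamard transform: n log n adds/subtracts; n must be a power of 2
        x = x.copy()
        h, n = 1, len(x)
        while h < n:
            for i in range(0, n, h * 2):
                for j in range(i, i + h):
                    a, b = x[j], x[j + h]
                    x[j], x[j + h] = a + b, a - b
            h *= 2
        return x / np.sqrt(n)

    def layer(x, scale):
        # the only learned parameters are n per-element scales; the fixed
        # transform mixes every input into every output, then ReLU switches
        return np.maximum(fwht(scale * x), 0.0)

    rng = np.random.default_rng(0)
    print(layer(rng.normal(size=16), rng.normal(size=16)))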
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T94ee05730c7d4074-M4640c35ca8c965ef1584f6a4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] ReLU is a literal switch

2019-11-19 Thread sean_c4s via AGI
ReLU is a literal switch. An electrical switch is n volts in, n volts out when 
on. Zero volts out when off.
The weighted sum (dot product) of a number of weighted sums is still a linear 
system.

For a particular input to a ReLU neural network all the switches are decidedly 
in either the on or off state. A particular linear projection is in effect 
between the input and output.

For a particular input and a particular output neuron there is a particular 
composition of weighted sums that may be condensed down into a single 
equivalent weighted sum.
You can look at that to see what it is looking at in the input or calculate 
some metrics like the angle between the input and the weight vector of the 
equivalent weighted sum.
If the angle is near 90 degrees and the output of the neuron is large, then the length of the weight vector must be large. That makes the output very sensitive to noise in the inputs. If the angle is near zero then there are averaging and central limit theorem effects that provide some error correction.

Since ReLU switches at zero there are no sudden discontinuities in the output of a ReLU neural network for gradual changes in the input. It is a seamless system of switched linear projections.

There are efficient algorithms for calculating certain dot products like the 
FFT or WHT.
There is no reason you cannot incorporate those directly into ReLU neural 
networks since they are fully compatible, all dot products are friends!
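
To make the condensed-weighted-sum point concrete, here is a toy numpy sketch (random weights and toy layer sizes, purely illustrative) that freezes the switch states for one input, condenses the network to a single matrix, and measures the angle for one output neuron:

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(64, 32))
    W2 = rng.normal(size=(64, 64))
    W3 = rng.normal(size=(10, 64))

    def equivalent_matrix(x):
        # for this particular input every ReLU is decidedly on or off;
        # freeze those switch states and multiply out the masked layers
        h1 = W1 @ x
        m1 = (h1 > 0).astype(float)
        h2 = W2 @ (m1 * h1)
        m2 = (h2 > 0).astype(float)
        return W3 @ (m2[:, None] * W2) @ (m1[:, None] * W1)

    x = rng.normal(size=32)
    A = equivalent_matrix(x)   # one equivalent weighted sum per output neuron
    w = A[0]                   # what output neuron 0 is looking at in the input
    cos = (w @ x) / (np.linalg.norm(w) * np.linalg.norm(x))
    print("output 0:", w @ x, "angle:", np.degrees(np.arccos(cos)), "degrees")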

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T894f73971549b2ee-M18bcaa761e20b9fccd9aad74
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] The Singularity is not near.

2019-11-19 Thread immortal . discoveries
Physics can't do infinity, obviously, but the curve does shoot up quite fast; we get that...

However, if one imagines an Earth-sized nanobot blob eating the solar system and so on, it could grow exponentially I think: the bigger it gets, the more it can eat, not linear. However, heat death or a finite universe size/particle count would stop it. (A one-line version of that growth law is sketched below.)
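
That intuition is just the exponential growth law; a minimal LaTeX sketch, with the growth constant k as a placeholder:

    \frac{dm}{dt} = k\,m \quad\Longrightarrow\quad m(t) = m_0\, e^{k t}

Unbounded only on paper: in practice m(t) saturates once the blob has eaten all the reachable mass.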
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T94ee05730c7d4074-M0d817492b1a6b569fb0d9ed5
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Who wants free cash for the support of AGI creation?

2019-11-19 Thread immortal . discoveries
On Tuesday, November 19, 2019, at 11:42 AM, Matt Mahoney wrote:
> The best compressors are very complex. They use hundreds or thousands of 
> independent context models and adaptively combine their bit predictions and 
> encodes the prediction error. The decompressor uses an exact copy of the 
> model trained on previous output to reconstruct the original data.

THAT doesn't sound very complex :) You literally just told us it: *combines models into 1 model & adaptively predicts next bit.*

Can you add "_details_" to that? :)

*My understanding* is it does Huffman coding to eliminate the *totally useless* up-scaling that ignorant humans subjected it to. Then it combines many randomly-initiated web heterarchies like w2v/seq2seq of word-part/word/phrase code meanings, and combines many randomly-initiated models of the text for entailment purposes that use modern Transformer BERT Attention to know the next word-part/word/phrase candidates plus frequency, based on prior words and related-meaning words from the heterarchy. When it predicts the next bit it basically knows what word-parts/words/phrases are around it (ya, bi-directional BERT tech), including related meanings, and based on frequency and scores it will decide the range candidate.

_Why does it work?_ Because patterns are in words and word parts and phrases, like 7zip recognizes. There's frequency as well. *When the next bit or bits* are predicted, it knows what candidates there are to place next (or to refine one already added), and it *looks at codes around it* for context: it sees multiple-bit codes around it like boy/girl or ism/ing, knows the frequency of these codes and of what entails them, and pays attention to related meanings around it as well, to add to the score. Repeat recursively, bi-directionally.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T26a5f8008aa0b4f8-M19d0644b85eb91249ae7e16a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Who wants free cash for the support of AGI creation?

2019-11-19 Thread WriterOfMinds
I get the feeling that the people in this thread who are saying "compression is 
faster" might really be thinking about levels of abstraction ... the idea of 
"compressing" low-level concepts into high-level ones by eliminating detail.  
If you do all your work at a high level of abstraction, then you rarely have to 
take the performance hit associated with "decompressing" the ideas.

Or perhaps transmission speed is at issue.  If you had, for instance, a system 
that was strong on processing power and weak on memory bandwidth, 
compressing/decompressing the data every time you moved it in and out of memory 
could be the fastest thing to do.
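
For instance, a back-of-envelope version of that tradeoff (all the numbers here are invented for illustration):

    raw_bytes = 1e9      # 1 GB to move through memory
    bus = 1e9            # bytes/sec memory bandwidth (the weak link here)
    ratio = 3.0          # assumed compression ratio
    codec = 5e9          # bytes/sec compress+decompress throughput (assumed)

    t_raw = raw_bytes / bus                                # 1.00 s
    t_zip = raw_bytes / codec + (raw_bytes / ratio) / bus  # 0.20 + 0.33 = 0.53 s
    # compression wins whenever codec speed exceeds bus * ratio / (ratio - 1)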
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T26a5f8008aa0b4f8-M51bb8d7e979a7eeedf2c4946
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] The Singularity is not near.

2019-11-19 Thread Matt Mahoney
On Mon, Nov 18, 2019, 2:50 PM  wrote:

>
> What makes you so sure physics doesn't have an exponential singularity
> curve for approx. Earth? Nanobots will solve the data need, the computing
> need, and the biological experiment need, immediately! Full data-recursion.
>

No they don't. Nanobots are 10^8 times more energy efficient than
transistor computing and marginally more efficient than biology according
to Freitas. https://foresight.org/nano/Ecophagy.php

They have the potential to displace DNA based life including humans. This
is a completely new paradigm, accelerating evolution through engineering.
Depending on our attitudes toward uploading, we might deliberately engineer
our own extinction. Or accidentally.

But this is not a Singularity.

A singularity is where a function goes to infinity. This can be either
intelligence (expected reward rate) or its components, knowledge and
computing power. This can't happen because:

1. The universe has finite computing power (10^120 quantum operations and 10^92 non-reversible operations over its lifetime, 10^90 bits of storage). Eventually your upload will succumb to the heat death of the universe. If it's any consolation, an infinite number of copies of you probably exist in other universes, whose existence Occam's razor suggests.

2. Moore's Law will be slowed by speed of light delays in seeding other
planets in going from Kardashev level II to III.

3. Moore's Law will be slowed in the 2020's by our inability to shrink
transistors further. Nanotechnology won't be ready in time to replace it.

4. A Singularity supposes that each doubling of technology takes half the time, i.e. quadruples the rate. The solution of the differential equation f' = f^2 has the form 1/(t0 - t), which has a singularity at t = t0. But in reality, intelligence increases with the log of knowledge and computing power. The solution to f' = f log f grows faster than exponentially but still does not go to infinity in finite time (worked out below).
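
A short check of that last claim, sketched in LaTeX:

    f' = f^2 \implies f(t) = \frac{1}{t_0 - t} \qquad \text{(blows up at } t = t_0\text{)}

    f' = f \log f:\quad g = \log f,\ g' = f'/f = g \implies g(t) = C e^{t} \implies f(t) = e^{C e^{t}}

The second is doubly exponential: enormous, but finite for every finite t.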


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T94ee05730c7d4074-M8f7a7af3c387b8db5f0d0913
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Who wants free cash for the support of AGI creation?

2019-11-19 Thread Matt Mahoney
The best compressors are very complex. They use hundreds or thousands of independent context models, adaptively combine their bit predictions, and encode the prediction error. The decompressor uses an exact copy of the model, trained on previous output, to reconstruct the original data. Most of the computing resources are used in modeling, so decompression takes as long as compression. Actually a little longer, because decoding can't be run in parallel with prediction the way encoding can.
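
A toy sketch of the adaptive mixing step (logistic mixing in the style of PAQ; the two placeholder models and the made-up learning rate stand in for the hundreds of models and more careful updates a real compressor uses):

    import math

    def stretch(p): return math.log(p / (1 - p))
    def squash(x): return 1 / (1 + math.exp(-x))

    def mix(probs, weights):
        # combine the models' bit predictions in the logistic domain
        return squash(sum(w * stretch(p) for w, p in zip(weights, probs)))

    def update(probs, weights, bit, lr=0.02):
        # online step: shift weight toward the models that predicted the bit well;
        # the mixed probability drives an arithmetic coder, so smaller
        # prediction error means fewer coded bits
        p = mix(probs, weights)
        return [w + lr * (bit - p) * stretch(q) for w, q in zip(weights, probs)]

    probs = [0.7, 0.55]         # two toy context models' P(next bit = 1)
    weights = update(probs, [0.0, 0.0], bit=1)
    print(mix(probs, weights))  # slightly above 0.5 after one update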

On Mon, Nov 18, 2019, 2:23 PM  wrote:

> If you were matching the text in groups, it would be quicker than matching
> it at the letter level,  but yes, thats only if its made with speed in mind.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T26a5f8008aa0b4f8-M7f50555314a96387902cf206
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: John Carmack: I’m going to work on artificial general intelligence

2019-11-19 Thread keghnfeem

F**king  poachers.

DeepFovea: Using deep learning for foveated reconstruction in AR/VR:

https://ai.facebook.com/blog/deepfovea-using-deep-learning-for-foveated-reconstruction-in-ar-vr


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tfb60f2518101a2fb-Medb008a932cffcc2dd9e40a0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Standard Model of AGI

2019-11-19 Thread rouncer81
I've got an apology: I had an oversight, and I thought I had something I didn't. Sorry to say, whoops. I hate it when I create a big load of excitement and accidentally send everyone on a fool's errand, yet again. Oh well. At least I can say it now before it's much more too late.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T28a97a3966a63cca-M03955339a8e47e9a991017f0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: The Promise of Artificial Intelligence: Reckoning and Judgment

2019-11-19 Thread rouncer81
Sorry to let the kids down; I think I was being a bit too foolishly confident. I've got a lot more homework to do...
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T7d3b4d5e985f594d-Mefe107ff2bce6ec4cf7e71c7
Delivery options: https://agi.topicbox.com/groups/agi/subscription