Re: [agi] Re: Google - quantum computers are getting close

2019-10-28 Thread Rob Freeman
On Mon, Oct 28, 2019 at 11:11 AM  wrote:

> Do you mean, instead of feeding the net data and learning, to instead
> request new output data/solutions?
>

You could put it like that.

Without seeing an exact formalization it is hard to say.

You give the example of zebra, horse, dog, mouse, cat. You group them
heterarchically based on sets of shared contexts. (Your innovation seems to
be a more efficient representation for the shared contexts?)

That's OK. But perhaps I can distinguish myself by saying what I do is not
limited to groupings. I don't only group words heterarchically based on
sets of shared contexts. I use the shared contexts to chain words in
different ways.

Saying to look at the way these things chain might capture what is
different in what I'm suggesting.

Because the groupings are a heterarchy, the patterns possible when you
chain them expand much faster. Perhaps something like the way the number of
representations possible with qubits expands: because each element can take
multiple values, their combinations can be exponentially more numerous.

Traditionally language has been looked upon as a hierarchy, so we miss this
complexity. That has been the historical failure of linguistics. Deep
learning also looks for hierarchy. There is the potential for heterarchy in
its representations, but as soon as it tries to "learn" structure, that
crystallizes just one of them, and the heterarchy is gone, or at least
reduced. Such crystallization of a single hierarchy gives us the "deep"
part of deep learning, and it is also the failure of deep learning.

So, I guess I'm saying, yeah, heterarchy, but chain as a heterarchy too.
Which means abandoning the singular, "learned", hierarchies of deep
learning.
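
A minimal sketch of the chaining idea, assuming a toy corpus and a
one-word-either-side context window (an illustration of the general
mechanism, not of any particular system):

    # Group words by the contexts they share, then chain by stepping
    # between words whose context sets overlap. The corpus and window
    # size are illustrative assumptions only.
    from collections import defaultdict

    corpus = "the cat sat on the mat the dog sat on the rug".split()

    # contexts[word] = set of (left, right) neighbour pairs it appears in
    contexts = defaultdict(set)
    for i in range(1, len(corpus) - 1):
        contexts[corpus[i]].add((corpus[i - 1], corpus[i + 1]))

    def shared(a, b):
        """Words sharing at least one context are interchangeable there."""
        return contexts[a] & contexts[b]

    def chains(word, depth):
        """Chain outward: step to any word that shares a context."""
        if depth == 0:
            return [[word]]
        out = []
        for other in contexts:
            if other != word and shared(word, other):
                out += [[word] + rest for rest in chains(other, depth - 1)]
        return out

    print(chains("cat", 2))  # chains through 'dog' via the shared context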

-Rob

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta664aad057469d5c-Mfc729479fb75cb9e16f09856
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: putting models in your robots head

2019-10-28 Thread rouncer81
Makes sense. But if you have all the chess rules, then you have the model
complete, and it's not approximate, it's exact.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8fe5317c3cebf70b-M634c2de96d6e1e99a28d2bf4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: Extracting hidden meaning

2019-10-28 Thread rouncer81
Do you mean an automatic summary? If so, that sounds like it could have good
results. Getting the computer to pick out of what's there, instead of
generating things from scratch, sounds more possible to me.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tbd90bf527b3895a1-M2600cb4412a56341e911f054
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Google - quantum computers are getting close

2019-10-28 Thread rouncer81
Where the letters appear along x, y and z: each axis is a different attribute.

It could be anything; how close the letter is to each artificial emotion
might be a good one.

You only need the vector if it means something to the machine.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta664aad057469d5c-M258f3d0a48c5d76e4493d792
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Google - quantum computers are getting close

2019-10-28 Thread rouncer81
And you should find the synonyms appearing at the same location in the xyz
space, for example.
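
A minimal sketch of that, with made-up attribute axes (closeness to a few
artificial emotions) and made-up numbers; synonyms land near the same spot:

    import math

    # (joy, fear, anger) scores -- illustrative values only
    vectors = {
        "happy":  (0.90, 0.10, 0.10),
        "glad":   (0.85, 0.15, 0.10),
        "scared": (0.10, 0.90, 0.20),
        "afraid": (0.12, 0.88, 0.25),
    }

    def distance(a, b):
        return math.dist(vectors[a], vectors[b])

    print(distance("happy", "glad"))    # small: same location, synonyms
    print(distance("happy", "scared"))  # large: different location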
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta664aad057469d5c-M98aa2545d57b5a6ed1c1b109
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: I completely hate today's mainstream AI (Google etc.)

2019-10-28 Thread rouncer81
Sorry to be disagreeable, but I don't mind what's going on. I actually think
they'll get there, because the AI problem isn't as hard as people made it out
to be. I think people just have to attempt it, and it should work out for them.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T10119d5c27aad6be-M2b5e4afb438f0d970857cf5d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: putting models in your robots head

2019-10-28 Thread John Rose
For that matter, if you can prove that the universe can be modeled from a few
bits of cellular automata, then are all models complete and exact?
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8fe5317c3cebf70b-M1ae303931c20f99a1a8de3a2
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: putting models in your robots head

2019-10-28 Thread rouncer81
A cellular automaton is comprised of some number of rules, and these are the
laws of the cellular automaton, which is not this physics; it's artificial.

Whether the model is complete or not, it's up to you to put it in.
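
A minimal sketch of that: an elementary cellular automaton whose entire "law"
is an eight-entry rule table we choose ourselves (Rule 110 here, purely as an
example, nothing to do with real physics):

    RULE = 110  # the artificial "law of physics", picked by us
    rule_table = {(a, b, c): (RULE >> (a * 4 + b * 2 + c)) & 1
                  for a in (0, 1) for b in (0, 1) for c in (0, 1)}

    row = [0] * 31 + [1] + [0] * 31  # single live cell in the middle
    for _ in range(16):
        print("".join("#" if x else "." for x in row))
        row = [rule_table[(row[i - 1], row[i], row[(i + 1) % len(row)])]
               for i in range(len(row))]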
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8fe5317c3cebf70b-Me841f05a21efe3560c2cd68c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Consistently Lower Error On Test Data Than Train Data?

2019-10-28 Thread rouncer81
That's a weird one; you'd think that could never happen. Sorry, I don't know
the answer.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tbd9e6404771d6b20-Mf8316832ef8e5ad229a85fc9
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Consistently Lower Error On Test Data Than Train Data?

2019-10-28 Thread rouncer81
Hang on... maybe the network is too small for the training partition? That
might cause it.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tbd9e6404771d6b20-M123d93f035167b4aa3c98eb1
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: putting models in your robots head

2019-10-28 Thread John Rose
Hand-picked models are fine, but for general intelligence you need models that
can model multiple things. Your chess modeler might model a subset of all
games, for example chess, checkers, bridge, etc. Or that modeler might produce
model instances, one of which would be chess. Or, say, a hierarchy of modelers,
or just one general modeler :)
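
A minimal sketch of the "modeler that produces model instances" idea, with
hypothetical class names, not anyone's actual architecture:

    from dataclasses import dataclass, field

    @dataclass
    class Model:
        name: str
        rules: dict = field(default_factory=dict)  # the hand-packed game rules

    class BoardGameModeler:
        """One modeler that can emit model instances for many board games."""
        def __init__(self):
            self.known = {
                "chess":    {"board": "8x8", "pieces": 32},
                "checkers": {"board": "8x8", "pieces": 24},
            }
        def produce(self, game: str) -> Model:
            return Model(game, self.known[game])

    print(BoardGameModeler().produce("chess"))

A "hierarchy of modelers" would then just be a modeler whose instances are
themselves modelers; one general modeler collapses that hierarchy to a root.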
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8fe5317c3cebf70b-Mf176eee797b990ec323450fc
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: putting models in your robots head

2019-10-28 Thread rouncer81
If it were AGI it wouldn't just be a model, it would be a model that builds
models.

But I'm not interested in that. I think if you provide the models for the
robot's activity inside a specific environment, then you can get really far
already, by providing the robot with what it knows instead of it working things
out itself.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8fe5317c3cebf70b-M95df11b137780988f0fe8a20
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: putting models in your robots head

2019-10-28 Thread John Rose
Yes, I agree with that also, and doing that you still need some basic model
selection criterion, though it could be very simple.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8fe5317c3cebf70b-Mb3b21ffa87621e5c37df8f59
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: putting models in your robots head

2019-10-28 Thread rouncer81
Yes, you need to detect the states off its eye.

If it detects a door in its way, it follows the door-opening protocol; if it
meets stairs, it follows the stairs protocol; if it is playing soccer, it
follows the soccer protocol; if it's getting in and out of its store truck,
etcetera.
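
A minimal sketch of that state-to-protocol dispatch, with hypothetical
protocol functions standing in for the hand-written routines:

    def open_door():    print("running door-opening protocol")
    def climb_stairs(): print("running stairs protocol")
    def play_soccer():  print("running soccer protocol")

    PROTOCOLS = {
        "door":   open_door,
        "stairs": climb_stairs,
        "soccer": play_soccer,
    }

    def step(detected_state):
        # detected_state comes from whatever the eye/vision system reports
        PROTOCOLS[detected_state]()

    step("door")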
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8fe5317c3cebf70b-M02cf65d0b1cceb346862646e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: putting models in your robots head

2019-10-28 Thread John Rose
Accomplishing that goal is much less daunting, yet it can lead to
generalization and provides valuable reward feedback to the engineers :)

Pre-pack, say, 50 models, then SOM-map from eye to model as the selector?
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8fe5317c3cebf70b-M4d2f69be8d00962e0f199f41
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: putting models in your robots head

2019-10-28 Thread rouncer81
Yes, you get me. SOM could work; I'd definitely suggest unsupervised learning
for its recognition, but the actual formulas and protocols are fully supervised.
50 states may accomplish quite a bit, but I'm only guessing yet; coding the
modules is like high-level English symbolic commandments crossed with the
pedantry of low-level specificity.

If an eye detection stuffs up, then the whole thing will backfire on you! =)

I've got a personal formulation of how to structure it, but any
structure/framework under the sun could work. It's not so important; just do it
how you prefer.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8fe5317c3cebf70b-M10e16344300c44c9ce0fb5eb
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: putting models in your robots head

2019-10-28 Thread John Rose
50, 100, whatever. Build and retain unrecognized model graphs in memory... the
robot encounters a revolving door, the model isn't pre-packed, so the SOM graph
is stored for future encounters. A robot stuck in the revolving door signals for
assistance and avoids that door on the next encounter. Just a fancy FSM.

It helps to architect within the context of a broader theoretical framework,
with this proven as an instance case.
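
A minimal sketch of that fallback behaviour, with hypothetical names; the
stored "SOM graph" is reduced here to a plain feature vector:

    import math

    prepacked = {          # pre-packed models keyed by a prototype feature vector
        (1.0, 0.0): "door protocol",
        (0.0, 1.0): "stairs protocol",
    }
    memory = {}            # unrecognized encounters retained for next time

    def nearest(features, table, threshold=0.5):
        best = min(table, key=lambda p: math.dist(p, features), default=None)
        if best is not None and math.dist(best, features) <= threshold:
            return table[best]
        return None

    def act(features):
        model = nearest(features, prepacked) or nearest(features, memory)
        if model is None:
            memory[features] = "avoid; learned from assist"  # store for later
            return "signal for assist"
        return model

    print(act((0.6, 0.6)))  # unrecognized: signals for assist, gets stored
    print(act((0.6, 0.6)))  # next encounter: uses what was retained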
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8fe5317c3cebf70b-M3ebcd6575c666436aa5f856f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: putting models in your robots head

2019-10-28 Thread rouncer81
Definitely, you could combine it with some self-learning apparatus that you
could wait to develop over time, instead of giving it everything. But that's
all I plan on doing, putting the lot in; I think it's more trustworthy if I
don't let it work things out, more secure.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8fe5317c3cebf70b-Mf0aae9325ec495fe248c3baf
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: putting models in your robots head

2019-10-28 Thread doddy
take a look at this.
https://gigazine.net/gsc_news/en/20181206-nvidia-ai-rendered-virtual-world


On Mon, Oct 28, 2019 at 7:14 AM  wrote:

> Definitely, that u could combine it with some self learning aparatus that
> you could wait to develop over time,  instead of giving it everything.  But
> thats all I plan on doing, putting the lot in, I think its more trustworthy
> if I dont let it work things out,  more secure.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8fe5317c3cebf70b-M67cda26ab2bb1482595787ea
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: putting models in your robots head

2019-10-28 Thread rouncer81
That would be way cool if you ran your bot inside that. If you have a fake
virtual environment, the computer has full power over it; the reason search
can't do that in real life is that there are hidden variables.

And... of course... it takes too much computational resources. :)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8fe5317c3cebf70b-Mfdae2193fcb444252a3416b8
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: I completely hate today's mainstream AI (Google etc.)

2019-10-28 Thread doddy
Did anyone study Google PlaNet?

On Mon, Oct 28, 2019 at 4:28 AM  wrote:

> Sorry to be disagreeable,  but I dont mind whats going on,  I actually
> think theyll get there, because the a.i. problem isnt as hard as ppl made
> it out to be, I think ppl just have to attempt it and I think it should
> work out for them.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T10119d5c27aad6be-M8f920cca5ff39217645f2bd1
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: putting models in your robots head

2019-10-28 Thread doddy
I think you mean the agent representing the robot.

On Mon, Oct 28, 2019 at 8:10 AM  wrote:

> that would be way cool if u ran your bot inside that.if you have a
> fake virtual environment,   the computer has full power over it,  why
> search cant do that in real life, is because theres hidden variables.
>
> And... of course...  it takes too much computational resources. :)

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8fe5317c3cebf70b-Ma46e61883041f98764163f37
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: putting models in your robots head

2019-10-28 Thread rouncer81
Yes, and is the agent in the world, or is the world in the agent? :)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8fe5317c3cebf70b-M59997f7b59986e8d1ec2c3ee
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: putting models in your robots head

2019-10-28 Thread rouncer81
Oops, I don't know how that happened, sorry...
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8fe5317c3cebf70b-M09d9ec0dbb141fbb1821f81a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Google - quantum computers are getting close

2019-10-28 Thread immortal . discoveries
My pic above leaves out the learning part, though. GloVe takes very long to
train; it can go on for months, and it needs way more than 40,000 words, more
than a human brain knows! Seems wasteful. Anyway, the new idea above, on the
right side of the pic, would use a similar learning method, but it wouldn't
chop the data into a 500-dimensional space. As for online learning, you tally
up the appearances, basically, which gives each link a score, so you can always
update it later, since you don't compute any dimensional plotting.
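
A minimal sketch of that online tally, with a toy one-word window (an
illustration of the idea, not anyone's actual code):

    from collections import defaultdict

    link_score = defaultdict(int)   # (word, neighbour) -> tally

    def observe(text):
        words = text.split()
        for a, b in zip(words, words[1:]):
            link_score[(a, b)] += 1  # just bump the tally; fully incremental

    observe("the cat sat on the mat")
    observe("the cat sat on the rug")  # later text only adds to the scores
    print(link_score[("cat", "sat")])  # 2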
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta664aad057469d5c-M4ee18d81693c094715d83564
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Extracting hidden meaning

2019-10-28 Thread Steve Richfield
Yes. I extracted "extraneous" words and phrases used to turn dry legal text
into a "living" document, rearranged them, added as few new words as possible
to create grammatically correct sentences, and deleted material not relating
to the issues at hand. This yielded a statement of intent which, under
conditions 15 years after the words were written, was OPPOSITE to the legal
actions being specified. This breathed life into my client's cause.

This alternate information channel only uses maybe 5% of the words.

I suspect automated summary would discard the very words I am looking for.

Maybe this is like real language translation vs. automated translation.
Real translations of the Koran are a good example, where they produce two
streams:
1. The simple "closest" translation, and
2. Footnotes explaining the differences in meaning between the original
text and the simple closest translation.

Maybe I am really looking for the footnotes from automatic summary?!

I don't know about the internals of automatic summary, so maybe you could
evaluate my thoughts here.
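
For reference, a minimal sketch of the crudest kind of extractive summarizer
(frequency-based sentence scoring); it illustrates the suspicion above, since
the sentence carrying the rare "extraneous" wording is exactly what it drops:

    from collections import Counter
    import re

    def summarize(text, keep=1):
        sentences = re.split(r"(?<=[.!?])\s+", text.strip())
        freq = Counter(w.lower() for w in re.findall(r"[A-Za-z]+", text))
        def score(s):
            return sum(freq[w.lower()] for w in re.findall(r"[A-Za-z]+", s))
        return sorted(sentences, key=score, reverse=True)[:keep]

    text = ("The parties shall deliver the goods. "
            "The parties shall inspect the goods. "
            "Nothing herein, however, shall be construed to bind successors "
            "in perpetuity.")
    print(summarize(text))  # keeps a common-worded sentence, drops the rare one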

Steve
On Mon, Oct 28, 2019, 1:13 AM  wrote:

> do u mean an automatic summary?  If so, that sounds like it could have
> good results.   Getting the computer to pick out of whats there, instead of
> generating things from scratch sounds more possible to me.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tbd90bf527b3895a1-M4c4ebb1e9f159293ff35ca57
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Extracting hidden meaning

2019-10-28 Thread immortal . discoveries
Show us the original and new documents, or at least the modified area if 
private.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tbd90bf527b3895a1-M1a96672fb025fad7edd950ab
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] COMPUTE THIS!!!

2019-10-28 Thread Stefan Reich via AGI
Uh... what?

On Sun, 27 Oct 2019 at 14:42, John Rose  wrote:

> Magic is not absolute, it's local.


-- 
Stefan Reich
BotCompany.de // Java-based operating systems

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T4f01e8a4b34d0e2a-Med58717b69eaca9ac5ae206e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] COMPUTE THIS!!!

2019-10-28 Thread John Rose
On Monday, October 28, 2019, at 3:14 PM, Stefan Reich wrote:
> Uh... what?

Knowledge of science and technology doesn't progress evenly across society. 
There are local minima and maxima. Some people can be deceived with scientific 
"magic" or tricks of deception unknown to the observer.

Like the Aztec leaders predicting eclipses and claiming to have magic powers...
or making water boil, steam, and freeze at the same time (triple point), or
projecting directed sound across distances to whisper in someone's ear... or
using artificial consciousness to perform more efficient compression :)

Local or glocal?  Hmm…
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T4f01e8a4b34d0e2a-Ma3ab3c1f97f0c08898c5ec80
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] COMPUTE THIS!!!

2019-10-28 Thread immortal . discoveries
Cavemen would think the iPhone is *magic*.
They'd be totally *confused* over how it works.

Are we magic? Are you confused still? We are just machines.

If we explain AGI in clear English, it will no longer be confusing or magical, 
and will start to seem boring in fact. Boring is the new interesting.

Boring.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T4f01e8a4b34d0e2a-Mc8df0e621538a8f908599bb9
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] COMPUTE THIS!!!

2019-10-28 Thread John Rose
On Monday, October 28, 2019, at 5:29 PM, immortal.discoveries wrote:
> Cavemen would think the iphone is *magic*.
> They'd be totally *confused *over how it works.

I don't know; they would be smarter than monkeys, and monkeys use it with no problem:
https://www.youtube.com/watch?v=K3T-uvSHfdo
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T4f01e8a4b34d0e2a-Mf2a48ab04792dc5e65fc9ef1
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Missing Data

2019-10-28 Thread immortal . discoveries
The universe doesn't need to begin as a long movie with all frames already
made. It just has to let physics roll out on its own. We can test this in a
computer simulation. The universe generates the "missing data". The universe
starts off lossy, but by the end it's lossless and has ALL data in a file. Or
a brain. So while it seems like a lossy file cannot become lossless, it can.
Lossy kicks lossless in the butt. And we can simulate this on a future
computer. Self-regeneration works as the best compressor (physics is).
Re-use, recognition, patterns, and superposition are everywhere in the
universe; things pull together and fill in missing data, e.g. dead employees.
"Understanding"/"recognizing" allows filling in missing data using other
related data.

Lossless compression is ALREADY lossy compression, because after compression
the data is missing, until you decompress it. As for "lossy compression", it
may result in words missing; however, a human can add them back, so even lossy
compression gives the data back. See, same result. It auto-regenerates.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T36c83eb0aa31fc55-M25a4ef275f03bf99074adc64
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: putting models in your robots head

2019-10-28 Thread Alan Grimes via AGI
doddy wrote:
> take a look at this.
> https://gigazine.net/gsc_news/en/20181206-nvidia-ai-rendered-virtual-world

This is getting ridiculous!

We are so absurdly close to AGI it's insane! I mean even I could
probably hack that into general intelligence and it wouldn't take me
very long either...

I'm going to need about $700k...



-- 
Clowns feed off of funny money;
Funny money comes from the FED
so NO FED -> NO CLOWNS!!! 

Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8fe5317c3cebf70b-Mb7139315be843aaf99a1241a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Extracting hidden meaning

2019-10-28 Thread Steve Richfield
Immortal,

Hmmm - I will have to anonymize and post input and manual output.

Stay tuned.

Steve
On Mon, Oct 28, 2019, 10:13 AM  wrote:

> Show us the original and new documents, or at least the modified area if
> private.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tbd90bf527b3895a1-M0f1b19606f0c1c38f62175ec
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Missing Data

2019-10-28 Thread James Bowery
I occasionally wonder if this forum is a testbed for AGI text generators
under development in various labs.

On Thu, Oct 24, 2019 at 7:15 PM James Bowery  wrote:

> In Solomonoff Induction the smallest program outputting observations up to
> the present point in time will continue to output _future_ observations as
> optimal predictions.
>
> These future observations may be thought of as missing data for which
> optimal imputations are the output.
>
> In ordinary statistical imputation, there is a table of potential
> observations, not all of which have been made in all attributes (columns)
> of all cases (rows).  These are represented by a special symbol indicating
> no data (such as "n/a") in the corresponding cells.  In the case of a pure
> time series imputation, such as Solomonoff induction, the entire future is
> present as something equivalent to "missing data".*
>
> In the case of supplanting statistics used by the social sciences with
> Solomonoff induction, rather than a linear tape going off into the future,
> one has a spatial dimension representing simultaneity.  In this event model
> selection based on lossless compression must not disqualify a model
> (self-extracting archive of minimal length) for failing to reproduce the
> symbols representing missing data and, instead, producing imputations.
>
>
> *An aside is that an intelligent observer is inducing a model of not just
> any future old future observations but a particular observer's decision
> process about what it will next observe, which means, interestingly, an
> imputed value function for sequential decision theory.  This means there is
> no way to escape from AIXI, even if one tries to strip off the sequential
> decision mechanism.
>
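
A minimal sketch of the kind of imputation table described in the quoted
message, with "n/a" cells standing in for missing (or future) observations;
the data and the candidate model are made up:

    NA = "n/a"

    table = [
        # case,   x1,  x2,  outcome
        ["case1", 1.0, 2.0, 3.0],
        ["case2", 2.0, 4.0, 6.0],
        ["case3", 3.0, 6.0, NA],   # missing observation to be imputed
    ]

    def impute(row, model):
        return [model(row) if cell == NA else cell for cell in row]

    # A candidate model (here just x1 + x2). Selection by lossless compression
    # would score it on the observed cells and let it fill the n/a cells freely.
    linear = lambda row: row[1] + row[2]

    print([impute(r, linear) for r in table])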

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T36c83eb0aa31fc55-M29422405c4d0d0b3f8cadf0b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Missing Data

2019-10-28 Thread immortal . discoveries
It's not.

We've been working to generate AGI for a long, long time, but a lot of the work 
went into doing exactly that. A lot of the time, we don't even have to design 
something. We could make a simple GUI, for example, which would generate a list 
of all your options in any way we chose for them.

Some of these choices, however, have very strong implications for the way we 
interpret the data, the algorithms we build, our model of the world and 
humanity. These are the kinds of choices we are really interested in designing.

Here is an example:

(i) A list describing your options as to how likely a given item is to appear
as a given ingredient in the various recipes used to produce the various items
in your home; and

(ii) A table of some of the ingredients by their probabilities, so that they
are easier to tell apart from the items you actually have on hand.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T36c83eb0aa31fc55-Maec824947e2b9d6181d6352a
Delivery options: https://agi.topicbox.com/groups/agi/subscription