This version is an improvement -- basically I get where you are headed:

"The gist of our theory is that Deep Learning provides us with neural
networks ... that serve
as the proof mechanism of logic..."

and why not?

On 7/19/21, immortal.discover...@gmail.com
<immortal.discover...@gmail.com> wrote:
> On Sunday, July 18, 2021, at 8:59 PM, YKY (Yan King Yin, 甄景贤) wrote:
>> Final version of my paper (corrected a lot of inaccuracies):
>> https://drive.google.com/file/d/1P0D9814ivR0MScowcmWh9ISpBETlUnq-/view?usp=sharing
>> You seem to mix together
>> 1) BERT's prediction of the next word in a text
>> with
>> 2) prediction of the next item in IQ tests,
>> but these two are not exactly the same...?
>> You may argue that humans can do both,
>> and indeed an AGI should be able to do both too.
>> Typically, that would require multiple steps of reasoning,
>> whereas you were looking at just one layer of a Transformer
>> (the attention mechanism), which corresponds to a single step of inference.
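
To make that one-layer point concrete, here is a minimal single attention
step in NumPy. The shapes, the random weights, and the stacking loop are my
own illustrative assumptions, not anything from the paper:

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def attention_step(X, Wq, Wk, Wv):
        # One scaled dot-product attention layer: every token mixes
        # information with every other token exactly once.
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(K.shape[-1])
        return softmax(scores) @ V

    # Multi-step reasoning would need stacked (or iterated) layers:
    rng = np.random.default_rng(0)
    n, d = 4, 8
    X = rng.normal(size=(n, d))
    Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
    H = X
    for _ in range(3):  # three layers ~ three inference steps
        H = attention_step(H, Wq, Wk, Wv)

A single layer gets exactly one such mixing step; chaining several of them
is what stands in for multi-step inference.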
> Because prediction is prediction, the use of patterns is all that AGI does.
> There are different pattern finders for different types of problems. Saying
> to a robot "if you (see 1 cat) or (see 1 dog and a plane) walk forward, else
> walk backwards" is a sort of match and then a prediction as output.
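
That match-then-predict rule is easy to pin down in code; a toy sketch, with
the percept encoding invented for illustration:

    def act(percepts):
        # Match the condition, then emit the predicted action.
        if "cat" in percepts or ("dog" in percepts and "plane" in percepts):
            return "walk forward"
        return "walk backwards"

    assert act({"cat"}) == "walk forward"
    assert act({"dog", "plane"}) == "walk forward"
    assert act({"dog"}) == "walk backwards"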
> Nonetheless, your paper is too un-unified; I'd like to see something way
> simpler, more evil, and easier that explains everything about AGI. You can
> do a lot of damage with just 100 commonly used words instead of "topos
> theory". If we want to unite ideas to become openAI2, explain your work
> using common words/sentences.
> 
> Question:
> I've asked a few people a question around the forums, and either they are
> too busy Transforming, or they can't answer it properly because they didn't
> actually invent the Transformer architecture and are just users (lots of
> users, albeit). I saw that DALL-E / iGPT predict properly whether given half
> a boat or half a word, e.g. "t h a n k s g i v ?": they can recognize it
> even if they only ever saw a small version of the boat or the word
> "thanksgiving". This can't just be delayed recognition with discounts for
> unevenly timed activations; it needs a pattern of errors over time, what
> Hinton calls "equivariance". And the thing is, backprop and the like cannot
> "learn" this on their own because it is too specific a function... it must
> be in the code, and the code is like 6000 lines for heaven's sake, which is
> very annoying, and I refuse to dig into it for now because mine is ~120
> lines and halfway there.
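
On the half-word question: here is a toy stand-in for that completion
behavior, greedy prefix completion over a tiny invented vocabulary. The real
DALL-E / iGPT use a learned Transformer over tokens or pixels; this lookup
only shows the "given a prefix, predict the rest" interface:

    VOCAB = ["thanksgiving", "thanks", "boat", "boar"]

    def complete(prefix):
        # Keep only words consistent with the observed prefix...
        candidates = [w for w in VOCAB if w.startswith(prefix)]
        # ...and predict the rest (here: just take the longest match).
        return max(candidates, key=len) if candidates else None

    print(complete("thanksgiv"))  # -> "thanksgiving"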
