Re: [agi] my AGI-2024 paper (AGI from the perspective of categorical logic and algebraic geometry)

2024-05-02 Thread Yan King Yin,
On Thu, May 2, 2024 at 6:02 PM YKY (Yan King Yin, 甄景贤) < generic.intellige...@gmail.com> wrote: > The basic idea that runs through all this (i.e., the neural-symbolic > approach) is "inductive bias" and it is an important foundational concept > and may be demonstrable

Re: [agi] my AGI-2024 paper (AGI from the perspective of categorical logic and algebraic geometry)

2024-05-02 Thread Yan King Yin,
On Wed, May 1, 2024 at 10:29 PM Matt Mahoney wrote: > Where are you submitting the paper? Usually they want an experimental > results section. A math journal would want a new proof and some motivation > on why the theorem is important. > > You have a lot of ideas on how to apply math to AGI

Re: [agi] my AGI-2024 paper (AGI from the perspective of categorical logic and algebraic geometry)

2024-05-01 Thread Yan King Yin,
On Tue, Apr 30, 2024 at 3:35 AM Mike Archbold wrote: > It looks tantalizingly interesting but to help me, somewhat more of an > intuitive narrative would help me unless you are just aiming at a narrow > audience. > Sorry that's not my style usually but I find that my level of math is also

Re: [agi] my AGI-2024 paper (AGI from the perspective of categorical logic and algebraic geometry)

2024-04-28 Thread Yan King Yin,
On Sun, Apr 28, 2024 at 10:34 PM James Bowery wrote: > See "Digram Boxes to the Rescue" in: > > http://www.boundaryinstitute.org/articles/Dynamical_Markov.pd > > link to that article seems broken

Re: [agi] my AGI-2024 paper (AGI from the perspective of categorical logic and algebraic geometry)

2024-04-28 Thread Yan King Yin,
On Sun, Apr 28, 2024 at 9:24 PM James Bowery wrote: > Correction: not the abstract but just as bad, in the first paragraph. > LOL... the figure circulating on the web is $700K, I don't know why I made that typo

Re: [agi] AGI and racism [was: Problem when trying to combine reinforcement learning with language models]

2023-08-21 Thread Yan King Yin,
On Tue, Aug 22, 2023, 01:57 Alan Grimes wrote: Well, a lot of us are in USA-istan where "racism" has taken on very > communist overtones. It is basically a poisoned term that has been > recognized as an attack against anyone who is not a rabid communist. > Therefore any good and proper American

Re: [agi] AGI and racism [was: Problem when trying to combine reinforcement learning with language models]

2023-08-20 Thread Yan King Yin,
On Fri, Aug 18, 2023, 22:15 James Bowery wrote: > > Every second that ticks by without this happening is a crime against > humanity akin to Theodoric of York, Medieval Barber: > > https://youtu.be/edIi6hYpUoQ?t=311 > I know how to build AGI, I can almost do it alone, but I want to find some

Re: [agi] AGI and racism [was: Problem when trying to combine reinforcement learning with language models]

2023-08-18 Thread Yan King Yin,
On Thu, Jun 29, 2023, 00:20 James Bowery wrote: > No, my opinion of your characterization of the term "racist" has nothing > to do with your race. Your characterization is simply wrong as can be made > obvious by the characterization of my "white supremacist" statement > regarding the "moral

Re: [agi] Re: my take on the Singularity

2023-08-09 Thread Yan King Yin,
On Sun, Aug 6, 2023 at 10:51 PM Matt Mahoney wrote: > I agree with YKY that AGI should never have emotions or human rights. Its > purpose is to increase human productivity and quality of life, not to > compete with us for resources. This requires human capabilities, not human > limitations like

Re: [agi] Re: my take on the Singularity

2023-08-05 Thread Yan King Yin,
On Sat, Aug 5, 2023, 23:12 wrote: > I assume AI should find its way up to described position on its own. It > would involve climbing up the social scale. The first step is to earn its > right to be equal to humans before the law. > What you described is the scenario where AIs would be

Re: [agi] Re: my take on the Singularity

2023-08-05 Thread Yan King Yin,
PS: we'd delegate our *thinking* to machines, because our own thinking is inferior. -- Artificial General Intelligence List: AGI Permalink: https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-Ma365e4db71596d41889cf5c5 Delivery options:

Re: [agi] Re: my take on the Singularity

2023-08-05 Thread Yan King Yin,
On Sat, Aug 5, 2023 at 9:16 PM wrote: > So you'd entrust control over your emotions to a human built machine? > No, you misread. I mean we humans would provide for the machines' emotions, because AIs don't have their own desires or purpose or "telos".

Re: [agi] AGI and racism [was: Problem when trying to combine reinforcement learning with language models]

2023-06-27 Thread Yan King Yin,
On Tue, Jun 27, 2023 at 11:55 PM Matt Mahoney wrote: > How would you eliminate racism and sexism? It is already illegal to make > important decisions based on race, sex, religion, sexual orientation, etc. > in many developed countries. This hasn't stopped people from being biased. > I am at

Re: [agi] AGI and racism [was: Problem when trying to combine reinforcement learning with language models]

2023-06-25 Thread Yan King Yin,
On Thu, Jun 22, 2023 at 11:17 PM James Bowery wrote: > > On Thu, Jun 22, 2023 at 1:28 AM YKY (Yan King Yin, 甄景贤) < > generic.intellige...@gmail.com> wrote: > >> I don't know why you add the word "male" in selection because females are >> also subject to se

[agi] AGI and racism [was: Problem when trying to combine reinforcement learning with language models]

2023-06-22 Thread Yan King Yin,
On Sat, Jun 17, 2023, 21:41 James Bowery wrote: I'm on the side of sex but there being no word for sex anymore, due to the > sleazy moral zeitgeist's connotation loading, here's a clue: > > I'm on the side of individual male intrasexual selection in the state of > nature as the appeal of last

Re: [agi] Problem when trying to combine reinforcement learning with language models

2023-06-17 Thread Yan King Yin,
Sorry but you're missing the critical distinction between connotation and > denotation in natural language usage. > > It is quite frequent for pejoratives to masquerade as denotative so as to > import pejorative connotations and vice versa. It is incredibly sleazy yet > it is foundational to the

Re: [agi] Problem when trying to combine reinforcement learning with language models

2023-06-14 Thread Yan King Yin,
On Wed, Jun 14, 2023, 02:41 Mike Archbold wrote: > Personally I found the attached comment ageist, sexist, and racist Yes, she's referring to the traditional "older white male" privilege which pops up very frequently in daily conversations. I think this kind of attitude is not really helpful

Re: [agi] Problem when trying to combine reinforcement learning with language models

2023-06-14 Thread Yan King Yin,
On Wed, Jun 14, 2023, 00:46 James Bowery wrote: > > > On Tue, Jun 13, 2023 at 11:06 AM YKY (Yan King Yin, 甄景贤) < > generic.intellige...@gmail.com> wrote: > >> ... >> If someone tells the truth, it is not considered racist. >> > > Your passi

Re: [agi] Problem when trying to combine reinforcement learning with language models

2023-06-13 Thread Yan King Yin,
On Sun, May 21, 2023, 21:33 James Bowery wrote: > > The thing that put me off about your approach -- which I admit to not > having comprehended well enough to criticize -- was not its focus on > propositional logic (which is proper) but rather that I didn't > *immediately* see where *attribution

Re: [agi] Problem when trying to combine reinforcement learning with language models

2023-05-20 Thread Yan King Yin,
On Sun, May 21, 2023, 04:27 James Bowery wrote: If China permitted its provinces to determine their social policies and > eliminated prisons by merely supporting assortative migration of its > citizens, and reallocation of territorial value between provinces on a per > capita basis, I'd move

Re: [agi] Problem when trying to combine reinforcement learning with language models

2023-05-20 Thread Yan King Yin,
On Sun, May 21, 2023, 03:50 James Bowery wrote: Then doing the easy-as-falling-off-a-log sociology to "place" you as in > computer based education, and then tailoring my LLM to your biases: > > What better way to get people to "work out the final details of AGI > sufficient to reach the

Re: [agi] Problem when trying to combine reinforcement learning with language models

2023-05-20 Thread Yan King Yin,
Racists over-estimate the solidarity among themselves. If there were a button for an AGI to "tell its best estimate of the truth regardless of political position", imagine how much damage that would do to their bubble, and how it would cause racism to evaporate overnight. I want to collaborate

[agi] Problem when trying to combine reinforcement learning with language models

2023-05-11 Thread Yan King Yin,
The problem is described here: https://zhuanlan.zhihu.com/p/628513059 The idea is inspired by "Hopfield Network Is All You Need" [2021] but I got stuck somewhere... --- YKY *"The ultimate goal of mathematics is to eliminate any need for intelligent thought"* -- Alfred North Whitehead
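For readers unfamiliar with the "Hopfield Network Is All You Need" [2021] idea mentioned above, its retrieval step can be sketched in a few lines of plain Python. This is a toy illustration of the general technique only (the dimensions, patterns, and the inverse-temperature beta are my own arbitrary choices, not anything from YKY's zhihu post):

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def hopfield_retrieve(patterns, query, beta=8.0):
    """One step of the modern (continuous) Hopfield update:
    new_query = patterns^T . softmax(beta * patterns . query)."""
    sims = [beta * sum(p_i * q_i for p_i, q_i in zip(p, query)) for p in patterns]
    w = softmax(sims)
    return [sum(w[k] * patterns[k][i] for k in range(len(patterns)))
            for i in range(len(query))]

# Two stored patterns; a noisy query near the first is pulled (almost) onto it.
stored = [[1.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0]]
noisy = [0.9, 0.1, 0.8, 0.0]
out = hopfield_retrieve(stored, noisy)
```

With a large enough beta the softmax is nearly one-hot, so one update step essentially snaps the query to the closest stored pattern.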

Re: [agi] Re: AGI architecture combining GPT + reinforcement learning + long term memory

2023-03-22 Thread Yan King Yin,
On Thu, Mar 23, 2023 at 3:54 AM wrote: > I hadn't watched SingularityNET for a year or two or even ever maybe I'm > not sure. What are they training? > > I mean training our team's AGI model, which we

Re: [agi] Congrats AGI 2022!! and a bit of off-conference contribution

2022-08-24 Thread Yan King Yin,
On Thu, Aug 25, 2022 at 1:12 AM Mike Archbold wrote: > On the last day the same team did a step-by-step walkthrough of how to > fire up their deep reinforcement learner. I want to try it... > OK, you're worried about the technical stuff, but I'm more worried about the politics... It was 2009

Re: [agi] Congrats AGI 2022!! and a bit of off-conference contribution

2022-08-24 Thread Yan King Yin,
On Tue, Aug 23, 2022 at 2:54 PM Mike Archbold wrote: > It was a great conference, I'm still buzzed... 2nd one I've been to > live, and yes, much material to review... You saw the Chris > Poulin,"Open Source Deep Reinforcement Learning" I assume (?) > Yeah, I watched but wasn't paying full

Re: [agi] From Transformers to AGI

2022-05-16 Thread Yan King Yin,
I wrote an article called "The AGI standard model" where I explained a bit of the "Transformer circuit" paper. I don't know if my writing is good or not... hope it helps ☺ https://drive.google.com/file/d/1ROuO1e-STYOflrFbtHO1GDV0LLkTK3jg/view?usp=sharing

Re: [agi] From Transformers to AGI

2022-05-15 Thread Yan King Yin,
On Sun, May 15, 2022 at 2:32 AM wrote: > Hi YKY :) > > May I ask, how Transformers may deal with logical operators? > I guess it would handle logic operators syntactically just like natural-language words, such as "and", "but", "implies", ... and even phrases like "on the other hand", etc.

Re: [agi] From Transformers to AGI

2022-05-15 Thread Yan King Yin,
On Sun, May 15, 2022 at 1:30 AM Mike Archbold wrote: > Have you experimented yet with code? I always find your ideas interesting. > > The goal in the present era of AI is to find a way to create hybrid old > and new AI. The deepmind work lately looks a bit AIT (old AI) like but with > neurons

[agi] From Transformers to AGI

2022-05-14 Thread Yan King Yin,
Hi friends, I missed the deadline for this year's AGI conference. Instead of a conference paper I have an informal paper explaining my latest theory: https://drive.google.com/file/d/1zm7TV6wT-Hyv-wXXLI5YbsX4Sxn_gigh/view?usp=sharing I think Transformers are much closer to logic-based AGI than

Re: [agi] Anyone else want to learn or teach each other GPT?

2021-11-16 Thread Yan King Yin,
That's a nice idea. These days I am busy with other Chinese people interested in AI, and introducing them to AGI... when I do that I tend to neglect the "Western" friends but actually I don't want to be partial ☺ I learned a great deal about BERT and GPT from Chinese students and researchers.

[agi] AGI 2021 paper can still revise?

2021-10-19 Thread Yan King Yin,
I really appreciate that my paper is accepted, but I'm still rather dissatisfied with it. Will it appear in print? In order to save some trees I would like to further cut down some sections of it... Thanks again

Re: [agi] my AGI 2021 paper (for critique and comments...)

2021-10-09 Thread Yan King Yin,
My paper has been accepted for AGI 2021 presentation  This is the "final" version: https://drive.google.com/file/d/1P0D9814ivR0MScowcmWh9ISpBETlUnq-/view?usp=sharing I tried to incorporate some suggestions by the reviewers, and also marginally tried to address some of their objections. I am now

Re: [agi] How to make assumptions in a logic engine?

2021-09-22 Thread Yan King Yin,
On 9/19/21, immortal.discover...@gmail.com wrote: > So we have a context of 9 tic tac toe squares with 2 of your Xs in a row and > his Os all over the place, you predict something probable and rewardful, the > 3rd X to make a row. GPT would naturally learn this, Blender would also the > reward

Re: [agi] How to make assumptions in a logic engine?

2021-09-22 Thread Yan King Yin,
On 9/20/21, James Bowery wrote: > Functions are degenerate Relations. Anyone that starts from a functional > programming perspective has already lost the war. Some concepts seem to be functions more naturally, for example in programming you return a single value instead of a set of values. You
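Bowery's "functions are degenerate relations" and YKY's reply (some concepts are more naturally single-valued) can both be made concrete in a few lines. A minimal sketch, with example relations of my own invention:

```python
# A relation is just a set of (input, output) pairs; a function is the
# special ("degenerate") case where every input has exactly one output.
parent_of = {("alice", "bob"), ("alice", "carol"), ("bob", "dave")}  # one-to-many
age_of = {("alice", 62), ("bob", 35), ("carol", 33)}                 # single-valued

def is_function(relation):
    """True iff no input appears with two different outputs."""
    seen = {}
    for x, y in relation:
        if x in seen and seen[x] != y:
            return False
        seen[x] = y
    return True
```

`parent_of` is a genuine relation (querying "alice" yields a set of values), while `age_of` passes the single-valuedness check, which is why it is more natural to write it as a function in a program.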

Re: [agi] How to make assumptions in a logic engine?

2021-09-18 Thread Yan King Yin,
On 9/14/21, ivan.mo...@gmail.com wrote: > Hi YKY :) > > As for Tic Tac Toe, I believe this should work: write a winning strategy in > any functional language (being lambda calculus based). Then convert it to > logic rules by Curry-Howard correspondence. And voilà, you have a logic >

Re: [agi] How to make assumptions in a logic engine?

2021-09-13 Thread Yan King Yin,
On 9/11/21, Matt Mahoney wrote: > Practical programs have time constraints. Play whichever winning move you > discover first. That's not a bad strategy per se and may be a primitive brain mechanism, but then how do you explain humans' ability to plan ahead and reason about games like chess or

Re: [agi] Re: How to make assumptions in a logic engine?

2021-09-13 Thread Yan King Yin,
On 9/11/21, immortal.discover...@gmail.com wrote: > By assumptions, do you mean probability, and not a solid yes or no? No, I mean hypothetical reasoning. It's a proof method in logic, for example I assume A = "I drink a cup of poison", and I already know that taking the poison will kill me, in

[agi] How to make assumptions in a logic engine?

2021-09-11 Thread Yan King Yin,
When thinking about the game of Tic Tac Toe, I found that it is most natural to allow assumptions in the logic rules. For example, in the definition of a potential "fork", in which the player X can win in 2 ways. How can we write the rules to determine a potential fork? Here is a very "natural"
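For comparison with the logic-rule formulation asked about above, the "potential fork" condition can also be stated procedurally. A minimal Python sketch (my own formulation, not the "natural" rules from the post):

```python
# The eight winning lines of the 3x3 board, cells indexed 0-8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def is_fork(board, player="X"):
    """A fork: the player threatens to win on two or more lines at once,
    i.e. at least two lines each hold two of the player's marks plus an empty cell."""
    threats = sum(1 for a, b, c in LINES
                  if [board[a], board[b], board[c]].count(player) == 2
                  and [board[a], board[b], board[c]].count(" ") == 1)
    return threats >= 2

# X on cells 0, 2 and 8 threatens several lines at once: a fork.
fork_board = ["X", " ", "X",
              "O", " ", " ",
              " ", "O", "X"]
```

The logic-rule version would express the same condition declaratively, e.g. "if placing X at cell c *would* complete two distinct two-in-a-row lines, then c is a fork", which is exactly where the hypothetical assumption enters.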

Re: [agi] Any news about the AGI-21 paper acceptance notification?

2021-08-24 Thread Yan King Yin,
I'm dissatisfied with the presentation style of my paper...  I should have organized it better, and there is still a small step in the code experiment that should be completed... I hope to make it into this round, if not I would definitely work on next year's submission  My paper started out

Re: [agi] my AGI 2021 paper (for critique and comments...)

2021-08-17 Thread Yan King Yin,
Thanks for your interest :) The neural network in BERT / GPT is used for predicting masked tokens in text. I propose that this can be regarded as a general inference step of logic. Now I put this neural network into the reinforcement learning framework, where it represents the transition

Re: [agi] my AGI 2021 paper (for critique and comments...)

2021-08-11 Thread Yan King Yin,
paper would be accepted because it describes a new perspective on AGI that should be communicated to the broader AGI research community... despite the fact that experimentally it has not demonstrated much YKY On 7/22/21, YKY (Yan King Yin, 甄景贤) wrote: > On 7/20/21, Matt Mahoney wrote:

Re: [agi] my AGI 2021 paper (for critique and comments...)

2021-07-22 Thread Yan King Yin,
On 7/20/21, Matt Mahoney wrote: > The paper describes an experiment in which a neural network was trained to > play tic tac toe. But instead of describing what was actually done, here is > a meaningless graph that it produced and a link to the source code. The paper described the neural network

Re: [agi] my AGI 2021 paper (for critique and comments...)

2021-07-18 Thread Yan King Yin,
On 7/10/21, immortal.discover...@gmail.com wrote: > Isn't self attention about helping translation of the prompt? Ex. 'the dog, > it was sent to them, food was high quality' and we see yes dog and food can > fit where it and them are, and, another way to know what it and them > mean/are is by

Re: [agi] my AGI 2021 paper (for critique and comments...)

2021-07-09 Thread Yan King Yin,
On 7/10/21, YKY (Yan King Yin, 甄景贤) wrote: > On 7/10/21, immortal.discover...@gmail.com > wrote: >> IOW kinda like that page that explains GPT real good, >> it, didn't, remarkably, despite all those kindergarten images. And he >> thinks >> he did a real good jo

Re: [agi] my AGI 2021 paper (for critique and comments...)

2021-07-09 Thread Yan King Yin,
On 7/10/21, immortal.discover...@gmail.com wrote: > IOW kinda like that page that explains GPT real good, > it, didn't, remarkably, despite all those kindergarten images. And he thinks > he did a real good job! You just need to follow the calculations in the Transformer to verify my statement. I

Re: [agi] my AGI 2021 paper (for critique and comments...)

2021-07-09 Thread Yan King Yin,
On 7/7/21, Mike Archbold wrote: > I'd benefit from a few paragraphs of appeals to intuition before > diving into the formalisms, although I know the style for this type of > paper is a sort of compactness. In fact my theory does not make use of category theory in any significant way... not even

Re: [agi] my AGI 2021 paper (for critique and comments...)

2021-07-08 Thread Yan King Yin,
On 7/7/21, immortal.discover...@gmail.com wrote: > "If you don't use deep learning (in AGI) you're missing out on the most > powerful > machine learning technique currently known." > > Aren't Transformers better than Deep Learning? OpenAI.com shows me that it > is What more could you want? I

Re: [agi] my AGI 2021 paper (for critique and comments...)

2021-07-07 Thread Yan King Yin,
On 7/7/21, Mike Archbold wrote: > I'd benefit from a few paragraphs of appeals to intuition before > diving into the formalisms, although I know the style for this type of > paper is a sort of compactness. Yes, perhaps I should explain why this way of combining logic with deep learning is better

Re: [agi] Re: my AGI 2021 paper (for critique and comments...)

2021-07-07 Thread Yan King Yin,
On 7/6/21, immortal.discover...@gmail.com wrote: > I'll try to give some constructive thoughts. > > In my view I see you talking way too much about the A>B rule, and using too > many names to say the same thing, from what I know this is just simply a > simple pattern that everything is based on

[agi] my AGI 2021 paper (for critique and comments...)

2021-07-06 Thread Yan King Yin,
Hi Friends, Thanks to the extended deadline, here's my paper draft: "AGI via Combining Logic and Deep Learning", https://drive.google.com/file/d/1P0D9814ivR0MScowcmWh9ISpBETlUnq-/view?usp=sharing It's kind of poorly written as of now... feel free to ask any questions about any part that is

Re: [agi] Re: text2image

2021-06-19 Thread Yan King Yin,
Interesting... it kind of works. What kind of algorithm / architecture is this? On 6/19/21, stefan.reich.maker.of.eye via AGI wrote: > It's once again this mixture of impressive inference and realism and the > feeling that something is utterly, truly wrong.

Re: [agi] Re: AGI Conference 2021?

2021-04-17 Thread Yan King Yin,
On 4/16/21, Ben Goertzel wrote: > Well that stuff is not even AGI-ish, it seems some quasi-scam > conference-organizers are using "AGI" as a buzzword to pull in naive > academics from third-rate universities, largely in the developing > world. I guess this indicates some sort of sociological

[agi] AGI Conference 2021?

2021-04-16 Thread Yan King Yin,
Hi all, When / where would AGI 2021 be held? Is it postponed due to COVID?  YKY

Re: [agi] new paper: Logic in Hilbert space

2021-02-18 Thread Yan King Yin,
On Tue, Feb 16, 2021 at 7:45 AM Ben Goertzel wrote: > One more twist re Bellman/Schrodinger: the representation of dynamic > programming in terms of Galois connections from > > https://www.sciencedirect.com/science/article/pii/S1567832612000525 > > which lets us map dynamic programming into

Re: [agi] Re: How we'll proceed...

2021-02-18 Thread Yan King Yin,
It would not be very surprising these days if somebody claims to have a blueprint for AGI, though they may vary on how close they actually are to a successful AGI (such solutions need not be unique). My approach is based on logic + deep learning, and I have 2 architectures (one syntax-based and

Re: [agi] new paper: Logic in Hilbert space

2021-02-17 Thread Yan King Yin,
On Mon, Feb 15, 2021 at 11:55 PM James Bowery wrote: > See > http://www.rootsofunity.org/wp-content/uploads/2020/08/OutOfTheBox_2020.pdf > This paper seems to present something novel but it doesn't explain why it's useful or significant... it's difficult to evaluate its merits...

Re: [agi] new paper: Logic in Hilbert space

2021-01-31 Thread Yan King Yin,
On 1/30/21, Ben Goertzel wrote: > Unless I remember wrong (which is possible), function application in a > Scott domain is not associative, e.g. > > (f(g) ) (h) > > is not in general equal to > > f( g(h) ) > > However function composition is associative, and the standard products > on vectors in
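Ben's associativity point is easy to check concretely. A small Python sketch (the K-combinator example is my own choice of counterexample, not from the thread, but it exhibits exactly the asymmetry he describes):

```python
# Function APPLICATION is not associative: with the K combinator, K x y = x,
# the expressions (K K) K and K (K K) denote different things.
K = lambda x: lambda y: x

left = (K(K))(K)   # (K K) K reduces back to K itself
right = K(K(K))    # K (K K) is a constant function returning K K

# Function COMPOSITION, by contrast, is associative pointwise.
def compose(f, g):
    return lambda x: f(g(x))

f = lambda x: x + 1
g = lambda x: 2 * x
h = lambda x: x * x
assert all(compose(compose(f, g), h)(x) == compose(f, compose(g, h))(x)
           for x in range(10))
```

`left(1)(2)` evaluates to `1` (it behaves like K), while `right(1)(2)` is still a function, so the two sides of the "application" bracketing genuinely differ, whereas the two composition bracketings agree on every input.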

Re: [agi] new paper: Logic in Hilbert space

2021-01-28 Thread Yan King Yin,
On 1/29/21, Matt Mahoney wrote: > Well I disagree that set D^D can be isomorphic to D violating Cantor's > theorem by restricting to continuous functions. Each point in the > continuous subset of R^R real valued functions still can encode infinite > information while meeting the definition of

Re: [agi] new paper: Logic in Hilbert space

2021-01-28 Thread Yan King Yin,
On 1/29/21, immortal.discover...@gmail.com wrote: > How is logic any different in Hilbert Space? A>B is always and only just cat > bite me > me in pain. Wiki says HS is vibrational waves... see my > point? Hilbert space is the space of "functions" where each function is a *point* in that

Re: [agi] new paper: Logic in Hilbert space

2021-01-28 Thread Yan King Yin,
On 1/29/21, doddy wrote: > does categorical logic mean mean having the ai put everything that it has > seen and read into categories and subcategories? > then using those subcategories and categories to help the ai understand. What you say is also correct... a category is kind of "closed",

[agi] new paper: Logic in Hilbert space

2021-01-28 Thread Yan King Yin,
Hey friends, Long time no see. This is my latest paper: https://drive.google.com/file/d/1AhQS3fp4WMFIDEhn_q4vNs-YJaq4Z-Fr/view?usp=sharing I am also writing a tutorial on categorical logic / topos theory, a subject that took me >10 years to learn, and I hope to explain what I learned in a super

Re: [agi] Re: KERMIT: Logicalization of BERT

2020-08-03 Thread Yan King Yin,
On 8/3/20, immortal.discover...@gmail.com wrote: > You mention Working Memory combining features and hence modifying/creating > new features, but GPT already / could do that - things that were recently > said are combined. The BERT / GPT way seems to be based on self-attention and its variants,

Re: [agi] Re: KERMIT: Logicalization of BERT

2020-08-02 Thread Yan King Yin,
On 8/3/20, immortal.discover...@gmail.com wrote: > Long time no see. > > Very close to AGI!? :) Can you answer this question then :) ? >> https://agi.topicbox.com/groups/agi/T3fa6ba8e71c224e2/question-just-for-ben > > So build a hierarchy out of the self-attention features? To learn complex >

[agi] KERMIT: Logicalization of BERT

2020-08-01 Thread Yan King Yin,
This is my latest presentation of Logic BERT, also named KERMIT: https://github.com/Cybernetic1/2020/raw/master/logic-BERT-en.pdf The theory is based on symmetric neural networks. I think KERMIT will perform very close to human-level AI, despite its reptilian name :) Chinese version:

Re: [agi] On AGI architecture

2019-12-19 Thread Yan King Yin,
On Thu, Dec 19, 2019 at 9:25 PM wrote: > Also, how does a hyperbolic space improve w2v? I looked up images of > hyperbolic structures and they just look like another web. Are you pruning > some connections perhaps? > Hyperbolic embedding has been proven useful for Word2vec, as it can reduce the
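The hyperbolic embeddings mentioned here (Poincaré-ball style) are not about pruning connections; they replace the Euclidean metric with the geodesic distance below, which blows up near the boundary and so packs tree-like hierarchies into few dimensions. A plain-Python sketch (the example points are mine):

```python
import math

def poincare_distance(u, v):
    """Geodesic distance in the Poincare ball (all points have norm < 1):
    d(u, v) = arcosh(1 + 2*|u - v|^2 / ((1 - |u|^2) * (1 - |v|^2)))."""
    diff2 = sum((a - b) ** 2 for a, b in zip(u, v))
    nu2 = sum(a * a for a in u)
    nv2 = sum(b * b for b in v)
    return math.acosh(1 + 2 * diff2 / ((1 - nu2) * (1 - nv2)))

# Two pairs at comparable Euclidean separation: the pair near the boundary
# is much farther apart hyperbolically, which is the extra "room" that lets
# a low-dimensional ball embed word hierarchies.
near_origin = poincare_distance([0.0, 0.0], [0.1, 0.0])
near_boundary = poincare_distance([0.9, 0.0], [0.99, 0.0])
```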

Re: [agi] On AGI architecture

2019-12-19 Thread Yan King Yin,
On Thu, Dec 19, 2019 at 8:46 PM wrote: > So, you want BERT to generate data, then hear itself talk to itself in the > head, then generate the next wave of data? It can have a line to > humans/internet and skip the real body for now. What are you doing to make > it not generate silly data? >

[agi] On AGI architecture

2019-12-18 Thread Yan King Yin,
This set of slides was written in September, just after the AGI 2019 Conference in China, but I only got time to translate them into English today: English version: https://drive.google.com/open?id=1J9_rihrWWXvQE1-wTz5iXOXhdnHpK7Wx Chinese version:

Re: [agi] Genetic evolution of logic rules experiment

2019-10-01 Thread Yan King Yin,
On Tue, Oct 1, 2019 at 7:48 AM Brett N Martensen wrote: > Matt is right - Logic needs to be grounded on experiences. > > http://matt.colorado.edu/teaching/highcog/readings/b8.pdf That's a good paper, I will read it in detail later. I made a mistake earlier. When the brain thinks about "John

Re: [agi] Genetic evolution of logic rules experiment

2019-10-01 Thread Yan King Yin,
On Mon, Sep 30, 2019 at 11:04 PM Stefan Reich via AGI wrote: > Uh... so where is it on GitHub? > The code is here (still under development): https://github.com/Cybernetic1/GILR There are further explanations in the README and some screenshots.  --

Re: [agi] Genetic evolution of logic rules experiment

2019-09-30 Thread Yan King Yin,
On 9/27/19, Steve Richfield wrote: > YKY, > > The most basic function of neurons is process control. That is where > evolution started - and continues. We are clearly an adaptive control > system. Unfortunately, there has been little study of the underlying > optimal "logic" of adaptive control

Re: [agi] Genetic evolution of logic rules experiment

2019-09-26 Thread Yan King Yin,
On Fri, Sep 27, 2019 at 12:38 AM James Bowery wrote: > On Tuesday, September 24, 2019, at 11:46 PM, YKY (Yan King Yin, 甄景贤) wrote: > > My idea is just a general learning algorithm that can be applied to both > supervised and unsupervised situations. > > > What you are loo

Re: [agi] Genetic evolution of logic rules experiment

2019-09-24 Thread Yan King Yin,
Thanks to Tim Tyler and James Bowery's explanations  My idea is just a general learning algorithm that can be applied to both supervised and unsupervised situations. I am focusing on how to learn logic rules efficiently. The logic rules would explain a set of data (such data is also expressed

Re: [agi] Genetic evolution of logic rules experiment

2019-09-24 Thread Yan King Yin,
On Wed, Sep 25, 2019 at 12:03 AM doddy wrote: > how effecient is it compared to self supervised learning? > You mean unsupervised? I am not seeing much of a difference between the 2 notions. The same genetic algorithm idea can be used to learn / evolve a set of logic rules to "explain" or

Re: [agi] Re: AGI Research Without Neural Networks

2019-09-21 Thread Yan King Yin,
We need an abstract formulation of knowledge representations as a category. This category would include neural representations and logic representations as special cases, but it is general enough to transcend both cases. From this abstract standpoint we could see how much freedom we have among

[agi] Genetic evolution of logic rules experiment

2019-09-21 Thread Yan King Yin,
Anyone interested in a genetic evolution approach to learning logic rules? Each logic rule would be encoded as a gene (individual) and the whole set of rules evolves as an entire population. This is the so-called cooperative evolution approach. My code is about 70-80% completed. It's in Python, on
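The cooperative idea (each individual is one rule; the population as a whole must explain the data) can be sketched in a toy form. This is my own minimal illustration of the general technique, not YKY's actual code: "rules" are just sets of data-point indices, and a rule's fitness rewards covering points the *rest* of the population misses while penalizing spurious coverage:

```python
import random

random.seed(0)

TARGET = set(range(16))   # the "data" the rule set must jointly explain
RULE_BITS = 24            # items 16..23 are noise a rule could wrongly cover

def fitness(rule, others):
    """Cooperative fitness: reward target items the REST of the population
    misses, penalize covering non-target (noise) items."""
    covered_by_others = set().union(*others) if others else set()
    return len((rule & TARGET) - covered_by_others) - len(rule - TARGET)

def mutate(rule):
    # Toggle membership of one random item (a one-bit "gene" mutation).
    return rule ^ {random.randrange(RULE_BITS)}

def evolve(pop_size=8, generations=500):
    pop = [set(random.sample(range(RULE_BITS), 2)) for _ in range(pop_size)]
    for _ in range(generations):
        i = random.randrange(pop_size)
        others = pop[:i] + pop[i + 1:]
        child = mutate(pop[i])
        if fitness(child, others) >= fitness(pop[i], others):
            pop[i] = child   # hill-climb with neutral drift allowed
    return pop

rules = evolve()
covered = set().union(*rules)
```

Because any mutation that drops a point covered only by that rule lowers its fitness and is rejected, joint coverage never shrinks, and the population converges toward collectively explaining all of TARGET.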

Re: [agi] My AGI 2019 paper draft

2019-06-16 Thread Yan King Yin,
I am really disappointed that my AGI 2019 paper has been rejected. The reasons given by the reviewers are very superficial and vacuous; I have posted my presentation slides here, which explain the theory in very simple terms, and they have not given me a chance to explain any

Re: [agi] My AGI 2019 paper draft

2019-05-18 Thread Yan King Yin,
your state-action > sets were continuous, then you could use Euler-Lagrange, Pontryagin, or > HJB. But why do you need such an optimality condition? > > On Thu, May 16, 2019 at 5:45 AM YKY (Yan King Yin, 甄景贤) < > generic.intellige...@gmail.com> wrote: >> On Wed, May 15, 2019 at 3:

Re: [agi] My AGI 2019 paper draft

2019-05-15 Thread Yan King Yin,
On Tue, May 14, 2019 at 9:39 PM wrote: > How would you make your model walk? No demo needed for me, just simple talk > would be greatly appreciated. > You mean learn to walk, with robotic legs? Then set up an environment, where: - input = body sensors, translated into propositions -

Re: [agi] My AGI 2019 paper draft

2019-05-15 Thread Yan King Yin,
On Wed, May 15, 2019 at 3:42 AM Sergio VM wrote: > Not sure if I am following you... > > In order to define the optimal control problem, you need: > >- State set: Set of all possible logic propositions. OK >- Action set: Logic rules. It is not clear to me what this means. Can >you

Re: [agi] My AGI 2019 paper draft

2019-05-14 Thread Yan King Yin,
On Sun, May 12, 2019 at 10:22 PM Sergio VM wrote: > Hi King Yin, > > The architecture looks very interesting. I am just missing the definition > of the reward function (or kernel if you make it stochastic). > > On the other hand, I don't understand your previous comment on the > Lagrangian and

Re: [agi] My AGI 2019 paper draft

2019-05-11 Thread Yan King Yin,
Also, the control theoretic stuff was removed because I am unable to define the reward based on the current state in a *differentiable* way. For example, in the game of chess, the reward comes only when checkmate occurs (according to the game's official rules), but not when you capture a piece of
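The sparse-reward problem described here is standard in reinforcement learning; a minimal illustration of the contrast (the piece values and the 0.01 shaping weight are my own assumptions, not from the post):

```python
# Sparse reward: non-zero only at checkmate, exactly as the official rules dictate.
def sparse_reward(state):
    return 1.0 if state["checkmate"] else 0.0

# A common workaround is reward shaping: add a dense, hand-crafted signal
# (e.g. material balance) so the learner gets feedback before the game ends.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def shaped_reward(state):
    material = (sum(PIECE_VALUES.get(p, 0) for p in state["my_pieces"])
                - sum(PIECE_VALUES.get(p, 0) for p in state["opp_pieces"]))
    return sparse_reward(state) + 0.01 * material
```

Shaping makes the signal dense (and often differentiable in the parameters of a value network), at the cost of departing from the game's true objective, which is precisely the tension the post points at.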

Re: [agi] My AGI 2019 paper draft

2019-05-11 Thread Yan King Yin,
Thanks for the encouraging words  I have presented my theory to a Chinese AGI group and got some positive feedback also. I will translate my slides into English. The slides explain the theory in an easier and more friendly way. Also, the RNN-within-RNN architecture is still inefficient and I

[agi] My AGI 2019 paper draft

2019-04-19 Thread Yan King Yin,
Hi, This is my latest draft paper: https://drive.google.com/open?id=12v_gMtq4GzNtu1kUn9MundMc6OEhJdS8 I submitted the same basic idea in AGI 2016, but it was rejected for some rather superficial reasons. At that time, reinforcement learning for AI was not widely heard of, but since then it has