Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread immortal . discoveries
BTW BERT is gay and so is BERT; "Bidirectional Encoder Representations from Transformers" Notice the 'bi' :) -- Artificial General Intelligence List: AGI Permalink: https://agi.topicbox.com/groups/agi/T5b614d3e3bb8e0da-M844037ab0e64bbadcc3e2bf2 Delivery opti

[agi] AI published paper

2021-08-15 Thread ymli
Hi all, We would like to share our AI papers, enjoy! Yao, Q., Li, R.Y.M., Song, L., Crabbe, J. (2021) Construction Safety Knowledge Sharing on Twitter: A Social Network Analysis, Safety Science, 143, 105411 https://schol

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread immortal . discoveries
It is interesting: AGI needs certain things if you don't do a complete brute-force search to get an AGI program. So depending on the implementation, e.g. GANs, Transformers, etc., you will be using some tricks, more or less. But there is a sweet spot of tricks we want to use, like: more data, RL, causality,

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread immortal . discoveries
If it's of any interest, there are two very important things I have found that cannot be disproven. 1) I have found many of the most common patterns in a dataset. You can't predict text or images without these patterns, seriously. My AI and GPT have this in common, and they are utilizing those sim

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread Boris Kazachenko
I aim for simplicity, but not baby talk. There is a reason people talk like complete morons on abstract subjects: we evolved to hunt and gather. You have to be a mutant to work on AGI.  -- Artificial General Intelligence List: AGI Permalink: https://agi.to

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread immortal . discoveries
Hehe, that sounds so funny, nice comment. It would waste less time if you could write something that is very short and powerful and that cannot be read wrong / is relatable to my language of use. Many people write papers that are not readable by any 15-year-old boy or girl, and many people do not

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread Boris Kazachenko
Yeah, me too. Maybe another time? Think about it. -- Artificial General Intelligence List: AGI Permalink: https://agi.topicbox.com/groups/agi/T5b614d3e3bb8e0da-M54ac013991fb0e0a217bc1bb Delivery options: https://agi.topicbox.com/groups/agi/subscription

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread immortal . discoveries
I'm smart by being absolutely dumb at times. -- Artificial General Intelligence List: AGI Permalink: https://agi.topicbox.com/groups/agi/T5b614d3e3bb8e0da-Me8ecc74cdbba0bf0295e0a89 Delivery options: https://agi.topicbox.com/groups/agi/subscription

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread Boris Kazachenko
I am confused. You keep saying that you are the smartest guy in AGI... -- Artificial General Intelligence List: AGI Permalink: https://agi.topicbox.com/groups/agi/T5b614d3e3bb8e0da-M87ee963c5eb06af353d6f89d Delivery options: https://agi.topicbox.com/groups/a

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread immortal . discoveries
You gave me code though instead of an answer in natural language. Anyhow, the most basic pattern in all datasets is this, used to predict data based on context (it's powerful and allows/builds all the rest of the AI abilities): Every time you see the context 'cat', store what letter was to the r
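That counting scheme can be sketched in a few lines; a minimal sketch, assuming a fixed 3-letter context (the context length and the training text here are illustrative, not from the poster's actual code):

```python
from collections import defaultdict, Counter

def train(text, n=3):
    # For every n-letter context, count the letter seen to its right.
    model = defaultdict(Counter)
    for i in range(len(text) - n):
        model[text[i:i + n]][text[i + n]] += 1
    return model

def predict(model, context, n=3):
    # Turn the stored counts for this context into probabilities.
    counts = model[context[-n:]]
    total = sum(counts.values())
    return {ch: c / total for ch, c in counts.items()}

model = train("the cat sat on the mat. the cat ran.")
print(predict(model, "cat"))  # every 'cat' here is followed by a space
```

This is the lookup-table form of the idea: prediction is just normalized counts of what followed each context before.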

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread Boris Kazachenko
All patterns *are* predictions, it's just a matter of where and how strong. These are determined by projected accumulated match among constituents of each pattern (which is a set of matching inputs).   -- Artificial General Intelligence List: AGI Permalink:

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread Boris Kazachenko
On Sunday, August 15, 2021, at 4:50 PM, immortal.discoveries wrote: > What is cross-comp? Cross-comparison: comparison of each input to all other inputs within maximal search range, computing match and miss for each pair. See cross_comp() in line_patterns for the simplest version. I already gave
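A toy sketch of that pairwise comparison; defining match as the smaller of the two values and miss as their signed difference is an assumption based on this thread, not the actual cross_comp() in line_patterns:

```python
def cross_comp(inputs, rng=1):
    # Compare each input to the inputs after it, within search range rng,
    # computing a (match, miss) pair for each comparison.
    pairs = []
    for i, a in enumerate(inputs):
        for b in inputs[i + 1 : i + 1 + rng]:
            match = min(a, b)  # shared magnitude (assumed definition)
            miss = b - a       # signed difference (assumed definition)
            pairs.append((a, b, match, miss))
    return pairs

print(cross_comp([3, 5, 2]))  # [(3, 5, 3, 2), (5, 2, 2, -3)]
```

Widening rng makes each input compare against more of its neighbours, which is the "maximal search range" knob described above.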

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread magnuswootton81
Talking about subsetted hierarchies: sequences of sequences of sequences can form any house chore. (LEVEL A) pick up object, turn tap, organize dish. Turns into (LEVEL B) get sink ready, do dish (x20 times), empty sink. Turns into (LEVEL C) do dishes.    <- only needs 1 brain cell at the end fo
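The three levels above can be sketched as a recursive expansion; the exact subtask groupings below are illustrative guesses, not from the message:

```python
# Level C expands into level B, which expands into level A primitives.
hierarchy = {
    "do dishes": ["get sink ready"] + ["do dish"] * 20 + ["empty sink"],
    "get sink ready": ["turn tap"],
    "do dish": ["pick up object", "organize dish"],
    "empty sink": ["turn tap"],
}

def expand(task):
    # Recursively unfold a task until only level-A primitives remain.
    subtasks = hierarchy.get(task)
    if subtasks is None:
        return [task]  # already a primitive action
    steps = []
    for sub in subtasks:
        steps.extend(expand(sub))
    return steps

steps = expand("do dishes")
print(len(steps), steps[:3])  # 42 primitives, starting with 'turn tap'
```

The point of the hierarchy is exactly what the message says: the top level is one symbol ("do dishes"), while all the detail lives in the lower levels.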

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread immortal . discoveries
What is cross-comp? Is this cross-entropy evaluation? Is it compression evaluation? I can draw my whole code and the future plans into a single image of my 'brain', it is really cool, it is all merging patterns and energy flow. Patterns. But I still want to know your simplest pattern finder, a

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread Boris Kazachenko
Screw letters. Every pixel predicts adjacent pixels; prediction is merely a difference-projected match. If confirmed by cross-comp, it forms patterns, which predict proximate patterns. Then pattern cross-comp forms patterns of patterns, etc. You have to understand compositional hierarchy, nothin

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread immortal . discoveries
What's the simplest part of your architecture that would predict the next letter? For me it would be the last, e.g., 3 letters of context. I gather what letter I see come next and it gives me probabilities. You can call it a markov chain. Can you intuitively explain the most basic part that d

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread Boris Kazachenko
Delays / holes is what I call negative patterns / gaps, formed along with positive patterns. -- Artificial General Intelligence List: AGI Permalink: https://agi.topicbox.com/groups/agi/T5b614d3e3bb8e0da-M36477279f560c124cd37584e Delivery options: https://ag

[agi] Re: GPT-J prompt: Google DeepMind hasn't funded the Hutter Prize because

2021-08-15 Thread immortal . discoveries
hehe -- Artificial General Intelligence List: AGI Permalink: https://agi.topicbox.com/groups/agi/T31c4c6495649906f-M627f74c47935275588a9814c Delivery options: https://agi.topicbox.com/groups/agi/subscription

[agi] Re: GPT-J prompt: Google DeepMind hasn't funded the Hutter Prize because

2021-08-15 Thread immortal . discoveries
*Google DeepMind hasn't funded the Hutter Prize because it* wants to "build the best AI" by funding "the best" teams. Instead, DeepMind says it is trying to "promote the best science," by funding "the best science." As a science-focused competition, the Hutter Prize is different from the Turing

[agi] Re: GPT-J prompt: Google DeepMind hasn't funded the Hutter Prize because

2021-08-15 Thread immortal . discoveries
*Google DeepMind hasn't funded the Hutter Prize because it* has any desire to get into the game of programming Go - but just because it doesn't want to get into the game doesn't mean that it isn't still playing it. Go is a game where players compete in a series of games against each other, the

[agi] Re: GPT-J prompt: Google DeepMind hasn't funded the Hutter Prize because

2021-08-15 Thread immortal . discoveries
*Google DeepMind hasn't funded the Hutter Prize because it*'s looking for short-term wins, it's looking for long-term wins. This is an interesting way of looking at how the fund will work: the winners are going to have the most significant impact for the next 10 years, and DeepMind doesn't reall

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread Boris Kazachenko
All that is explained in "outline of my approach", more code-specific in wiki:  https://github.com/boris-kz/CogAlg/wiki. No backprop, my feedback is only adjusting hyperparameters. I don't use any statistical methods. On Sunday, August 15, 2021, at 4:11 PM, immortal.discoveries wrote: > Also thou

[agi] Re: GPT-J prompt: Google DeepMind hasn't funded the Hutter Prize because

2021-08-15 Thread immortal . discoveries
Oh man, you got me, I thought this post was written by you (actually that you linked it, I mean). I looked at the title and thought owtfomg! I then noticed the bold text and realized. Now the university words make sense; it LOVES bringing up Dr. Peuru and his team. ---

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread immortal . discoveries
Ah, I love it, you are a thinker I can tell (well, unsure really)! Also though, do note: don't take too long to try out parts of your ideas, because there can be things you don't know until later, like GPUs only accepting certain operations, or maybe your idea doesn't work at scale, etc. Tell me, do you use

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread Boris Kazachenko
I don't have any scores, the alg is far from complete. It won't be doing anything interesting until I implement level-recursion, which is a ways off even for 1D alg. This whole project is theory-first, as distinct from anything you may come across in ML. -

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread immortal . discoveries
@Boris where are your completion results and the loss/compression scores? I want some results. Sure, pitch helps volume, just like color helps B&W; it seems harder because sound has less data in it and more noise, but other than that it is the same thing and you just need to compare your results to

[agi] GPT-J prompt: Google DeepMind hasn't funded the Hutter Prize because

2021-08-15 Thread James Bowery
*Google DeepMind hasn't funded the Hutter Prize because it* likes the competition, but because it sees a better future for humanity if it happens. It has been a slow week in AI research, with few papers published. Here’s a selection of the highlights, and the lowlights. On to the highlights: AI

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread Boris Kazachenko
Sound is actually a lot more complex, it has a huge frequency spectrum. You can make sense of grey-scale images, but not grey-scale sound. I started doing it here: https://github.com/boris-kz/CogAlg/blob/master/line_1D_alg/frequency_separation_audio.py, but it's not a priority, this whole unive

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread magnuswootton81
Doing sound is 1D. -- Artificial General Intelligence List: AGI Permalink: https://agi.topicbox.com/groups/agi/T5b614d3e3bb8e0da-Mc39d7403b92929052d7a6ec9 Delivery options: https://agi.topicbox.com/groups/agi/subscription

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread Boris Kazachenko
It won't be anything like your text predictor, if you ever get around to it. And you don't even have to do it in 2D, basic principles should be worked out in 1D first: just process one row of pixels of an image. That's my 1D alg:  https://github.com/boris-kz/CogAlg/tree/master/line_1D_alg, 1st le

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread immortal . discoveries
One of the only differences if I try vision is that instead of storing/matching a group of symbols, ex. thanksgivin>g, and getting the next predicted letter like that, I must do so in 2 dimensions, since images are 2D. Not hard to do so far. Also, instead of thanksgivin>g, it would look like (if we had 1
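That 2D extension can be sketched the same way as a letter counter: key each pixel by a small 2D context. Using just the left and upper neighbours as the context is an illustrative choice, not the poster's actual scheme:

```python
from collections import defaultdict, Counter

def train_2d(image):
    # Count pixel values keyed by (left neighbour, upper neighbour) --
    # a 2D analogue of counting the letter to the right of a text context.
    model = defaultdict(Counter)
    for y in range(1, len(image)):
        row, above = image[y], image[y - 1]
        for x in range(1, len(row)):
            model[(row[x - 1], above[x])][row[x]] += 1
    return model

img = [[0, 0, 1],
       [0, 0, 1],
       [0, 0, 1]]
model = train_2d(img)
print(model[(0, 1)])  # context left=0, above=1 always precedes a 1 here
```

Larger 2D contexts (more neighbours) play the same role as longer letter contexts in the 1D case.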

Re: [agi] Re: AGI discussion group, Aug 13 7:30AM Pacific: Open Ended Motivational Systems for AGIs

2021-08-15 Thread magnuswootton81
Yes, storing temporal patterns works fine, the same as text.  The easiest example that proves it is Cellular Automata, that is it! Doing vision is cool if you change the semantics slightly; doing markov chains with 3D vectors might end up better, because there is more invariance -> therefore