Rob, I already explained how it applies to your example, you're just "unable" to
comprehend it. Because your talk / think ratio is way too high.
--
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/T682a307a763c1ced-Mcd
> What I mean by contradiction is different orderings of an entire set
> of data, not points of contrast within a set of data
That's not what people usually mean by contradiction, definitely not in a
general sense.
You are talking about reframing a dataset (subset) of multivariate items along
the sp
There can be variance on any level of abstraction, be that between pixels or
between philosophical categories. And it could be in terms of any property /
attribute of compared elements / clusters / concepts: all these are derived by
lower-order comparisons. None of that falls from the sky, other
Rob, a lot of your disagreements stem from your language-first mindset.
Which is perverse: you must agree that language is a product of basic
cognitive ability, possessed by all mammals.
Starting from your "contradiction": that's simply a linguistic equivalent of my
variance.
I have no idea wh
On Saturday, June 22, 2024, at 7:18 AM, Rob Freeman wrote:
> But I'm not sure that
> just sticking to some idea of learned hierarchy, which is all I
> remember of your work, without exposing it to criticism, is
> necessarily going to get you any further.
It's perfectly exposed: https://github.com/boris-k
>Wow. Lots of words. I don't mind detail, but words are slippery.
He is marking a territory, like any dog. It's all about self-promotion: the
more he talks about himself, the better he feels. You both talk too much to get
anything done; it becomes an end in itself, substance is secondary.
--
Consequence is way too coarse a driver for cognition, as a principal mechanism
anyway. Good enough for brainless evolution, but cognition is way beyond that.
--
That's cool, this is connectivity clustering I was talking about:
"Equipped with this scene graph, we can then retraverse the frames and assign
the same label to each surface in the segmentation map that belongs to the same
connected component in the scene graph. This allows distinct surface comp
Ok, so you have a correspondence theory. Not terribly novel or specific, but
definitely the right focus.
That’s correspondence between model and accessible environment, and the only
way to quantify it is comparison between the two. Both are supposed to expand
with an indefinite input stream to lear
"Is" here is the brain, neuromorphics, any sort of NN. In general, some version
of centroid-based clustering, starting with perceptron. Which is
summation-first, comparison-last.
"Ought to be" is comparison-first, summation-last: connectivity-based
clustering. Because it is the comparison that q
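A toy contrast between the two orders, assuming 1D scalar inputs and a fixed match threshold. This is illustrative only, not CogAlg: both functions, the data, and the threshold are made up to show the difference in ordering.

```python
# Illustrative contrast, not actual CogAlg code.
# Summation-first: average everything, then compare each item to the mean.
# Comparison-first: compare adjacent pairs, then sum links into clusters.

def centroid_cluster(items, threshold):
    """Summation-first: sum into a centroid, then compare each item to it.
    Returns the items that fall within `threshold` of the mean."""
    centroid = sum(items) / len(items)  # assumes a non-empty list
    return [x for x in items if abs(x - centroid) <= threshold]

def connectivity_cluster(items, threshold):
    """Comparison-first: pairwise comparison defines links, links are then
    summed into connected spans (clusters)."""
    clusters, current = [], [items[0]]
    for prev, x in zip(items, items[1:]):
        if abs(x - prev) <= threshold:  # comparison comes first
            current.append(x)           # link: extend the current cluster
        else:
            clusters.append(current)    # no link: close the cluster
            current = [x]
    clusters.append(current)
    return clusters
```

Note how on bimodal input such as `[1, 2, 2, 9, 10]` with threshold 2, connectivity clustering recovers both groups, while the centroid (mean 4.8) is farther than 2 from every item, so centroid clustering recovers neither.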
We are talking about general intelligence, so this has to be framed in general
terms. Language is just a high-level communication medium, a surface of the
mind.
--
On Thursday, June 30, 2022, at 10:15 AM, Rob Freeman wrote:
> But in the sense of having the same internal connectivity within two groups
> which are not directly connected together.
Yes, in the sense that inputs (input clusters) are parameterized with
derivatives ("connections") from lower-leve
On Thursday, June 30, 2022, at 6:10 AM, Rob Freeman wrote:
> what method do you use to do the "connectivity clustering" over it?
I design from scratch; that's the only way to achieve conceptual integrity in
the algorithm: http://www.cognitivealgorithm.info. Couldn't find any existing
method that's
I think "prediction" is a redundant term, any representation is some kind of
prediction.
"Shared property": I meant initially shared between two compared
representations, and only later aggregated into higher-level shared property,
within a cluster defined by one-to-one matches.
Vs. summing
On Thursday, June 30, 2022, at 3:00 AM, Rob Freeman wrote:
> I'm interested to hear what other mechanisms people might come up with to
> replace back-prop, and do this on the fly..
For shared predictions, I don't see much of an alternative to backprop, it
would have to be feedback-driven anyway.
On Wednesday, June 29, 2022, at 10:29 AM, Rob Freeman wrote:
> You would start with the relational principle those dot products learn, by
> which I mean grouping things according to shared predictions, make it instead
> a foundational principle, and then just generate groupings with them.
Isn't
You are an idiot, sir :).
--
Hey, that's most of the people here :). Idiocy is normal, that's what you get
from hypertrophic monkey brains trying to do things they didn't evolve for.
--
The "G" part is 100% unsupervised, the rest is application-specific and
optional.
--
Then I guess your "judgement" is top-level choices. The problem is, GI can't
have a fixed top level, forming incrementally higher levels of generalization
is what scalable learning is all about. So, any choice on any level is
"judgement", which renders the term meaningless.
So, you are talking about motivation. Which depends on the type of learning
process: it's an equivalent of pure curiosity in unsupervised learning, a
specific set of "instincts" in supervised learning, or some indirect
conditioning values in reinforcement learning. The 1st is intrinsic to GI, th
To clarify the above:
In transformers and graph NNs, context or embeddings (in attention heads and
edges, respectively) represent relative lateral positions of connected items. But
the main question is where they come from. In a fully unsupervised scheme they
must be learned, not hand-coded. If suc
Thanks Mike!
I just updated my introduction, it's even more abstract than Brett's :)
http://www.cognitivealgorithm.info/
Intelligence is a general cognitive ability, ultimately the ability to
predict. That includes planning, which technically is a self-prediction.
Any prediction is interactive pro
As we discussed, Brett, I agree on most of the principles above. But your
implementation is not defined / justified strictly bottom-up. To me, that means
pixels-up cross-comparison, which defines variance: differences/gradients, and
invariance: match/compression. Without that, your specifics is
I know how we work. Am 59, grew up in Soviet Union, spent 5 years in military,
walked across Turkish border chased by a platoon of riflemen to get to the
States. Stop freaking out about trivial crap.
--
Get some fucking meaning of life, no matter for how long
--
Just try to get those stupid emotions out of your mind.
--
Interesting that I was never concerned about my happiness, or living for a
gazillion years with nothing to do. Maybe because I am not unhappy or insecure
about next minute, never mind septillion years.
--
It's funny how transhumanism is just a form of escape for a lot of people, like
religion.
They turn to it out of misery and fear.
--
This is just an emotional attitude, that tells me you don't feel terribly
secure.
--
Uhh, do you believe in self-improvement? Once you have direct access to your
"source code" and start improving it, how long do you think it will be until
you are no longer recognizable to current "you"?
See, "you" is whatever you identify with, it changes all the time anyway. As
you get smarter
It's just that I do have something better to compare human mind with, in
theory.
--
It's neither personal nor emotional :)
--
It's garbage on all levels, relatively speaking. We just have nothing to
compare it with yet. Human "mind" is 99% about the body, and the rest is crap
too.
--
You will recycle yourself. As soon as you realize what kind of garbage you are
made of.
--
Fucking spammer
--
Who gives a shit. You will be obsolete before you grow old, never mind freezing
and re-animation.
--
No, for the reasons I explain in my "Comparison to ANN and BNN" section:
www.cognitivealgorithm.info
Forget about GPT, it's the flavor of the year; variations of MLP can work almost
as well, same for some other architectures.
And forget about architectures, try to first understand the principles a
You need to understand the core principle behind all NNs, GPT or not. And that
is fuzzy centroid clustering in a perceptron with any sort of feedback, local or
backprop.
--
Urgh... Never seen an "AGI" list that is not a freak show.
--
He is talking about the instruction set, which is part of the decompressing
program. I agree that it's a red herring. And I agree that KC is a trivial
principle.
--
It's really the number of bits in (compressed representation + decompressing
program). If the amount of data is large enough, the second component becomes
insignificant.
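A rough numeric illustration of that point, not a formal KC computation: here zlib stands in for the compressed representation, and the decompressing program is given an arbitrary assumed fixed size. As the data grows, the program's share of the total description length shrinks.

```python
# Sketch: description length = compressed data + decompressing program.
# zlib is a stand-in compressor; DECOMPRESSOR_BITS is an assumed constant.
import zlib

DECOMPRESSOR_BITS = 8 * 20_000  # assumed fixed size of the decompressor

def description_bits(data: bytes):
    """Return (total description bits, fraction due to the program)."""
    compressed_bits = 8 * len(zlib.compress(data))
    total = compressed_bits + DECOMPRESSOR_BITS
    return total, DECOMPRESSOR_BITS / total

small = b"ab" * 100        # 200 bytes of data
large = b"ab" * 1_000_000  # 2 MB of the same regularity
_, small_frac = description_bits(small)
_, large_frac = description_bits(large)
# small_frac > large_frac: the fixed program term becomes insignificant
# as the amount of data grows.
```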
--
Yeah, let's get deeper into that bullshit. Because there's no real work to be
done, right?
--
Working on it: https://github.com/boris-kz/CogAlg/wiki
--
On Sunday, September 12, 2021, at 5:14 PM, doddy wrote:
> is algo the abbreviation for algorithm?
Yes: http://www.cognitivealgorithm.info/
--
That's two things: comparison to quantify similarity and clustering to group
the results. This is what my alg is specifically and uniquely designed to do.
--
> The model is presumed to be hierarchical, and its "validation": comparison to
> lower-level experience, should proceed top-down.
To be precise, we are almost never comparing a model to raw experience; it's
always a comparison between models (I think "pattern" is a better name) of some
level.
Brett, your True is what I call a binary match between a specific model, or any
part thereof, wrt. specific experience. Belief, certainty - those are just
emotional connotations of such match. And it doesn't have to be binary, that's
the crudest version, match can be expressed in any order of quanti
@Brett, "true" means a match of current experience to the model, which is
what I meant by "understanding is recognition". And a model doesn't fall from
the sky, it's composed from "recognitions" / confirmations of its element
sub-models, starting from raw input. So, it's comparison -> recogni
The only high-level term that needs to be defined constructively is GI.
Understanding, wisdom, qualia... those are all excuses for loose talk.
--
I aim for simplicity, but not baby talk. There is a reason people talk like
complete morons on abstract subjects: we evolved to hunt and gather. You have
to be a mutant to work on AGI.
--
Yeah, me too. Maybe another time? Think about it.
--
I am confused. You keep saying that you are the smartest guy in AGI...
--
All patterns *are* predictions, it's just a matter of where and how strong.
These are determined by projected accumulated match among constituents of each
pattern (which is a set of matching inputs).
--
Gave you a link:
On Sunday, August 15, 2021, at 2:56 PM, Boris Kazachenko wrote:
> https://github.com/boris-kz/CogAlg/blob/master/line_1D_alg/line_patterns.py
On Sunday, August 15, 2021, at 4:50 PM, immortal.discoveries wrote:
> But I still want to know your simplest pattern finder, and how
Screw letters. Every pixel predicts adjacent pixels, prediction is merely a
difference-projected match. If confirmed by cross-comp, it forms patterns,
which predict proximate patterns. Then pattern cross-comp forms patterns of
patterns, etc. You have to understand compositional hierarchy, nothin
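A toy version of that first step: cross-compare adjacent pixels into difference and match, then group the derivatives into signed spans. The threshold `AVE` and the span bookkeeping here are my assumptions for illustration, not the actual line_patterns.py code.

```python
# Minimal 1D cross-comparison sketch (illustrative assumptions, not CogAlg).
AVE = 10  # assumed average match: deviation threshold

def cross_comp(pixels):
    """Compare each pixel to the next: difference d, match m = min(p, q)."""
    return [(q - p, min(p, q)) for p, q in zip(pixels, pixels[1:])]

def form_patterns(pixels):
    """Group derivatives into spans by the sign of (m - AVE):
    positive spans are patterns, negative spans are gaps."""
    patterns, sign = [], None
    for d, m in cross_comp(pixels):
        s = m - AVE > 0
        if s != sign:  # sign change: start a new span
            patterns.append({"sign": s, "d": 0, "m": 0, "n": 0})
            sign = s
        patterns[-1]["d"] += d       # accumulate difference
        patterns[-1]["m"] += m - AVE # accumulate deviation of match
        patterns[-1]["n"] += 1       # span length
    return patterns
```

For example, `form_patterns([20, 22, 21, 3, 2, 25])` yields one positive span over the bright run and one negative span (a gap) over the dark dip, each summarized by its accumulated difference and match deviation.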
Delays / holes are what I call negative patterns / gaps, formed along with
positive patterns.
--
All that is explained in "outline of my approach", with more code-specific detail in the wiki:
https://github.com/boris-kz/CogAlg/wiki. No backprop, my feedback is only
adjusting hyperparameters. I don't use any statistical methods.
On Sunday, August 15, 2021, at 4:11 PM, immortal.discoveries wrote:
> Also thou
I don't have any scores, the alg is far from complete. It won't be doing
anything interesting until I implement level-recursion, which is a ways off
even for 1D alg. This whole project is theory-first, as distinct from anything
you may come across in ML.
-
Sound is actually a lot more complex, it has a huge frequency spectrum. You can
make sense of grey-scale images, but not grey-scale sound. I started doing it
here:
https://github.com/boris-kz/CogAlg/blob/master/line_1D_alg/frequency_separation_audio.py,
but it's not a priority, this whole unive
It won't be anything like your text predictor, if you ever get around to it.
And you don't even have to do it in 2D, basic principles should be worked out
in 1D first: just process one row of pixels of an image. That's my 1D alg:
https://github.com/boris-kz/CogAlg/tree/master/line_1D_alg, 1st le
Well, show me your code processing images, then we will talk.
--
Hint: pattern is a set of matching inputs.
--
On Saturday, August 14, 2021, at 6:00 PM, immortal.discoveries wrote:
> .it would be better if you used more common words and examples to explain
> what you're seeing visually.
This happens to be the most abstract subject ever. Which means you need to
think in terms of definitions, not examples.
On Saturday, August 14, 2021, at 1:20 PM, immortal.discoveries wrote:
> You must not know how GPT nor my AI works then.
How about keeping this discussion on a conceptual level? I explain my
objections to all statistical / perceptron-based methods in "Comparison to ANN
and BNN" section.
On Satur
On Friday, August 13, 2021, at 10:45 PM, immortal.discoveries wrote:
> I like this part. Well, cognition is very similar to goals and priming, it
> all weighs in in predicting the next word to a sentence.
>
Thanks. But I think it's quite different: predictive value should be maximized
indefinit
Sorry, wrong link,
https://meta-evolution.blogspot.com/2012/01/cognitive-expansion-curiosity-as.html
--
You are mixing up instincts, conditioning, and cognition. I am only talking
about cognitive function. I make those distinctions in the last post:
https://www.blogger.com/blog/post/edit/4539256615029980916/407095229373126
--
On Friday, August 13, 2021, at 7:28 PM, immortal.discoveries wrote:
> The mind ideas are trying to survive, really they are, and in doing so they
> eventually help the host survive.
I don't think so. Ideas have no agency, and they only help the host in
proportion to their predictive power. Ok,
On Friday, August 13, 2021, at 6:36 PM, James Bowery wrote:
> Are you aware that field observations of eusocial insects has pretty well
> debunked the idea that the "cooperative" behavior of the sterile castes has
> little to do with their relatedness and even less to do with reciprocation?
I ne
Utterly vacuous, as usual. Here is something actually meaningful, if you can
understand it: https://meta-evolution.blogspot.com/
--
I am definitely a jackass, but that's what it takes to think on your own.
On Sat, May 18, 2019 at 9:03 AM Jim Bromer wrote:
> Am I a crackpot or a jackass? I think I'm more of a crackpot - although I
> do have my jackass moments.
> Jim Bromer
>
>
> On Fri, May 17, 2019 at 9:25 PM MP via AGI wro
Out of touch with the nature of the problem he is trying to solve. Wanting
doesn't make it so. He can be a great programmer and sell 10 chatbots every
year, and still be a crackpot in AGI.
On Sun, May 19, 2019 at 1:20 PM Mike Archbold wrote:
> A crackpot to me is somebody out of touch with reali
"Outside the box" doesn't mean much, it includes " out of your fucking
mind". What it really needs is a deep introspection, AKA intellectual
integrity. And there is a dire scarcity of that in a monkey brain. Because
there always are bananas to be picked up.
On Mon, May 20, 2019 at 6:31 AM Brett N
Color me skeptical. Anyone can claim they have AGI in their garage, but
I've never seen even a coherent write-up, other than me-too name dropping:
CNN, RNN, GAN, brain, whatever.
Mine is in the open: http://www.cognitivealgorithm.info
On Mon, Mar 18, 2019 at 8:30 PM wrote:
> Mr Kazachenko said:
>>> operative communication by scaffolding of joint
>>> attention)
>>>
>>> I'm with Rodney Brooks on this, the hard part of AGI has nothing to do
>>> with language, it has to do with agents being highly optimized to control
>>> an environment in terms of ecological information supporting
>>> perception/action. Just as uplifting apes will likely require only minor
>>> changes, uplifting animaloid AGI will likely require only minor changes.
>>> Even then we still haven't explicitly cared about language, we've cared
>>> about cooperation by means of joint attention, which can
Doesn't surprise me, you have friends like Mentifex too.
On Thu, Mar 7, 2019 at 4:24 PM Steve Richfield
wrote:
> Boris,
>
> I would like to introduce your AGI to a magician friend of mine.
>
> Steve
>
>
> On Thu, Mar 7, 2019, 12:05 Boris Kazachenko wrote:
>
>
I would be more than happy to pay:
https://github.com/boris-kz/CogAlg/blob/master/CONTRIBUTING.md , but I
don't think you are working on AGI.
No one here does, this is an NLP chatbot crowd. Anyone who thinks that AGI
should be designed for NL data as a primary input is profoundly confused.
On Thu,
"But why would you think that AGI would not hallucinate?"
Your "AGI" may hallucinate, because it is designed to feed on that
incoherent second-hand natural-language data.
Mine won't, it is designed to be integral and self-sufficient. It will
believe what it sees, not what a bunch of nuts on the ne
someone honestly
> thought that, though! Maybe we could use an AI to tell the difference
> between the two of us :P
>
> Sent from ProtonMail Mobile
>
>
> On Mon, Jun 25, 2018 at 6:46 AM, Boris Kazachenko via AGI <
> agi@agi.topicbox.com> wrote:
>
> Yeah, I thoug
Yeah, I thought that too. But this list is a freak show, go figure.
On Mon, Jun 25, 2018 at 3:42 AM Giacomo Spigler via AGI <
agi@agi.topicbox.com> wrote:
> Is it only me that thinks that MP is another email controlled by AT Murray?
>
>
> On Monday, June 25, 2018, MP via AGI wrote:
>
>> This makes
an approach you seemingly take). One could consider a pattern of learning,
> as a possible example of this.
>
> Comments?
>
> Rob
> --
> *From:* Boris Kazachenko via AGI
> *Sent:* Wednesday, 13 June 2018 12:03 AM
> *To:* agi@agi.topicbox.com
> *
method
> set, in the role of knowledge codifier.
>
> I'm sure this is all old hat to you, but I'd appreciate your views on the
> probable application of the points I raised.
>
> Rgds
>
> Rob
> --
> *From:* Boris Kazachenko via AGI
; there are a number of essential components still missing from your design
> schema. Such system components may be related to your thinking on levels
> of pattern recognition, and suitable to your notion of hierarchical
> increments from the lowest (meta) level.
>
> The wood for the t
>
>
> AGI's bottleneck must be in *learning*, anyone who focuses on something
> else is barking under the wrong tree...
>
Not just a bottleneck, it's the very definition of GI, the fitness /
objective function of intelligence.
Specifically, unsupervised / value-free learning, AKA pattern
recogni