On Sat, Jun 22, 2024 at 6:05 PM Boris Kazachenko <cogno...@gmail.com> wrote:
> ...
> You both talk too much to get anything done...
Ah well, you may be getting lots done, Boris. The difference, perhaps, is that I don't know everything yet. Though after 35 years it can be surprising what other people don't know. I like to help where I can. Some people just have no clue. Even LeCun. Vision guy. He's probably been thinking about language for only 7 years or so, since transformers. He only knows the mental ruts his vision, back-prop career has led him to. You can be deep in one problem and shallow in another.

But I don't know everything. Trying to explain keeps me thinking about it. And here and there you get good new information. For instance, that paper James introduced me to, perhaps for the wrong reasons, was excellent new information:

A logical re-conception of neural networks: Hamiltonian bitwise part-whole architecture
E.F.W. Bowen, R. Granger, A. Rodriguez
https://openreview.net/pdf?id=hP4dxXvvNc8

Very nice. It's the only other mention I recall of the open-endedness of "position-based"/"embedding" type encoding as a key to creativity. A nice vindication for me; it helps give me confidence I'm on the right track. And they have some ideas for extensions to vision, etc. Though I don't think they see the contradiction angle.

And, another example: commenting on that LeCun post (the one mentioning the "puzzle" of transformer world models which get less coverage as you increase resolution... A puzzle. Ha. Nice vindication in itself...), Twitter prompted me to a guy in Australia who, it turns out, has just published a paper showing that sequence networks with a lot of shared "walk" end points tend to synchronize locally.

Wow. A true wow. Shared endpoints constrain local synchronization. I was wondering about that! How shared end points could constrain sub-net synchrony in a feed-forward network was something I was struggling with. I think I need it. So a paper explaining that they do is well cool. New information.
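The intuition that shared walk endpoints promote local synchronization can be sketched with a toy calculation. This is my illustration, not the paper's method: I use the classic Laplacian eigenratio (lambda_max / lambda_2) as a synchronizability proxy, where smaller ratios are conventionally taken to mean easier synchronization. Lizier et al.'s own analytic measure differs; the graphs and the `laplacian_eigenratio` helper here are hypothetical examples.

```python
import numpy as np

def laplacian_eigenratio(adj):
    """Synchronizability proxy: lambda_max / lambda_2 of the graph Laplacian.
    Smaller ratios are conventionally associated with easier synchronization."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    eig = np.sort(np.linalg.eigvalsh(lap))
    return eig[-1] / eig[1]  # eig[1] is the algebraic connectivity

n = 6

# A chain: walks have many distinct endpoints, few shared ones.
chain = np.zeros((n, n))
for i in range(n - 1):
    chain[i, i + 1] = chain[i + 1, i] = 1.0

# A hub (star): every walk shares the central node as an endpoint.
hub = np.zeros((n, n))
for i in range(1, n):
    hub[0, i] = hub[i, 0] = 1.0

print(laplacian_eigenratio(chain))  # larger ratio: harder to synchronize
print(laplacian_eigenratio(hub))    # smaller ratio: easier to synchronize
```

On these two graphs the hub, where walks share an endpoint, comes out with the smaller eigenratio, consistent with the shared-endpoints intuition, though real sequence networks are directed and far less tidy than this.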
It gives me confidence to move forward looking for the right kind of feed-forward net, to try to get local synchronizations corresponding to substitution groupings/embeddings. Those substitution "embeddings" would be "walks" between such shared end points, and I want them to synchronize.

Paper here:

Analytic relationship of relative synchronizability to network structure and motifs
Joseph T. Lizier, Frank Bauer, Fatihcan M. Atay, and Jürgen Jost

A more populist presentation, shared on my FB group here: https://www.facebook.com/share/absxV8ij9rio2j9a/

He has a github: https://github.com/jlizier/linsync

And that guy Lizier appears to be part of a hitherto unsuspected sub-field of neuro-computational research attempting to reconcile synchronized oscillations with some kind of processing breakdown. None of it from my point of view, though, I think. I need to explore where there might be points of connection there.

So, lots of opportunity to waste time, sure. But I'm not sure that just sticking to some idea of learned hierarchy, which is all I remember of your work, without exposing it to criticism, is necessarily going to get you any further.

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/T682a307a763c1ced-M0018a3d4180b84e0801eae92
Delivery options: https://agi.topicbox.com/groups/agi/subscription