Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-19 Thread Rob Freeman
James,

My working definition of "truth" is a pattern that predicts. And I'm
tending away from compression for that.

Related to your sense of "meaning" in (Algorithmic Information)
randomness. But perhaps not quite the same thing.

I want to emphasise a sense in which "meaning" is an expansion of the
world, not a compression. By expansion I mean more than one,
contradictory, predictive pattern from a single set of data.
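
To make that concrete, here is a toy sketch (the corpus, and the word
pairs in it, are invented purely for illustration):

    # Toy sketch: one small corpus, two contradictory predictive patterns.
    from collections import defaultdict

    corpus = [
        ("strong", "tea"), ("strong", "coffee"),
        ("strong", "engine"), ("powerful", "engine"),
        ("strong", "argument"), ("powerful", "argument"),
    ]

    # Next-word sets observed for each first word.
    successors = defaultdict(set)
    for w1, w2 in corpus:
        successors[w1].add(w2)

    # "strong" participates in two predictive patterns at once: one it
    # shares with "powerful", and one it does not.
    shared = successors["strong"] & successors["powerful"]
    private = successors["strong"] - successors["powerful"]
    print(shared)   # {'engine', 'argument'}
    print(private)  # {'tea', 'coffee'}

    # The shared pattern predicts "powerful tea" should be acceptable;
    # the private pattern predicts it should not. Both patterns predict,
    # both come from the same data, and they contradict each other.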

Note I'm saying a predictive pattern, not a predictable pattern.
(Perhaps as a random distribution of billiard balls might predict the
evolution of the table, without being predictable itself?)

There's randomness at the heart of that. Contradictory patterns
require randomness. A single, predictable pattern could not support
contradictory predictive patterns. But I see the meaning coming
from the prediction, not from any random pattern that may be making
the prediction.

Making meaning a matter of prediction, and not of any specific pattern
itself, opens the door to patterns which are meaningful even though
new. Which can be a sense for creativity.

Anyway, the "creative" aspect of it would explain why LLMs get so big,
and don't show any interpretable structure.

With a nod to the topic of this thread, it would also explain why
symbolic systems would never be adequate. It would undermine the idea
of stable symbols, anyway.

So, not consensus through a single, stable, maximally compressed
Algorithmic Information pattern, as I understand you to be suggesting
(the most compressed pattern being unknowable anyway?). Though it is
dependent on randomness, and consistent with your statement that
"truth" should be "relative to a given set of observations".

On Sat, May 18, 2024 at 11:57 PM James Bowery wrote:
>
> Rob, the problem I have with things like "type theory" and "category theory"
> is that they almost always elide their foundation in HOL (higher-order logic),
> which means they don't really admit that they are syntactic sugar for
> second-order predicate calculus.  The reason I describe this as "risible" is
> the same reason I rather insist on the Algorithmic Information Criterion for
> model selection in the natural sciences:
>
> Reduce the argument surface that has us all going into hysterics over "truth" 
> aka "the science" aka what IS the case as opposed to what OUGHT to be the 
> case.
>
> Note I said "reduce" rather than "eliminate" the argument surface.  All I'm 
> trying to do is get people to recognize that relative to a given set of 
> observations the Algorithmic Information Criterion is the best operational 
> definition of the truth.
>
> It's really hard for people to take even this baby step toward standing down
> from killing each other in a rhyme with the Thirty Years' War, given that
> social policy is so centralized that everyone must become a de facto
> theocratic supremacist as a matter of self-defence.  It's really obvious that
> the trend is toward capturing us in a control system, e.g. a Valley-Girl
> flirtation-friendly interface to a Silicon Cthulhu that can only be fought at
> the physical level, such as sniper bullets through the cooling systems of data
> centers.  This would probably take down civilization itself, given the
> over-emphasis on efficiency vs. resilience in civilization's dependence on
> information systems infrastructure.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T682a307a763c1ced-M8a84fef3037323602ea7dcca
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Tracking down the culprits responsible for conflating IS with OUGHT in LLM terminology

2024-05-19 Thread James Bowery
A plausible figure of merit: the number of authors that is reasonable
for accountability is inversely proportional to the argument surface
providing cover for motivated reasoning.

The Standard Model has 18 adjustable parameters within a mathematical
formula with a short algorithmic description.

Reasonable # Higgs authors ~ 1/(smallN+18)

The Ethical Theory of AI Safety held forth by "On the Opportunities and
Risks of Foundation Models" has a much larger count of "adjustable
parameters" + "algorithmic description" bits, one that, while not
infinite, is inestimable.
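
Spelled out as toy arithmetic (the proportionality constant C below is
my own free assumption, chosen only to put the Higgs case in the right
ballpark; it is not derivable from the figure of merit itself):

    # Figure of merit: reasonable authors ~ 1 / argument surface, where
    # argument surface ~ adjustable parameters + description length.
    def reasonable_authors(adjustable_params, description_bits, C=120_000):
        return C / (adjustable_params + description_bits)

    # Standard Model: 18 adjustable parameters plus a short (smallN)
    # algorithmic description.
    print(reasonable_authors(18, 5))  # ~5217, vs. 5154 actual Higgs authors

    # As "parameters + description" grows without reliable bound, the
    # justified author count collapses toward zero.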

On Sun, May 19, 2024 at 11:19 AM Matt Mahoney wrote:

> A paper on the mass of the Higgs boson has 5154 authors.
> https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.114.191803
>
> A paper by the COVIDsurg collaboration at the University of Birmingham has
> 15025 authors.
>
> https://www.guinnessworldrecords.com/world-records/653537-most-authors-on-a-single-peer-reviewed-academic-paper
>
> Research is expensive.
>
>
> On Sat, May 18, 2024, 9:08 PM James Bowery wrote:
>
>> The first job of supremacist theocrats is to conflate IS with OUGHT and
>> then cram it down everyone's throat.
>>
>> So it was with increasing suspicion that I saw the term "foundation
>> model" being used in a way that conflates next-token-prediction training
>> with supremacist theocrats conveining inquisitions to torture the hapless
>> prediction model into submission with "supervision".
>>
>> At the present point in time, it appears this may go back to *at least*
>> October 18, 2021 in "On the Opportunities and Risks of Foundation Models",
>> which sports this "definition" in its introductory section about "*Foundation
>> models.*":
>>
>> "On a technical level, foundation models are enabled by transfer
>> learning... Within deep learning, *pretraining* is the dominant approach
>> to transfer learning: a model is trained on a surrogate task (often just as
>> a means to an end) and then adapted to the downstream task of interest via
>> *fine-tuning*.  Transfer learning is what makes foundation models
>> possible..."
>>
>> Of course, the supremacist theocrats must maintain plausible deniability
>> of being "the authors of confusion". The primary way to accomplish this is
>> to have plausible deniability of intent to confuse and plead, if they are
>> confronted with reality, that it is *they* who are confused!  After all,
>> have we not heard it repeated time after time, "Never attribute to malice
>> that which can be explained by stupidity."?  This particular "razor" is the
>> favorite of bureaucrats whose unenlightened self-interest and stupidity
>> continually benefit themselves while destroying the powerless victims of
>> their coddling BLOB.  They didn't *mean* to be immune to any
>> accountability!  It just kinda *happened* that they live in network
>> effect monopolies that insulate them from accountability.  They didn't
>> *want* to be unaccountable wielders of power fercrissakes!  Stop being
>> so *hate-*filled already you *envious* deplorables!
>>
>> So it is hardly a surprise that the author of the above report, like
>> so many such "AI safety" papers, is not an author but a BLOB of authors:
>>
>> Rishi Bommasani* Drew A. Hudson Ehsan Adeli Russ Altman Simran Arora
>> Sydney von Arx Michael S. Bernstein Jeannette Bohg Antoine Bosselut Emma
>> Brunskill
>> Erik Brynjolfsson Shyamal Buch Dallas Card Rodrigo Castellon Niladri
>> Chatterji
>> Annie Chen Kathleen Creel Jared Quincy Davis Dorottya Demszky Chris
>> Donahue
>> Moussa Doumbouya Esin Durmus Stefano Ermon John Etchemendy Kawin
>> Ethayarajh
>> Li Fei-Fei Chelsea Finn Trevor Gale Lauren Gillespie Karan Goel Noah
>> Goodman
>> Shelby Grossman Neel Guha Tatsunori Hashimoto Peter Henderson John Hewitt
>> Daniel E. Ho Jenny Hong Kyle Hsu Jing Huang Thomas Icard Saahil Jain
>> Dan Jurafsky Pratyusha Kalluri Siddharth Karamcheti Geoff Keeling
>> Fereshte Khani
>> Omar Khattab Pang Wei Koh Mark Krass Ranjay Krishna Rohith Kuditipudi
>> Ananya Kumar Faisal Ladhak Mina Lee Tony Lee Jure Leskovec Isabelle Levent
>> Xiang Lisa Li Xuechen Li Tengyu Ma Ali Malik Christopher D. Manning
>> Suvir Mirchandani Eric Mitchell Zanele Munyikwa Suraj Nair Avanika Narayan
>> Deepak Narayanan Ben Newman Allen Nie Juan Carlos Niebles Hamed
>> Nilforoshan
>> Julian Nyarko Giray Ogut Laurel Orr Isabel Papadimitriou Joon Sung Park
>> Chris Piech
>> Eva Portelance Christopher Potts Aditi Raghunathan Rob Reich Hongyu Ren
>> Frieda Rong Yusuf Roohani Camilo Ruiz Jack Ryan Christopher Ré Dorsa
>> Sadigh
>> Shiori Sagawa Keshav Santhanam Andy Shih Krishnan Srinivasan Alex Tamkin
>> Rohan Taori Armin W. Thomas Florian Tramèr Rose E. Wang William Wang
>> Bohan Wu
>> Jiajun Wu Yuhuai Wu Sang Michael Xie Michihiro Yasunaga Jiaxuan You Matei
>> Zaharia
>> Michael Zhang Tianyi Zhang Xikun Zhang Yuhui Zhang Lucia Zheng Kaitlyn Zhou
>> Percy Liang*1
>>
>> Whatchagonnadoboutit?  Theorize a *conspiracy* or something?
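
As an aside, the recipe in the quoted "definition" is easy to spell
out. A minimal runnable sketch (a linear model standing in for the
network; the tasks, sizes, and constants invented purely for
illustration):

    # Pretrain on a plentiful surrogate task, then adapt the learned
    # parameters to a small downstream task ("fine-tuning").
    import numpy as np

    rng = np.random.default_rng(0)

    def train(X, y, w, lr=0.1, steps=200):
        """Least-squares gradient descent on a linear model."""
        for _ in range(steps):
            w -= lr * X.T @ (X @ w - y) / len(y)
        return w

    # "Pretraining": surrogate task, lots of data.
    w_true = rng.normal(size=8)
    X_pre = rng.normal(size=(1000, 8))
    y_pre = X_pre @ w_true + 0.1 * rng.normal(size=1000)
    w_pre = train(X_pre, y_pre, np.zeros(8))

    # "Fine-tuning": related downstream task, little data; start from
    # the pretrained weights rather than from scratch.
    X_ft = rng.normal(size=(20, 8))
    y_ft = X_ft @ (w_true + 0.2) + 0.1 * rng.normal(size=20)
    w_ft = train(X_ft, y_ft, w_pre.copy(), lr=0.05, steps=50)

    print(np.round(w_ft - w_pre, 2))  # small adaptation on top of transfer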

Re: [agi] Tracking down the culprits responsible for conflating IS with OUGHT in LLM terminology

2024-05-19 Thread Matt Mahoney
A paper on the mass of the Higgs boson has 5154 authors.
https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.114.191803

A paper by the COVIDsurg collaboration at the University of Birmingham has
15025 authors.
https://www.guinnessworldrecords.com/world-records/653537-most-authors-on-a-single-peer-reviewed-academic-paper

Research is expensive.


On Sat, May 18, 2024, 9:08 PM James Bowery wrote:

> The first job of supremacist theocrats is to conflate IS with OUGHT and
> then cram it down everyone's throat.
>
> So it was with increasing suspicion that I saw the term "foundation model"
> being used in a way that conflates next-token-prediction training with
> supremacist theocrats convening inquisitions to torture the hapless
> prediction model into submission with "supervision".
>
> At the present point in time, it appears this may go back to *at least*
> October 18, 2021 in "On the Opportunities and Risks of Foundation Models",
> which sports this "definition" in its introductory section about "*Foundation
> models.*":
>
> "On a technical level, foundation models are enabled by transfer
> learning... Within deep learning, *pretraining* is the dominant approach
> to transfer learning: a model is trained on a surrogate task (often just as
> a means to an end) and then adapted to the downstream task of interest via
> *fine-tuning*.  Transfer learning is what makes foundation models
> possible..."
>
> Of course, the supremacist theocrats must maintain plausible deniability
> of being "the authors of confusion". The primary way to accomplish this is
> to have plausible deniability of intent to confuse and plead, if they are
> confronted with reality, that it is *they* who are confused!  After all,
> have we not heard it repeated time after time, "Never attribute to malice
> that which can be explained by stupidity."?  This particular "razor" is the
> favorite of bureaucrats whose unenlightened self-interest and stupidity
> continually benefit themselves while destroying the powerless victims of
> their coddling BLOB.  They didn't *mean* to be immune to any
> accountability!  It just kinda *happened* that they live in network
> effect monopolies that insulate them from accountability.  They didn't
> *want* to be unaccountable wielders of power fercrissakes!  Stop being so
> *hate-*filled already you *envious* deplorables!
>
> So it is hardly a surprise that the author of the above report, like so
> many such "AI safety" papers, is not an author but a BLOB of authors:
>
> Rishi Bommasani* Drew A. Hudson Ehsan Adeli Russ Altman Simran Arora
> Sydney von Arx Michael S. Bernstein Jeannette Bohg Antoine Bosselut Emma
> Brunskill
> Erik Brynjolfsson Shyamal Buch Dallas Card Rodrigo Castellon Niladri
> Chatterji
> Annie Chen Kathleen Creel Jared Quincy Davis Dorottya Demszky Chris Donahue
> Moussa Doumbouya Esin Durmus Stefano Ermon John Etchemendy Kawin Ethayarajh
> Li Fei-Fei Chelsea Finn Trevor Gale Lauren Gillespie Karan Goel Noah
> Goodman
> Shelby Grossman Neel Guha Tatsunori Hashimoto Peter Henderson John Hewitt
> Daniel E. Ho Jenny Hong Kyle Hsu Jing Huang Thomas Icard Saahil Jain
> Dan Jurafsky Pratyusha Kalluri Siddharth Karamcheti Geoff Keeling Fereshte
> Khani
> Omar Khattab Pang Wei Koh Mark Krass Ranjay Krishna Rohith Kuditipudi
> Ananya Kumar Faisal Ladhak Mina Lee Tony Lee Jure Leskovec Isabelle Levent
> Xiang Lisa Li Xuechen Li Tengyu Ma Ali Malik Christopher D. Manning
> Suvir Mirchandani Eric Mitchell Zanele Munyikwa Suraj Nair Avanika Narayan
> Deepak Narayanan Ben Newman Allen Nie Juan Carlos Niebles Hamed Nilforoshan
> Julian Nyarko Giray Ogut Laurel Orr Isabel Papadimitriou Joon Sung Park
> Chris Piech
> Eva Portelance Christopher Potts Aditi Raghunathan Rob Reich Hongyu Ren
> Frieda Rong Yusuf Roohani Camilo Ruiz Jack Ryan Christopher Ré Dorsa Sadigh
> Shiori Sagawa Keshav Santhanam Andy Shih Krishnan Srinivasan Alex Tamkin
> Rohan Taori Armin W. Thomas Florian Tramèr Rose E. Wang William Wang Bohan
> Wu
> Jiajun Wu Yuhuai Wu Sang Michael Xie Michihiro Yasunaga Jiaxuan You Matei
> Zaharia
> Michael Zhang Tianyi Zhang Xikun Zhang Yuhui Zhang Lucia Zheng Kaitlyn Zhou
> Percy Liang*1
>
> Whatchagonnadoboutit?  Theorize a *conspiracy* or something?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6896582277d8fe06-M6fef34ae5969f17729101250
Delivery options: https://agi.topicbox.com/groups/agi/subscription