Being completely ignorant of everything mentioned, here, I can't help but wonder 
whether there is a path from not-even-wrong to schema-for-the-data. Going back to 
EricS' prior comment regarding when a (time/speed) difference of scale becomes a 
difference of kind, I have trouble accepting the convexity (or even closure) of any 
of the referent spaces. (I have no trouble accepting the convexity and closure of 
the models, as defined/abstracted from the referent, just the fidelity of the 
assumptions.) Like Farrell & Shalizi imply in their comment, such models work 
well for description. The problems arise when the description is fed *back* into 
the control. LLMs currently have a sticky re-training hurdle, requiring 
hybridization in order to complete the loop. And that's also been the case with 
economic models. Rather than map to systems biology, I'd prefer to map to progress 
in cyber-physical systems, where the models are more tightly and granularly coupled 
with the systems they control.

It feels idealistic ("rationalist"?) to think that these models-in-a-vat (will?, can?, do?) capture 
the "tacit knowledge" adequately, faithfully. I'm reminded of the relationship between idealized 
neurons and neuronal networks, including neurotransmitters, hormones, glial cells, etc. Add to that long 
distance signals like proprioception, nociception, etc. and it seems clear that a monolithic LLM cannot be as 
good at "on the fly" model building as an organism can be.

Maybe it's obviously modeled as [a] hypergraph[s]. And that might be the only way it can 
be built to dynamically/appropriately adjust fine to coarse granularity and tight to 
loose coupling for any given subset of covariates ... *as* the data is extruded through 
the model[s] into the data[base|lake]. But for each node and edge in such a graph, it 
seems like it needs a complementary, shadow node and edge of parameters that regulate the 
graph. I guess the graph "plus" its complementing shadow is also a (larger?) 
graph. But are they different things? Or the same thing? And if they're different things, 
meta-things, is there an infinite regress lying about? (e.g. the parameter graph also 
needs its own parameter graph, etc.)

I know I shouldn't hit Send on this one....

On 6/24/23 20:03, David Eric Smith wrote:
Stephen, thank you for these,

Continuing your paragraphs at the bottom, there is a project I have wanted to 
pursue off and on for 25 years, and which gets cheaper each year.  I probably 
described it before on the list (maybe more than once), in which case apologies 
for the repeat.


The neoclassical paradigm from much of the past century turned on finding price 
systems as the separating hyperplanes that separated convex models of consumer 
preference and producer technology.  Besides the fact that those models are 
often not-even-wrong, lots else, like ecosystems, the polity, etc., are left 
out of the account altogether.
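The separating-hyperplane construction can be shown on toy data; this is a minimal sketch, where the two point sets (standing in for convex consumer-preference and producer-technology sets) and the use of an LP feasibility solve are my own illustration, not anything from the neoclassical literature:

```python
import numpy as np
from scipy.optimize import linprog

# Two convex (hulls of) point sets; a "price system" is a hyperplane
# w.x = b with the sets on opposite sides.
A = np.array([[2.0, 2.0], [3.0, 1.0], [2.5, 3.0]])      # one convex set
B = np.array([[-1.0, -1.0], [0.0, -2.0], [-2.0, 0.0]])  # the other

# Feasibility LP in variables z = (w1, w2, b):
#   w.a - b >=  1  for a in A   ->  -w.a + b <= -1
#   w.x - b <= -1  for x in B   ->   w.x - b <= -1
A_ub = np.vstack([np.hstack([-A, np.ones((len(A), 1))]),
                  np.hstack([ B, -np.ones((len(B), 1))])])
b_ub = -np.ones(len(A) + len(B))

res = linprog(c=np.zeros(3), A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * 3, method="highs")
w, b = res.x[:2], res.x[2]
print("separating hyperplane: w =", w, " b =", b)
```

Any feasible (w, b) separates the two sets with a margin; the point of Eric's paragraph is what this construction throws away, not what it keeps.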

A conceptually easy piece of low-hanging fruit, though laborious to populate 
with data, would be to make an underlying model of the system you are trying to 
analyze economically as a real-goods input-output problem.  Then you could find 
the separating hyperplanes that are price systems relating it to whatever-other 
model you want to make of decision priorities.

Real-goods input-output analysis, with price systems as the separating 
hyperplanes, is ancient; it is called the von Neumann growth model.  Like many 
other things von Neumann, it was picked up, demonstrated, played with for a 
bit, and largely abandoned as people went wherever-else.
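The real-goods input-output core is easy to sketch. Here is a toy Leontief-style version (the sector names and coefficients are invented for illustration; the von Neumann growth model proper adds joint production and a growth-rate eigenproblem on top of this):

```python
import numpy as np

# Toy input-output model: A[i, j] = units of good i consumed to make
# one unit of good j.  Sectors (invented): grain, steel, energy.
A = np.array([[0.1, 0.2, 0.1],
              [0.0, 0.1, 0.3],
              [0.2, 0.3, 0.1]])

d = np.array([100.0, 50.0, 80.0])   # final (consumption) demand

# Gross output x must cover intermediate use plus final demand:
#   x = A x + d   =>   x = (I - A)^{-1} d
x = np.linalg.solve(np.eye(3) - A, d)
intermediate = A @ x                # real-goods flows between sectors
print("gross output by sector:", x)
```

The matrix A is exactly the "stoichiometry" of the economy; prices enter only afterwards, as a hyperplane projected onto this real-goods structure.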

Today, of course, input-output models have become far more useful than they ever 
could have been in von Neumann’s time, because big computation allows us to 
aggregate patchwork descriptions into larger models, which track the 
stoichiometric dependencies between the sectors.  This is some part of the 
information that the separating hyperplanes discard (by their nature and 
construction).  The models are of course hypergraphs, which means we know 
things about their topological analysis, and can study correlation of 
fluctuations as well as constraints on average behavior.  Systems biology now 
does this sort of thing routinely with models big enough that they are no 
longer just illustrative “toys”, where the separating hyperplanes are 
biological molecule inventories needed for cells to reproduce, and outputs of 
wastes to the surroundings can be tracked and their consequences computed as 
well.  All the usual stuff.
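The hypergraph structure is just the stoichiometric matrix read a different way: each reaction is a hyperedge touching several species at once. A minimal sketch, with grossly simplified, invented reactions:

```python
import numpy as np

# Each reaction (hyperedge) connects several species (nodes) at once,
# with signed stoichiometric coefficients.  Entries are illustrative.
species = ["glucose", "ATP", "ADP", "pyruvate"]
reactions = {
    # net glycolysis, grossly simplified:
    # glucose + 2 ADP -> 2 pyruvate + 2 ATP
    "glycolysis": {"glucose": -1, "ADP": -2, "ATP": 2, "pyruvate": 2},
    # ATP hydrolysis: ATP -> ADP
    "atpase": {"ATP": -1, "ADP": 1},
}

# Stoichiometric (incidence) matrix S: rows = species, cols = reactions.
S = np.array([[reactions[r].get(s, 0) for r in reactions] for s in species])

# Given reaction fluxes v, net production of each species is S @ v.
v = np.array([1.0, 2.0])
print(dict(zip(species, S @ v)))
```

With these fluxes the ATP/ADP pair sits at steady state (net production zero) while glucose is consumed and pyruvate accumulates; constraint-based systems biology works by imposing exactly such S v = 0 conditions at genome scale.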

Most importantly, since ecology is already stoichiometric (in terms of much more than 
just chemical elements), we can put the Venn diagram in the right order, with the 
economy < polity < society < ecosphere, and at least represent ecological 
inputs and outputs as the containers for transient economic activity.

Another thing that would be a good use for the capacity of organizations like 
google to vacuum up data would be to embed lifecycle analysis of things like 
energy systems, water systems, or other factors impacted by human demography 
into whole-system cost analyses, where “costs” are first and foremost 
represented by real materials and embodied free energy, and we can later 
project them onto smaller decision variables (such as money prices) if those 
address particular problems.

I have a recently-graduated student who is enthusiastic about hypergraphs and 
looking for general things to do with them, and we might have some EU 
collaborators who will put in a proposal to do bits on this if they can get 
their time protected.  I don’t know if this goes anywhere, but the idea seems 
obvious, and it would be nice for somebody to have time and interest to work on 
it.  There must be some class of decision variables that could be served by 
such tools.

Anyway,

Eric



On Jun 25, 2023, at 7:27 AM, Stephen Guerin <stephen.gue...@simtable.com> wrote:

Thanks, Roger.

I put a copy of Shalizi and Farrell's paper for discussion here:
https://redfish.com/papers/temp20230624/shaliziFarrell_AI_Economist.pdf 

(As this is a not a public email list, I think it's fair use to post a link to 
the article for discussion. I will delete the file tomorrow so the public 
archive will have a dead link)

Also, here's a link to Weitzman's Hyperplane Theory referenced in the article.
https://scholar.harvard.edu/files/weitzman/files/economicsproofseparating.pdf 

In some ways Bill Macready and Mohammed El-Beltagy (cc'd) were trying to build 
a version of Weitzman's Hyperplane for economic allocation with BiosGroup's 
Prowess Software 20 years ago, extending price-only auctions to hyperplanes 
over price, time, quality, and other multidimensional metrics.

Mohammed and I have been talking off list these last couple of months about the 
same points as the article: that modern corporations and governments were some 
of the first AIs, that we're struggling to understand their proper governance, 
and what the challenge of AI governance may look like.

-Stephen

_______________________________________________________________________
stephen.gue...@simtable.com <mailto:stephen.gue...@simtable.com>
CEO, https://www.simtable.com 
1600 Lena St #D1, Santa Fe, NM 87505
office: (505)995-0206 mobile: (505)577-5828


On Sat, Jun 24, 2023 at 2:55 PM Roger Critchlow <r...@elf.org 
<mailto:r...@elf.org>> wrote:

    I was trawling through my saved bookmarks looking for insights into Prigozhin's mutiny, when I 
stumbled onto http://bactra.org/weblog/ and found that Henry Farrell and Cosma Shalizi have just 
published an essay in The Economist, 
https://www.economist.com/by-invitation/2023/06/21/artificial-intelligence-is-a-familiar-looking-monster-say-henry-farrell-and-cosma-shalizi 
paywalled of course, but there is a twitter listicle version at 
https://twitter.com/henryfarrell/status/1671547591262191618

    -- rec --

--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
 1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
