So we’ve always known (or at least, in the modern, post-philosopher-driven era, have known) that 
human perception is an active process, interrogating the world with 
pre-registering and presumptive frameworks to host “experience”, which are then 
activated by whatever is “out there” providing stimuli.  (In fairness, Kant was 
okay on this point already, though giving Kant credit for having seen modern 
insights is often somewhat like giving newspaper psychics credit for seeing 
those futures: it involves a bit of projection.  A UT prof for whom I used to 
work claimed that giving Kant credit for conceptualizing galaxies is such a 
projection: Kant’s reasoning was some bizarre metaphoric thing with 
essentially no overlap with the modern physical conception of 
galaxies.  So caveat emptor, etc.)

And now this:
https://physics.aps.org/articles/v16/63?utm_campaign=weekly&utm_medium=email&utm_source=emailalert
(Commentary, IMO very lovely.)

Much of the BH observation wasn’t really there to test general relativity any more, 
as its parameters are tightly enough constrained that there isn’t much 
wiggle-room for it to differ in the regime where these large, low-curvature BHs 
exist (and down to considerably smaller ones than that).  Really, what the 
observations test is astrophysics, using GR as the registering 
framework.

In a sense, though, the AIoSphere now has a perception phenomenon that no 
person within it has.  It can “see” BHs that are not, case by case, producing 
the data “seen”.  The usual scientist’s impulse is to panic: to fear that we are losing 
control of the validation of things because we now mix too much 
pre-registration with “the data”.  But of course in a Bayesian-updating view of 
science, where the integration of everything ever measured is also part of the 
data, not just the proximal input, it is no more odd that GR constrained by 
gravitational-wave ringdown and other features should be a prior constraint on 
the telescope images than that people should see the world through eyes 
filtering billions of years of accreted biological evolution.  Particle physics 
has been finding rare events in accelerator snow for decades using 
pre-registering models; without them detection would be strictly impossible.
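The point can be sketched with a toy calculation (not anyone's actual pipeline, and all numbers made up for illustration): when the accumulated record enters as a prior, a new, noisy measurement is read *through* that prior rather than on its own.

```python
def gaussian_update(prior_mean, prior_var, obs, obs_var):
    """Conjugate Gaussian update: combine a prior with one new observation."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    return post_mean, post_var

# A tight prior, e.g. a parameter pinned down by the integration of
# everything previously measured...
prior_mean, prior_var = 1.00, 0.01**2
# ...meets a single noisy new measurement (the proximal input).
obs, obs_var = 1.30, 0.5**2

post_mean, post_var = gaussian_update(prior_mean, prior_var, obs, obs_var)
# The posterior barely moves off the prior: the accumulated record
# dominates the proximal input, which is exactly the "pre-registered
# seeing" at issue above.
print(post_mean, post_var)
```

Run it and the posterior mean lands within a fraction of a percent of the prior mean, despite the new observation sitting 30% away. Whether that counts as "seeing with" or "seeing past" the data is the question.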

The upshot, though: as the whole human-social-cultural-scientific community, we 
are building up increasingly thick and autonomous layers of “seeing”, and AI 
pre-processing is going to rapidly and probably chaotically add to that.

What do we “think” about this?  Is there anything that needs to be thought 
about it?  Or is it an extension of what we already think is basically sound, 
and just needs to be embedded in new layers of feedback controllers, to 
function properly?

Eric


-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
