Last Friday I was waiting to take a left across traffic.  There was a Muni 
train coming in my direction that was going to block my turn for a few seconds. 
Rushing it was possible, but I had a passenger.  Also, I remembered the 
obscured downhill right lane I'd be entering had some row houses along it, and 
some street parking.  Accelerating hard into that lane could spook a 
pedestrian or someone getting into their car.  I could see this short delay was 
coming, so as I slowed to a stop, while signaling, I tucked as close to the 
center as I could, with enough margin to avoid getting clipped by the wide 
Muni.  I did this so that people behind me who wanted to go straight or turn 
right could shimmy through if they were cautious.  But that wasn't good enough, 
and the person behind me started waving their hands and yelling at me. [1] In 
this case I thought the person behind me couldn't possibly miss the huge 
approaching Muni and would see it was only a few seconds' wait.  Somehow, there 
are drivers who really can't imagine that another driver is trying to help 
them, or visualize even the simple mechanics of a few moving objects.  They can 
only lay on their horn.  My dog has way more finesse in herding and pursuit 
situations.  So, yeah, there are things that will be tricky for an ML system to 
model that are semi-cognitive, but the bar for human drivers is pretty low.  A 
subset of city drivers are about as smart as insects -- all signaling, no 
lookahead, and no modeling.  At least autonomous driving systems will get far 
more scrutiny than your typical 16-year-old and a DMV test.
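
To put a point on how low that bar is: the lookahead these drivers skip is one 
line of kinematics.  A toy Python sketch (every number here is made up for 
illustration, not a measurement):

# How long does the approaching train block the turn, assuming
# (hypothetically) constant speed and a straight track?
def seconds_blocked(distance_m, train_length_m, speed_mps):
    # time for the nose to arrive plus time for the full length to pass
    return (distance_m + train_length_m) / speed_mps

# e.g. a ~23 m light-rail car, 40 m out, at 9 m/s (~20 mph)
print(round(seconds_blocked(40.0, 23.0, 9.0), 1))  # -> 7.0

An insect-grade controller could run that; the honking driver apparently can't.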

[1] Now I am a little ashamed to admit it, but in such circumstances I have 
been known to take another block, then double back, just to avoid holding 
anyone up.  As I get older, I see these situations as opportunities to 
frustrate people who deserve to be frustrated.
-----Original Message-----
From: Friam <friam-boun...@redfish.com> On Behalf Of Steve Smith
Sent: Tuesday, April 12, 2022 7:07 PM
To: friam@redfish.com
Subject: Re: [FRIAM] Selective cultural processes generate adaptive heuristics


On 4/12/22 5:10 PM, glen wrote:
> Ha! 8^D
>
> But neither the ANN clone, nor the *stereotyped* heuristics generated 
> by an autonomous car capture the high-dimensional opportunity I 
> believe meat organisms experience. Yes, the subsequent evolution of 
> the ANNs and the stereotyped out-group are more concrete than most 
> synthetic minds. But my claim, were I to actually hold it and try to 
> state it more clearly, is that meat, living in meat space, is more 
> open than those 2 examples. It's the openness that provides the meat 
> with the opportunity. The ANNs and autonomous car are more fixed, more 
> closed.
>
> However, I do believe machine intelligence *will* reach meat 
> intelligence. But it'll have to look a lot more like meat intelligence 
> to do so. It's already looking a lot more like meat intelligence than 
> it was even 10 years ago. And if we stay at this supralinear rate (or 
> higher), it'll happen sooner than I, this meat bag, thinks.

As a "meat bag" driving a car, I find myself *regularly* imagining what a 
driverless car would do in the current/instantaneous moment as I interrupt my 
own automatic driving instincts to run a supervisory analysis consciously to 
make sure that my instincts are doing "the right thing" which usually yields 
something like an infinite regress of second guessing which only 
ends/completes/halts because the clock runs out... 
the race condition converges between my reactions/instincts/conscious-review 
and the events-of-the-world.
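
If I were to caricature that loop in code, it looks like an anytime algorithm: 
act on the instinctive answer, let the conscious review revise it until the 
deadline, and take whatever is standing when the clock runs out.  A minimal 
Python sketch (the function names and the half-second deadline are invented 
for illustration):

import time

def drive_decision(instinct, supervise, deadline_s=0.5):
    # The instinct answers immediately; that is the default action.
    action = instinct()
    start = time.monotonic()
    # Conscious review refines (or vetoes) the action until time is up.
    while time.monotonic() - start < deadline_s:
        revised = supervise(action)
        if revised == action:   # review agrees: the regress bottoms out early
            return action
        action = revised        # second-guess, then review the revision
    return action               # clock ran out; the world gets this one

Calling it with instinct=lambda: "brake" and a supervisor that keeps agreeing 
returns immediately; a supervisor that keeps flip-flopping gets cut off at the 
deadline, which is about how it feels from the inside.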

As I've had *very* few (but not zero) out-of-control moments in a vehicle, I 
feel, as I write this, like I should (if I could) collect up my most authentic 
memories of those moments and do some kind of meta-analysis of them.  What I 
have done in the moment (after the chaotic events returned to somewhat linear 
in my apprehension) is usually to peg some overly conservative heuristic over 
the top of my more regular suite of heuristics -- the ones that seem to keep me 
on my own side of the road and maintaining appropriate speeds and yields in 
most contexts.  With time, the precedence of those heuristics fades or gets 
papered over by "yet other" heuristics for various other competing goals 
(hurrying, staying entertained, testing my abilities, etc.).
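
A toy Python sketch of that dynamic, as I imagine it (the rule names, 
precedences, and decay rate are all invented; this is a cartoon, not a model 
of real driver behavior):

# Heuristics as name -> precedence; a scare pegs an overly conservative
# rule on top, and its precedence fades until the regular suite (and
# competing goals like hurrying) outranks it again.
baseline = {"stay_in_lane": 1.0, "keep_speed": 0.9, "hurry": 0.4}
transient = {}  # scare-induced overlays

def scare(name="slow_way_down", precedence=5.0):
    transient[name] = precedence        # pegged over the regular suite

def tick(decay=0.8):
    for name in list(transient):        # precedence fades with time
        transient[name] *= decay
        if transient[name] < max(baseline.values()):
            del transient[name]         # papered over by the regular suite

def active_rule():
    pool = {**baseline, **transient}    # highest precedence wins
    return max(pool, key=pool.get)

After a scare(), active_rule() returns "slow_way_down" for a while; enough 
tick()s later, "stay_in_lane" is back on top.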

In any case, I have no trouble imagining the car I'm driving (with the mildest 
of safety features, like sonar proximity alarms in the bumpers for parking, a 
"your turn-signal is on, dipshit" alert, and automatic driving/headlamps) 
incrementally getting better and better at helping me drive, until *I* am 
either letting it do the driving and running my "supervisory heuristics" as I 
observe and consider intervening, OR even just putting on my VR goggles and 
watching a movie or playing a game or hacking lamely at some code while the 
car drives me off a cliff.  Chances are my car's ability will exceed mine as a 
highway/urban driver sometime before *I* become a radical hazard behind the 
wheel (the NM DMV just allowed me to renew my DL for 8 years, sight unseen).

If my car emulates *my* style of driving (because it trains on my actual
driving?) somewhere down the road (pun recognized but not intended), I can't 
even imagine what it would mean for it to have a subjective, conscious 
experience like my own, even if somehow there is a monitor running that layers 
in heuristics uncannily like the ones *I* paste up in my imagination.  Of 
course, I *know* that a proper self-driving vehicle will be trained on some 
maximal ensemble of *good drivers* and not on my personal idiosyncratic style.  
It might be different if it were training for stock-car racing on a dirt track, 
where my idiosyncrasies might be somehow useful (or, more likely, entertaining).

I don't know if anyone else has these kinds of self-reflective episodes about 
who we might (already) be becoming as the boundaries between our phenotypic 
selves and our techno-extended selves blur.  I have been riding a bicycle 
since I was about 4, a motorcycle since I was about 15, and driving a car since 
soon after that.  My entire CNS, all the way out to its fine structure, is 
co-evolved with those activities, even though those devices didn't even exist 
until about 200 and 130 years ago, respectively.  Yesterday I fumbled my newer, 
more manageable bicycle (as of 4 months ago) into a spill, and though it has 
been a couple of decades since I fell off of a bike, it seems my CNS/muscle 
memory remembered how to fall even better than it remembered how to not-fall.  
Neolithic (or even Mesolithic) hunters probably had an even more acute merging 
with their various tools than I have with my own (esp. vehicles, keyboards, 
tracking devices, hammers, chainsaws, Alexa, Siri ... ).


>
> On 4/12/22 15:58, Marcus Daniels wrote:
>> Now it is entirely possible to take a massive pre-trained neural net 
>> like GPT3 and run it in two places at once or have different 
>> instances use a baseline and take divergent paths from different 
>> training.
>> None of that is possible for humans, at least yet.    Some autonomous 
>> cars even know enough to be afraid of the police! (Regarding
>> concreteness.)
>> https://electrek.co/2022/04/10/gm-cruise-autonomous-taxi-pulled-over-by-police-in-san-francisco-without-humans-bolts-off-u-cruise-responds/
>>
>> -----Original Message-----
>> From: Friam <friam-boun...@redfish.com> On Behalf Of glen
>> Sent: Tuesday, April 12, 2022 3:47 PM
>> To: friam@redfish.com
>> Subject: Re: [FRIAM] Selective cultural processes generate adaptive 
>> heuristics
>>
>> Exactly. Both of these (low turnover wisdom propagation & "flat" 
>> infoscape) fail in my conception because they lack the concrete
>> (definite) particulars. Even if we have one 400-year-old vampire 
>> telling funny stories to a 30-year-old vampire about a now-exploded
>> vampire from 700 years ago, the sheer *number* of anecdotes required 
>> to capture a 400 year lifespan *forces* some abstraction ... some 
>> leaving out of important detail.
>>
>> And even if the concrete details of why, say, Galileo was such an OCD 
>> journaling nerd can be found in biographies or whatnot, actually 
>> reading and learning about all the persnickety nonsense that was
>> *crucial* to the arrival at, emergence of, any given inflection 
>> point, ... even if that concrete detail is logged/documented out 
>> there somewhere, nobody can learn it all. Each learner is forced to 
>> take an abstracted slice through it.
>>
>> What the commitment to meat space interactions is, is a way to ensure 
>> that the concreteness remains ... at least within *some* small "open 
>> ball", you're getting a high-dimensional opportunity. I think of it 
>> in terms of the space vs time tradeoff and (yes, broken record) the 
>> parallelism theorem. Sure, a sequential system can simulate a 
>> parallel one perfectly, but only if you give it the time to do so ...
>> and the amount of time it takes to do it is related to the amount of 
>> space the parallel system uses. Another way to think of it is the 
>> project management triangle: cheap, fast, or good. But those are 
>> low-dimensional. The space being balanced by organisms in the world 
>> is high-dimensional.
>>
>> On 4/12/22 14:19, Steve Smith wrote:
>>> Generations past (and under-mobile near-subsistence cultures today) 
>>> have more intergenerational households and neighborhoods providing 
>>> the heterarchical/holarchical connection/communication you suggest.   
>>> Or so my "just so" story relates.
>>>
>>> The expansive breadth offered by (near-instantaneous) global 
>>> communication/publication/relationship connections possibly makes up 
>>> for that in the large, a major refactoring of problems and solutions.
>>>
>>> I personally suffer from the lack of cross-cultural, cross-class 
>>> experience of frequenting a neighborhood "watering hole"
>>> (pub/tavern/saloon) in the way Glen seems to enjoy (cultivate). My 
>>> oldest regular drinking-philosophy buddy would be over 110 today (he 
>>> died over 20 years ago from alcohol-related illness) and until about
>>> 5 years ago I had a small cohort of 30ish imbibing interlocutors.  I 
>>> blame COVID, but the reasons are probably larger and more nefarious.
>>
>>
>

.-- .- -. - / .- -.-. - .. --- -. ..--.. / -.-. --- -. .--- ..- --. .- - .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn UTC-6  bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:
 5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
 1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
