On 9/16/22 12:44 PM, Marcus Daniels wrote:
Given how normal extreme inequality is, the they/us distinction is probably already happening.  Technology could accelerate it, though.
I think we are in agreement.   Technology *has* increased it... technology *IS* the basis of the increase.  It is not that Homo sapiens is evolving, but rather our extended phenotype is. Whatever evolutionary event(s) equipped us to significantly extend our phenotype... (all toolmaking/using) and then to do it *collectively* (readin', 'ritin', 'rithmetic) started this.
Some people will have direct and indirect cognitive assists, some will have designer babies and some won’t, etc.
And of course, the relatively new ability to modify the genome *directly* is yet another significant qualitative change in this. Selective breeding is probably at least as old in humans as it is in domestic animals.  I believe Sarbajit has spoken to this from his own personal heritage.
 Over a few generations we might not really recognize one another.
We already have a hard time "recognizing one another" *without* any more technological enhancement than shared language, basic literacy, advanced education, access to advanced materials and tooling, and economics as our "differences".   A great deal of our inability to "recognize one another", however, seems to be a form of willful ignorance/ignorant willfulness...  and that, I believe, is something of a choice... not a simple one...   but a choice... a personal one and a (sub)cultural one.   In principle, I think this is the fundamental feature that distinguishes the "conservative" from the "liberal" in the US... maybe throughout the West (or across all "advanced civilizations")?
 Whether that is utopian or dystopian or neither is subjective.

To "nationalists" and other stylizations of "chauvanists" it (inability to recognize one another) is likely utopian, to those seeking/celebrating diversity and inclusion it seems more complex.   The Nazis seemed to believe that the only way for humanity to move forward was to dominate and then exterminate everyone who didn't fit their narrow definition of "the ubermenchen".

In the spirit of "might makes right", I am highly mistrustful of the "might" of technological leverage.   While I often present as a full-on Luddite, those of you who know me well also recognize that I've got a strong substrate of techno-utopian as the backdrop for that.  I can hardly hear of a new technology without getting excited at "all the ways this could make lives more better, or at least undermine the arbitrarily large suite of insults that we currently endure".   Unfortunately, many of these insults are the unintended consequences of a previous turn of this very same crank, and to turn the crank another time is to risk the Red Queen paradox: turning the crank faster and faster just to keep ahead of the unintended consequences nipping at our heels (dragging us down and eating us).

So the (a) question is: if it is "inevitable", how do we exercise our own agency to find our way through this rapidly changing landscape?  Do I defer to the Kurzweils/Diamandis/Musks to "lead me" into that landscape (and, more to the point, push my grand/children forward into it)?  Who might I seek out who has a better vantage than I in such navigations?

My latest candidates for hints in this direction include Dietrich Bonhoeffer (anti-Nazi theologian) and James Bridle <https://en.wikipedia.org/wiki/James_Bridle> (contemporary artist/writer).

- Steve

PS.   Or is the landscape metaphor flawed?  I only see a "hellride" in the Zelazny-Amber sense...  riding across a multiverse manifold stretched roughly between the poles of Logos and Chaos?   Probably an image only DaveW and Glen have references for?


On Sep 16, 2022, at 10:31 AM, Steve Smith <sasm...@swcp.com> wrote:



Responding first to Marcus' point:

    "I think there will be a transition toward a more advanced form
    of life, but I don’t think there will be a clear connection
    between how they think and how humans think.  Human culture won’t
    be important to how they scale, but may be relevant to a bootstrap."

I believe we are "in transition" toward a more advanced form of life, though it is hard to demarcate any particular beginning of that transition.  The post/trans-humanists among us often seem to have a utopian/dystopian urge about all this that I am resistant to. Kotler's <https://www.goodreads.com/author/show/10960.Steven_Kotler> works (Abundance, The Rise of Superman, Tomorrowland, The Art of Impossible, etc.) are representative of this genre, but since I know him also to be a grounded, thoughtful, compassionate person, I try hard to listen between the lines of what normally reads to me as egoist utopian fantasy.   His works are always well researched, and he's fairly good at being clear about what is speculation and what is fact in his writing/reporting, even though his bias is still a very techno-utopian optimism.

I really liked Spike Jonze's movie "Her" <https://en.wikipedia.org/wiki/Her_(film)> as a compassionate-utopian story of a fairly abrupt AI transition/emergence...  a fantasy by any measure, of course, but an interesting twist on compassionate abandonment by our "children".

Among Glen's re-statements, the following stood out to me specifically:

Simulation in place of Symbols - I don't know all that Marcus intended or Glen imputes with this, but I think it might be very important in some fundamental way.  I wonder at the possibility that this fits into Glen's stuck-bit about "episodic" vs "diachronic" identity (and experience?) modes.

I haven't been able to parse the following very completely and look forward to more discussion:

    - percolation from concrete, participative, perceptual intuition
    and imagination (or perhaps the inverse, a wandering from
    abstract/formal *toward* embodiment as we see with the rise of
    GANs, zero-shot, and online learning AI)

and in fact, all of these as well... good stuff.


    - a more heterarchical, high-dimensional, or high-order
    understanding of "fitness costs" - fitness of fitnesses
    - holes or dense regions in a taxonomy of SAMs - including my
    favorite: cross-species mind-reading
    - game-theoretic (infinite and meta-gaming) logics of cognition
    (including simulation of simulation and fitness of fitnesses)

I introduced "deictec error" because I think it is maybe core to *my* struggles with this whole topic, so I'm glad Glen referenced it, and also look forward to possibly more discussion of that in regard to the rest.

- Steve


On 9/16/22 10:25 AM, glen∉ℂ wrote:
I do see us trying to identify the distinguishing markers of ... "cognition we can't imagine". That's fantastic. I'll try to collate some of them going backwards from Marcus':

- novelty - dissimilarity from "cognition as we know it"
- graded separation from human culture/sociality
- simulation in place of symbols (I failed to come up with a better phrase)
- accelerated look-ahead
- percolation from concrete, participative, perceptual intuition and imagination (or perhaps the inverse, a wandering from abstract/formal *toward* embodiment as we see with the rise of GANs, zero-shot, and online learning AI)
- a more heterarchical, high-dimensional, or high-order understanding of "fitness costs" - fitness of fitnesses
- holes or dense regions in a taxonomy of SAMs - including my favorite: cross-species mind-reading
- game-theoretic (infinite and meta-gaming) logics of cognition (including simulation of simulation and fitness of fitnesses)

It seems like all these are attempts to at least circumscribe what we can know about what we can imagine. And if so, it's like a convex hull beyond which is what we can't imagine. I wanted to place "deictic error" in there. But it seems to apply to several of the other categories. In particular, part of Dave and SteveS' irritation with the arrogance of abstraction is that symbols only ever *hook* to their groundings. Logics over those symbols may or may not preserve the grounding. Like the rather obvious idiocy of classical logic suggesting that anything can be concluded from inconsistent premises. When/if an entity can fully replace all shunted/truncated symbols with (perhaps participatory) simulations, it might reach the tight coupling with the simulated (possible) worlds in the same way Dave implies we couple tightly (concretely) with our (actual) world.
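For concreteness, a minimal sketch of that "idiocy" (the principle of explosion, ex falso quodlibet) in Lean - from the inconsistent premise P ∧ ¬P, any proposition Q whatsoever can be derived:

    -- explosion: an inconsistent premise proves anything at all
    example (P Q : Prop) (h : P ∧ ¬P) : Q :=
      absurd h.left h.right

Note that the derivation never even looks at Q; the symbols float entirely free of any grounding, which is rather the point.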


On 9/15/22 21:16, Marcus Daniels wrote:
I think there will be a transition toward a more advanced form of life, but I don’t think there will be a clear connection between how they think and how humans think.  Human culture won’t be important to how they scale, but may be relevant to a bootstrap.  I would be surprised if compression, deconstruction, and reductionism went unused by this species.  I would be surprised if such a species would struggle with quantification.   I would also be surprised if they did not use simulation in place of symbols.   I think they will have dreams of entire human lives, of the rise and fall of nations, and regard our aspirations like I regard my dog dreaming of her encounters at the park.

On Sep 15, 2022, at 4:11 PM, Prof David West <profw...@fastmail.fm> wrote:


Just to be clear, I have zero antipathy towards Wolpert or his efforts at steelmanning. I think Wolpert does an excellent job of phrasing as questions what I perceive "Scientists" and "Computationalists" to merely assert as Truth. I have long tilted at that particular windmill, and I applaud Wolpert, and glen for bringing him to our attention, for exposing the assertions such that counterarguments might be made.

And when it comes to "computationalism" and AI: I know it is not the 1970s and things have "advanced" significantly. And although I do not comprehend the details as well as most of you, I do understand sufficiently, I believe, to advance the claim that they are suffering from the exact same blind spot (with variable details) as Simon and Newell, et al., who championed GOFAI. Plus, you all have heard of Simon and Newell, but most of you are unfamiliar with McGilchrist and similar contemporary critics.

My antipathy toward "Scientists" and "Computationalists" arises from what I perceive as an absolute refusal to credit any science, math, or ways/means of acquiring/expressing knowledge and understanding other than theirs. Dismissing Neolithic and pre-modern science is one example. Failing to acknowledge the intelligence (and probably SAM) of other species—especially octopuses—simply because they do not build atomic bombs or computers is another.

A really good book that would inform a discussion of Wolpert's questions, #4 in particular, is /Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness/ by Peter Godfrey-Smith.  A blurb follows.

/Although mammals and birds are widely regarded as the smartest creatures on earth, it has lately become clear that a very distant branch of the tree of life has also sprouted higher intelligence: the cephalopods, consisting of the squid, the cuttlefish, and above all the octopus. In captivity, octopuses have been known to identify individual human keepers, raid neighboring tanks for food, turn off light bulbs by spouting jets of water, plug drains, and make daring escapes. How is it that a creature with such gifts evolved through an evolutionary lineage so radically distant from our own? What does it mean that evolution built minds not once but at least twice? The octopus is the closest we will come to meeting an intelligent alien. What can we learn from the encounter? /

davew


On Thu, Sep 15, 2022, at 12:22 PM, Steve Smith wrote:
>>There is some kind of diectic error in our response.
>
> Korrekshun - "deictic"


-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
 1/2003 thru 6/2021 http://friam.383.s1.nabble.com/
