Well, I could be wrong. But both Nick and EricC seem to argue there's no 
privilege "in the limit" ... i.e. with infeasibly extensible resources, perfect 
observability, etc. It's just a reactionary position against those who believe 
in souls or a cartesian cut. Ignore it. >8^D

But I don't think there can be *complete* privilege. Every time we think we've 
found a way to keep the black hats out, they either find a way in ... or find a 
way to infer what's happening through side channels like power draw or audio 
profiles.

I don't think anyone's arguing that peeks are expensive. The argument centers 
on the impact of that peek, how it's used. Your idea of compiling in 
diagnostics would still be subject to Nick's charge that it's a *model*. I 
would argue we need even lower-level self-organization. I vacillate between 
thinking digital computers could [not] be conscious because of this argument; 
the feedback loops may have to be very close to the metal, FPGA-close. Maybe 
consciousness has to be analog in order to realize meta-programming at all 
scales?
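For concreteness, here's a minimal sketch of what Marcus's "diagnostics 
compiled in to the code to check invariants from time to time" might look 
like. All the names are hypothetical, and it's a toy: the point is only that 
the checker and the checked state live in the same object, so there's a 
little sensor/sensed loop, however shallow.

```python
import random


class SelfSensingBuffer:
    """A toy structure that periodically checks its own invariants.

    Every CHECK_INTERVAL operations the object inspects its own state.
    The sensor (the invariant check) and the sensed (the buffer) are
    parts of one feedback loop -- a crude, compiled-in self-probe.
    """

    CHECK_INTERVAL = 100

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        self.ops = 0  # operation counter drives the periodic self-check

    def push(self, x):
        # enforce the invariant on the way in: drop writes when full
        if len(self.items) < self.capacity:
            self.items.append(x)
        self._maybe_check()

    def pop(self):
        x = self.items.pop()
        self._maybe_check()
        return x

    def _maybe_check(self):
        self.ops += 1
        if self.ops % self.CHECK_INTERVAL == 0:
            # the invariant: occupancy never exceeds capacity
            assert len(self.items) <= self.capacity, "invariant violated"


buf = SelfSensingBuffer(capacity=500)
for _ in range(300):
    buf.push(random.random())
```

Of course, this is exactly the kind of thing Nick would call a model of 
itself rather than privileged access, which is why I suspect the interesting 
loops sit far below this level.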

On 11/2/21 7:36 AM, Marcus Daniels wrote:
> My point was that the cost to probe some memory address is low.   And all 
> there is, is I/O and memory.  
> 
>  It does become difficult to track thousands of addresses at once:  Think of 
> a debugger that has millions of watchpoints.   However, one could have 
> diagnostics compiled into the code to check invariants from time to time.   
> I don't know why Nick says there is no privilege.   There can be complete 
> privilege.   Extracting meaning from that access is rarely easy, of course.  
> Just as debugging any given problem can be hard.
> 
> -----Original Message-----
> From: Friam <friam-boun...@redfish.com> On Behalf Of uǝlƃ ☤>$
> Sent: Monday, November 1, 2021 3:20 PM
> To: friam@redfish.com
> Subject: Re: [FRIAM] lurking
> 
> Literal self-awareness is possible. The flaw in your argument is that "self" 
> is ambiguous in the way you're using it. It's not ambiguous in the way Marcus 
> or I intend it. You can see this nicely if you elide "know" from your 
> argument.  We know nothing. The machine knows nothing. Just don't use the 
> word "know" or the concept it references.  There need not be a model 
> involved, either, only sensors and things to be sensed. 
> 
> Self-sensing means there is a feedback loop between the sensor and the thing 
> it senses. So, the sensor measures the sensed and the sensed measures the 
> sensor. That is self-awareness. There's no need for any of the psychological 
> hooha you often object to. There's no need for privileged information 
> *except* that there has to be a loop. If anything is privileged, it's the 
> causal loop.
> 
> The real trick is composing multiple self-self loops into something 
> resembling what we call a conscious agent. We can get to the uncanny valley 
> with regular old self-sensing control theory and robotics. Getting beyond the 
> valley is difficult: https://youtu.be/D8_VmWWRJgE A similar demonstration is 
> here: https://youtu.be/7ncDPoa_n-8
> 
> 
> 
> On 11/1/21 2:08 PM, thompnicks...@gmail.com wrote:
>> In fact, strictly speaking, I think literal self-awareness is impossible.  
>> Because, whatever a machine knows about itself, it is a MODEL of itself 
>> based on well situated sensors of its own activities, just like you are and 
>> I am.  There is no privileged access, just bettah or wussah access.
> 

-- 
"Better to be slapped with the truth than kissed with a lie."
☤>$ uǝlƃ


.-- .- -. - / .- -.-. - .. --- -. ..--.. / -.-. --- -. .--- ..- --. .- - .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn UTC-6  bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:
 5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
 1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
