Re: Re: Subjective states can be somehow extracted from brains via a computer

2013-01-23 Thread Craig Weinberg


On Saturday, January 12, 2013 6:46:23 AM UTC-5, telmo_menezes wrote:
>
>
>
>
> On Fri, Jan 11, 2013 at 6:10 AM, Craig Weinberg 
> 
> > wrote:
>
>>
>>
>> On Thursday, January 10, 2013 4:58:32 PM UTC-5, telmo_menezes wrote:
>>
>>> Hi Craig,
>>>
>>> I tend to agree with what you say (or what I understand of it). Despite 
>>> my belief that it is possible to extract memories (or their 3p shadows) 
>>> from a brain,
>>>
>>
>> As long as you have another brain to experience the extracted memories in 
>> 1p, then I wouldn't rule out the possibility of a 3p transmission of some 
>> experiential content from one brain to another.
>>  
>>
>>> I do not believe in the neuroscience hypothesis that consciousness 
>>> emerges from brain activity. I'm not sure I believe that there is a degree 
>>> of consciousness in everything, but it sounds more plausible than the 
>>> emergence from complexity idea.
>>>
>>> Still I feel that you avoid some questions. Maybe it's just my lack of 
>>> understanding of what you're saying. For example: what is the primary 
>>> "stuff" in your theory? In the same sense that for materialists it's 
>>> subatomic particles and for comp it's N, +, *. What's yours?
>>>
>>
>> For me the primary stuff is sensory-motor presence.
>>
>
> It's very hard for me to grasp this.
>

It's supposed to be hard to grasp. We are supposed to watch the movie, not 
try to figure out who the actors really are and how the camera works.  

 
>
>>  Particles are public sense representations. N, +, * are private sense 
>> representations. Particles represent the experience of sensory-motor 
>> obstruction as topological bodies. Integers and arithmetic operators 
>> represent the sensory-motor relations of public objects as private logical 
>> figures.
>>
>> Craig
>>
>>
>>>
>>> On Wed, Jan 9, 2013 at 2:50 PM, Craig Weinberg wrote:
>>>


 On Wednesday, January 9, 2013 6:18:37 AM UTC-5, telmo_menezes wrote:
>
>
> Hi Craig,
>  
>
>>
>> Cool. I actually would have agreed with you and a lot of people here 
>> at different times in my life. It's only been lately in the last five 
>> years 
>> or so that I have put together this other way of understanding 
>> everything. 
>> It gets lost in the debating, because I feel like I have to make my 
>> points 
>> about what is different or new about how I see things, but I do 
>> understand 
>> that other ways of looking at it make a lot of sense too - so much so 
>> that 
>> I suppose I am drawn only to digging into the weak spots to try to  get 
>> others to see the secret exit that I think I've found...
>>
>
> Ok, this sounds interesting and I'd like to know more. I've been away 
> from the mailing list in the last few years, so maybe you've talked about 
> it before. Would you tell me about that secret exit?
>

 The secret exit is to reverse the assumption that consciousness occurs
 from functions or substances. Even though our human consciousness depends
 on a living human body (as far as we know for sure), that may be because of
 the degree of elaboration required to develop a human quality of
 experience, not because the fundamental capacity to perceive and
 participate depends on anything at all.

 Being inside of a human experience means being inside of an animal
 experience, an organism's experience, a cellular- and molecular-level
 experience. The alternative means picking an arbitrary level at which
 total lack of awareness suddenly changes into perception and participation
 for no conceivable reason. Instead of hanging on to the hope of finding
 such a level or gate, the secret is to see that there are many levels and
 gates, but that they are qualitative, with each richer integration of
 qualia reframing the levels left behind in a particular way, and that way
 (another key) is to reduce it from a personal, animistic temporal flow of
 1p meaning and significant preference to impersonal, mechanistic spatial
 bodies ruled by cause-effect and chance/probability. 1p and 3p are
 relativistic, but what joins them is the capacity to discern the
 difference.

 Rather than sense i/o being a function or logic taken for granted, flip
 it over so that logic is the 3p shadow of sense. The 3p view is a frozen
 snapshot of countless 1p views as seen from the outside, and the qualities
 of the 3p view depend entirely on the nature of the 1p
 perceiver-participant. Sense is semiotic. Its qualitative layers are
 partitioned by habit and interpretive inertia, just as an ambiguous image
 looks different depending on how you personally direct your perception, or
 how a book that you read when you are 12 years old can have different
 meanings at 18 or 35. The meaning isn't just 'out there', it's literally,

Re: Re: Subjective states can be somehow extracted from brains via a computer

2013-01-12 Thread Telmo Menezes
On Fri, Jan 11, 2013 at 6:10 AM, Craig Weinberg wrote:

>
>
> On Thursday, January 10, 2013 4:58:32 PM UTC-5, telmo_menezes wrote:
>
>> Hi Craig,
>>
>> I tend to agree with what you say (or what I understand of it). Despite
>> my belief that it is possible to extract memories (or their 3p shadows)
>> from a brain,
>>
>
> As long as you have another brain to experience the extracted memories in
> 1p, then I wouldn't rule out the possibility of a 3p transmission of some
> experiential content from one brain to another.
>
>
>> I do not believe in the neuroscience hypothesis that consciousness
>> emerges from brain activity. I'm not sure I believe that there is a degree
>> of consciousness in everything, but it sounds more plausible than the
>> emergence from complexity idea.
>>
>> Still I feel that you avoid some questions. Maybe it's just my lack of
>> understanding of what you're saying. For example: what is the primary
>> "stuff" in your theory? In the same sense that for materialists it's
>> subatomic particles and for comp it's N, +, *. What's yours?
>>
>
> For me the primary stuff is sensory-motor presence.
>

It's very hard for me to grasp this.


> Particles are public sense representations. N, +, * are private sense
> representations. Particles represent the experience of sensory-motor
> obstruction as topological bodies. Integers and arithmetic operators
> represent the sensory-motor relations of public objects as private logical
> figures.
>
> Craig
>
>
>>
>> On Wed, Jan 9, 2013 at 2:50 PM, Craig Weinberg wrote:
>>
>>>
>>>
>>> On Wednesday, January 9, 2013 6:18:37 AM UTC-5, telmo_menezes wrote:


 Hi Craig,


>
> Cool. I actually would have agreed with you and a lot of people here
> at different times in my life. It's only been lately in the last five 
> years
> or so that I have put together this other way of understanding everything.
> It gets lost in the debating, because I feel like I have to make my points
> about what is different or new about how I see things, but I do understand
> that other ways of looking at it make a lot of sense too - so much so that
> I suppose I am drawn only to digging into the weak spots to try to  get
> others to see the secret exit that I think I've found...
>

 Ok, this sounds interesting and I'd like to know more. I've been away
 from the mailing list in the last few years, so maybe you've talked about
 it before. Would you tell me about that secret exit?

>>>
>>> The secret exit is to reverse the assumption that consciousness occurs
>>> from functions or substances. Even though our human consciousness depends
>>> on a living human body (as far as we know for sure), that may be because of
>>> the degree of elaboration required to develop a human quality of
>>> experience, not because the fundamental capacity to perceive and
>>> participate depends on anything at all.
>>>
>>> Being inside of a human experience means being inside of an animal
>>> experience, an organism's experience, a cellular and molecular level
>>> experience. The alternative means picking an arbitrary level at which total
>>> lack of awareness suddenly changes into perception and participation for no
>>> conceivable reason. Instead of hanging on to the hope of finding such a
>>> level or gate, the secret is to see that there are many levels and gates
>>> but that they are qualitative, with each richer integration of qualia
>>> reframing the levels left behind in a particular way, and that way (another
>>> key) is to reduce it from a personal, animistic temporal flow of 1p meaning
>>> and significant preference to impersonal, mechanistic spatial bodies ruled
>>> by cause-effect and chance/probability. 1p and 3p are relativistic, but
>>> what joins them is the capacity to discern the difference.
>>>
>>> Rather than sense i/o being a function or logic taken for granted, flip
>>> it over so that logic is the 3p shadow of sense. The 3p view is a frozen
>>> snapshot of countless 1p views as seen from the outside, and the qualities
>>> of the 3p view depend entirely on the nature of the 1p
>>> perceiver-participant. Sense is semiotic. Its qualitative layers are
>>> partitioned by habit and interpretive inertia, just as an ambiguous image
>>> looks different depending on how you personally direct your perception, or
>>> how a book that you read when you are 12 years old can have different
>>> meanings at 18 or 35. The meaning isn't just 'out there', it's literally,
>>> physically "in here". If this is true, then the entire physical universe
>>> doubles in size, or really is squared as every exterior surface is a 3p
>>> representation of an entire history of 1p experience. Each acorn is a
>>> potential for oak tree forest, an encyclopedia of evolution and cosmology,
>>> so that the acorn is just a semiotic placeholder which is scaled and
>>> iconicized appropriately as a consequence of the relation of our human
>>> quali

Re: Subjective states can be somehow extracted from brains via a computer

2013-01-12 Thread Telmo Menezes
On Fri, Jan 11, 2013 at 8:20 PM, meekerdb  wrote:

>  On 1/11/2013 2:12 AM, Telmo Menezes wrote:
>
>
>
>
> On Fri, Jan 11, 2013 at 1:33 AM, meekerdb  wrote:
>
>>  On 1/10/2013 4:23 PM, Telmo Menezes wrote:
>>
>>  Do you think there can be something that is intelligent but not complex
>>> (and use whatever definitions of "intelligent" and "complex" you want).
>>>
>>
>>  A thermostat is much less complex than a human brain but intelligent
>> under my definition.
>>
>>
>> But much less intelligent.
>>
>
>  That's your conclusion, not mine. According to my definition you can
> only compare thermostats being good at being thermostats and Brents being
> good at being Brents. Because you can only compare intelligence against the
> same set of goals. Otherwise you're just saying that intelligence A is more
> complex than intelligence B. Human intelligence requires a certain level of
> complexity, bacteria intelligence another. That's all.
>
>
> So you've removed all meaning from intelligence.  Rocks are smart at being
> rocks, we just have to recognize their goal is to be rocks.
>

I just claim that we can only talk quantitatively about intelligence in
relation to a certain agent and a certain set of goals. Isn't it a bit of a
stretch to say that I removed all meaning from the concept?


>
> Maybe we can stop dancing around the question by referring to
> human-level-intelligence and then rephrasing the question as, "Do you think
> human-like-intelligence requires human-like-complexity?"
>

Ok. Yes, I think that human-like-intelligence requires
human-like-complexity.
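[Editor's note: the goal-relative reading above can be made concrete with a toy sketch. This is only an illustration of the thermostat example discussed in the thread, not anyone's actual proposal; the function name and hysteresis value are assumptions for the sake of the demo.]

```python
# A minimal thermostat control step: sense one variable, act to keep it
# near a set point. Under Brent's definition this already counts as
# intelligent; under Telmo's, it is only "good at being a thermostat",
# since its entire goal set is this one comparison.

def thermostat_step(temp, setpoint, hysteresis=0.5):
    """Return the action for one sense/act cycle."""
    if temp < setpoint - hysteresis:
        return "heat"
    if temp > setpoint + hysteresis:
        return "cool"
    return "off"

print(thermostat_step(18.0, 21.0))  # heat
print(thermostat_step(24.0, 21.0))  # cool
print(thermostat_step(21.2, 21.0))  # off
```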


>
> Brent
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To post to this group, send email to everything-list@googlegroups.com.
> To unsubscribe from this group, send email to
> everything-list+unsubscr...@googlegroups.com.
> For more options, visit this group at
> http://groups.google.com/group/everything-list?hl=en.
>




Re: Subjective states can be somehow extracted from brains via a computer

2013-01-12 Thread Bruno Marchal


On 11 Jan 2013, at 22:08, meekerdb wrote:


On 1/11/2013 11:44 AM, Bruno Marchal wrote:



On 10 Jan 2013, at 23:28, Telmo Menezes wrote:





On Thu, Jan 10, 2013 at 11:15 PM, meekerdb   
wrote:

On 1/10/2013 1:58 PM, Telmo Menezes wrote:


Hi Craig,

I tend to agree with what you say (or what I understand of it).  
Despite my belief that it is possible to extract memories (or  
their 3p shadows) from a brain, I do not believe in the  
neuroscience hypothesis that consciousness emerges from brain  
activity. I'm not sure I believe that there is a degree of  
consciousness in everything, but it sounds more plausible than  
the emergence from complexity idea.


Do you agree that intelligence requires complexity?

I'm not sure intelligence and complexity are two different things.


Hmm...

I have a theory of intelligence. It has a strong defect, as it makes
many things intelligent. But not everything.


The machine X is intelligent, if it is not stupid.

And the machine X is stupid in two circumstances: either she
asserts that Y is intelligent, or she asserts that Y is stupid. (Y
can be equal to X).


So if X is smart she asserts Y is not intelligent or Y is not  
stupid.  :-)


Lol. No problem, she can assert that, but the OR is necessarily
non-constructive (non-intuitionist). It is indeed a way to say that she
does not know.


Bruno





Brent



In that theory, a pebble is intelligent, as no one has ever heard a
pebble asserting that some other pebble, or whatever, is stupid or
intelligent.


(that theory is almost only an identification of intelligence with
consistency (Dt)).
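[Editor's note: read literally, the rule above is simple enough to mock up in a few lines. This is only a hedged toy rendering of the quoted definition, not Bruno's formalism; the data structures and names are assumptions made for illustration.]

```python
# Toy rendering of the rule quoted above: a machine is "stupid" iff it
# asserts, of any machine Y (possibly itself), that Y is intelligent or
# that Y is stupid; it is intelligent iff it is not stupid. A pebble,
# which asserts nothing, comes out intelligent, as noted in the text.

def is_stupid(assertions):
    """assertions: iterable of (subject, predicate) claims the machine makes."""
    return any(pred in ("intelligent", "stupid") for _subj, pred in assertions)

def is_intelligent(assertions):
    return not is_stupid(assertions)

pebble = []                            # asserts nothing at all
braggart = [("self", "intelligent")]   # asserts its own intelligence

print(is_intelligent(pebble))     # True
print(is_intelligent(braggart))   # False
```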


Intelligence is needed to develop competences.

But competences can have a negative feedback on intelligence.

Bruno





http://iridia.ulb.ac.be/~marchal/






Re: Subjective states can be somehow extracted from brains via a computer

2013-01-12 Thread Bruno Marchal


On 11 Jan 2013, at 14:07, Craig Weinberg wrote:




On Friday, January 11, 2013 12:27:54 AM UTC-5, Brent wrote:
On 1/10/2013 9:20 PM, Craig Weinberg wrote:




On Thursday, January 10, 2013 7:33:06 PM UTC-5, Brent wrote:
On 1/10/2013 4:23 PM, Telmo Menezes wrote:


Do you think there can be something that is intelligent but not  
complex (and use whatever definitions of "intelligent" and  
"complex" you want).


A thermostat is much less complex than a human brain but  
intelligent under my definition.


But much less intelligent.  So in effect you think there is a  
degree of intelligence in everything, just like you believe there's  
a degree of consciousness in everything.  And the degree of  
intelligence correlates with the degree of complexity ...but you  
don't think the same about consciousness?


Brent

I was thinking today that a decent way of defining intelligence is  
just 'The ability to know "what's going on"'.


This makes it clear that intelligence refers to the degree of  
sophistication of awareness, not just complexity of function or  
structure. This is why a computer which has complex function and  
structure has no authentic intelligence and has no idea 'what's  
going on'. Intelligence however has everything to do with  
sensitivity, integration, and mobilization of awareness as an  
asset, i.e. to be directed for personal gain or shared enjoyment,  
progress, etc. Knowing what's going on implicitly means caring what  
goes on, which also supervenes on biological quality investment in  
experience.


Which is why I think an intelligent machine must be one that acts in  
its environment.  Simply 'being aware' or 'knowing' are meaningless  
without the ability and motives to act on them.


Sense and motive are inseparable ontologically, although they can be  
interleaved by level. A plant for instance has no need to act on the  
world to the same degree as an organism which can move its location,  
but the cells that make up the plant act to grow and direct it  
toward light, extend roots to water and nutrients, etc.  
Ontologically however, there is no way to really have awareness  
which matters without some participatory opportunity or potential  
for that opportunity.


The problem with a machine (any machine) is that at the level at
which it is a machine, it has no way to participate. By definition a
machine does whatever it is designed to do.


We can argue that "natural machines" are not designed but selected,
even partially self-selected through choice of sexual partners.




Anything that we use as a machine has to be made of something which  
we can predict and control reliably,


Human-made machines are designed in this way.




so that its sensory-motive capacities are very limited by  
definition.  Its range of 'what's going on' has to be very narrow.  
The internet, for instance, passes a tremendous number of events  
through electronic circuits, but the content of all of it is  
entirely lost on it. We use the internet to increase our sense and  
inform our motives, but its sense and motive does not increase at all.


Our computers are not encouraged to develop themselves. They are a sort
of slave. But machines in general are not predictable, unless we
limit them in some way, as we usually do (a bit less so in AI
research, but still so for the applications: the consumers want
obedient machines).
You still have a pre-Gödel or pre-Turing conception of machine. We
just don't know what universal machines/numbers are capable of.


Bruno

http://iridia.ulb.ac.be/~marchal/






Re: Subjective states can be somehow extracted from brains via a computer

2013-01-11 Thread meekerdb

On 1/11/2013 11:44 AM, Bruno Marchal wrote:


On 10 Jan 2013, at 23:28, Telmo Menezes wrote:





On Thu, Jan 10, 2013 at 11:15 PM, meekerdb > wrote:


On 1/10/2013 1:58 PM, Telmo Menezes wrote:

Hi Craig,

I tend to agree with what you say (or what I understand of it). Despite my 
belief
that it is possible to extract memories (or their 3p shadows) from a brain, 
I do
not believe in the neuroscience hypothesis that consciousness emerges from 
brain
activity. I'm not sure I believe that there is a degree of consciousness in
everything, but it sounds more plausible than the emergence from complexity 
idea.


Do you agree that intelligence requires complexity?


I'm not sure intelligence and complexity are two different things.


Hmm...

I have a theory of intelligence. It has a strong defect, as it makes many things
intelligent. But not everything.


The machine X is intelligent, if it is not stupid.

And the machine X is stupid in two circumstances: either she asserts that Y is
intelligent, or she asserts that Y is stupid. (Y can be equal to X).


So if X is smart she asserts Y is not intelligent or Y is not stupid.  :-)

Brent



In that theory, a pebble is intelligent, as no one has ever heard a pebble asserting
that some other pebble, or whatever, is stupid or intelligent.


(that theory is almost only an identification of intelligence with consistency
(Dt)).

Intelligence is needed to develop competences.

But competences can have a negative feedback on intelligence.

Bruno





Re: Subjective states can be somehow extracted from brains via a computer

2013-01-11 Thread Bruno Marchal


On 10 Jan 2013, at 23:28, Telmo Menezes wrote:





On Thu, Jan 10, 2013 at 11:15 PM, meekerdb   
wrote:

On 1/10/2013 1:58 PM, Telmo Menezes wrote:


Hi Craig,

I tend to agree with what you say (or what I understand of it).  
Despite my belief that it is possible to extract memories (or their  
3p shadows) from a brain, I do not believe in the neuroscience  
hypothesis that consciousness emerges from brain activity. I'm not  
sure I believe that there is a degree of consciousness in  
everything, but it sounds more plausible than the emergence from  
complexity idea.


Do you agree that intelligence requires complexity?

I'm not sure intelligence and complexity are two different things.


Hmm...

I have a theory of intelligence. It has a strong defect, as it makes
many things intelligent. But not everything.


The machine X is intelligent, if it is not stupid.

And the machine X is stupid in two circumstances: either she asserts
that Y is intelligent, or she asserts that Y is stupid. (Y can be equal
to X).


In that theory, a pebble is intelligent, as no one has ever heard a
pebble asserting that some other pebble, or whatever, is stupid or
intelligent.


(that theory is almost only an identification of intelligence with
consistency (Dt)).


Intelligence is needed to develop competences.

But competences can have a negative feedback on intelligence.

Bruno






Brent






http://iridia.ulb.ac.be/~marchal/






Re: Subjective states can be somehow extracted from brains via a computer

2013-01-11 Thread meekerdb

On 1/11/2013 2:12 AM, Telmo Menezes wrote:




On Fri, Jan 11, 2013 at 1:33 AM, meekerdb > wrote:


On 1/10/2013 4:23 PM, Telmo Menezes wrote:


Do you think there can be something that is intelligent but not complex 
(and
use whatever definitions of "intelligent" and "complex" you want).


A thermostat is much less complex than a human brain but intelligent under 
my
definition.


But much less intelligent.


That's your conclusion, not mine. According to my definition you can only compare 
thermostats being good at being thermostats and Brents being good at being Brents. 
Because you can only compare intelligence against the same set of goals. Otherwise you're
just saying that intelligence A is more complex than intelligence B. Human intelligence 
requires a certain level of complexity, bacteria intelligence another. That's all.


So you've removed all meaning from intelligence.  Rocks are smart at being rocks, we just
have to recognize their goal is to be rocks.


Maybe we can stop dancing around the question by referring to human-level-intelligence and 
then rephrasing the question as, "Do you think human-like-intelligence requires 
human-like-complexity?"


Brent




Re: Subjective states can be somehow extracted from brains via a computer

2013-01-11 Thread Craig Weinberg


On Friday, January 11, 2013 12:27:54 AM UTC-5, Brent wrote:
>
>  On 1/10/2013 9:20 PM, Craig Weinberg wrote: 
>
>
>
> On Thursday, January 10, 2013 7:33:06 PM UTC-5, Brent wrote: 
>>
>>  On 1/10/2013 4:23 PM, Telmo Menezes wrote: 
>>
>>  Do you think there can be something that is intelligent but not complex 
>>> (and use whatever definitions of "intelligent" and "complex" you want).
>>>  
>>
>>  A thermostat is much less complex than a human brain but intelligent 
>> under my definition.
>>
>>
>> But much less intelligent.  So in effect you think there is a degree of 
>> intelligence in everything, just like you believe there's a degree of 
>> consciousness in everything.  And the degree of intelligence correlates 
>> with the degree of complexity ...but you don't think the same about 
>> consciousness?
>>
>> Brent
>>  
>
> I was thinking today that a decent way of defining intelligence is just 
> 'The ability to know "what's going on"'. 
>
> This makes it clear that intelligence refers to the degree of 
> sophistication of awareness, not just complexity of function or structure. 
> This is why a computer which has complex function and structure has no 
> authentic intelligence and has no idea 'what's going on'. Intelligence 
> however has everything to do with sensitivity, integration, and 
> mobilization of awareness as an asset, i.e. to be directed for personal 
> gain or shared enjoyment, progress, etc. Knowing what's going on implicitly 
> means caring what goes on, which also supervenes on biological quality 
> investment in experience.
>  
>
> Which is why I think an intelligent machine must be one that acts in its 
> environment.  Simply 'being aware' or 'knowing' are meaningless without the 
> ability and motives to act on them.
>

Sense and motive are inseparable ontologically, although they can be 
interleaved by level. A plant for instance has no need to act on the world 
to the same degree as an organism which can move its location, but the 
cells that make up the plant act to grow and direct it toward light, extend 
roots to water and nutrients, etc. Ontologically however, there is no way 
to really have awareness which matters without some participatory 
opportunity or potential for that opportunity.

The problem with a machine (any machine) is that at the level at which it is a
machine, it has no way to participate. By definition a machine does
whatever it is designed to do. Anything that we use as a machine has to be 
made of something which we can predict and control reliably, so that its 
sensory-motive capacities are very limited by definition.  Its range of 
'what's going on' has to be very narrow. The internet, for instance, passes 
a tremendous number of events through electronic circuits, but the content 
of all of it is entirely lost on it. We use the internet to increase our 
sense and inform our motives, but its sense and motive does not increase at 
all.

Craig

>
> Brent
>  




Re: Subjective states can be somehow extracted from brains via a computer

2013-01-11 Thread Telmo Menezes
On Fri, Jan 11, 2013 at 1:33 AM, meekerdb  wrote:

>  On 1/10/2013 4:23 PM, Telmo Menezes wrote:
>
>  Do you think there can be something that is intelligent but not complex
>> (and use whatever definitions of "intelligent" and "complex" you want).
>>
>
>  A thermostat is much less complex than a human brain but intelligent
> under my definition.
>
>
> But much less intelligent.
>

That's your conclusion, not mine. According to my definition you can only
compare thermostats being good at being thermostats and Brents being good
at being Brents. Because you can only compare intelligence against the same
set of goals. Otherwise you're just saying that intelligence A is more
complex than intelligence B. Human intelligence requires a certain level of
complexity, bacteria intelligence another. That's all.

General Artificial Intelligence is not general at all - what we really want
is for it to be specifically good at interacting with humans and
pursuing human goals (ours, not the AI's - otherwise people will say it's
dumb).


> So in effect you think there is a degree of intelligence in everything,
> just like you believe there's a degree of consciousness in everything.
>

I said I'm more inclined to believe in a degree of consciousness in
everything than in intelligence emerging from complexity.


> And the degree of intelligence correlates with the degree of complexity
>

Again, your conclusion, not mine.


> ...but you don't think the same about consciousness?
>
> Brent
>
>




Re: Subjective states can be somehow extracted from brains via a computer

2013-01-10 Thread meekerdb

On 1/10/2013 9:20 PM, Craig Weinberg wrote:



On Thursday, January 10, 2013 7:33:06 PM UTC-5, Brent wrote:

On 1/10/2013 4:23 PM, Telmo Menezes wrote:


Do you think there can be something that is intelligent but not complex 
(and
use whatever definitions of "intelligent" and "complex" you want).


A thermostat is much less complex than a human brain but intelligent under 
my
definition.


But much less intelligent.  So in effect you think there is a degree of 
intelligence
in everything, just like you believe there's a degree of consciousness in
everything.  And the degree of intelligence correlates with the degree of 
complexity
...but you don't think the same about consciousness?

Brent


I was thinking today that a decent way of defining intelligence is just 'The ability to 
know "what's going on"'.


This makes it clear that intelligence refers to the degree of sophistication of 
awareness, not just complexity of function or structure. This is why a computer which 
has complex function and structure has no authentic intelligence and has no idea 'what's 
going on'. Intelligence however has everything to do with sensitivity, integration, and 
mobilization of awareness as an asset, i.e. to be directed for personal gain or shared 
enjoyment, progress, etc. Knowing what's going on implicitly means caring what goes on, 
which also supervenes on biological quality investment in experience.


Which is why I think an intelligent machine must be one that acts in its environment.  
Simply 'being aware' or 'knowing' is meaningless without the ability and motives to act 
on them.


Brent




Re: Subjective states can be somehow extracted from brains via a computer

2013-01-10 Thread Craig Weinberg


On Thursday, January 10, 2013 7:33:06 PM UTC-5, Brent wrote:
>
>  On 1/10/2013 4:23 PM, Telmo Menezes wrote: 
>
>  Do you think there can be something that is intelligent but not complex 
>> (and use whatever definitions of "intelligent" and "complex" you want).
>>  
>
>  A thermostat is much less complex than a human brain but intelligent 
> under my definition.
>
>
> But much less intelligent.  So in effect you think there is a degree of 
> intelligence in everything, just like you believe there's a degree of 
> consciousness in everything.  And the degree of intelligence correlates 
> with the degree of complexity ...but you don't think the same about 
> consciousness?
>
> Brent
>

I was thinking today that a decent way of defining intelligence is just 
'The ability to know "what's going on"'. 

This makes it clear that intelligence refers to the degree of 
sophistication of awareness, not just complexity of function or structure. 
This is why a computer which has complex function and structure has no 
authentic intelligence and has no idea 'what's going on'. Intelligence 
however has everything to do with sensitivity, integration, and 
mobilization of awareness as an asset, i.e. to be directed for personal 
gain or shared enjoyment, progress, etc. Knowing what's going on implicitly 
means caring what goes on, which also supervenes on biological quality 
investment in experience.

Craig 

-- 
To view this discussion on the web visit 
https://groups.google.com/d/msg/everything-list/-/4H86jbpmVrsJ.



Re: Re: Subjective states can be somehow extracted from brains via a computer

2013-01-10 Thread Craig Weinberg


On Thursday, January 10, 2013 4:58:32 PM UTC-5, telmo_menezes wrote:
>
> Hi Craig,
>
> I tend to agree with what you say (or what I understand of it). Despite my 
> belief that it is possible to extract memories (or their 3p shadows) from a 
> brain,
>

As long as you have another brain to experience the extracted memories in 
1p, then I wouldn't rule out the possibility of a 3p transmission of some 
experiential content from one brain to another.
 

> I do not believe in the neuroscience hypothesis that consciousness emerges 
> from brain activity. I'm not sure I believe that there is a degree of 
> consciousness in everything, but it sounds more plausible than the 
> emergence from complexity idea.
>
> Still I feel that you avoid some questions. Maybe it's just my lack of 
> understanding of what you're saying. For example: what is the primary 
> "stuff" in your theory? In the same sense that for materialists it's 
> subatomic particles and for comp it's N, +, *. What's yours?
>

For me the primary stuff is sensory-motor presence. Particles are public 
sense representations. N, +, * are private sense representations. Particles 
represent the experience of sensory-motor obstruction as topological 
bodies. Integers and arithmetic operators represent the sensory-motor 
relations of public objects as private logical figures.

Craig


>
> On Wed, Jan 9, 2013 at 2:50 PM, Craig Weinberg wrote:
>
>>
>>
>> On Wednesday, January 9, 2013 6:18:37 AM UTC-5, telmo_menezes wrote:
>>>
>>>
>>> Hi Craig,
>>>  
>>>

 Cool. I actually would have agreed with you and a lot of people here at 
 different times in my life. It's only been lately in the last five years 
 or 
 so that I have put together this other way of understanding everything. It 
 gets lost in the debating, because I feel like I have to make my points 
 about what is different or new about how I see things, but I do understand 
 that other ways of looking at it make a lot of sense too - so much so that 
 I suppose I am drawn only to digging into the weak spots to try to  get 
 others to see the secret exit that I think I've found...

>>>
>>> Ok, this sounds interesting and I'd like to know more. I've been away 
>>> from the mailing list in the last few years, so maybe you've talked about 
>>> it before. Would you tell me about that secret exit?
>>>
>>
>> The secret exit is to reverse the assumption that consciousness occurs 
>> from functions or substances. Even though our human consciousness depends 
>> on a living human body (as far as we know for sure), that may be because of 
>> the degree of elaboration required to develop a human quality of 
>> experience, not because the fundamental capacity to perceive and 
>> participate depends on anything at all.
>>
>> Being inside of a human experience means being inside of an animal 
>> experience, an organism's experience, a cellular and molecular level 
>> experience. The alternative means picking an arbitrary level at which total 
>> lack of awareness suddenly changes into perception and participation for no 
>> conceivable reason. Instead of hanging on to the hope of finding such a 
>> level or gate, the secret is to see that there are many levels and gates 
>> but that they are qualitative, with each richer integration of qualia 
>> reframing the levels left behind in a particular way, and that way (another 
>> key) is to reduce it from a personal, animistic temporal flow of 1p meaning 
>> and significant preference  to impersonal, mechanistic spatial bodies ruled 
>> by cause-effect and chance/probability. 1p and 3p are relativistic, but 
>> what joins them is the capacity to discern the difference. 
>>
>> Rather than sense i/o being a function or logic taken for granted, flip it 
>> over so that logic is the 3p shadow of sense. The 3p view is a frozen 
>> snapshot of countless 1p views as seen from the outside, and the qualities 
>> of the 3p view depend entirely on the nature of the 1p 
>> perceiver-participant. Sense is semiotic. Its qualitative layers are 
>> partitioned by habit and interpretive inertia, just as an ambiguous image 
>> looks different depending on how you personally direct your perception, or 
>> how a book that you read when you are 12 years old can have different 
>> meanings at 18 or 35. The meaning isn't just 'out there', it's literally, 
>> physically "in here". If this is true, then the entire physical universe 
>> doubles in size, or really is squared as every exterior surface is a 3p 
>> representation of an entire history of 1p experience. Each acorn is a 
>> potential for oak tree forest, an encyclopedia of evolution and cosmology, 
>> so that the acorn is just a semiotic placeholder which is scaled and 
>> iconicized appropriately as a consequence of the relation of our human 
>> quality awareness and that of the evolutionary-historical-possible future 
>> contexts which we share with it (or the whole ensemble of experiences 

Re: Subjective states can be somehow extracted from brains via a computer

2013-01-10 Thread meekerdb

On 1/10/2013 4:23 PM, Telmo Menezes wrote:


Do you think there can be something that is intelligent but not complex 
(and use
whatever definitions of "intelligent" and "complex" you want).


A thermostat is much less complex than a human brain but intelligent under my 
definition.


But much less intelligent.  So in effect you think there is a degree of intelligence in 
everything, just like you believe there's a degree of consciousness in everything.  And 
the degree of intelligence correlates with the degree of complexity ...but you don't think 
the same about consciousness?


Brent




Re: Subjective states can be somehow extracted from brains via a computer

2013-01-10 Thread Telmo Menezes
On Fri, Jan 11, 2013 at 12:58 AM, meekerdb  wrote:

>  On 1/10/2013 3:15 PM, Telmo Menezes wrote:
>
>
>
>
> On Fri, Jan 11, 2013 at 12:01 AM, meekerdb  wrote:
>
>>  On 1/10/2013 2:28 PM, Telmo Menezes wrote:
>>
>>
>>
>>
>> On Thu, Jan 10, 2013 at 11:15 PM, meekerdb  wrote:
>>
>>>  On 1/10/2013 1:58 PM, Telmo Menezes wrote:
>>>
>>> Hi Craig,
>>>
>>>  I tend to agree with what you say (or what I understand of it).
>>> Despite my belief that it is possible to extract memories (or their 3p
>>> shadows) from a brain, I do not believe in the neuroscience hypothesis that
>>> consciousness emerges from brain activity. I'm not sure I believe that
>>> there is a degree of consciousness in everything, but it sounds more
>>> plausible than the emergence from complexity idea.
>>>
>>>
>>> Do you agree that intelligence requires complexity?
>>>
>>
>>  I'm not sure intelligence and complexity are two different things.
>>
>>
>>  Of course they're two different things. An oak tree is complex but not
>> intelligent. The question is whether you think something can be intelligent
>> without being complex?
>>
>
>  I don't agree that an oak tree is not intelligent. It changes itself and
> its environment in non-trivial ways that promote its continuing existence.
> What's your definition of intelligence?
>
>
> What's yours?  I don't care what example you use, trees, rocks, bacteria,
> sewing machines...
>

If you allow for the concepts of agent, perception, action and goal, my
definition is: the degree to which an agent can achieve its goals by
perceiving itself and its environment and using that information to predict
the outcome of its actions, for the purpose of choosing the actions that
have the highest probability of leading to a future state where the goals are
achieved. Intelligence can then be quantified by comparing the
effectiveness of the agent in achieving its goals to that of an agent
acting randomly.
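Telmo's definition above can be made concrete with a toy simulation: score an agent by how often it satisfies its goal, then compare that score to an agent choosing actions at random. Everything in this sketch (the thermostat-style world, the set-point, the noise levels) is an illustrative assumption, not anything specified in the thread:

```python
import random

def step(temp, action):
    """World dynamics: actions nudge the temperature, plus noise."""
    drift = {"heat": 1.0, "cool": -1.0, "idle": 0.0}[action]
    return temp + drift + random.uniform(-0.5, 0.5)

def goal_score(temps, target=20.0, tol=1.0):
    """Fraction of time steps spent within `tol` of the target."""
    return sum(abs(t - target) <= tol for t in temps) / len(temps)

def run(policy, steps=1000, start=15.0):
    """Let a policy act in the world and score its goal achievement."""
    temp, history = start, []
    for _ in range(steps):
        temp = step(temp, policy(temp))
        history.append(temp)
    return goal_score(history)

# An agent that is "intelligent" under the definition: it uses its
# perception (the current temperature) to pick the action most likely
# to move it toward the goal state.
def thermostat(temp, target=20.0):
    if temp < target - 0.5:
        return "heat"
    if temp > target + 0.5:
        return "cool"
    return "idle"

# The baseline: an agent acting randomly, against which intelligence
# is quantified.
def random_policy(temp):
    return random.choice(["heat", "cool", "idle"])

random.seed(0)
print(run(thermostat), run(random_policy))
```

The thermostat's score should come out well above the random baseline, which is the sense in which even so simple a device counts as intelligent under this definition, while the gap between the two scores is what the definition quantifies.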

But you can only compare intelligence in relation to a set of goals. How do
you compare the intelligence of two agents with different goals and
environments? Any criterion is arbitrary. We like to believe we're more
intelligent because we're more complex, but you can also believe that
bacteria are more intelligent because they are more resilient to extinction.


> Are you going to contend that everything is intelligent and everything is
> complex, so that the words lose all meaning?
>

I never said that. I do think that intelligence is a mushy concept to begin
with, and that's not my fault.


> Do you think there can be something that is intelligent but not complex
> (and use whatever definitions of "intelligent" and "complex" you want).
>

A thermostat is much less complex than a human brain but intelligent under
my definition.


>
> Brent
>




Re: Subjective states can be somehow extracted from brains via a computer

2013-01-10 Thread meekerdb

On 1/10/2013 3:15 PM, Telmo Menezes wrote:




On Fri, Jan 11, 2013 at 12:01 AM, meekerdb  wrote:


On 1/10/2013 2:28 PM, Telmo Menezes wrote:




On Thu, Jan 10, 2013 at 11:15 PM, meekerdb  wrote:

On 1/10/2013 1:58 PM, Telmo Menezes wrote:

Hi Craig,

I tend to agree with what you say (or what I understand of it). Despite 
my
belief that it is possible to extract memories (or their 3p shadows) 
from a
brain, I do not believe in the neuroscience hypothesis that 
consciousness
emerges from brain activity. I'm not sure I believe that there is a 
degree of
consciousness in everything, but it sounds more plausible than the 
emergence
from complexity idea.


Do you agree that intelligence requires complexity?


I'm not sure intelligence and complexity are two different things.


Of course they're two different things. An oak tree is complex but not 
intelligent.
The question is whether you think something can be intelligent without 
being complex?


I don't agree that an oak tree is not intelligent. It changes itself and its environment 
in non-trivial ways that promote its continuing existence. What's your definition of 
intelligence?


What's yours?  I don't care what example you use, trees, rocks, bacteria, sewing 
machines... Are you going to contend that everything is intelligent and everything is 
complex, so that the words lose all meaning?  Do you think there can be something that is 
intelligent but not complex (and use whatever definitions of "intelligent" and "complex" 
you want).


Brent




Re: Subjective states can be somehow extracted from brains via a computer

2013-01-10 Thread Telmo Menezes
On Fri, Jan 11, 2013 at 12:01 AM, meekerdb  wrote:

>  On 1/10/2013 2:28 PM, Telmo Menezes wrote:
>
>
>
>
> On Thu, Jan 10, 2013 at 11:15 PM, meekerdb  wrote:
>
>>  On 1/10/2013 1:58 PM, Telmo Menezes wrote:
>>
>> Hi Craig,
>>
>>  I tend to agree with what you say (or what I understand of it). Despite
>> my belief that it is possible to extract memories (or their 3p shadows)
>> from a brain, I do not believe in the neuroscience hypothesis that
>> consciousness emerges from brain activity. I'm not sure I believe that
>> there is a degree of consciousness in everything, but it sounds more
>> plausible than the emergence from complexity idea.
>>
>>
>> Do you agree that intelligence requires complexity?
>>
>
>  I'm not sure intelligence and complexity are two different things.
>
>
> Of course they're two different things. An oak tree is complex but not
> intelligent. The question is whether you think something can be intelligent
> without being complex?
>

I don't agree that an oak tree is not intelligent. It changes itself and
its environment in non-trivial ways that promote its continuing existence.
What's your definition of intelligence?


>
> Brent
>




Re: Subjective states can be somehow extracted from brains via a computer

2013-01-10 Thread meekerdb

On 1/10/2013 2:28 PM, Telmo Menezes wrote:




On Thu, Jan 10, 2013 at 11:15 PM, meekerdb  wrote:


On 1/10/2013 1:58 PM, Telmo Menezes wrote:

Hi Craig,

I tend to agree with what you say (or what I understand of it). Despite my 
belief
that it is possible to extract memories (or their 3p shadows) from a brain, 
I do
not believe in the neuroscience hypothesis that consciousness emerges from 
brain
activity. I'm not sure I believe that there is a degree of consciousness in
everything, but it sounds more plausible than the emergence from complexity 
idea.


Do you agree that intelligence requires complexity?


I'm not sure intelligence and complexity are two different things.


Of course they're two different things. An oak tree is complex but not intelligent. The 
question is whether you think something can be intelligent without being complex?


Brent




Re: Subjective states can be somehow extracted from brains via a computer

2013-01-10 Thread Telmo Menezes
On Thu, Jan 10, 2013 at 11:15 PM, meekerdb  wrote:

>  On 1/10/2013 1:58 PM, Telmo Menezes wrote:
>
> Hi Craig,
>
>  I tend to agree with what you say (or what I understand of it). Despite
> my belief that it is possible to extract memories (or their 3p shadows)
> from a brain, I do not believe in the neuroscience hypothesis that
> consciousness emerges from brain activity. I'm not sure I believe that
> there is a degree of consciousness in everything, but it sounds more
> plausible than the emergence from complexity idea.
>
>
> Do you agree that intelligence requires complexity?
>

I'm not sure intelligence and complexity are two different things.


>
> Brent
>




Re: Subjective states can be somehow extracted from brains via a computer

2013-01-10 Thread meekerdb

On 1/10/2013 1:58 PM, Telmo Menezes wrote:

Hi Craig,

I tend to agree with what you say (or what I understand of it). Despite my belief that 
it is possible to extract memories (or their 3p shadows) from a brain, I do not believe 
in the neuroscience hypothesis that consciousness emerges from brain activity. I'm not 
sure I believe that there is a degree of consciousness in everything, but it sounds more 
plausible than the emergence from complexity idea.


Do you agree that intelligence requires complexity?

Brent




Re: Re: Subjective states can be somehow extracted from brains via a computer

2013-01-10 Thread Telmo Menezes
Hi Craig,

I tend to agree with what you say (or what I understand of it). Despite my
belief that it is possible to extract memories (or their 3p shadows) from a
brain, I do not believe in the neuroscience hypothesis that consciousness
emerges from brain activity. I'm not sure I believe that there is a degree
of consciousness in everything, but it sounds more plausible than the
emergence from complexity idea.

Still I feel that you avoid some questions. Maybe it's just my lack of
understanding of what you're saying. For example: what is the primary
"stuff" in your theory? In the same sense that for materialists it's
subatomic particles and for comp it's N, +, *. What's yours?


On Wed, Jan 9, 2013 at 2:50 PM, Craig Weinberg wrote:

>
>
> On Wednesday, January 9, 2013 6:18:37 AM UTC-5, telmo_menezes wrote:
>>
>>
>> Hi Craig,
>>
>>
>>>
>>> Cool. I actually would have agreed with you and a lot of people here at
>>> different times in my life. It's only been lately in the last five years or
>>> so that I have put together this other way of understanding everything. It
>>> gets lost in the debating, because I feel like I have to make my points
>>> about what is different or new about how I see things, but I do understand
>>> that other ways of looking at it make a lot of sense too - so much so that
>>> I suppose I am drawn only to digging into the weak spots to try to  get
>>> others to see the secret exit that I think I've found...
>>>
>>
>> Ok, this sounds interesting and I'd like to know more. I've been away
>> from the mailing list in the last few years, so maybe you've talked about
>> it before. Would you tell me about that secret exit?
>>
>
> The secret exit is to reverse the assumption that consciousness occurs
> from functions or substances. Even though our human consciousness depends
> on a living human body (as far as we know for sure), that may be because of
> the degree of elaboration required to develop a human quality of
> experience, not because the fundamental capacity to perceive and
> participate depends on anything at all.
>
> Being inside of a human experience means being inside of an animal
> experience, an organism's experience, a cellular and molecular level
> experience. The alternative means picking an arbitrary level at which total
> lack of awareness suddenly changes into perception and participation for no
> conceivable reason. Instead of hanging on to the hope of finding such a
> level or gate, the secret is to see that there are many levels and gates
> but that they are qualitative, with each richer integration of qualia
> reframing the levels left behind in a particular way, and that way (another
> key) is to reduce it from a personal, animistic temporal flow of 1p meaning
> and significant preference  to impersonal, mechanistic spatial bodies ruled
> by cause-effect and chance/probability. 1p and 3p are relativistic, but
> what joins them is the capacity to discern the difference.
>
> Rather than sense i/o being a function or logic taken for granted, flip it
> over so that logic is the 3p shadow of sense. The 3p view is a frozen
> snapshot of countless 1p views as seen from the outside, and the qualities
> of the 3p view depend entirely on the nature of the 1p
> perceiver-participant. Sense is semiotic. Its qualitative layers are
> partitioned by habit and interpretive inertia, just as an ambiguous image
> looks different depending on how you personally direct your perception, or
> how a book that you read when you are 12 years old can have different
> meanings at 18 or 35. The meaning isn't just 'out there', it's literally,
> physically "in here". If this is true, then the entire physical universe
> doubles in size, or really is squared as every exterior surface is a 3p
> representation of an entire history of 1p experience. Each acorn is a
> potential for oak tree forest, an encyclopedia of evolution and cosmology,
> so that the acorn is just a semiotic placeholder which is scaled and
> iconicized appropriately as a consequence of the relation of our human
> quality awareness and that of the evolutionary-historical-possible future
> contexts which we share with it (or the whole ensemble of experiences in
> which 'we' are both embedded as strands of the story of the universe rather
> than just human body and acorn body or cells and cells etc).
>
> To understand the common thread for all of it, always go back to the
> juxtaposition of 1p vs 3p, not *that* there is a difference, but the
> qualities of *what* those differences are - the sense of the juxtaposition.
>
> http://media.tumblr.com/tumblr_m9y9by2XXw1qe3q3v.jpg
> http://media.tumblr.com/tumblr_m9y9boN5rP1qe3q3v.jpg
>
> That's where I get sense and motive or perception and participation. The
> symmetry is more primitive than either matter or mind, so that it isn't one
> which builds a bridge to the other but sense which divides itself on one
> level while retaining unity on another, creating not just dualism but a
> continuum 

Re: Re: Subjective states can be somehow extracted from brains via a computer

2013-01-09 Thread Craig Weinberg


On Wednesday, January 9, 2013 6:18:37 AM UTC-5, telmo_menezes wrote:
>
>
> Hi Craig,
>  
>
>>
>> Cool. I actually would have agreed with you and a lot of people here at 
>> different times in my life. It's only been lately in the last five years or 
>> so that I have put together this other way of understanding everything. It 
>> gets lost in the debating, because I feel like I have to make my points 
>> about what is different or new about how I see things, but I do understand 
>> that other ways of looking at it make a lot of sense too - so much so that 
>> I suppose I am drawn only to digging into the weak spots to try to  get 
>> others to see the secret exit that I think I've found...
>>
>
> Ok, this sounds interesting and I'd like to know more. I've been away from 
> the mailing list in the last few years, so maybe you've talked about it 
> before. Would you tell me about that secret exit?
>

The secret exit is to reverse the assumption that consciousness occurs from 
functions or substances. Even though our human consciousness depends on a 
living human body (as far as we know for sure), that may be because of the 
degree of elaboration required to develop a human quality of experience, 
not because the fundamental capacity to perceive and participate depends on 
anything at all.

Being inside of a human experience means being inside of an animal 
experience, an organism's experience, a cellular and molecular level 
experience. The alternative means picking an arbitrary level at which total 
lack of awareness suddenly changes into perception and participation for no 
conceivable reason. Instead of hanging on to the hope of finding such a 
level or gate, the secret is to see that there are many levels and gates 
but that they are qualitative, with each richer integration of qualia 
reframing the levels left behind in a particular way, and that way (another 
key) is to reduce it from a personal, animistic temporal flow of 1p meaning 
and significant preference  to impersonal, mechanistic spatial bodies ruled 
by cause-effect and chance/probability. 1p and 3p are relativistic, but 
what joins them is the capacity to discern the difference. 

Rather than sense i/o being a function or logic taken for granted, flip it 
over so that logic is the 3p shadow of sense. The 3p view is a frozen 
snapshot of countless 1p views as seen from the outside, and the qualities 
of the 3p view depend entirely on the nature of the 1p 
perceiver-participant. Sense is semiotic. Its qualitative layers are 
partitioned by habit and interpretive inertia, just as an ambiguous image 
looks different depending on how you personally direct your perception, or 
how a book that you read when you are 12 years old can have different 
meanings at 18 or 35. The meaning isn't just 'out there', it's literally, 
physically "in here". If this is true, then the entire physical universe 
doubles in size, or really is squared as every exterior surface is a 3p 
representation of an entire history of 1p experience. Each acorn is a 
potential for oak tree forest, an encyclopedia of evolution and cosmology, 
so that the acorn is just a semiotic placeholder which is scaled and 
iconicized appropriately as a consequence of the relation of our human 
quality awareness and that of the evolutionary-historical-possible future 
contexts which we share with it (or the whole ensemble of experiences in 
which 'we' are both embedded as strands of the story of the universe rather 
than just human body and acorn body or cells and cells etc).

To understand the common thread for all of it, always go back to the 
juxtaposition of 1p vs 3p, not *that* there is a difference, but the 
qualities of *what* those differences are - the sense of the juxtaposition. 

http://media.tumblr.com/tumblr_m9y9by2XXw1qe3q3v.jpg
http://media.tumblr.com/tumblr_m9y9boN5rP1qe3q3v.jpg

That's where I get sense and motive or perception and participation. The 
symmetry is more primitive than either matter or mind, so that it isn't one 
which builds a bridge to the other but sense which divides itself on one 
level while retaining unity on another, creating not just dualism but a 
continuum of monism, dualism, dialectic, trichotomy, syzygy, etc. Many 
levels and perspectives on sense within sense.

http://multisenserealism.com/about/

Craig

-- 
To view this discussion on the web visit 
https://groups.google.com/d/msg/everything-list/-/elwBNPr92z4J.



Re: Re: Subjective states can be somehow extracted from brains via a computer

2013-01-09 Thread Telmo Menezes
Hi Craig,


>
> Cool. I actually would have agreed with you and a lot of people here at
> different times in my life. It's only been lately in the last five years or
> so that I have put together this other way of understanding everything. It
> gets lost in the debating, because I feel like I have to make my points
> about what is different or new about how I see things, but I do understand
> that other ways of looking at it make a lot of sense too - so much so that
> I suppose I am drawn only to digging into the weak spots to try to  get
> others to see the secret exit that I think I've found...
>

Ok, this sounds interesting and I'd like to know more. I've been away from
the mailing list in the last few years, so maybe you've talked about it
before. Would you tell me about that secret exit?




Re: Re: Subjective states can be somehow extracted from brains via acomputer

2013-01-07 Thread Craig Weinberg
I wasn't trying to do either, although I admit it was condescending. 
I was trying to point out that it seems like you were saying that brain 
activity was decoded into visual pixels. I'm not clear really on what your 
understanding of it is.

 
>
>>  
>>
>>>
>>> The hypothesis is that the brain has some encoding for images. 
>>>
>>
>> Where are the encoded images decoded into what we actually see?
>>
>
> In the computer that runs the Bayesian algorithm.
>

I'm talking about where in the brain are the images that we actually see 
'decoded'?
 

>  
>
>>  
>>
>>> These images can come from the optic nerve, they could be stored in 
>>> memory or they could be constructed by sophisticated cognitive processes 
>>> related to creativity, pattern matching and so on. But if you believe that 
>>> the brain's neural network is a computer responsible for our cognitive 
>>> processes, the information must be stored there, physically, somehow.
>>>
>>
>> That is the assumption, but it is not necessarily a good one. The problem 
>> is that information is only understandable in the context of some form of 
>> awareness - an experience of being informed. A machine with no user can 
>> only produce different kinds of noise as there is nothing ultimately to 
>> discern the difference between a signal and a non-signal.
>>
>
> Sure. That's why the algorithm has to be trained with known videos. So it 
> can learn which brain activity correlates with what 3p accessible images we 
> can all agree upon.
>

Images aren't 3p. Images are 1p visual experiences inferred through 3p 
optical presentations. The algorithm can't learn anything about images 
because it will never experience them in any way.
 

>  
>
>>
>>  
>>> It's horribly hard to decode what's going on in the brain.
>>>
>>
>> Yet every newborn baby learns to do it all by themselves, without any 
>> sign of any decoding theater.
>>
>
> Yes. The newborn baby comes with the genetic material that generates the 
> optimal decoder.
>  
>
>>  
>>
>>>
>>> These researchers thought of a clever shortcut. They expose people to a 
>>> lot of images and record some measures of brain activity in the visual 
>>> cortex. Then they use machine learning to match brain states to images. Of 
>>> course it's probabilistic and noisy. But then they got a video that 
>>> actually approximates the real images. 
>>>
>>
>> You might get the same result out of precisely mapping the movements of 
>> the eyes instead.
>>
>
> Maybe. That's not where they took the information from though. They took 
> it from the visual cortex.
>

That's what makes people jump to the conclusion that they are looking at 
something that came from a brain rather than YouTube + video editing + 
simple formula + data sets from experiments that have no particular 
relation to brains or consciousness.
 

>  
>
>> What they did may have absolutely nothing to do with how the brain 
>> encodes or experiences images, no more than your Google history can 
>> approximate the shape of your face.
>>
>
> Google history can only approximate the shape of my face if there is a 
> correlation between the two. In which case my Google history is, in fact, 
> also a description of the shape of my face.
>

Why would there be a correlation between your Google history and the shape 
of your face?
 

>  
>
>>  
>>
>>> So there must be some way to decode brain activity into images.
>>>
>>> The killer argument against that is that the brain has no sync signals 
>>>> to generate
>>>> the raster lines.
>>>>
>>>
>>> Neither does reality, but we somehow manage to show a representation of 
>>> it on tv, right?
>>>
>>
>> What human beings see on TV simulates one optical environment with 
>> another optical environment. You need to be a human being with a human 
>> visual system to be able to watch TV and mistake it for a representation of 
>> reality. Some household pets might be briefly fooled also, but mostly other 
>> species have no idea why we are staring at that flickering rectangle, or 
>> buzzing plastic sheet, or that large collection of liquid crystal flags. 
>> Representation is psychological, not material. The map is not the territory.
>>
>
> I agree. I never claimed this was an insight into 1p or anything to do 
> with consciousness. Just that you can extract information from 

Re: Re: Subjective states can be somehow extracted from brains via acomputer

2013-01-07 Thread Telmo Menezes
 able to watch TV and mistake it for a representation of
> reality. Some household pets might be briefly fooled also, but mostly other
> species have no idea why we are staring at that flickering rectangle, or
> buzzing plastic sheet, or that large collection of liquid crystal flags.
> Representation is psychological, not material. The map is not the territory.
>

I agree. I never claimed this was an insight into 1p or anything to do with
consciousness. Just that you can extract information from human brains,
because that information is represented there somehow. But you're only
going to get 3p information.


>
> Craig
>
>
>>
>>>
>>>
>>> [Roger Clough], [rcl...@verizon.net]
>>> 1/6/2013
>>> "Forever is a long time, especially near the end." - Woody Allen
>>>
>>> - Receiving the following content -
>>> *From:* Craig Weinberg
>>> *Receiver:* everything-list
>>> *Time:* 2013-01-05, 11:37:17
>>> *Subject:* Re: Subjective states can be somehow extracted from brains
>>> via acomputer
>>>
>>>
>>>
>>> On Saturday, January 5, 2013 10:43:32 AM UTC-5, rclough wrote:
>>>>
>>>>
>>>> Subjective states can somehow be extracted from brains via a computer.
>>>>
>>>
>>> No, they can't.
>>>
>>>
>>>>
>>>> The ingenious folks who were miraculously able to extract an image from
>>>> the brain
>>>> that we saw recently
>>>>
>>>
>>>
>>>> http://gizmodo.com/5843117/scientists-reconstruct-video-clips-from-brain-activity
>>>>
>>>> somehow did it entirely through computation. How was that possible?
>>>>
>>>
>>> By passing off a weak Bayesian regression analysis as a terrific
>>> consciousness breakthrough. Look again at the image comparisons. There is
>>> nothing being reconstructed, there is only the visual noise of many
>>> superimposed shapes which least dis-resembles the test image. It's not even
>>> stage magic, it's just a search engine.
>>>
>>>
>>>>
>>>> There are at least two imaginable theories, neither of which I can
>>>> explain step by step:
>>>>
>>>
>>>
>>> What they did was take lots of images and correlate patterns in the V1
>>> region of the brain with those that corresponded V1 patterns in others who
>>> had viewed the known images. It's statistical guesswork and it is complete
>>> crap.
>>>
>>> "The computer analyzed 18 million seconds of random YouTube video,
>>> building a database of potential brain activity for each clip. From all
>>> these videos, the software picked the one hundred clips that caused a brain
>>> activity more similar to the ones the subject watched, combining them into
>>> one final movie"
>>>
>>> Crick and Koch found in their 1995 study that
>>>
>>> "The conscious visual representation is likely to be distributed over
>>>> more than one area of the cerebral cortex and possibly over certain
>>>> subcortical structures as well. We have argued (Crick and Koch, 1995a) that
>>>> in primates, contrary to most received opinion, it is not located in
>>>> cortical area V1 (also called the striate cortex or area 17). Some of the
>>>> experimental evidence in support of this hypothesis is outlined below. This
>>>> is not to say that what goes on in V1 is not important, and indeed may be
>>>> crucial, for most forms of vivid visual awareness. What we suggest is that
>>>> the neural activity there is not directly correlated with what is seen."
>>>>
>>>
>>> http://www.klab.caltech.edu/~koch/crick-koch-cc-97.html
>>>
>>> What was found in their study, through experiments which isolated the
>>> effects in the brain which are related to looking (i.e. directing your
>>> eyeballs to move around) from those related to seeing (the appearance of
>>> images, colors, etc) is that the activity in the V1 is exactly the same
>>> whether the person sees anything or not.
>>>
>>> What the visual reconstruction is based on is the activity in the
>>> occipitotemporal visual cortex. (downstream of V1
>>> http://www.sciencedirect.com/sc

Re: Re: Subjective states can be somehow extracted from brains via acomputer

2013-01-07 Thread Craig Weinberg


On Monday, January 7, 2013 6:19:33 AM UTC-5, telmo_menezes wrote:
>
>
>
>
> On Sun, Jan 6, 2013 at 8:55 PM, Roger Clough 
> > wrote:
>
>>  Hi Craig Weinberg 
>>  
>> Sorry, everybody, I was snookered into believing that they had really 
>> accomplished the impossible.
>>
>
> So you think this paper is fiction and the video is fabricated? Do people 
> here know something I don't about the authors?
>

The paper doesn't claim that images from the brain have been decoded, but 
the sensational headlines imply that is what they did. The video isn't 
supposed to be anything but fabricated. It's a muddle of YouTube videos 
superimposed upon each other according to a Bayesian probability reduction. 
Did you think that the video was coming from a brain feed like a TV 
broadcast? It is certainly not that at all.
 

>
> The hypothesis is that the brain has some encoding for images. 
>

Where are the encoded images decoded into what we actually see?
 

> These images can come from the optic nerve, they could be stored in memory 
> or they could be constructed by sophisticated cognitive processes related 
> to creativity, pattern matching and so on. But if you believe that the 
> brain's neural network is a computer responsible for our cognitive 
> processes, the information must be stored there, physically, somehow.
>

That is the assumption, but it is not necessarily a good one. The problem 
is that information is only understandable in the context of some form of 
awareness - an experience of being informed. A machine with no user can 
only produce different kinds of noise as there is nothing ultimately to 
discern the difference between a signal and a non-signal.


> It's horribly hard to decode what's going on in the brain.
>

Yet every newborn baby learns to do it all by themselves, without any sign 
of any decoding theater.
 

>
> These researchers thought of a clever shortcut. They expose people to a 
> lot of images and record some measures of brain activity in the visual 
> cortex. Then they use machine learning to match brain states to images. Of 
> course it's probabilistic and noisy. But then they got a video that 
> actually approximates the real images. 
>

You might get the same result out of precisely mapping the movements of the 
eyes instead. What they did may have absolutely nothing to do with how the 
brain encodes or experiences images, no more than your Google history can 
approximate the shape of your face.
 

> So there must be some way to decode brain activity into images.
>
> The killer argument against that is that the brain has no sync signals to 
>> generate
>> the raster lines.
>>
>
> Neither does reality, but we somehow manage to show a representation of it 
> on tv, right?
>

What human beings see on TV simulates one optical environment with another 
optical environment. You need to be a human being with a human visual 
system to be able to watch TV and mistake it for a representation of 
reality. Some household pets might be briefly fooled also, but mostly other 
species have no idea why we are staring at that flickering rectangle, or 
buzzing plastic sheet, or that large collection of liquid crystal flags. 
Representation is psychological, not material. The map is not the territory.

Craig

 
>
>>   
>>  
>> [Roger Clough], [rcl...@verizon.net] 
>> 1/6/2013 
>> "Forever is a long time, especially near the end." - Woody Allen
>>
>> - Receiving the following content - 
>> *From:* Craig Weinberg  
>> *Receiver:* everything-list  
>> *Time:* 2013-01-05, 11:37:17
>> *Subject:* Re: Subjective states can be somehow extracted from brains 
>> via acomputer
>>
>>  
>>
>> On Saturday, January 5, 2013 10:43:32 AM UTC-5, rclough wrote: 
>>>
>>>
>>> Subjective states can somehow be extracted from brains via a computer. 
>>>
>>
>> No, they can't.
>>  
>>
>>>
>>> The ingenious folks who were miraculously able to extract an image from 
>>> the brain 
>>> that we saw recently 
>>>
>>  
>>  
>>> http://gizmodo.com/5843117/scientists-reconstruct-video-clips-from-brain-activity
>>>  
>>>
>>> somehow did it entirely through computation. How was that possible? 
>>>
>>
>> By passing off a weak Bayesian regression analysis as a terrific 
>> consciousness breakthrough. Look again at the image comparisons. There is 
>> nothing being reconstructed, there is only the visual noise of many 
>> superimpos

Re: Re: Subjective states can be somehow extracted from brains via acomputer

2013-01-07 Thread Telmo Menezes
On Sun, Jan 6, 2013 at 8:55 PM, Roger Clough  wrote:

>  Hi Craig Weinberg
>
> Sorry, everybody, I was snookered into believing that they had really
> accomplished the impossible.
>

So you think this paper is fiction and the video is fabricated? Do people
here know something I don't about the authors?

The hypothesis is that the brain has some encoding for images. These images
can come from the optic nerve, they could be stored in memory or they could
be constructed by sophisticated cognitive processes related to creativity,
pattern matching and so on. But if you believe that the brain's neural
network is a computer responsible for our cognitive processes, the
information must be stored there, physically, somehow.

It's horribly hard to decode what's going on in the brain.

These researchers thought of a clever shortcut. They expose people to a lot
of images and record some measures of brain activity in the visual cortex.
Then they use machine learning to match brain states to images. Of course
it's probabilistic and noisy. But then they got a video that actually
approximates the real images. So there must be some way to decode brain
activity into images.
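The matching step described above can be sketched in a few lines: learn a mapping from stimulus features to brain responses, then decode a new recording by picking the candidate stimulus whose predicted response best matches it. This is a toy illustration with invented data and names, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 200 stimuli (50-dim image features)
# and noisy 30-voxel "brain responses" generated by a linear map.
true_map = rng.normal(size=(50, 30))
stimuli = rng.normal(size=(200, 50))
responses = stimuli @ true_map + 0.1 * rng.normal(size=(200, 30))

# Fit the encoding model (ridge regression via the normal equations).
lam = 1.0
W = np.linalg.solve(stimuli.T @ stimuli + lam * np.eye(50),
                    stimuli.T @ responses)

# Decoding: given a new recording, choose the candidate stimulus
# whose predicted response correlates best with the recording.
def decode(recording, candidates):
    preds = candidates @ W
    pz = (preds - preds.mean(1, keepdims=True)) / preds.std(1, keepdims=True)
    rz = (recording - recording.mean()) / recording.std()
    scores = (pz * rz).mean(axis=1)
    return int(np.argmax(scores))

# Sanity check: recover which stimulus produced a held-out recording.
idx = 17
recording = stimuli[idx] @ true_map + 0.1 * rng.normal(size=30)
print(decode(recording, stimuli))
```

As the thread notes, the result is probabilistic: decoding succeeds only to the extent that the learned correlations generalize, and only for 3p information.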

The killer argument against that is that the brain has no sync signals to
> generate
> the raster lines.
>

Neither does reality, but we somehow manage to show a representation of it
on tv, right?


>
>
> [Roger Clough], [rclo...@verizon.net] 
> 1/6/2013
> "Forever is a long time, especially near the end." - Woody Allen
>
> - Receiving the following content -
> *From:* Craig Weinberg 
> *Receiver:* everything-list 
> *Time:* 2013-01-05, 11:37:17
> *Subject:* Re: Subjective states can be somehow extracted from brains via
> acomputer
>
>
>
> On Saturday, January 5, 2013 10:43:32 AM UTC-5, rclough wrote:
>>
>>
>> Subjective states can somehow be extracted from brains via a computer.
>>
>
> No, they can't.
>
>
>>
>> The ingenious folks who were miraculously able to extract an image from
>> the brain
>> that we saw recently
>>
>
>
>> http://gizmodo.com/5843117/scientists-reconstruct-video-clips-from-brain-activity
>>
>> somehow did it entirely through computation. How was that possible?
>>
>
> By passing off a weak Bayesian regression analysis as a terrific
> consciousness breakthrough. Look again at the image comparisons. There is
> nothing being reconstructed, there is only the visual noise of many
> superimposed shapes which least dis-resembles the test image. It's not even
> stage magic, it's just a search engine.
>
>
>>
>> There are at least two imaginable theories, neither of which I can
>> explain step by step:
>>
>
>
> What they did was take lots of images and correlate patterns in the V1
> region of the brain with those that corresponded V1 patterns in others who
> had viewed the known images. It's statistical guesswork and it is complete
> crap.
>
> "The computer analyzed 18 million seconds of random YouTube video,
> building a database of potential brain activity for each clip. From all
> these videos, the software picked the one hundred clips that caused a brain
> activity more similar to the ones the subject watched, combining them into
> one final movie"
>
> Crick and Koch found in their 1995 study that
>
> "The conscious visual representation is likely to be distributed over more
>> than one area of the cerebral cortex and possibly over certain subcortical
>> structures as well. We have argued (Crick and Koch, 1995a) that in
>> primates, contrary to most received opinion, it is not located in cortical
>> area V1 (also called the striate cortex or area 17). Some of the
>> experimental evidence in support of this hypothesis is outlined below. This
>> is not to say that what goes on in V1 is not important, and indeed may be
>> crucial, for most forms of vivid visual awareness. What we suggest is that
>> the neural activity there is not directly correlated with what is seen."
>>
>
> http://www.klab.caltech.edu/~koch/crick-koch-cc-97.html
>
> What was found in their study, through experiments which isolated the
> effects in the brain which are related to looking (i.e. directing your
> eyeballs to move around) from those related to seeing (the appearance of
> images, colors, etc) is that the activity in the V1 is exactly the same
> whether the person sees anything or not.
>
> What the visual reconstruction is based on is the activity in the
> occipitotemporal visual cortex. (downstream of V1
> http://www.sciencedirec

Re: Re: Subjective states can be somehow extracted from brains via acomputer

2013-01-06 Thread Roger Clough
Hi Craig Weinberg 

Sorry, everybody, I was snookered into believing that they had really 
accomplished the impossible.
The killer argument against that is that the brain has no sync signals to 
generate
the raster lines.


[Roger Clough], [rclo...@verizon.net]
1/6/2013 
"Forever is a long time, especially near the end." - Woody Allen
- Receiving the following content - 
From: Craig Weinberg 
Receiver: everything-list 
Time: 2013-01-05, 11:37:17
Subject: Re: Subjective states can be somehow extracted from brains via 
acomputer




On Saturday, January 5, 2013 10:43:32 AM UTC-5, rclough wrote:

Subjective states can somehow be extracted from brains via a computer. 


No, they can't.
 


The ingenious folks who were miraculously able to extract an image from the 
brain 
that we saw recently 


http://gizmodo.com/5843117/scientists-reconstruct-video-clips-from-brain-activity
 

somehow did it entirely through computation. How was that possible? 


By passing off a weak Bayesian regression analysis as a terrific consciousness 
breakthrough. Look again at the image comparisons. There is nothing being 
reconstructed, there is only the visual noise of many superimposed shapes which 
least dis-resembles the test image. It's not even stage magic, it's just a 
search engine.
 


There are at least two imaginable theories, neither of which I can explain step 
by step: 



What they did was take lots of images and correlate patterns in the V1 region 
of the brain with those that corresponded V1 patterns in others who had viewed 
the known images. It's statistical guesswork and it is complete crap.

"The computer analyzed 18 million seconds of random YouTube video, building a 
database of potential brain activity for each clip. From all these videos, the 
software picked the one hundred clips that caused a brain activity more similar 
to the ones the subject watched, combining them into one final movie"
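The "pick the hundred clips and combine them" step quoted above amounts to a similarity-weighted average of candidate frames from the clip library. A toy sketch with invented data (not the actual Gallant-lab pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

n_clips, n_voxels = 1000, 40
# Hypothetical database: predicted brain activity for each library clip,
# plus one representative frame per clip (8x8 grayscale for brevity).
predicted_activity = rng.normal(size=(n_clips, n_voxels))
frames = rng.uniform(size=(n_clips, 8, 8))

# Observed activity while the subject watched an unknown clip
# (here, secretly clip 123 plus noise).
observed = predicted_activity[123] + 0.3 * rng.normal(size=n_voxels)

# Score each library clip by similarity to the observed activity,
# keep the 100 best, and average their frames weighted by score.
scores = predicted_activity @ observed
top = np.argsort(scores)[-100:]
weights = scores[top] - scores[top].min()
weights /= weights.sum()
reconstruction = np.tensordot(weights, frames[top], axes=1)

print(reconstruction.shape)  # (8, 8)
```

This makes Craig's "search engine" characterization concrete: the output is a blend of pre-existing library clips ranked by similarity, not a signal read out of the brain.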

Crick and Koch found in their 1995 study that


"The conscious visual representation is likely to be distributed over more than 
one area of the cerebral cortex and possibly over certain subcortical 
structures as well. We have argued (Crick and Koch, 1995a) that in primates, 
contrary to most received opinion, it is not located in cortical area V1 (also 
called the striate cortex or area 17). Some of the experimental evidence in 
support of this hypothesis is outlined below. This is not to say that what goes 
on in V1 is not important, and indeed may be crucial, for most forms of vivid 
visual awareness. What we suggest is that the neural activity there is not 
directly correlated with what is seen."


http://www.klab.caltech.edu/~koch/crick-koch-cc-97.html

What was found in their study, through experiments which isolated the effects 
in the brain which are related to looking (i.e. directing your eyeballs to move 
around) from those related to seeing (the appearance of images, colors, etc) is 
that the activity in the V1 is exactly the same whether the person sees 
anything or not. 

What the visual reconstruction is based on is the activity in the 
occipitotemporal visual cortex. (downstream of V1 
http://www.sciencedirect.com/science/article/pii/S0079612305490196)


"Here we present a new motion-energy [10,
11] encoding model that largely overcomes this limitation.
The model describes fast visual information and slow hemodynamics
by separate components. We recorded BOLD
signals in occipitotemporal visual cortex of human subjects
who watched natural movies and fit the model separately
to individual voxels." 
https://sites.google.com/site/gallantlabucb/publications/nishimoto-et-al-2011


So what they did is analogous to tracing the rectangle pattern that your eyes 
make when generally tracing the contrast boundary of a door-like image and then 
comparing that pattern to patterns made by other people's eyes tracing the 
known images of doors. It's really no closer to any direct access to your 
interior state than any data-mining advertiser gets by chasing after your web 
history to determine that you might buy prostate vitamins if you are watching a 
Rolling Stones YouTube.



1) Computers are themselves conscious (which can neither be proven nor 
disproven) 
and are therefore capable of perception. 


Nothing can be considered conscious unless it has the capacity to act in its 
own interest. Computers, by virtue of their perpetual servitude to human will, 
are not conscious.
 


or 

2) The flesh of the brain is simultaneously objective and subjective. 
Thus an ordinary (by which I mean not conscious) computer can work on it 
objectively yet produce a subjective image by some manipulation of the 
flesh 
of the brain. One perhaps might call this "milking" of the brain.   


The flesh of the brain is indeed simultaneously objective and subjective (as 
are all living cells and perhaps all molecules and atoms),