On 31 May 2012, at 08:02, Jason Resch wrote:



On Tue, May 29, 2012 at 12:55 PM, Bruno Marchal <marc...@ulb.ac.be> wrote:

On 29 May 2012, at 16:32, Jason Resch wrote:




The question I have in mind is "Does a brain produce consciousness, or does the brain filter consciousness?"

I had some thoughts on this same topic a few months ago. I was thinking about what the difference is between a God-mind that knows everything and an empty mind that knows nothing. Both contain zero information (in an information-theoretic sense), so perhaps if someone has no brain they become omniscient (in a certain sense).
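A toy way to see this (a sketch of my own, assuming for illustration a world of 64 independent binary facts): singling out which particular set of k facts a mind holds costs about log2(C(64, k)) bits, and that cost vanishes at both extremes.

    from math import comb, log2

    N = 64  # toy assumption: a world of 64 independent binary facts

    def bits_to_specify(k):
        # Bits needed to single out one particular set of k known facts.
        return log2(comb(N, k))

    print(bits_to_specify(0))       # empty mind: 0.0 bits
    print(bits_to_specify(N))       # mind holding every fact: 0.0 bits
    print(bits_to_specify(N // 2))  # a partial, particular mind: about 61 bits

The all-knowing mind and the empty mind are equally cheap to describe; only the partial, particular minds in between carry information.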

"In a certain sense". OK. (The devil is there). But an empty mind has still to be the mind of a machine, probably the virgin (unprogrammed) universal machine, or the Löbian one (I still dunno).


If we consider RSSA, our consciousness followed some path to get to the current moment.

Key point. I just used this in a reply on the FOAR list (where I explain UDA/AUDA).



If we look at brain development, we find that our consciousness formed from matter that was previously not conscious.

Not really. It is counter-intuitive, but matter is the last thing that emanates from the ONE (in Plato/Plotinus, in comp, and even in the information-theoretic view of QM explained by Ron Garrett, which you rightly compare to the comp consequence). Matter can even be seen as what God loses control over. It is almost pure absolute indetermination. Primitive matter is really a product of consciousness differentiation (cf. UDA). But I see what you mean. I think.



Therefore, there is some path from (null conscious state) -> (you), and perhaps there are paths from the null state to every possible conscious state.

Yes, and vice versa by amnesia, plausibly.


If so, then every time we go to sleep, or go under anesthesia, or die, we can wake up as anyone.

In a sense, we do that all the time. This points to the idea that there is only one (universal) dreaming person, and that personal identity is a relative illusion.




We "know" that consciousness is in "platonia", and that local brains are just relative universal numbers making possible for a person (in a large sense which can include an amoeba) to manifest itself relatively to its most probable computation/environment. But this does not completely answer the question. I think that many thinks that the more a brain is big, the more it can be conscious, which is not so clear when you take the reversal into account. It might be the exact contrary.

I think there are many tricks the brain employs against itself to aid the selfish propagation of its genes. One example is the concept of the ego (having an identity).

Agreed. As I said just above.


Many drugs can temporarily disable whatever mechanism in our brain creates this feeling, leading to ego death, feelings of connectedness, oneness with others or the universe, etc. Perhaps one of our ancestors always felt this way, but died out when the egoist gene developed and made its carriers exploit the egoless.

Probably. I think so.




And this might be confirmed by studies showing that missing some part of the brain, like half a hippocampus, can lead to a permanent feeling of presence. Recently this has been confirmed by studies showing that LSD and psilocybin decrease the activity of the brain during the hallucinogenic phases. And dissociative drugs disconnect parts of the brain, with a similar increase in the first-person experience. Clinical studies of near-death experiences might also provide evidence in that direction. Aldous Huxley made a similar proposal for mescaline.

This is basically explained with the Bp & Dt hypostases (Dt abbreviating ~B~t, consistency). By suppressing material in the brain you make the "B" poorer (you eliminate beliefs), but you thereby augment the possibilities, so you make the consistency Dt stronger. Eventually you come back to the universal consciousness of the virgin, simple universal numbers, perhaps.
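A minimal sketch of that trade-off, in an ad hoc possible-worlds toy of my own (not Bruno's arithmetical hypostases): reading Bp as "p holds in every world the agent still considers" and Dp as "p holds in at least one", widening the set of considered worlds strips away beliefs while enlarging what remains possible.

    from itertools import product

    atoms = ["p", "q", "r"]
    all_worlds = [dict(zip(atoms, values))
                  for values in product([True, False], repeat=len(atoms))]

    def believed(worlds):
        # Literals true in every considered world (the "B" side).
        return {a for a in atoms if all(w[a] for w in worlds)}

    def possible(worlds):
        # Literals true in at least one considered world (the "D" side).
        out = set()
        for a in atoms:
            if any(w[a] for w in worlds):
                out.add(a)
            if any(not w[a] for w in worlds):
                out.add("not " + a)
        return out

    opinionated = all_worlds[:1]  # one world considered: rich beliefs
    open_minded = all_worlds      # nothing ruled out: no beliefs left

    print(believed(opinionated), possible(opinionated))
    print(believed(open_minded), possible(open_minded))

The fully open agent believes nothing yet keeps every literal possible, which is the direction the "poorer B, stronger Dt" remark points.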

Here are some recent papers on this:

http://www.scientificamerican.com/article.cfm?id=do-psychedelics-expand-mind-reducing-brain-activity&WT.mc_id=SA_WR_20120523

http://www.pnas.org/content/early/2012/01/17/1119598109.short


Thanks for the links and your thoughts. They are, as always, very interesting.

Thanks Jason,

Bruno



PS I asked Colin on the FOR list if he is aware of the Human Brain Project, which is relevant for this thread, especially since they are aware of "simulating nature at some level":

http://www.humanbrainproject.eu/introduction.html



Has he replied on the FOR list? It seems he has been absent from this list for the past few days.

He has disappeared again, apparently.

Best,

Bruno




Jason





If you have _everything_ in your model (external world included), then you can simulate it. But you don’t. So you can’t simulate it.


Would you stop behaving intelligently if the gravity and light from Andromeda stopped reaching us? If not, is _everything_ truly required?

The C-T Thesis is 100% right, but 100% irrelevant to the process at hand: encountering the unknown.


It is not irrelevant in the theoretical sense. It implies: "_If_ we knew what algorithms to use, we could implement human-level intelligence in a computer." Do you agree with this?





The C-T Thesis is irrelevant, so you need to get a better argument from somewhere and start to answer some of the points in my story:



Q. Why doesn’t a computed model of fire burst into flames?



If this question is serious, it indicates to me that you might not understand what a computer is. If it's not serious, why ask it?

There is a burst of flames (in the computed model). Just as in a computed model of a brain, there will be intelligence within the model. We can peer into the model to obtain the results of the intelligent behavior, since intelligent behavior can be represented as information.

Similarly, we can peer into the model of the fire to obtain an understanding of what happened during the combustion and see all the by-products. What we cannot do is peer into a simulated model of fire and extract the physical by-products of the combustion, nor can we peer into the model of the simulated brain and extract neurotransmitters or blood vessels.
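A toy sketch of that difference (my own example, not anything from the papers above): a simulated combustion whose "by-products" exist only as numbers we can read out.

    def simulate_combustion(fuel_units):
        # Toy model: each unit of fuel becomes one unit of CO2 plus some heat.
        # The 393.5 kJ/mol figure (carbon -> CO2) is only illustrative here.
        state = {"fuel": fuel_units, "co2": 0, "heat_kj": 0.0}
        while state["fuel"] > 0:
            state["fuel"] -= 1
            state["co2"] += 1
            state["heat_kj"] += 393.5
        return state

    print(simulate_combustion(10))
    # We can inspect everything that "happened", but no CO2 or heat ever left the computer.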

To me, this "fire argument" is as empty as saying "We can't take physical objects from our dreams with us into our waking life. Therefore we cannot dream."





This should be the natural expectation of anyone who thinks a computed model of cognition physics is cognition. You should be expected to answer this. Until this is answered I have no need to justify my position on building AGI. That is what my story is about. I am not assuming an irrelevant principle, or that I know how cognition works. I will build cognition physics and then learn how it works by using it. Like we normally do.


What will you build them out of? Biological neurons, or something else? What theory will you use to guide your pursuit, or will you, like Edison, try hundreds or thousands of different materials until you find one that works?




I don’t know how computer science got to the state it is in, but it’s got to stop. In this one special area it has done us a disservice.



This is my answer to everyone. I know all I’ll get is the usual party lines. Lavoisier had his phlogiston. I’ve got computationalism. Lucky me.



Cya!



Colin









http://iridia.ulb.ac.be/~marchal/


