[agi] Glocal memory

2008-11-24 Thread Ben Goertzel
A semi-technical essay on the global/local (aka glocal) nature of
memory is linked to from here

http://multiverseaccordingtoben.blogspot.com/

I wrote this a long while ago but just got around to posting it now...

ben



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"The empires of the future are the empires of the mind."
-- Sir Winston Churchill




Re: [agi] JAGI submission

2008-11-24 Thread Trent Waddington
On Tue, Nov 25, 2008 at 11:31 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> One of the problems in defining RSI in a mathematically rigorous way is 
> coming up with a definition that is also useful. If a system has input, then 
> there is really no definition that distinguishes self improvement from 
> learning, at least not one that people can agree on.

Why would you want to?  Who said RSI wasn't about learning?  It's
entirely about learning!

> Of course, a practical AGI is going to have input, so my definition seems to 
> be of little practical use. Nevertheless there are proposals along these 
> lines. My goal is to prove the limitations of these systems.

Which proposals?  By whom?  Maybe you should cite them in your paper.

> One example of a system without input would be a chess playing program that 
> improved its game by playing itself. One could imagine many approaches. For 
> example, suppose the program makes random variations in its source code and 
> plays these copies against each other in timed matches, keeping only the 
> winning variations. What are the limitations of this approach? Or consider a 
> more general approach to intelligence, where the parent gives its offspring 
> hard problems. Is it possible for superhuman intelligence to arise 
> spontaneously?

What enforces the rules of the game?  That said, I don't think a
self-playing chess program can learn anything useful about chess.. but
hey, maybe someone else has said, in print, that they do and made the
argument that this is a good way to go about making an intelligent
system.. you need to track down that work and address it directly.
Otherwise you're just arguing with yourself, and isn't your paper
saying that is futile?  :)

> What I show is that if we measure intelligence by computational efficiency, 
> then yes, but if we measure it by amount of knowledge, then no.

Sure, but I *think* everyone already knew that.  If you want to say
someone is wrong, you need to say who you are talking to.

> Anyway, I appreciate any comments that can be used to improve the paper. AGI 
> is a hard subject to write about, given the wide range of opinions and the 
> lack of proven results.

On that note, the paragraph on batch vs interactive is not indented
the same as the rest of the paragraphs.

Trent




Re: [agi] Hunting for a Brainy Computer

2008-11-24 Thread Ben Goertzel
Richard,

It might be more useful to discuss more recent papers by the same
authors regarding the same topic, such as the more accurately titled

***
Sparse but not "Grandmother-cell" coding in the medial temporal lobe.
Quian Quiroga R, Kreiman G, Koch C and Fried I.
Trends in Cognitive Sciences. 12: 87-91; 2008
***

at

http://www2.le.ac.uk/departments/engineering/extranet/research-groups/neuroengineering-lab/

-- Ben G

On Mon, Nov 24, 2008 at 1:32 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Ben Goertzel wrote:
>>
>> Hi,
>>
>> BTW, I just read this paper
>>
>>
>>> For example, in Loosemore & Harley (in press) you can find an analysis of
>>> a
>>> paper by Quiroga, Reddy, Kreiman, Koch, and Fried (2005) in which the
>>> latter
>>> try to claim they have evidence in favor of grandmother neurons (or
>>> sparse
>>> collections of grandmother neurons) and against the idea of distributed
>>> representations.
>>
>> which I found at
>>
>>  http://www.vis.caltech.edu/~rodri/
>>
>> and I strongly disagree that
>>
>>> We showed their conclusion to be incoherent.  It was deeply implausible,
>>> given the empirical data they reported.
>
>
> The claim that Harley and I made - which you quote above - was the
> *conclusion* sentence that summarized a detailed explanation of our
> reasoning.
>
> That reasoning was in our original paper, and I also went to the trouble of
> providing a longer version of it in one of my last posts on this thread.  I
> showed, in that argument, that their claims about sparse vs distributed
> representations were incoherent, because they had not thought through the
> implications contained in their own words - part of which you quote below.
>
> Merely quoting their words again, without resolving the inconsistencies that
> we pointed out, proves nothing.
>
> We analyzed that paper because it was one of several that engendered a huge
> amount of publicity.  All of that publicity - which, as far as we can see,
> the authors did not have any problem with - had to do with the claims about
> grandmother cells, sparseness and distributed representations.  Nobody - not
> I, not Harley, and nobody else as far as I know - disputes that the
> empirical data were interesting, but that is not the point:  we attacked
> their paper because of their conclusion about the theoretical issue of
> sparse vs distributed representations, and the wider issue about grandmother
> cells.  In that context, it is not true that, as you put it below, the
> authors "only [claimed] to have gathered some information on empirical
> constraints on how neural knowledge representation may operate".  They went
> beyond just claiming that they had gathered some relevant data:  they tried
> to say what that data implied.
>
>
>
> Richard Loosemore
>
>
>
>
>
>
>
>> Their conclusion, to quote them, is that
>>
>> "
>> How neurons encode different percepts is one of the most intriguing
>> questions in neuroscience. Two extreme hypotheses are
>> schemes based on the explicit representations by highly selective
>> (cardinal, gnostic or grandmother) neurons and schemes that rely on
>> an implicit representation over a very broad and distributed population
>> of neurons [1-4, 6]. In the latter case, recognition would require the
>> simultaneous activation of a large number of cells and therefore we
>> would expect each cell to respond to many pictures with similar basic
>> features. This is in contrast to the sparse firing we observe, because
>> most MTL cells do not respond to the great majority of images seen
>> by the patient. Furthermore, cells signal a particular individual or
>> object in an explicit manner [27], in the sense that the presence of the
>> individual can, in principle, be reliably decoded from a very small
>> number of neurons. We do not mean to imply the existence of single
>> neurons coding uniquely for discrete percepts for several reasons:
>> first, some of these units responded to pictures of more than one
>> individual or object; second, given the limited duration of our
>> recording sessions, we can only explore a tiny portion of stimulus
>> space; and third, the fact that we can discover in this short time some
>> images—such as photographs of Jennifer Aniston—that drive the
>> cells suggests that each cell might represent more than one class of
>> images. Yet, this subset of MTL cells is selectively activated by
>> different views of individuals, landmarks, animals or objects. This
>> is quite distinct from a completely distributed population code and
>> suggests a sparse, explicit and invariant encoding of visual percepts in
>> MTL.
>> "
>>
>> The only thing that bothers me about the paper is that the title
>>
>> "
>> Invariant visual representation by single neurons in
>> the human brain
>> "
>>
>> does not actually reflect the conclusions drawn.  A title like
>>
>> "
>> Invariant visual representation by sparse neuronal population encodings in
>> the human brain
>> "
>>
>> would have reflected their actual conclusions.
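
To make the sparse-versus-distributed contrast above concrete, here is a toy
Python sketch. It is purely illustrative and is not a model of the Quiroga et
al. data; every name and number in it is invented for the example. In the
sparse code each stimulus drives a handful of dedicated cells, so identity can
be read from very few neurons; in the distributed code every cell carries a
little graded information about every stimulus, so reliable decoding needs
many cells.

import random

N_CELLS, N_STIMULI = 100, 10
random.seed(0)

# Sparse code: each stimulus activates ~3 dedicated cells.
sparse = {s: set(random.sample(range(N_CELLS), 3)) for s in range(N_STIMULI)}

def sparse_decode(active_cells):
    # Identity is readable from a very small number of responsive neurons.
    return [s for s, cells in sparse.items() if cells <= active_cells]

# Distributed code: every cell responds, in a graded way, to every stimulus.
distributed = {s: [random.random() for _ in range(N_CELLS)]
               for s in range(N_STIMULI)}

def distributed_decode(noisy_pattern, k):
    # Match a noisy response pattern using only the first k cells.
    def dist(s):
        return sum((noisy_pattern[i] - distributed[s][i]) ** 2
                   for i in range(k))
    return min(range(N_STIMULI), key=dist)

stimulus = 7
print(sparse_decode(sparse[stimulus]))  # recovered from just 3 active cells

noisy = [x + random.gauss(0, 0.3) for x in distributed[stimulus]]
print(distributed_decode(noisy, k=3))   # few cells: decoding is often wrong
print(distributed_decode(noisy, k=60))  # many cells: decoding is usually right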

Re: [agi] JAGI submission

2008-11-24 Thread Matt Mahoney
One of the problems in defining RSI in a mathematically rigorous way is coming 
up with a definition that is also useful. If a system has input, then there is 
really no definition that distinguishes self improvement from learning, at 
least not one that people can agree on.

Of course, a practical AGI is going to have input, so my definition seems to be 
of little practical use. Nevertheless there are proposals along these lines. My 
goal is to prove the limitations of these systems.

One example of a system without input would be a chess playing program that 
improved its game by playing itself. One could imagine many approaches. For 
example, suppose the program makes random variations in its source code and 
plays these copies against each other in timed matches, keeping only the 
winning variations. What are the limitations of this approach? Or consider a 
more general approach to intelligence, where the parent gives its offspring 
hard problems. Is it possible for superhuman intelligence to arise 
spontaneously?
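
As a minimal sketch of the "mutate, self-play, keep the winner" loop described
above: the following Python toy replaces the chess program with a parameter
vector and the timed match with a stand-in scoring function, so none of it is
taken from the paper itself; it only illustrates closed-loop variation and
selection with no input.

import random

TARGET = [0.3, -0.7, 0.1, 0.9]  # hidden optimum standing in for "good chess"

def play_match(a, b):
    # Stand-in for a timed match: the candidate closer to TARGET wins.
    def err(p):
        return sum((x - t) ** 2 for x, t in zip(p, TARGET))
    return a if err(a) <= err(b) else b

def mutate(program, rate=0.1):
    # Random variation of the "source code" (here, just a parameter vector).
    return [x + random.gauss(0, rate) for x in program]

champion = [0.0, 0.0, 0.0, 0.0]
for _ in range(1000):
    challenger = mutate(champion)
    champion = play_match(champion, challenger)  # keep the winning variation

print(champion)  # drifts toward TARGET with no external input at all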

What I show is that if we measure intelligence by computational efficiency, 
then yes, but if we measure it by amount of knowledge, then no.

Anyway, I appreciate any comments that can be used to improve the paper. AGI is 
a hard subject to write about, given the wide range of opinions and the lack of 
proven results.

-- Matt Mahoney, [EMAIL PROTECTED]


--- On Mon, 11/24/08, Trent Waddington <[EMAIL PROTECTED]> wrote:

> From: Trent Waddington <[EMAIL PROTECTED]>
> Subject: Re: [agi] JAGI submission
> To: agi@v2.listbox.com
> Date: Monday, November 24, 2008, 7:58 PM
> I read the paper.
> 
> 
> 
> Although I see what you're trying to achieve in this
> paper, I think
> your conclusions are far from being, well, conclusive. 
> You've taken a
> couple of terms that are thrown around the AI/Singularity
> community,
> assigned an arbitrary mathematical definition of your own
> devising,
> then claimed (not very rigorously I might add) that your
> hypothesis
> is right.
> 
> This is basically half of a straw man paper.  You've
> come up with a
> definition that you must agree no-one else shares, and then
> you've
> failed to knock it down.
> 
> But at least for a little while this paper managed to
> capture my
> interest, and for that I thank you.
> 
> 
> 
> RSI has nothing to do with quines.  It's neat that you
> can write a
> program that will output itself.. but I'm not aware of
> anyone who has
> ever thought of RSI as involving such.  The futility of
> this paper is
> summed up in the last two words of the abstract:
> "without input".  Who
> ever said that RSI had anything to do with programs that
> had no input?
>  The whole freakin' purpose of intelligence is to react
> to a
> non-random but *complex* environment.  Input is what makes
> intelligence hard.  A program which has "more
> intelligence" than
> another program is the one that reacts better to the
> environment by
> some measure of fitness.  By taking input out of the
> argument you've
> taken intelligence out of the argument.
> 
> On Tue, Nov 25, 2008 at 10:20 AM, Matt Mahoney
> <[EMAIL PROTECTED]> wrote:
> > I submitted my paper "A Model for Recursively
> Self Improving Programs" to JAGI and it is ready for
> open review. For those who have already read it, it is
> essentially the same paper except that I have expanded the
> abstract. The paper describes a mathematical model of RSI in
> closed environments (e.g. boxed AI) and shows that such
> programs exist in a certain sense. It can be found here:
> >
> >
> http://journal.agi-network.org/Submissions/tabid/99/ctrl/ViewOneU/ID/9/Default.aspx
> >
> > JAGI has an open review process where anyone can
> comment, but you will need to register to do so. You
> don't need to register to read the paper. This is a new
> journal started by Pei Wang.
> >
> > -- Matt Mahoney, [EMAIL PROTECTED]
> >
> >
> 
> 




Re: [agi] JAGI submission

2008-11-24 Thread Trent Waddington
I read the paper.



Although I see what you're trying to achieve in this paper, I think
your conclusions are far from being, well, conclusive.  You've taken a
couple of terms that are thrown around the AI/Singularity community,
assigned an arbitrary mathematical definition of your own devising,
then claimed (not very rigorously I might add) that your hypothesis
is right.

This is basically half of a straw man paper.  You've come up with a
definition that you must agree no-one else shares, and then you've
failed to knock it down.

But at least for a little while this paper managed to capture my
interest, and for that I thank you.



RSI has nothing to do with quines.  It's neat that you can write a
program that will output itself.. but I'm not aware of anyone who has
ever thought of RSI as involving such.  The futility of this paper is
summed up in the last two words of the abstract: "without input".  Who
ever said that RSI had anything to do with programs that had no input?
The whole freakin' purpose of intelligence is to react to a
non-random but *complex* environment.  Input is what makes
intelligence hard.  A program which has "more intelligence" than
another program is the one that reacts better to the environment by
some measure of fitness.  By taking input out of the argument you've
taken intelligence out of the argument.
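
For reference, since the term comes up above: a quine is a program whose
output is exactly its own source code. A minimal Python example (illustrative
only; nothing here is implied about how the paper uses the construction):

# Running this prints the program's own two lines of source.
s = 's = %r\nprint(s %% s)'
print(s % s)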

On Tue, Nov 25, 2008 at 10:20 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> I submitted my paper "A Model for Recursively Self Improving Programs" to 
> JAGI and it is ready for open review. For those who have already read it, it 
> is essentially the same paper except that I have expanded the abstract. The 
> paper describes a mathematical model of RSI in closed environments (e.g. 
> boxed AI) and shows that such programs exist in a certain sense. It can be 
> found here:
>
> http://journal.agi-network.org/Submissions/tabid/99/ctrl/ViewOneU/ID/9/Default.aspx
>
> JAGI has an open review process where anyone can comment, but you will need 
> to register to do so. You don't need to register to read the paper. This is a 
> new journal started by Pei Wang.
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>
>




Re: [agi] JAGI submission

2008-11-24 Thread Pei Wang
On Mon, Nov 24, 2008 at 7:20 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> I submitted my paper "A Model for Recursively Self Improving Programs" to 
> JAGI and it is ready for open review. For those who have already read it, it 
> is essentially the same paper except that I have expanded the abstract. The 
> paper describes a mathematical model of RSI in closed environments (e.g. 
> boxed AI) and shows that such programs exist in a certain sense. It can be 
> found here:
>
> http://journal.agi-network.org/Submissions/tabid/99/ctrl/ViewOneU/ID/9/Default.aspx
>
> JAGI has an open review process where anyone can comment, but you will need 
> to register to do so. You don't need to register to read the paper. This is a 
> new journal started by Pei Wang.
>
> -- Matt Mahoney, [EMAIL PROTECTED]

Thanks, Matt, for supporting JAGI. I hope more and more people on this
mailing list will be willing to put their ideas forward in a more organized
manner, and to treat other people's ideas in the same way.

Two minor corrections:

(1) JAGI was started not just by me, but by a group of researchers ---
see http://journal.agi-network.org/EditorialBoard/tabid/92/Default.aspx

(2) The public review at JAGI is considered part of the journal's "peer
review" process. Therefore, it doesn't allow just anyone to post comments
on the journal website (of course, they can post them elsewhere). Instead,
only "AGI Network Members" can post reviews on the journal website.
Roughly speaking, "AGI Network Members" are researchers who have an
AI-related graduate degree, or graduate students working toward such a
degree. Other people are approved on a case-by-case basis.

Pei




RE: [agi] Entheogens, understanding the brain, and AGI

2008-11-24 Thread Ed Porter
Ben, 

Thanks for the clarification.

Unless Dennett is using words in an unusual manner, it would seem that some
of his statements below contradict what, I believe, is most people's own,
clear sense of having a subjective experience.  Even if it is just a
construct of our minds, if we subjectively experience it, it is real in a
certain, very important to us, sense.

Ed Porter

-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED] 
Sent: Monday, November 24, 2008 4:59 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Entheogens, understanding the brain, and AGI

Dennett argues that qualia do not exist ... he defines consciousness
as a purely external property, ignoring the notion of "subjective
experience" altogether

Tongue-halfway-in-cheek, he claimed that "we are all zombies"  ;-)

But, parsing out the particularities of his wording and its explicit
and implied semantics would require more work than I'm willing to put
into this right now...

ben

On Mon, Nov 24, 2008 at 4:36 PM, Ed Porter <[EMAIL PROTECTED]> wrote:
> Ben
>
>
>
> Is Dennett saying (a) there is no real sense of awareness associated with
> human consciousness, or only (b) that --- in a manner similar to the way in
> which life is created out of the complexity of biochemistry --- the human
> sense of consciousness is a construct created out of the complexity of the
> computations that take place in the human brain.
>
>
>
> The two are different.  Position (a) strikes me as a stupid statement (unless
> Dennett is a p-zombie, which is theoretically possible).  But (b) is quite
> reasonable, although not necessarily provable.
>
>
>
> I believe all of reality is "aware," in the sense that all its parts
> individually, in effect, sense and react to their surroundings, often
> creating complex feedback loops and vibes.  Presumably it would be clear,
> even according to position (b) above, that consciousness would be crafted
> out of such lower level consciousness.
>
>
>
> But it is not clear that the basic levels of awareness that the laws of
> physics tell us reality has within its own computing are sufficiently similar
> to the sense of awareness we humans call consciousness, to consider that our
> consciousness comes directly from that lower level awareness --- other than
> by the fact that the existence and computing of physical reality provides
> the substrate for layers of complexity and self-organization, that allow
> modeling and computations based on the regularities of sensed reality ---
> modeling and computations that operate at such a much higher level of
> organization than the basic level of awareness inherent in all of reality
> --- that human consciousness would appear to be something very different
> indeed.
>
>
>
> I have seen no aspect of brain science or other scientific information that
> indicates there is any direct connection between the level of awareness
> experienced by basic physical particles, atoms, and molecules, and
> subjective sense of awareness created in the human mind, other than that
> basic physical reality provides the foundations of being and computation
> upon which the much higher levels of organization that provide human
> awareness are built.
>
>
>
> If you have communicable evidence to the contrary, please enlighten me.
>
>
>
> Ed Porter
>
>
>
>
>
> -Original Message-
> From: Ben Goertzel [mailto:[EMAIL PROTECTED]
> Sent: Monday, November 24, 2008 2:57 PM
> To: agi@v2.listbox.com
> Subject: Re: [agi] Entheogens, understanding the brain, and AGI
>
>
>
> On Mon, Nov 24, 2008 at 1:30 PM, Ed Porter <[EMAIL PROTECTED]> wrote:
>
>> Since I assume Ben, as well as a lot of the rest of us, want the AGI
>
>> movement to receive respectability in the academic and particularly in the
>
>> funding community, it is probably best that other than brain-science- or
>
>> AGI-focused discussions of the effects of drugs should not become too
>> common
>
>> on the AGI list itself.  Ben, of course, is the ultimate decider of that.
>
>
>
>
>
>
>
> I'm never one to be overly concerned about "image" ;-) .. the question for
> me, regarding this topic on this list, is what the discussion contributes
>
> to the pursuit of building AGI?
>
>
>
> If use of mind-expanding drugs, or study of their neurological effects,
> reveals
>
> something of use in creating AGI, then discussion of the topic here is
> certainly
>
> welcome!
>
>
>
> For me, my experimentation with these substances did cement my previous
>
> inclination toward a panpsychist view of consciousness ... which led me to
> feel
>
> even more strongly that "engineering raw awareness" is something AGI
> designers
>
> don't need to worry about.  Raw awareness is already there, in the universe
> ...
>
> and different entities focus/manifest it in different ways.
>
>
>
> However, others may come to the conclusion that "engineering raw awareness"
>
> is not something they need to worry about in AGI design from a totally
> different
>
> direction ...

[agi] JAGI submission

2008-11-24 Thread Matt Mahoney
I submitted my paper "A Model for Recursively Self Improving Programs" to JAGI 
and it is ready for open review. For those who have already read it, it is 
essentially the same paper except that I have expanded the abstract. The paper 
describes a mathematical model of RSI in closed environments (e.g. boxed AI) 
and shows that such programs exist in a certain sense. It can be found here:

http://journal.agi-network.org/Submissions/tabid/99/ctrl/ViewOneU/ID/9/Default.aspx

JAGI has an open review process where anyone can comment, but you will need to 
register to do so. You don't need to register to read the paper. This is a new 
journal started by Pei Wang.

-- Matt Mahoney, [EMAIL PROTECTED]




Re: [agi] Entheogens, understanding the brain, and AGI

2008-11-24 Thread Ben Goertzel
Dennett argues that qualia do not exist ... he defines consciousness
as a purely external property, ignoring the notion of "subjective
experience" altogether

Tongue-halfway-in-cheek, he claimed that "we are all zombies"  ;-)

But, parsing out the particularities of his wording and its explicit
and implied semantics would require more work than I'm willing to put
into this right now...

ben

On Mon, Nov 24, 2008 at 4:36 PM, Ed Porter <[EMAIL PROTECTED]> wrote:
> Ben
>
>
>
> Is Dennett saying (a) there is no real sense of awareness associated with
> human consciousness, or only (b) that --- in a manner similar to the way in
> which life is created out of the complexity of biochemistry --- the human
> sense of consciousness is a construct created out of the complexity of the
> computations that take place in the human brain.
>
>
>
> The two are different.  Position (a) strikes me as a stupid statement (unless
> Dennett is a p-zombie, which is theoretically possible).  But (b) is quite
> reasonable, although not necessarily provable.
>
>
>
> I believe all of reality is "aware," in the sense that all its parts
> individually, in effect, sense and react to their surroundings, often
> creating complex feedback loops and vibes.  Presumably it would be clear,
> even according to position (b) above, that consciousness would be crafted
> out of such lower level consciousness.
>
>
>
> But it is not clear that the basic levels of awareness that the laws of
> physics tell us reality has within its own computing are sufficiently similar
> to the sense of awareness we humans call consciousness, to consider that our
> consciousness comes directly from that lower level awareness --- other than
> by the fact that the existence and computing of physical reality provides
> the substrate for layers of complexity and self-organization, that allow
> modeling and computations based on the regularities of sensed reality ---
> modeling and computations that operate at such a much higher level of
> organization than the basic level of awareness inherent in all of reality
> --- that human consciousness would appear to be something very different
> indeed.
>
>
>
> I have seen no aspect of brain science or other scientific information that
> indicates there is any direct connection between the level of awareness
> experienced by basic physical particles, atoms, and molecules, and
> subjective sense of awareness created in the human mind, other than that
> basic physical reality provides the foundations of being and computation
> upon which the much higher levels of organization that provide human
> awareness are built.
>
>
>
> If you have communicable evidence to the contrary, please enlighten me.
>
>
>
> Ed Porter
>
>
>
>
>
> -Original Message-
> From: Ben Goertzel [mailto:[EMAIL PROTECTED]
> Sent: Monday, November 24, 2008 2:57 PM
> To: agi@v2.listbox.com
> Subject: Re: [agi] Entheogens, understanding the brain, and AGI
>
>
>
> On Mon, Nov 24, 2008 at 1:30 PM, Ed Porter <[EMAIL PROTECTED]> wrote:
>
>> Since I assume Ben, as well as a lot of the rest of us, want the AGI
>
>> movement to receive respectability in the academic and particularly in the
>
>> funding community, it is probably best that other than brain-science- or
>
>> AGI-focused discussions of the effects of drugs should not become too
>> common
>
>> on the AGI list itself.  Ben, of course, is the ultimate decider of that.
>
>
>
>
>
>
>
> I'm never one to be overly concerned about "image" ;-) .. the question for
> me, regarding this topic on this list, is what the discussion contributes
>
> to the pursuit of building AGI?
>
>
>
> If use of mind-expanding drugs, or study of their neurological effects,
> reveals
>
> something of use in creating AGI, then discussion of the topic here is
> certainly
>
> welcome!
>
>
>
> For me, my experimentation with these substances did cement my previous
>
> inclination toward a panpsychist view of consciousness ... which led me to
> feel
>
> even more strongly that "engineering raw awareness" is something AGI
> designers
>
> don't need to worry about.  Raw awareness is already there, in the universe
> ...
>
> and different entities focus/manifest it in different ways.
>
>
>
> However, others may come to the conclusion that "engineering raw awareness"
>
> is not something they need to worry about in AGI design from a totally
> different
>
> direction ... for instance, because they don't believe raw awareness exists
> at
>
> all in any  meaningful sense (this would seem to be Dennett's
>
> perspective).  That's
>
> OK too ... it seems to suggest he and I could find the same AGI
>
> designs acceptable
>
> for totally different reasons!
>
>
>
> Ben G
>
>
>
>
>

RE: [agi] Entheogens, understanding the brain, and AGI

2008-11-24 Thread Ed Porter
Ben

 

Is Dennett saying (a) there is no real sense of awareness associated with
human consciousness, or only (b) that --- in a manner similar to the way in
which life is created out of the complexity of biochemistry --- the human
sense of consciousness is a construct created out of the complexity of the
computations that take place in the human brain.  

 

The two are different.  Position (a) strikes me as a stupid statement (unless
Dennett is a p-zombie, which is theoretically possible).  But (b) is quite
reasonable, although not necessarily provable.

 

I believe all of reality is "aware," in the sense that all its parts
individually, in effect, sense and react to their surroundings, often
creating complex feedback loops and vibes.  Presumably it would be clear,
even according to position (b) above, that consciousness would be crafted
out of such lower level consciousness.

 

But it is not clear that the basic levels of awareness that the laws of
physics tell us reality has within its own computing are sufficiently similar
to the sense of awareness we humans call consciousness, to consider that our
consciousness comes directly from that lower level awareness --- other than
by the fact that the existence and computing of physical reality provides
the substrate for layers of complexity and self-organization, that allow
modeling and computations based on the regularities of sensed reality ---
modeling and computations that operate at such a much higher level of
organization than the basic level of awareness inherent in all of reality
--- that human consciousness would appear to be something very different
indeed.

 

I have seen no aspect of brain science or other scientific information that
indicates there is any direct connection between the level of awareness
experienced by basic physical particles, atoms, and molecules, and
subjective sense of awareness created in the human mind, other than that
basic physical reality provides the foundations of being and computation
upon which the much higher levels of organization that provide human
awareness are built.

 

If you have communicable evidence to the contrary, please enlighten me.

 

Ed Porter

 

 

-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED] 
Sent: Monday, November 24, 2008 2:57 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Entheogens, understanding the brain, and AGI

 

On Mon, Nov 24, 2008 at 1:30 PM, Ed Porter <[EMAIL PROTECTED]> wrote:

> Since I assume Ben, as well as a lot of the rest of us, want the AGI

> movement to receive respectability in the academic and particularly in the

> funding community, it is probably best that other than brain-science- or

> AGI-focused discussions of the effects of drugs should not become too
common

> on the AGI list itself.  Ben, of course, is the ultimate decider of that.

 

 

 

I'm never one to be overly concerned about "image" ;-) ... the question for
me, regarding this topic on this list, is what the discussion contributes

to the pursuit of building AGI?

 

If use of mind-expanding drugs, or study of their neurological effects,
reveals

something of use in creating AGI, then discussion of the topic here is
certainly

welcome!

 

For me, my experimentation with these substances did cement my previous

inclination toward a panpsychist view of consciousness ... which led me to
feel

even more strongly that "engineering raw awareness" is something AGI
designers

don't need to worry about.  Raw awareness is already there, in the universe
..

and different entities focus/manifest it in different ways.

 

However, others may come to the conclusion that "engineering raw awareness"

is not something they need to worry about in AGI design from a totally
different

direction ... for instance, because they don't believe raw awareness exists
at

all in any  meaningful sense (this would seem to be Dennett's

perspective).  That's

OK too ... it seems to suggest he and I could find the same AGI

designs acceptable

for totally different reasons!

 

Ben G

 

 



RE: [agi] Entheogens, understanding the brain, and AGI

2008-11-24 Thread Ed Porter
That there is so much other discussion of drug experiences on the web is one
of the reasons I think discussions of such experiences here should be limited
to discussions that attempt to add to the understanding of AGI or related
aspects of brain science.

-Original Message-
From: BillK [mailto:[EMAIL PROTECTED] 
Sent: Monday, November 24, 2008 3:16 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Entheogens, understanding the brain, and AGI

On Mon, Nov 24, 2008 at 7:51 PM, Eric Burton  wrote:
> This is a really good avenue of discussion for me.


You'all probably should join  rec.drugs.psychedelic



People are still posting there, so the black helicopters haven't taken
them all away yet.
(Of course they might all be FBI agents. It's happened before)  :)

There are many similar interest groups for you to choose from.


BillK




Re: [agi] Entheogens, understanding the brain, and AGI

2008-11-24 Thread Douglas Solomon
Ed Porter wrote:
> Since I assume Ben, as well as a lot of the rest of us, want the AGI
> movement to receive respectability in the academic and particularly in the 
> funding community, it is probably best that other than brain-science- or 
> AGI-focused discussions of the effects of drugs should not become too common 
> on the AGI list itself.  Ben, of course, is the ultimate decider of that.
>
> I remember the excitement I had over 3 to 4 decades ago when I experimented 
> with psychedelics (although at relatively low dosages), so I can sympathize 
> with the enthusiasms of current experimenters.  And I find some of the 
> written accounts of such experiments that I have read on the web to be very 
> thoughtful, at times reminiscent, and very interesting from a brain 
> science/AGI point of view.  But right now I am sufficiently busy with more 
> concrete realities that I am not in the market for such encounters.  
>
> I do think psychedelic experiences can shed valuable light on the extent to 
> which all perception is hallucination, just normally it is well tuned and 
> controlled hallucination.  
>
> For example, the experiences some have reported, including off list in this 
> discussion, of the sense of 3+1 D spacetime being shattered, or being 
> perceived as very different, is not a surprise if one considers that your 
> normal perception of space and time is an extremely complex and carefully 
> controlled hallucination.  If you substantially remove that control, it is 
> not surprising that, for example, a cubist-like deconstruction of spatial 
> perception might occur.  After all, your mind has to stitch together its 
> normal visual continuousness of 3D spatial reality from stereographic 
> projections onto V1, which because of jerky saccades of the eye, are a rapid, 
> disjointed, succession of grossly fish-eyed projections.  So when 
> psychedelics interfere with the normal process of stitching together 
> projections from V1 and/or V2 and from remembered matching patterns of shapes 
> and objects --- each having their own set of dimensions --- it is not 
> surprising that a very different perception of space could arise, including a 
> perception of a disjoint set of many more than 3+1 dimensions.
>
> With regard to perceptions of direct communicating with a myriad of other 
> consciousnesses, such as elves, this is not surprising either, since the 
> concept of unity of consciousness is also a construct generated by mental 
> behavior and mental models, as is the construct of 3D space.  Your brain is 
> capable of generating many voices, many senses of awareness at once.  But it 
> normally works best, for generating behavior that helps humans survive, to 
> have a greater, more distinct divide between what is conscious and what is 
> kept in the subconscious, so that greater focus on the problems and behaviors 
> at hand can be achieved.
>
> I am not, in any way trying to belittle the importance, nor "realness" of 
> psychedelic experiences, but I am saying that my study of brain science and 
> my own experiences decades ago with psychedelics make me think that one 
> cannot always trust one's perceptions, particularly when one is on 
> psychedelics.  
>
> All perception can be considered hallucinations, that is, constructs of the 
> brain --- but some hallucinations are more valuable for certain tasks than 
> others.
>
> I think psychedelics, if properly used, can be of sufficient worth, in
> helping humans better understand our own minds and spirits and their
> relationship to reality --- that --- if our society were more rational ---
> it probably should have some limited ritualized use of psychedelics, as have 
> many primitive societies.  But it is not clear to me yet how rational our 
> society is capable of being, particularly if drug use is too widely spread.  
> Our society is changing so rapidly that much of traditional folk wisdom is 
> out of date, and much of what has replaced it has been generated by 
> commercially driven culture, that is, by its very nature exploitative.
>
> I think such drugs can have great danger of removing people from important 
> aspects of reality.  As humanity starts spiraling ever faster into the 
> wormhole of the singularity, and as the world becomes more and more crowded, 
> polluted, and competitive, and the have-nots increasingly have more power, 
> and as the media can provide increasingly seductive non-realities, and as 
> machine superintelligences increasingly decrease the relative value of human
> work and human thought, I fear that truly mind-altering drugs, if used too 
> widely, could increase, rather than decrease, the chance that humanity will 
> fare well --- as civilization, as we know it, is increasingly and more 
> rapidly distorted by the momentous changes that face us.
>
> But I am 60 years old, so maybe my viewpoint is out of date.
>
> Ed Porter
>   
When I read this (silently [per this mailing list, as is my almost ever
p

Re: [agi] Entheogens, understanding the brain, and AGI

2008-11-24 Thread BillK
On Mon, Nov 24, 2008 at 7:51 PM, Eric Burton  wrote:
> This is a really good avenue of discussion for me.


You'all probably should join  rec.drugs.psychedelic



People are still posting there, so the black helicopters haven't taken
them all away yet.
(Of course they might all be FBI agents. It's happened before)  :)

There are many similar interest groups for you to choose from.


BillK




RE: [agi] Entheogens, understanding the brain, and AGI

2008-11-24 Thread Ed Porter
A correction (in all caps) to the second-to-last sentence of my post below:

"I fear that truly mind-altering drugs, if use too widely, could DECREASE,
rather than INCREASE, the chance that humanity will fare well --- as
civilization, as we know it, is increasingly and more rapidly distorted by
the momentous changes that face us."

-Original Message-
From: Ed Porter [mailto:[EMAIL PROTECTED] 
Sent: Monday, November 24, 2008 1:30 PM
To: agi@v2.listbox.com
Subject: [agi] Entheogens, understanding the brain, and AGI

Since I assume Ben, as well as a lot of the rest of us, want
the AGI movement to receive respectability in the academic and particularly
in the funding community, it is probably best that other than brain-science-
or AGI-focused discussions of the effects of drugs should not become too
common on the AGI list itself.  Ben, of course, is the ultimate decider of
that.

I remember the excitement I had over 3 to 4 decades ago when
I experimented with psychedelics (although at relatively low dosages), so I
can sympathize with the enthusiasms of current experimenters.  And I find
some of the written accounts of such experiments that I have read on the web
to be very thoughtful, at times reminiscent, and very interesting from a
brain science/AGI point of view.  But right now I am sufficiently busy with
more concrete realities that I am not in the market for such encounters.  

I do think psychedelic experiences can shed valuable light
on the extent to which all perception is hallucination, just normally it is
well tuned and controlled hallucination.  

For example, the experiences some have reported, including
off list in this discussion, of the sense of 3+1 D spacetime being
shattered, or being perceived as very different, is not a surprise if one
considers that your normal perception of space and time is an extremely
complex and carefully controlled hallucination.  If you substantially remove
that control, it is not surprising that, for example, a cubist-like
deconstruction of spatial perception might occur.  After all, your mind has
to stitch together its normal visual continuousness of 3D spatial reality
from stereographic projections onto V1, which because of jerky saccades of
the eye, are a rapid, disjointed, succession of grossly fish-eyed
projections.  So when psychedelics interfere with the normal process of
stitching together projections from V1 and/or V2 and from remembered
matching patterns of shapes and objects --- each having their own set of
dimensions --- it is not surprising that a very different perception of
space could arise, including a perception of a disjoint set of many more
than 3+1 dimensions.

With regard to perceptions of direct communicating with a
myriad of other consciousnesses, such as elves, this is not surprising
either, since the concept of unity of consciousness is also a construct
generated by mental behavior and mental models, as is the construct of 3D
space.  Your brain is capable of generating many voices, many senses of
awareness at once.  But it normally works best, for generating behavior that
helps humans survive, to have a greater, more distinct divide between what
is conscious and what is kept in the subconscious, so that greater focus on
the problems and behaviors at hand can be achieved.

I am not, in any way trying to belittle the importance, nor
"realness" of psychedelic experiences, but I am saying that my study of
brain science and my own experiences decades ago with psychedelics make me
think that one cannot always trust one's perceptions, particularly when one
is on psychedelics.  

All perception can be considered hallucinations, that is,
constructs of the brain --- but some hallucinations are more valuable for
certain tasks than others.

I think psychedelics, if properly used, can be of sufficient
worth, in helping humans better understand our own minds and spirits and
their relationship to reality --- that --- if our society were more rational
--- it probably should have some limited ritualized use of psychedelics, as
have many primitive societies.  But it is not clear to me yet how rational
our society is capable of being, particularly if drug use is too widely
spread.  Our society is changing so rapidly that much of traditional folk
wisdom is out of date, and much of what has replaced it has been generated by
commercially driven culture, that is, by its very nature exploitative.

I think such drugs can have great danger of removing people
from important aspects of reality.  As humanity starts spiraling ever faster
into the wormhole of the singularity, and as the world becomes more and more
crowded, polluted, and competitive, and the have-nots increasingly have more
power, and as the media can provide increasingly seductive non-realities ...

RE: [agi] Entheogens, understanding the brain, and AGI

2008-11-24 Thread Ed Porter
We could probably easily change various operating parameters of a conscious
AGI to give it altered states of consciousness.  For example, it might
occasionally be useful to de-tune normal operation of the machine, when
looking for novel approaches to problems. Or, for example, when trying to
have the machine create artistic works, it might be valuable to de-tune
certain aspects of its visual perception to inspire it to create new styles
of visual representation.
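
As a hedged sketch of what "de-tuning an operating parameter" could mean in
practice: one simple knob is a softmax temperature controlling how strictly a
system follows its own learned preferences. The Python below is purely
illustrative and not drawn from any real AGI design; all names and numbers
are invented.

import math
import random

def softmax_sample(scores, temperature):
    # High temperature flattens the preference ordering (the "de-tuned" state).
    weights = [math.exp(s / temperature) for s in scores]
    r = random.random() * sum(weights)
    acc = 0.0
    for option, w in enumerate(weights):
        acc += w
        if r <= acc:
            return option
    return len(weights) - 1

scores = [3.0, 1.0, 0.5, 0.2]  # learned preference over four approaches
normal = [softmax_sample(scores, 0.3) for _ in range(20)]   # sharp: exploit
detuned = [softmax_sample(scores, 3.0) for _ in range(20)]  # flat: explore
print(normal)   # mostly the top-ranked approach
print(detuned)  # scattered across approaches, a crude "altered state"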

 

-Original Message-
From: Robin Gane-McCalla [mailto:[EMAIL PROTECTED] 
Sent: Monday, November 24, 2008 2:23 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Entheogens, understanding the brain, and AGI

 

I think psychedelics and the psychedelic experience are much more
complicated than most people realize and you only go into a small instance
of their complexity.  However I'm not sure how useful they will be in trying
to build intelligence on a computer.  Computers can't take psychedelics,
psychedelics are substrate dependent, so much so that they affect humans
differently.  Hypothetically we could design psychedelics for computers but
I don't think that would be a good idea.

On Mon, Nov 24, 2008 at 10:30 AM, Ed Porter <[EMAIL PROTECTED]> wrote:

Since I assume Ben, as well as a lot of the rest of us, want the AGI
movement to receive respectability in the academic and particularly in the
funding community, it is probably best that other than brain-science- or
AGI-focused discussions of the effects of drugs should not become too common
on the AGI list itself.  Ben, of course, is the ultimate decider of that.

I remember the excitement I had over 3 to 4 decades ago when I experimented
with psychedelics (although at relatively low dosages), so I can sympathize
with the enthusiasms of current experimenters.  And I find some of the
written accounts of such experiments that I have read on the web to be very
thoughtful, at times reminiscent, and very interesting from a brain
science/AGI point of view.  But right now I am sufficiently busy with more
concrete realities that I am not in the market for such encounters.

I do think psychedelic experiences can shed valuable light on the extent to
which all perception is hallucination, just normally it is well tuned and
controlled hallucination.

For example, the experiences some have reported, including off list in this
discussion, of the sense of 3+1 D spacetime being shattered, or being
perceived as very different, is not a surprise if one considers that your
normal perception of space and time is an extremely complex and carefully
controlled hallucination.  If you substantially remove that control, it is
not surprising that, for example, a cubist-like deconstruction of spatial
perception might occur.  After all, your mind has to stitch together its
normal visual continuousness of 3D spatial reality from stereographic
projections onto V1, which because of jerky saccades of the eye, are a
rapid, disjointed, succession of grossly fish-eyed projections.  So when
psychedelics interfere with the normal process of stitching together
projections from V1 and/or V2 and from remembered matching patterns of
shapes and objects --- each having their own set of dimensions --- it is not
surprising that a very different perception of space could arise, including
a perception of a disjoint set of many more than 3+1 dimensions.

With regard to perceptions of direct communicating with a myriad of other
consciousnesses, such as elves, this is not surprising either, since the
concept of unity of consciousness is also a construct generated by mental
behavior and mental models, as is the construct of 3D space.  Your brain is
capable of generating many voices, many senses of awareness at once.  But it
normally works best, for generating behavior that helps humans survive, to
have a greater, more distinct divide between what is conscious and what is
kept in the subconscious, so that greater focus on the problems and
behaviors at hand can be achieved.

I am not, in any way trying to belittle the importance, nor "realness" of
psychedelic experiences, but I am saying that my study of brain science and
my own experiences decades ago with psychedelics make me think that one
cannot always trust one's perceptions, particularly when one is on
psychedelics.

All perception can be considered hallucinations, that is, constructs of the
brain --- but some hallucinations are more valuable for certain tasks than
others.

I think psychedelics, if properly used, can be of sufficient worth, in
helping humans better understand our own minds and spirits and their
relationship to reality --- that --- if our society were more rational ---
it probably should have some limited ritualized use of psychedelics, as
have many primitive societies.  But it is not clear to me yet how rational
our society is capable of being, particularly if drug use is too widely
spread.  Our society is changing so rapidly that much of traditional folk
wisdom is out of date ...

Re: [agi] Entheogens, understanding the brain, and AGI

2008-11-24 Thread Ben Goertzel
On Mon, Nov 24, 2008 at 1:30 PM, Ed Porter <[EMAIL PROTECTED]> wrote:
> Since I assume Ben, as well as a lot of the rest of us, want the AGI
> movement to receive respectability in the academic and particularly in the
> funding community, it is probably best that other than brain-science- or
> AGI-focused discussions of the effects of drugs should not become too common
> on the AGI list itself.  Ben, of course, is the ultimate decider of that.



I'm never one to be overly concerned about "image" ;-) ... the question for me,
regarding this topic on this list, is what the discussion contributes
to the pursuit
of building AGI?

If use of mind-expanding drugs, or study of their neurological effects, reveals
something of use in creating AGI, then discussion of the topic here is certainly
welcome!

For me, my experimentation with these substances did cement my previous
inclination toward a panpsychist view of consciousness ... which led me to feel
even more strongly that "engineering raw awareness" is something AGI designers
don't need to worry about.  Raw awareness is already there, in the universe ...
and different entities focus/manifest it in different ways.

However, others may come to the conclusion that "engineering raw awareness"
is not something they need to worry about in AGI design from a totally different
direction ... for instance, because they don't believe raw awareness exists at
all in any  meaningful sense (this would seem to be Dennett's
perspective).  That's
OK too ... it seems to suggest he and I could find the same AGI
designs acceptable
for totally different reasons!

Ben G




Re: [agi] Entheogens, understanding the brain, and AGI

2008-11-24 Thread Eric Burton
This is a really good avenue of discussion for me. Mind-changing
experiences are fully within my conversational comfort zone. I
actually think psychedelics are very nearly on topic for the AGI list
inasmuch as they are like a microscope or a telescope for the mind.
They produce new points of view and to some extent, a window into
otherwise invisible worlds. The difficulty in the use of psychedelics
as analytic apparatus would seem to be data collection ,_,

On 11/24/08, Robin Gane-McCalla <[EMAIL PROTECTED]> wrote:
> I think psychedelics and the psychedelic experience are much more
> complicated than most people realize and you only go into a small instance
> of their complexity.  However I'm not sure how useful they will be in trying
> to build intelligence on a computer.  Computers can't take psychedelics,
> psychedelics are substrate dependent, so much so that they affect humans
> differently.  Hypothetically we could design psychedelics for computers but
> I don't think that would be a good idea.
>
> On Mon, Nov 24, 2008 at 10:30 AM, Ed Porter <[EMAIL PROTECTED]> wrote:
>
>> Since I assume Ben, as well as a lot of the rest of us, want the AGI
>> movement to receive respectability in the academic and particularly in the
>> funding community, it is probably best that other than brain-science- or
>> AGI-focused discussions of the effects of drugs should not become too
>> common
>> on the AGI list itself.  Ben, of course, is the ultimate decider of that.
>>
>> I remember the excitement I had over 3 to 4 decades ago when I
>> experimented
>> with psychedelics (although at relatively low dosages), so I can
>> sympathize
>> with the enthusiasms of current experimenters.  And I find some of the
>> written accounts of such experiments that I have read on the web to be
>> very
>> thoughtful, at times reminiscent, and very interesting from a brain
>> science/AGI point of view.  But right now I am sufficiently busy with more
>> concrete realities that I am not in the market for such encounters.
>>
>> I do think psychedelic experiences can shed valuable light on the extent
>> to
>> which all perception is hallucination, just normally it is well tuned and
>> controlled hallucination.
>>
>> For example, the experiences some have reported, including off list in
>> this
>> discussion, of the sense of 3+1 D spacetime being shattered, or being
>> perceived as very different, is not a surprise if one considers that your
>> normal perception of space and time is an extremely complex and carefully
>> controlled hallucination.  If you substantially remove that control, it is
>> not surprising that, for example, a cubist-like deconstruction of spatial
>> perception might occur.  After all, your mind has to stitch together its
>> normal visual continuousness of 3D spatial reality from stereographic
>> projections onto V1, which because of jerky saccades of the eye, are a
>> rapid, disjointed, succession of grossly fish-eyed projections.  So when
>> psychedelics interfere with the normal process of stitching together
>> projections from V1 and/or V2 and from remembered matching patterns of
>> shapes and objects --- each having their own set of dimensions --- it is
>> not
>> surprising that a very different perception of space could arise,
>> including
>> a perception of a disjoint set of many more than 3+1 dimensions.
>>
>> With regard to perceptions of direct communicating with a myriad of other
>> consciousnesses, such as elves, this is not surprising either, since the
>> concept of unity of consciousness is also a construct generated by mental
>> behavior and mental models, as is the construct of 3D space.  Your brain
>> is
>> capable of generating many voices, many senses of awareness at once.  But
>> it
>> normally works best, for generating behavior that helps humans survive, to
>> have a greater, more distinct divide between what is conscious and what is
>> kept in the subconscious, so that greater focus on the problems and
>> behaviors at hand can be achieved.
>>
>> I am not, in any way trying to belittle the importance, nor "realness" of
>> psychedelic experiences, but I am saying that my study of brain science
>> and
>> my own experiences decades ago with psychedelics make me think that one
>> cannot always trust one's perceptions, particularly when one is on
>> psychedelics.
>>
>> All perception can be considered hallucinations, that is, constructs of
>> the
>> brain --- but some hallucinations are more valuable for certain tasks than
>> others.
>>
>> I think psychedelics, if properly used, can be of sufficient worth, in
>> helping humans better understand our own minds and spirits and their
>> relationship to reality --- that --- if our society were more rational ---
>> it probably should have some limited ritualized used of psychedelics, as
>> have many primitive societies.  But it is not clear to me yet how rational
>> our society is capable of being, particularly if drug use is too widely
>> spread.  Our so

Re: [agi] Entheogens, understanding the brain, and AGI

2008-11-24 Thread Eric Burton
> they are like a microscope or a telescope for the mind.

Meant to read as "for the study of the mind". What I am trying to get
at is the value of brain-change to brain design...

On 11/24/08, Eric Burton <[EMAIL PROTECTED]> wrote:
> This is a really good avenue of discussion for me. Mind-changing
> experiences are fully within my conversational comfort zone. I
> actually think psychedelics are very nearly on topic for the AGI list
> inasmuch as they are like a microscope or a telescope for the mind.
> They produce new points of view and to some extent, a window into
> otherwise invisible worlds. The difficulty in the use of psychedelics
> as analytic apparatus would seem to be data collection ,_,
> [snip]

Re: [agi] Entheogens, understanding the brain, and AGI

2008-11-24 Thread Robin Gane-McCalla
I think psychedelics and the psychedelic experience are much more
complicated than most people realize, and you only touch on a small part of
their complexity.  However, I'm not sure how useful they will be in trying
to build intelligence on a computer.  Computers can't take psychedelics;
psychedelics are substrate dependent, so much so that they affect different
humans differently.  Hypothetically we could design psychedelics for
computers, but I don't think that would be a good idea.

On Mon, Nov 24, 2008 at 10:30 AM, Ed Porter <[EMAIL PROTECTED]> wrote:

> Since I assume Ben, as well as a lot of the rest of us, want the AGI
> movement to receive respectability in the academic and particularly in the
> funding community, it is probably best that discussions of the effects of
> drugs, other than brain-science- or AGI-focused ones, not become too common
> on the AGI list itself.  Ben, of course, is the ultimate decider of that.
> [snip]

Re: [agi] Hunting for a Brainy Computer

2008-11-24 Thread Richard Loosemore

Ben Goertzel wrote:

> Hi,
>
> BTW, I just read this paper
>
>> For example, in Loosemore & Harley (in press) you can find an analysis of a
>> paper by Quiroga, Reddy, Kreiman, Koch, and Fried (2005) in which the latter
>> try to claim they have evidence in favor of grandmother neurons (or sparse
>> collections of grandmother neurons) and against the idea of distributed
>> representations.
>
> which I found at
>
>  http://www.vis.caltech.edu/~rodri/
>
> and I strongly disagree that
>
>> We showed their conclusion to be incoherent.  It was deeply implausible,
>> given the empirical data they reported.

The claim that Harley and I made - which you quote above - was the 
*conclusion* sentence that summarized a detailed explanation of our 
reasoning.


That reasoning was in our original paper, and I also went to the trouble 
of providing a longer version of it in one of my last posts on this 
thread.  I showed, in that argument, that their claims about sparse vs 
distributed representations were incoherent, because they had not 
thought through the implications contained in their own words - part of 
which you quote below.


Merely quoting their words again, without resolving the inconsistencies 
that we pointed out, proves nothing.


We analyzed that paper because it was one of several that engendered a 
huge amount of publicity.  All of that publicity - which, as far as we 
can see, the authors did not have any problem with - had to do with the 
claims about grandmother cells, sparseness and distributed 
representations.  Nobody - not I, not Harley, and nobody else as far as 
I know - disputes that the empirical data were interesting, but that is 
not the point:  we attacked their paper because of their conclusion 
about the theoretical issue of sparse vs distributed representations, 
and the wider issue about grandmother cells.  In that context, it is not 
true that, as you put it below, the authors "only [claimed] to have 
gathered some information on empirical constraints on how neural 
knowledge representation may operate".  They went beyond just claiming 
that they had gathered some relevant data:  they tried to say what that 
data implied.




Richard Loosemore








> Their conclusion, to quote them, is that
>
> "
> How neurons encode different percepts is one of the most intriguing
> questions in neuroscience. Two extreme hypotheses are
> schemes based on the explicit representations by highly selective
> (cardinal, gnostic or grandmother) neurons and schemes that rely on
> an implicit representation over a very broad and distributed population
> of neurons (refs 1–4, 6). In the latter case, recognition would require the
> simultaneous activation of a large number of cells and therefore we
> would expect each cell to respond to many pictures with similar basic
> features. This is in contrast to the sparse firing we observe, because
> most MTL cells do not respond to the great majority of images seen
> by the patient. Furthermore, cells signal a particular individual or
> object in an explicit manner (ref. 27), in the sense that the presence of
> the individual can, in principle, be reliably decoded from a very small
> number of neurons. We do not mean to imply the existence of single
> neurons coding uniquely for discrete percepts for several reasons:
> first, some of these units responded to pictures of more than one
> individual or object; second, given the limited duration of our
> recording sessions, we can only explore a tiny portion of stimulus
> space; and third, the fact that we can discover in this short time some
> images—such as photographs of Jennifer Aniston—that drive the
> cells suggests that each cell might represent more than one class of
> images. Yet, this subset of MTL cells is selectively activated by
> different views of individuals, landmarks, animals or objects. This
> is quite distinct from a completely distributed population code and
> suggests a sparse, explicit and invariant encoding of visual percepts in
> MTL.
> "
> [snip]

---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com

[agi] Entheogens, understanding the brain, and AGI

2008-11-24 Thread Ed Porter
Since I assume Ben, as well as a lot of the rest of us, want the AGI
movement to receive respectability in the academic and particularly in the
funding community, it is probably best that discussions of the effects of
drugs, other than brain-science- or AGI-focused ones, not become too common
on the AGI list itself.  Ben, of course, is the ultimate decider of that.

I remember the excitement I had over 3 to 4 decades ago when I experimented
with psychedelics (although at relatively low dosages), so I can sympathize
with the enthusiasms of current experimenters.  And I find some of the
written accounts of such experiments that I have read on the web to be very
thoughtful, at times reminiscent, and very interesting from a brain
science/AGI point of view.  But right now I am sufficiently busy with more
concrete realities that I am not in the market for such encounters.  

I do think psychedelic experiences can shed valuable light on the extent to
which all perception is hallucination, just normally it is well tuned and
controlled hallucination.  

For example, the experiences some have reported, including off list in this
discussion, of the sense of 3+1 D spacetime being shattered, or being
perceived as very different, is not a surprise if one considers that your
normal perception of space and time is an extremely complex and carefully
controlled hallucination.  If you substantially remove that control, it is
not surprising that, for example, a cubist-like deconstruction of spatial
perception might occur.  After all, your mind has to stitch together its
normal visual continuousness of 3D spatial reality from stereographic
projections onto V1, which because of jerky saccades of the eye, are a
rapid, disjointed succession of grossly fish-eyed projections.  So when
psychedelics interfere with the normal process of stitching together
projections from V1 and/or V2 and from remembered matching patterns of
shapes and objects --- each having their own set of dimensions --- it is not
surprising that a very different perception of space could arise, including
a perception of a disjoint set of many more than 3+1 dimensions.
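
To make that last point concrete, here is a toy sketch in Python --- a
one-dimensional invented "scene" and invented saccade numbers, with no
claim to model V1 --- showing that the same disjointed snapshots yield
either a coherent percept or a shattered one depending entirely on whether
the stitching control is intact:

import numpy as np

rng = np.random.default_rng(3)

scene = np.sin(np.linspace(0, 4 * np.pi, 200))    # an invented 1-D "world"
saccades = rng.integers(0, 180, size=40)          # jerky fixation points
snapshots = [scene[p:p + 20] for p in saccades]   # raw, disjointed samples

def stitch(positions):
    # Rebuild the scene by placing each snapshot where the visual system
    # believes it was taken --- the "control" discussed above.
    canvas, counts = np.zeros(200), np.zeros(200)
    for snap, p in zip(snapshots, positions):
        canvas[p:p + 20] += snap
        counts[p:p + 20] += 1
    return canvas / np.maximum(counts, 1)

intact = stitch(saccades)                    # control intact: a coherent scene
broken = stitch(rng.permutation(saccades))   # control disrupted: same data, shattered space
print(np.abs(intact - scene).mean(), np.abs(broken - scene).mean())

The snapshots themselves are identical in both cases; only the bookkeeping
that places them in space differs, which is the sense in which the
coherence of the percept lives in the control, not in the data.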

With regard to perceptions of direct communication with a myriad of other
consciousnesses, such as elves, this is not surprising either, since the
concept of unity of consciousness is also a construct generated by mental
behavior and mental models, as is the construct of 3D space.  Your brain is
capable of generating many voices, many senses of awareness at once.  But it
normally works best, for generating behavior that helps humans survive, to
have a greater, more distinct divide between what is conscious and what is
kept in the subconscious, so that greater focus on the problems and
behaviors at hand can be achieved.

I am not, in any way, trying to belittle the importance or "realness" of
psychedelic experiences, but I am saying that my study of brain science and
my own experiences decades ago with psychedelics make me think that one
cannot always trust one's perceptions, particularly when one is on
psychedelics.  

All perception can be considered hallucinations, that is, constructs of the
brain --- but some hallucinations are more valuable for certain tasks than
others.

I think psychedelics, if properly used, can be of sufficient worth in
helping humans better understand our own minds and spirits and their
relationship to reality --- that --- if our society were more rational ---
it probably should have some limited ritualized use of psychedelics, as
have many primitive societies.  But it is not clear to me yet how rational
our society is capable of being, particularly if drug use is too widely
spread.  Our society is changing so rapidly that much of traditional folk
wisdom is out of date, and much of what has replaced it has been generated
by commercially driven culture that is, by its very nature, exploitative.

I think such drugs can have great danger of removing people from important
aspects of reality.  As humanity starts spiraling ever faster into the
wormhole of the singularity, and as the world becomes more and more crowded,
polluted, and competitive, and the have-nots increasingly have more power,
and as the media can provide increasingly seductive non-realities, and as
machine superintelligences increasingly decrease the relative value of human
work and human thought, I fear that truly mind-altering drugs, if used too
widely, could decrease, rather than increase, the chance that humanity will
fare well --- as civilization, as we know it, is increasingly and more
rapidly distorted by the momentous changes that face us.

But I am 60 years old, so maybe my viewpoint is out of date.

Ed Porter





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com

Re: [agi] Hunting for a Brainy Computer

2008-11-24 Thread Mike Tintner

Ben,

Thanks for this analysis. V interesting. A question:

Are these investigations all being framed along the lines of: "are 
invariant representations encoded in single neurons/sparse neuronal 
populations/distributed neurons?" IOW, the *location* of the representation? 
Is anyone actually speculating about what *form* the invariant 
representation takes? What form, IOW, will the Jennifer Aniston concept take 
in the brain? Will it be, say, a visual face, or the symbols "Jennifer 
Aniston", or some mentalese abstract symbols (whatever they might be), or 
what? Until you speculate about the invariant form, it seems to me, your 
investigations are going to be somewhat confused.


Ben:
BTW, I just read this paper

"For example, in Loosemore & Harley (in press) you can find an analysis of a
paper by Quiroga, Reddy, Kreiman, Koch, and Fried (2005) in which the latter
try to claim they have evidence in favor of grandmother neurons (or sparse
collections of grandmother neurons) and against the idea of distributed
representations."

[snip]




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] Hunting for a Brainy Computer

2008-11-24 Thread Ben Goertzel
Hi,

BTW, I just read this paper


> For example, in Loosemore & Harley (in press) you can find an analysis of a
> paper by Quiroga, Reddy, Kreiman, Koch, and Fried (2005) in which the latter
> try to claim they have evidence in favor of grandmother neurons (or sparse
> collections of grandmother neurons) and against the idea of distributed
> representations.

which I found at

 http://www.vis.caltech.edu/~rodri/

and I strongly disagree that

> We showed their conclusion to be incoherent.  It was deeply implausible,
> given the empirical data they reported.

Their conclusion, to quote them, is that

"
How neurons encode different percepts is one of the most intriguing
questions in neuroscience. Two extreme hypotheses are
schemes based on the explicit representations by highly selective
(cardinal, gnostic or grandmother) neurons and schemes that rely on
an implicit representation over a very broad and distributed population
of neurons (refs 1–4, 6). In the latter case, recognition would require the
simultaneous activation of a large number of cells and therefore we
would expect each cell to respond to many pictures with similar basic
features. This is in contrast to the sparse firing we observe, because
most MTL cells do not respond to the great majority of images seen
by the patient. Furthermore, cells signal a particular individual or
object in an explicit manner (ref. 27), in the sense that the presence of the
individual can, in principle, be reliably decoded from a very small
number of neurons. We do not mean to imply the existence of single
neurons coding uniquely for discrete percepts for several reasons:
first, some of these units responded to pictures of more than one
individual or object; second, given the limited duration of our
recording sessions, we can only explore a tiny portion of stimulus
space; and third, the fact that we can discover in this short time some
images—such as photographs of Jennifer Aniston—that drive the
cells suggests that each cell might represent more than one class of
images. Yet, this subset of MTL cells is selectively activated by
different views of individuals, landmarks, animals or objects. This
is quite distinct from a completely distributed population code and
suggests a sparse, explicit and invariant encoding of visual percepts in
MTL.
"

The only thing that bothers me about the paper is that the title

"
Invariant visual representation by single neurons in
the human brain
"

does not actually reflect the conclusions drawn.  A title like

"
Invariant visual representation by sparse neuronal population encodings in
the human brain
"

would have reflected their actual conclusions a lot better.  But the paper's
conclusion clearly says

"
We do not mean to imply the existence of single
neurons coding uniquely for discrete percepts for several reasons:
"

I see some incoherence between the title and the paper's contents,
which is a bit frustrating, but no incoherence in the paper's conclusion,
nor between the data and the conclusion.

According to what the paper says, the authors do not claim to have
solved the neural knowledge representation problem, but only to have
gathered some information on empirical constraints on how neural
knowledge representation may operate.

-- Ben G
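
To make the readout argument in the quoted conclusion concrete, here is a
minimal sketch in Python. Everything in it is invented (1000 cells, 100
stimuli, binary responses); it is not a model of the Quiroga et al.
recordings, only of the logic that a sparse, explicit code lets a stimulus
be identified from very few of its responsive cells, while a broad
distributed code needs the joint activity of many more:

import numpy as np

rng = np.random.default_rng(0)
n_cells, n_stimuli = 1000, 100    # invented sizes, not the paper's data

def make_code(p_respond):
    # Binary response matrix: entry [s, c] is True if cell c fires to stimulus s.
    return rng.random((n_stimuli, n_cells)) < p_respond

def cells_needed(code, target=0, max_cells=50):
    # How many of the target's own responsive cells must fire jointly before
    # no other stimulus also drives all of them (an "explicit" readout)?
    responsive = np.flatnonzero(code[target])
    for k in range(1, min(max_cells, len(responsive)) + 1):
        probe = responsive[:k]
        impostors = code[:, probe].all(axis=1)   # stimuli driving all k cells
        if impostors.sum() == 1:                 # only the target remains
            return k
    return None

sparse = make_code(0.01)       # each cell fires to ~1% of images (sparse, selective)
distributed = make_code(0.5)   # each cell fires to ~50% of images (broad, distributed)

print("sparse:", cells_needed(sparse), "cells suffice")
print("distributed:", cells_needed(distributed), "cells suffice")

With these assumed numbers a sparse cell fires for the target and roughly
one other image in a hundred, so one or two cells usually pin the target
down, while the distributed code needs a joint readout of roughly
log2(100), about 7 cells; larger stimulus sets widen the gap.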


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-24 Thread Mike Tintner


Eric:
> I think your idea that ego loss is induced by a swelling of abstract
> senses, squeezing out the structures that deal with your self in an
> identificatory way, rings true.




I haven't followed this thread closely, but there is an aspect to it, I 
would argue, which is AGI-relevant. It's not so much ego-loss as 
ego-abandonment - "letting your self go" - which is central to mental 
illness. We are all capable of doing that under pressure - being highly 
conscious is painful, especially under difficult circumstances. We also all 
continually diminish (and heighten) our consciousness - diminish rather than 
abandon our self - by some form of "substance abuse", from hard drugs to 
mild stimulants like coffee and comfort food.


How is that AGI-relevant? Because a true AGI that is continually dealing 
with creative problems is, and has to be, continually afraid (along with 
other unpleasant emotions) - i.e. alert to the risks of things going wrong, 
which they always can - those problems may not be solved. And there is, and 
has to be, an issue of how much attention the self should pay to those fears 
(all part of the area of emotional (general) intelligence).


In extreme situations, of course, there will be an issue of 
self-extinction - suicide. When *should* an AGI commit suicide?





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-24 Thread Eric Burton
I remember reading that LSD caused a desegregation of brain faculties,
so that patterns of activity produced by normal operation in one
region can spill over into adjacent ones, where they're interpreted
bizarrely. However, the brain does not go to soup or static, but
rather explodes with novel noise or intense satori. So indeed,
something else is happening.
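
That "novel noise, not static" part has a simple information-level reading.
As a toy sketch (codebooks and sizes invented; no claim about real cortex):
hand a pattern from one region to a neighbouring region's decoder, and the
receiver still reads it as *some* item from its own vocabulary ---
structured content, just the wrong structure:

import numpy as np

rng = np.random.default_rng(2)

# Two "regions", each mapping 10 concepts to 50-unit activity patterns.
codebook_a = rng.standard_normal((10, 50))
codebook_b = rng.standard_normal((10, 50))

def decode(codebook, pattern):
    # Nearest-pattern readout: which of the region's concepts is closest?
    distances = ((codebook - pattern) ** 2).sum(axis=1)
    return int(np.argmin(distances))

pattern = codebook_a[3]                  # region A correctly encoding concept 3
print(decode(codebook_a, pattern))       # 3: the home region reads it back
print(decode(codebook_b, pattern))       # some other concept: bizarre, but structured
print(decode(codebook_b, rng.standard_normal(50)))  # even raw noise decodes to *something*

A nearest-pattern decoder never returns "static"; whatever spills in gets
bound to the closest concept the receiving region owns, which is at least
consistent with bizarre-but-structured interpretation.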

I think your idea that ego loss is induced by a swelling of abstract
senses, squeezing out the structures that deal with your self in an
identificatory way, rings true. It's a phenomenon one usually realizes
has occurred, rather than going through acutely -- that is, it's in
the midst of some other trial that one realizes the conventional self
has evaporated, or become thin and transparent like tissue.

The signal to noise ratio on content-heavy tryptamines is very high.
5-meo-DMT, which I mentioned, is actually light on content but does
reliably induce a sense of transcendence and universal oneness. I
don't know if 5-meo-DMT satori is an ideal example of the bare ego
death experience. It is certainly also found in stranger substances

Eric B

On 11/24/08, Ed Porter <[EMAIL PROTECTED]> wrote:
> Eric,
>
> Without knowing the scientifically measurable effects of the substance
> your post mentioned on the operation of the brain --- I am hypothesizing
> that the subjective experience you described could be caused, for example,
> by a greatly increased activation of neurons, or by a great decrease in
> the operations of the control and tuning mechanisms of the brain, such as
> those in the basal-ganglia/thalamic/cortical feedback loop.
> [snip]

RE: [agi] A paper that actually does solve the problem of consciousness

2008-11-24 Thread Ed Porter
Eric,

Without knowing the scientifically measurable effects of the substance your
post mentioned on the operation of the brain --- I am hypothesizing that the
subjective experience you described could be caused, for example, by a
greatly increased activation of neurons, or by a great decrease in the
operations of the control and tuning mechanisms of the brain, such as those
in the basal-ganglia/thalamic/cortical feedback loop.  This could result in
the large part of the brain that receives and perceives sensation and
emotion not being well modulated and gain-controlled, and not having the
normal higher-level attention-focusing processes --- run by the parts of
your brain that normally control your mind, the parts most normally
associated with self-control, and thus the self --- select which relatively
small parts of it get high degrees of activation; a scheme selected by
evolution so you, as an organism, can respond to those aspects of the
environment that are most relevant to serving your own purposes, as has
generally been necessary for the survival of our ancestors, from a
Darwinian standpoint.
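
One crude way to see the shape of this hypothesis --- offered only as a toy
sketch, with every number invented, not as a model of any real circuit ---
is a softmax-style competition in which a single gain parameter stands in
for the sharpening normally done by that control loop:

import numpy as np

rng = np.random.default_rng(1)

def units_carrying_90pct(gain, n_units=10_000):
    # Random sensory drive, sharpened into a competition by "gain";
    # returns how many units account for 90% of total activation.
    drive = rng.random(n_units)
    activity = np.exp(gain * drive)
    activity /= activity.sum()
    cumulative = np.cumsum(np.sort(activity)[::-1])
    return int(np.searchsorted(cumulative, 0.90)) + 1

for gain in (200.0, 50.0, 10.0, 0.0):   # strong control ... control removed
    print(f"gain={gain:>5}: ~{units_carrying_90pct(gain)} units carry 90% of the activity")

Turning the gain down never yields flat silence or pure noise; it simply
admits more and more units into the active set, which is qualitatively the
"revolution" described in the analogy below.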

To use a sociological analogy, it may be a temporary revolution, in which
the elites --- the portions of the pre-frontal lobe that normally control
the focus of attention of the brain through their domination of the basal
ganglia and the thalamus --- lose their ability to keep the mob, the
majority of the brain's neurons, in its place.  The result is that the
senses and emotions run wild, and the part of the brain dedicated to
representing the self --- instead of being able to control things --- is
overwhelmed and greatly outnumbered by the large portion of the brain
dedicated to emotion, sensation, and patterns within them --- so that
consciousness is much more directly felt, without any significant
interference from the self.

And being overwhelmed by this sensation, and by its awareness of the "being"
and "computation" (i.e., a sense of life) of the reality around us ---
uninterrupted by the control and voices of the self --- generates a strong
sensation that such sensed being is all and, thus, that we are one with it.

If anyone could give me a concise explanation, or a link to one, of the
scientifically studied effects on the brain of the chemicals that give such
experiences, I would be interested in reading it, to see to what extent it
agrees with the above hypothesis.

Ed Porter


-Original Message-
From: Eric Burton [mailto:[EMAIL PROTECTED] 
Sent: Sunday, November 23, 2008 10:50 PM
To: agi@v2.listbox.com
Subject: Re: [agi] A paper that actually does solve the problem of
consciousness

Ego death! This is not as pernicious as it sounds. The death/rebirth
trial is a standby of the psilocybin excursion. One realizes one's
self has vanished and is reincarnated into all the strangeness of life
on earth as if being born. Very much an experience of the physical
vessel being re-filled with new spirit stuff, some new soul overly
given to wonder at it all. A sensation at the heart of most tryptamine
raptures, I think... certainly more overlaid with alien imagery when
induced by say psilocin than say, five methoxy dmt. But with almost
all the tryptamine/indole hallucinogens this experience of "user
reboot" is often there

As if the user, not the machine, is rebooting.

Worthy, but outside list scope ._.


On 11/23/08, Ed Porter <[EMAIL PROTECTED]> wrote:
> Ben,
>
> I googled "ego loss" and found a lot of first person accounts of various
> experiences.  From an AGI/brain science standpoint they were quite
> interesting, but I can see why you might not want such accounts to be on
> this list, other than perhaps if they were copied from other sites, and
> accompanied by third party deconstruction from a brain science or AGI
> standpoint.
>
> In fact, some of the accounts were disturbing, and were actually written
> to be cautionary tales.  Some of these accounts described "ego death."
> Ego death appears to me to be quite distinct from what I had thought of as
> ego loss, because it appears to be associated with a sense of fearing
> death (which presumably one would not do if one had lost one's ego), which
> in some instances occurred after, or intermittently with, periods of
> having sensed a loss of ego, and was associated with a fear that one was
> permanently losing that sense of self that would be necessary for normal
> human existence.  Several people reported having disturbing repercussions
> from such trips for months or longer.
>
> But some of the people who reported ego loss said they felt it was a
> valuable experience.
>
> I forget exactly what various entheogens are supposed to do to the brain,
> from a measurable brain science standpoint, but several of the subjective
> accounts by people claiming to have taken very strong dosages of
> entheogens described experiences that would be compatible with loss of
> normal brain control mechanis