On Jan 19, 2008, at 5:24 PM, Matt Mahoney wrote:
--- "Eliezer S. Yudkowsky" <[EMAIL PROTECTED]> wrote:
http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all
Turing also committed suicide.
In his case I understand that the British government saw fit to sentence
> So, people do have a practically useful way of cheating problems in NP
> now. Problem with AGI is, we don't know how to program it even given
> computers with infinite computational power.
Well, that is wrong IMO. AIXI and the Gödel Machine are provably correct
ways to achieve AGI with infin
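Tangentially, for readers following the SAT point above: the "practically useful way of cheating problems in NP" refers to solvers built around unit propagation plus backtracking search. Here is a toy DPLL-style sketch of that core loop in Python (the function name and clause encoding are my own illustration, nothing like the CDCL engines used in practice):

```python
# Toy DPLL-style SAT check. Clauses are lists of nonzero ints using the
# DIMACS convention: 3 means "x3 is true", -3 means "x3 is false".
def solve(clauses, assignment=None):
    assignment = dict(assignment or {})
    # Unit propagation: repeatedly satisfy single-literal clauses.
    changed = True
    while changed:
        changed = False
        simplified = []
        for clause in clauses:
            lits = []
            satisfied = False
            for lit in clause:
                value = assignment.get(abs(lit))
                if value is None:
                    lits.append(lit)          # still undecided
                elif (lit > 0) == value:
                    satisfied = True          # clause already true
                    break
                # else: literal is false, drop it
            if satisfied:
                continue
            if not lits:
                return None                   # empty clause: conflict
            if len(lits) == 1:
                assignment[abs(lits[0])] = lits[0] > 0
                changed = True
            else:
                simplified.append(lits)
        clauses = simplified
    if not clauses:
        return assignment                     # every clause satisfied
    # Branch on the first unassigned variable.
    var = abs(clauses[0][0])
    for value in (True, False):
        result = solve(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
model = solve([[1, 2], [-1, 3], [-2, -3]])
print(model is not None)  # prints True
```

Real solvers add clause learning, watched literals, and restart heuristics on top of this; the sketch only shows the propagate-then-branch skeleton.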
Regarding AGI research as potentially psychologically disturbing, there are
so many other ways to be psychologically disturbed in a postmodern world
that it may not matter :)
It's already hard for a lot of people to have a healthy level of self-esteem
or self-identity, and nihilism is not in shor
I believe that humans have the emotions that we do because of the
environment we evolved in. The more selfish/fearful/emotional you are, the
more likely you are to survive and reproduce. For humans, I think logic is a
sort of tool used to help us achieve happiness. Happiness is the
top-priority goa
On Jan 21, 2008 1:35 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> However, I would rephrase the question as: How would a pragmatically useful
> polynomial time solution of logical satisfiability affect AGI?
>
> In fact, it's interesting to talk about how existing SAT and SMT solvers
>
> http://en.
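To make concrete how a problem gets handed to a SAT solver, here is a small illustration of my own (the encoding and helper names are assumptions, not anything from the thread): graph 2-coloring reduced to CNF clauses and checked by exhaustive search. The exhaustive check is exponential in the number of variables, which is exactly the cost a practical solver tries to dodge on typical instances:

```python
from itertools import product

# Variable v_i is True if node i gets color A, False if color B.
# An edge (i, j) with two colors becomes two clauses:
#   (v_i or v_j) and (not v_i or not v_j)   i.e. v_i != v_j
def edges_to_cnf(edges):
    cnf = []
    for i, j in edges:
        cnf.append([i, j])
        cnf.append([-i, -j])
    return cnf

def brute_force_sat(cnf, n_vars):
    # Exponential in n_vars -- fine for a demo, hopeless at scale.
    for bits in product([False, True], repeat=n_vars):
        assign = {v + 1: bits[v] for v in range(n_vars)}
        if all(any((lit > 0) == assign[abs(lit)] for lit in clause)
               for clause in cnf):
            return assign
    return None

path = [(1, 2), (2, 3)]              # a path is 2-colorable
triangle = [(1, 2), (2, 3), (1, 3)]  # a triangle is not
print(brute_force_sat(edges_to_cnf(path), 3) is not None)      # True
print(brute_force_sat(edges_to_cnf(triangle), 3) is not None)  # False
```

The same clause lists could be fed unchanged to a real solver; the reduction, not the search strategy, is the part that carries over.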
On Jan 20, 2008 5:35 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> In AGI, we don't care that much about worst-case complexity, nor even
> necessarily about average-case complexity for very large N. We care mainly
> about average-case complexity for realistic N and for the specific probability
>
I wrote
On Jan 20, 2008 2:34 PM, Jim Bromer <[EMAIL PROTECTED]> wrote:
> I am disappointed because the question of how a polynomial time solution of
> logical satisfiability might affect AGI is very important to me.
Well, feel free to start a new thread on that topic, then ;-)
In fact, I will do so: I wi
--- Mike Dougherty <[EMAIL PROTECTED]> wrote:
> On Jan 19, 2008 8:24 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > --- "Eliezer S. Yudkowsky" <[EMAIL PROTECTED]> wrote:
> >
>
http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all
> >
> > Turing also committed suicide.
>
Regarding the suicide rates of geniuses or those with high intelligence, I
wouldn't be concerned:
> Berman says that the intelligence study is less useful than those that
> point to *risk factors like divorce or unemployment*. ''It's not as if I'm
> going to get more worried about my less intelli
I am disappointed because the question of how a polynomial time solution of
logical satisfiability might affect AGI is very important to me.
Jim Bromer
Ben Goertzel <[EMAIL PROTECTED]> wrote: Hi all,
I'd like to kill this thread, because not only is it off-topic, but it seems not
to be going any
Hi all,
I'd like to kill this thread, because not only is it off-topic, but it seems not
to be going anywhere remotely insightful or progressive.
Of course a polynomial-time solution to the boolean satisfiability
problem could potentially have an impact on AGI (though it wouldn't
necessarily do so -
Jim,
I'm sure most people here don't have any difficulty understanding what
you are talking about. You seem to lack solid understanding of these
basic issues however. Please stop this off-topic discussion, I'm sure
you can find somewhere else to discuss computational complexity. Read
a good textbo
I believe that a polynomial solution to the Logical Satisfiability problem
will have a major effect on AI, and I would like to discuss that at some time.
Jim Bromer
Richard Loosemore <[EMAIL PROTECTED]> wrote:
This thread has nothing to do with artificial general intelligence: is
there not a
I had no idea what you were talking about until I read
Matt Mahoney's remarks. I do not understand why people have so much trouble
reading my messages, but it is not entirely my fault. I may have misunderstood
something that I read, or you may have misinterpreted something that I was
saying.
Joshua Fox wrote:
> Turing also committed suicide.
And Chislenko. Each of these people had different circumstances, and
suicide strikes everywhere, but I wonder if there is a common thread.
"Ramanujan, like many other great mathematicians and achievers, died
young. There are on the other hand