Re: [agi] Logical Satisfiability...Get used to it.

2008-04-14 Thread Charles D Hixson

Jim Bromer wrote:

Ben G wrote: >>...<<
...
Concerning beliefs and scientific rationalism: Beliefs are the basis
of all thought.  To imply that religious belief might be automatically
different from rational beliefs is naïve.  However, I think there is
an advantage in defining what a rational thought is relative to AI
programming and how scientific rationalism is different from simple
rationalism.  I am going to write a few messages about this when I get
a chance.

By the way, I don't really see how a simple n^4 or n^3 SAT solver in
itself would be that useful for any immediate AGI project, but the
novel logical methods that the solver will reveal may be more
significant.

Jim Bromer
  
But religious beliefs *ARE* intrinsically different from rational 
beliefs.  They aren't the only such beliefs, but they are among them.  
Rational beliefs MUST be founded in other beliefs.  Rationalism does not 
provide a basis for generating beliefs ab initio, but only via reason, 
which requires both axioms and rules of inference.  (NARS may have 
eliminated the axioms, but I doubt it.  OTOH, I don't understand exactly 
how it works yet.)


Religion and other intrinsic beliefs are inherent in the construction of 
humans.  I suspect that every intelligent entity will require such 
beliefs.  Which particular religion is believed in isn't inherent, but 
is situational.  (Other factors may enter in, but I would need a clear 
explication of how that happened before I would believe that.)  Note 
that another inherent belief is "People like me are better than people 
who are different."  The fact that a belief is inherent doesn't mean it 
can't be overcome (or at least subdued) by counter-programming, merely 
that one will need to continually ward against it, or it will re-assert 
itself even if you know that it's wrong.


Saying that a belief is non-rational isn't denigrating it.  It's merely 
a statement that it isn't a built-in rule.  Even the persistence of 
forms doesn't seem to be totally built-in, though there are definitely 
lots of mechanisms that will tend to create it.  So in that case what's 
built in is a tendency to perceive the persistence of objects.  In the 
case of religion it's a bit more difficult to perceive what the built-in 
process is.  Plausibly it's a combination of several "tendencies to 
perceive patterns shaped like ..." in the world that aren't 
intrinsically connected, but which have been connected by culture.  Or 
it might be something else.  (The "blame/attribute everything to the big 
alpha baboon" theory isn't totally silly, but I find it unsatisfactory.  
It's at most a partial answer.)





Re: [agi] Logical Satisfiability...Get used to it.

2008-04-14 Thread Mark Waser

Concerning beliefs and scientific rationalism: Beliefs are the basis
of all thought.  To imply that religious belief might be automatically
different from rational beliefs is naïve.  However, I think there is
an advantage in defining what a rational thought is relative to AI
programming and how scientific rationalism is different from simple
rationalism.  I am going to write a few messages about this when I get
a chance.

I'm just finishing off a paper for the AAAI Fall BICA Symposium where I 
effectively argue that religious belief is a rational drive common to all 
goal-seeking entities.  I don't (by any means) hit people in the face with 
that exact statement but it's plainly evident from what I do write.  Far too 
many people have been tainted/turned-off by the irrational over-believers 
and now instinctively throw the baby out with the bathwater whenever 
religion comes up (Richard Dawkins, I'm looking at you :-).  People need to 
look at what some of the more rational religious leaders are saying (the 
Dalai Lama being an *excellent* case in point with his hard-core support for 
the scientific investigation of meditation and other subjects).


What do you perceive as the difference/distinction between scientific 
rationalism and simple rationalism?  I don't perceive them as being 
different at all.



- Original Message - 
From: "Jim Bromer" <[EMAIL PROTECTED]>

To: 
Sent: Monday, April 14, 2008 2:38 PM
Subject: [agi] Logical Satisfiability...Get used to it.


Ben G wrote: >>

FWIW, I wasn't joking about your algorithm's putative
divine inspiration in my role as moderator, but rather in my role
as individual list participant ;-)

Sorry that my sense of humor got on your nerves. I've had that effect
on people before!

Really though: if you're going to post messages in forums populated
by scientific rationalists, claiming divine inspiration for your ideas, you
really gotta expect **at minimum** some good-natured ribbing... !
-- Ben G
<<

I appreciate the fact that you did not intend your comments to be mean
spirited and that you were speaking as a participant not as the
moderator.  I also appreciate the fact that Waser realized that I
misunderstood his comment and made that clear.

I have annoyed quite a few people myself.  I am a little too critical
at times, but my criticisms are usually intended to provoke a deeper
examination of an idea of some kind.  (I only rarely use criticism as a
tool of wanton destruction!)

Concerning beliefs and scientific rationalism: Beliefs are the basis
of all thought.  To imply that religious belief might be automatically
different from rational beliefs is naïve.  However, I think there is
an advantage in defining what a rational thought is relative to AI
programming and how scientific rationalism is different from simple
rationalism.  I am going to write a few messages about this when I get
a chance.

By the way, I don't really see how a simple n^4 or n^3 SAT solver in
itself would be that useful for any immediate AGI project, but the
novel logical methods that the solver will reveal may be more
significant.

Jim Bromer





Re: [agi] Logical Satisfiability...Get used to it.

2008-03-31 Thread Mark Waser

Really though: if you're going to post messages in forums populated
by scientific rationalists, claiming divine inspiration for your ideas, you
really gotta expect **at minimum** some good-natured ribbing... !


And (speaking from crispy experience :-) if you try to create a new 
religion -- don flame-proof underwear.  :-)


You may also have misinterpreted Ben's deity in the basement/jar comments --  
they were directed at me (and tailored to my sense of humor) and NOT at all 
intended to belittle you and/or your beliefs (Ben ISN'T that kind of guy). 





Re: [agi] Logical Satisfiability...Get used to it.

2008-03-31 Thread Ben Goertzel
>  Thank you for your politeness and your insightful comments.  I am
>  going to quit this group because I have found that it is a pretty bad
>  sign when the moderator mocks an individual for his religious beliefs.

FWIW, I wasn't joking about your algorithm's putative
divine inspiration in my role as moderator, but rather in my role
as individual list participant ;-)

Sorry that my sense of humor got on your nerves.  I've had that effect
on people before!

Really though: if you're going to post messages in forums populated
by scientific rationalists, claiming divine inspiration for your ideas, you
really gotta expect **at minimum** some good-natured ribbing... !

-- Ben G



Re: [agi] Logical Satisfiability...Get used to it.

2008-03-31 Thread Jim Bromer
On Mon, Mar 31, 2008 at 9:46 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> All this talk about the Lord and SAT solvers has me thinking up variations
> to the Janis Joplin song
>
> http://www.azlyrics.com/lyrics/janisjoplin/mercedesbenz.html
> "
> Oh Lord, won't you buy me
> a polynomial-time SAT solution
> I'm counting on you Lord
> Don't leave me in destitution
>
> Prove that you love me
> And create a scientific revolution
> Oh Lord, won't you buy me
> a polynomial-time SAT solution
> "
> ... oh, whatever ...  ;-O
>
> -- Ben G

You win!  I am going to quit your group.



Re: [agi] Logical Satisfiability...Get used to it.

2008-03-31 Thread Jim Bromer
Yes.  SAT solvers act on sets of logical symbolic propositions.  This
can be effectively applied to logically closed (you know what I am
getting at) systems as well. Inductive systems are not logically
closed because new ideas may change the logical relationships of known
theories.  Also, any such system can also contain errors.   My thought
is, that we human beings can, and do create simple logical theories
about our environment.

I don't, however, think that human beings are SAT solvers either, but
I do think that they can hold a great deal of 'distributed' ideative
data that can be associated with some group (or groups) of theories.
These relations need to be learned, usually over a period of time.
But they can, somehow, be called into play when one theory (or a
variation of a theory) has some relevance to them.  Although this kind
of reaction may not be instantaneous, except for a well learned
associative relation, and although it may not work as a purely logical
analytical device, I believe that a SAT solver might be able to
simulate such situations and enhance contemporary rational-based AI
programs significantly.

I mentioned a limited general SAT solver that would be used with an
inductive AI program that would use logic to create theories about the
IO data environment, and about its own 'thinking'.  So there would
still be more complicated logical problems that my solver (if it works
at all) would not be able to handle.  What I have suggested here is
that there may be a threshold where a better solver, even a limited
solver, might actually allow a rational-based AI program (even one
working in an inductive non-monotonic, non-taxonomic theory-building
application) to advance significantly beyond contemporary AI programs.

I feel that a symbolic approach would be easier to start with and it
could be feasible with better insight and some stronger methods.  I
do, however, also feel that (what I think is) a gated recurrent
artificial neural network with n-space mapping (or bus-state mapping)
could be made to work, but this would in essence be very similar to a
hybrid approach.

Thank you for your politeness and your insightful comments.  I am
going to quit this group because I have found that it is a pretty bad
sign when the moderator mocks an individual for his religious beliefs.
However, I hope to talk to you again on some other forum.

Jim Bromer

On Mon, Mar 31, 2008 at 9:12 AM, Stephen Reed <[EMAIL PROTECTED]> wrote:
>
> Hi Jim,
> According to the Wikipedia article on SAT Solvers, there are extensions for
> quantified formulas, and first order logic.  Otherwise SAT solvers operate
> principally on sets of symbolic propositions.  Agreed?
>
> I believe that SAT solvers are not cognitively plausible.  More precisely, I
> believe that humans do not perform constraint satisfaction problem solving
> in a similar manner.  One might argue however that SAT solvers are already
> super-human in respect to their performance for certain problems (i.e.
> digital circuit design verification).
>
> Where do you stand in the symbolic AI vs. connectionist AI dimension of our
> audience?  On the symbolic side?
> -Steve
>  Stephen L. Reed
>
> Artificial Intelligence Researcher
> http://texai.org/blog
> http://texai.org
> 3008 Oak Crest Ave.
> Austin, Texas, USA 78704
> 512.791.7860
>
>
> - Original Message 
> From: Jim Bromer <[EMAIL PROTECTED]>
> To: agi@v2.listbox.com
> Sent: Monday, March 31, 2008 7:46:30 AM
> Subject: Re: [agi] Logical Satisfiability...Get used to it.
>
> I am going to try to summarize what I have said.
>
> With God's help, I may have discovered a path toward a method to
> achieve a polynomial time solution to Logical Satisfiability, and so
> from this vantage point I have started to ask the question of whether
> or not a feasible SAT solver would actually be useful in advancing
> general AI.
>
> I think that most knowledgeable people would assume that it would be.
> However, there has been some doubt about this so I came up with a
> logical model that might show how such a situation could make a
> critical difference to general AI programming.  Or at least AI
> programming that emphasizes the use of rational methods.
>
> My feeling right now, is that if my solver actually works it would
> take at least n^3 or n^4 steps.  This means, for all practical
> purposes, that it would stretch the range of general solvers from
> logical formulas of 20 distinct variables or so to formulas of
> hundreds or even thousands of characters long.  I believe that this
> would be a major advancement, both in general computing and in AI
> programming.
>
> But would it make any difference in general AI programming, what this
> group calls AGI?
>
> Imagining a system that used logical or ration

RE: [agi] Logical Satisfiability...Get used to it.

2008-03-31 Thread Derek Zahn
Jim Bromer writes:
> With God's help, I may have discovered a path toward a method to
> achieve a polynomial time solution to Logical Satisfiability
 
If you want somebody to talk about the solution, you're
more likely to get helpful feedback elsewhere as it is not a
topic that most of us on this list deal with or know a lot about.
 
Besides that, publish your result and it will be used if it is helpful.
 



Re: [agi] Logical Satisfiability...Get used to it.

2008-03-31 Thread Richard Loosemore

Jim Bromer wrote:

I am going to try to summarize what I have said.

With God's help, I may have discovered a path toward a method to
achieve a polynomial time solution to Logical Satisfiability, and so
from this vantage point I have started to ask the question of whether
or not a feasible SAT solver would actually be useful in advancing
general AI.

I think that most knowledgeable people would assume that it would be.
However, there has been some doubt about this so I came up with a
logical model that might show how such a situation could make a
critical difference to general AI programming.  Or at least AI
programming that emphasizes the use of rational methods.

My feeling right now, is that if my solver actually works it would
take at least n^3 or n^4 steps.  This means, for all practical
purposes, that it would stretch the range of general solvers from
logical formulas of 20 distinct variables or so to formulas of
hundreds or even thousands of characters long.  I believe that this
would be a major advancement, both in general computing and in AI
programming.

But would it make any difference in general AI programming, what this
group calls AGI?


Jim,

The problem is that even if your solution (which sounds not so much like 
a solution as a vague, God-inspired possibility that you may have found 
a path toward a method that might conceivably yield a solution) 
actually did what you claim it might do, that would still beg so many 
questions about what it means to build an AGI that it has almost no 
relevance.


There are only some kinds of AI that might benefit, and there are some 
people who would claim that, in fact, the types of AI that could benefit 
are dead in the water anyway (will never lead to an actual AGI).


So when that is added to your religious references and open admission 
that you only think you may have a solution, all kinds of red flags go up.




Richard Loosemore








Imagining a system that used logical or rational methods that might
initially be expressed in fairly simple logical terms, but which could
have hundreds of variants and hundreds of interconnections with the
other logical formulas of the system, I have come up with a case where
n^4 SAT might be critical.  The formulas of the system that I have in
mind would be speculative and derived from an inductive logical system
that was designed to be capable of learning.  I then pointed out that
some of the formulas produced by an automated system might have only a
few valid cases, meaning that a trial and error method of searching
for logical satisfiability would be very unlikely to work for those
formulas.  In this case the n^4 SAT solver would be very useful.  But
why would this be useful to AGI?

Remember, our programs are supposed to be adaptive and capable of
general learning.  If a fully automated AI program was truly learning
from the IO data environment, it would tend to create numerous
conjectures about it. Such a program would tend to create
insignificant conjectures that were founded on a great deal of trivial
evidence which could then be used to 'confirm' the conjecture.  Even
worse, it might (and will) produce meaningless garbage that was based
on methods like those that mush rational and non-rational responses or
made extensive use of over-generalization.  On the other hand a few
coincidences or over-generalizations could turn out to be very
meaningful.  So my theory is this.  If the program produced logical
theories of relations between events that occurred in the IO data
environment, then those theories that had only a few valid solutions
might be instrumental.  A complicated theory that only has a few valid
cases would, under certain conditions, be easier to prove or disprove
than a theory that can be 'verified' by tens of thousands of
combinations of trivial coincidences.  This is similar to Popper's
falsifiability theory in that it supposes that some theories have to
be strongly testable in order to advance science.  I do not mean to
suggest that falsifiability is absolute in an inductive system, just
that some key theories that are narrowly testable may be very
significant in the advancement of learning.  The rational-based
conjectures that only have a few solutions would therefore be better
for critical testing (as long as the solutions involved some kind of
observable event that was likely to occur under some conditions.)  And
a better general SAT solver would represent a major advancement in
discovering the conditions under which confirmatory or disconfirmatory
evidence for those kinds of theories could be found.

Perhaps people have objected to my messages about this because I
mentioned God, or perhaps they have objected to my question because
they believe a polynomial time solution to the SAT problem is
impossible.  On the other hand, there may be another objection to the
question simply because the answer is so blatantly obvious.  Of
course a polynomial time solution to SAT would be signific

Re: [agi] Logical Satisfiability...Get used to it.

2008-03-31 Thread Ben Goertzel
All this talk about the Lord and SAT solvers has me thinking up variations
to the Janis Joplin song

http://www.azlyrics.com/lyrics/janisjoplin/mercedesbenz.html

"
Oh Lord, won't you buy me
a polynomial-time SAT solution
I'm counting on you Lord
Don't leave me in destitution

Prove that you love me
And create a scientific revolution
Oh Lord, won't you buy me
a polynomial-time SAT solution
"

... oh, whatever ...  ;-O

More serious comments:

1)
Worst-case complexity doesn't matter much for AI.
What matters is average case across the class of
relevant problem instances

2)
O(n^4) doesn't matter if it's 99n^4

3)
Agree with Stephen Reed that SAT solvers have little to do
with human cognition ... but that doesn't mean they can't
be extremely useful within AGI architectures.  And SMT
solvers, I would suspect, even more so.

4)
I don't presently see any way to place a SAT solver at
the heart of an AGI architecture.  I can see how to place
them in valuable subsidiary roles.

Put crudely, SAT solvers can solve constraint satisfaction
problems really fast.  This may be useful.  But the crux of
an AGI system that uses SAT will still arguably lie in the
module that **formulates** the constraint satisfaction problems
in a contextually appropriate way (a toy sketch of this division of
labor follows below).

5)
If you can formulate the problem of **formulating a contextually
appropriate, computationally tractable constraint satisfaction
problem whose solution will allow a system to achieve its
general high level goals in a particular context** as a computationally
tractable constraint satisfaction problem ... THEN and only then, will
you have convinced me that a great SAT solver can
serve as the core of an AGI architecture.
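
The toy sketch promised above (Python; an illustration only, not Ben's code or
any system's actual module): a hypothetical "formulation" step that encodes a
small constraint problem -- 2-coloring a graph -- as CNF clauses, plus a
deliberately naive brute-force SAT back end.  The problem choice and names are
assumptions made just for this sketch.

from itertools import product

def formulate_coloring(edges):
    """Hypothetical 'formulation' module: encode 2-coloring of a graph as CNF.
    Variable i (1-based) means 'node i gets color A'; -i means 'color B'."""
    clauses = []
    for a, b in edges:
        clauses.append([a, b])      # the two endpoints are not both color B
        clauses.append([-a, -b])    # ... and not both color A
    return clauses

def brute_force_sat(clauses, n_vars):
    """Generic SAT back end: try every assignment (exponential; fine for toys)."""
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: bits[i] for i in range(n_vars)}
        if all(any(assign[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assign
    return None

# A path 1-2-3 is 2-colorable; a triangle is not.
print(brute_force_sat(formulate_coloring([(1, 2), (2, 3)]), 3))
print(brute_force_sat(formulate_coloring([(1, 2), (2, 3), (1, 3)]), 3))

The second function is completely generic; all of the contextual intelligence
lives in the first, which is the point being made in item 4.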

-- Ben G









On Mon, Mar 31, 2008 at 9:12 AM, Stephen Reed <[EMAIL PROTECTED]> wrote:
>
> Hi Jim,
> According to the Wikipedia article on SAT Solvers, there are extensions for
> quantified formulas, and first order logic.  Otherwise SAT solvers operate
> principally on sets of symbolic propositions.  Agreed?
>
> I believe that SAT solvers are not cognitively plausible.  More precisely, I
> believe that human's do not perform constraint satisfaction problem solving
> in a similar manner.  One might argue however that SAT solvers are already
> super-human in respect to their performance for certain problems (i.e.
> digital circuit design verification).
>
> Where do you stand in the symbolic AI vs. connectionist AI dimension of our
> audience?  On the symbolic side?
> -Steve
>  Stephen L. Reed
>
> Artificial Intelligence Researcher
> http://texai.org/blog
> http://texai.org
> 3008 Oak Crest Ave.
> Austin, Texas, USA 78704
> 512.791.7860
>
>
>
> ----- Original Message 
> From: Jim Bromer <[EMAIL PROTECTED]>
> To: agi@v2.listbox.com
>
> Sent: Monday, March 31, 2008 7:46:30 AM
> Subject: Re: [agi] Logical Satisfiability...Get used to it.
>
>  I am going to try to summarize what I have said.
>
> With God's help, I may have discovered a path toward a method to
> achieve a polynomial time solution to Logical Satisfiability, and so
> from this vantage point I have started to ask the question of whether
> or not a feasible SAT solver would actually be useful in advancing
> general AI.
>
> I think that most knowledgeable people would assume that it would be.
> However, there has been some doubt about this so I came up with a
> logical model that might show how such a situation could make a
> critical difference to general AI programming.  Or at least AI
> programming that emphasizes the use of rational methods.
>
> My feeling right now, is that if my solver actually works it would
> take at least n^3 or n^4 steps.  This means, for all practical
> purposes, that it would stretch the range of general solvers from
> logical formulas of 20 distinct variables or so to formulas of
> hundreds or even thousands of characters long.  I believe that this
> would be a major advancement, both in general computing and in AI
> programming.
>
> But would it make any difference in general AI programming, what this
> group calls AGI?
>
> Imagining a system that used logical or rational methods that might
> initially be expressed in fairly simple logical terms, but which could
> have hundreds of variants and hundreds of interconnections with the
> other logical formulas of the system, I have come up with a case where
> n^4 SAT might be critical.  The formulas of the system that I have in
> mind would be speculative and derived from an inductive logical system
> that was designed to be capable of learning.  I then pointed out that
> some of the formulas produced by an automated system might have only a
> few valid cases, meaning that a trial and error method of searc

Re: [agi] Logical Satisfiability...Get used to it.

2008-03-31 Thread Stephen Reed
Hi Jim, 
According to the Wikipedia article on SAT Solvers, there are extensions for 
quantified formulas, and first order logic.  Otherwise SAT solvers operate 
principally on sets of symbolic propositions.  Agreed?
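
As a concrete illustration of what "sets of symbolic propositions" means in
practice (an example added here, not part of Steve's message): a typical SAT
solver consumes a set of clauses over Boolean variables, often serialized in
the standard DIMACS CNF format.

# Example only: (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
# as a clause set, plus the DIMACS CNF text that most solvers accept.
clauses = [[1, -2], [2, 3], [-1, -3]]

dimacs = "p cnf 3 3\n" + "\n".join(" ".join(map(str, c)) + " 0" for c in clauses)
print(dimacs)
# p cnf 3 3
# 1 -2 0
# 2 3 0
# -1 -3 0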

I believe that SAT solvers are not cognitively plausible.  More precisely, I 
believe that humans do not perform constraint satisfaction problem solving in 
a similar manner.  One might argue however that SAT solvers are already 
super-human in respect to their performance for certain problems (i.e. digital 
circuit design verification).

Where do you stand in the symbolic AI vs. connectionist AI dimension of our 
audience?  On the symbolic side?
-Steve
 
Stephen L. Reed

Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860

- Original Message 
From: Jim Bromer <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Monday, March 31, 2008 7:46:30 AM
Subject: Re: [agi] Logical Satisfiability...Get used to it.

 I am going to try to summarize what I have said.

With God's help, I may have discovered a path toward a method to
achieve a polynomial time solution to Logical Satisfiability, and so
from this vantage point I have started to ask the question of whether
or not a feasible SAT solver would actually be useful in advancing
general AI.

I think that most knowledgeable people would assume that it would be.
However, there has been some doubt about this so I came up with a
logical model that might show how such a situation could make a
critical difference to general AI programming.  Or at least AI
programming that emphasizes the use of rational methods.

My feeling right now, is that if my solver actually works it would
take at least n^3 or n^4 steps.  This means, for all practical
purposes, that it would stretch the range of general solvers from
logical formulas of 20 distinct variables or so to formulas of
hundreds or even thousands of characters long.  I believe that this
would be a major advancement, both in general computing and in AI
programming.

But would it make any difference in general AI programming, what this
group calls AGI?

Imagining a system that used logical or rational methods that might
initially be expressed in fairly simple logical terms, but which could
have hundreds of variants and hundreds of interconnections with the
other logical formulas of the system, I have come up with a case where
n^4 SAT might be critical.  The formulas of the system that I have in
mind would be speculative and derived from an inductive logical system
that was designed to be capable of learning.  I then pointed out that
some of the formulas produced by an automated system might have only a
few valid cases, meaning that a trial and error method of searching
for logical satisfiability would be very unlikely to work for those
formulas.  In this case the n^4 SAT solver would be very useful.  But
why would this be useful to AGI?

Remember, our programs are supposed to be adaptive and capable of
general learning.  If a fully automated AI program was truly learning
from the IO data environment, it would tend to create numerous
conjectures about it. Such a program would tend to create
insignificant conjectures that were founded on a great deal of trivial
evidence which could then be used to 'confirm' the conjecture.  Even
worse, it might (and will) produce meaningless garbage that was based
on methods like those that mush rational and non-rational responses or
made extensive use of over-generalization.  On the other hand a few
coincidences or over-generalizations could turn out to be very
meaningful.  So my theory is this.  If the program produced logical
theories of relations between events that occurred in the IO data
environment, then those theories that had only a few valid solutions
might be instrumental.  A complicated theory that only has a few valid
cases would, under certain conditions, be easier to prove or disprove
than a theory that can be 'verified' by tens of thousands of
combinations of trivial coincidences.  This is similar to Popper's
falsifiability theory in that it supposes that some theories have to
be strongly testable in order to advance science.  I do not mean to
suggest that falsifiability is absolute in an inductive system, just
that some key theories that are narrowly testable may be very
significant in the advancement of learning.  The rational-based
conjectures that only have a few solutions would therefore be better
for critical testing (as long as the solutions involved some kind of
observable event that was likely to occur under some conditions.)  And
a better general SAT solver would represent a major advancement in
discovering the conditions under which confirmatory or disconfirmatory
evidence for those kinds of theories could be found.

Perhaps people have objected to my messages about this because I
mentioned God, or perhaps they have objected to my q

Re: [agi] Logical Satisfiability...Get used to it.

2008-03-31 Thread Jim Bromer
I am going to try to summarize what I have said.

With God's help, I may have discovered a path toward a method to
achieve a polynomial time solution to Logical Satisfiability, and so
from this vantage point I have started to ask the question of whether
or not a feasible SAT solver would actually be useful in advancing
general AI.

I think that most knowledgeable people would assume that it would be.
However, there has been some doubt about this so I came up with a
logical model that might show how such a situation could make a
critical difference to general AI programming.  Or at least AI
programming that emphasizes the use of rational methods.

My feeling right now, is that if my solver actually works it would
take at least n^3 or n^4 steps.  This means, for all practical
purposes, that it would stretch the range of general solvers from
logical formulas of 20 distinct variables or so to formulas of
hundreds or even thousands of characters long.  I believe that this
would be a major advancement, both in general computing and in AI
programming.
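
To put rough, illustrative numbers on that claim (mine, not Jim's): exhaustive
search over 20 variables costs about 2^20, roughly a million assignments, which
is easy, but 2^1000 is hopeless, whereas a hypothetical n^3 or n^4 procedure at
n = 1000 costs on the order of 10^9 to 10^12 steps.  A quick Python check of
the scales:

# Rough scale comparison (illustration only): exponential vs. polynomial cost.
import math

for n in (20, 100, 1000):
    print("n=%5d   2^n ~ 1e%d   n^3 = %.1e   n^4 = %.1e"
          % (n, round(n * math.log10(2)), n ** 3, n ** 4))
# Prints, roughly: 2^20 ~ 1e6, 2^100 ~ 1e30, 2^1000 ~ 1e301,
# while 1000^3 = 1.0e+09 and 1000^4 = 1.0e+12.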

But would it make any difference in general AI programming, what this
group calls AGI?

Imagining a system that used logical or rational methods that might
initially be expressed in fairly simple logical terms, but which could
have hundreds of variants and hundreds of interconnections with the
other logical formulas of the system, I have come up with a case where
n^4 SAT might be critical.  The formulas of the system that I have in
mind would be speculative and derived from an inductive logical system
that was designed to be capable of learning.  I then pointed out that
some of the formulas produced by an automated system might have only a
few valid cases, meaning that a trial and error method of searching
for logical satisfiability would be very unlikely to work for those
formulas.  In this case the n^4 SAT solver would be very useful.  But
why would this be useful to AGI?

Remember, our programs are supposed to be adaptive and capable of
general learning.  If a fully automated AI program was truly learning
from the IO data environment, it would tend to create numerous
conjectures about it. Such a program would tend to create
insignificant conjectures that were founded on a great deal of trivial
evidence which could then be used to 'confirm' the conjecture.  Even
worse, it might (and will) produce meaningless garbage that was based
on methods like those that mush rational and non-rational responses or
made extensive use of over-generalization.  On the other hand a few
coincidences or over-generalizations could turn out to be very
meaningful.  So my theory is this.  If the program produced logical
theories of relations between events that occurred in the IO data
environment, then those theories that had only a few valid solutions
might be instrumental.  A complicated theory that only has a few valid
cases would, under certain conditions, be easier to prove or disprove
than a theory that can be 'verified' by tens of thousands of
combinations of trivial coincidences.  This is similar to Popper's
falsifiability theory in that it supposes that some theories have to
be strongly testable in order to advance science.  I do not mean to
suggest that falsifiability is absolute in an inductive system, just
that some key theories that are narrowly testable may be very
significant in the advancement of learning.  The rational-based
conjectures that only have a few solutions would therefore be better
for critical testing (as long as the solutions involved some kind of
observable event that was likely to occur under some conditions.)  And
a better general SAT solver would represent a major advancement in
discovering the conditions under which confirmatory or disconfirmatory
evidence for those kinds of theories could be found.
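
A toy illustration of "a theory with only a few valid cases" (the example and
code are added here, not Jim's): counting the satisfying assignments of a small
clause set shows directly how tightly constrained a theory is, which is the
property the argument above turns on.

# Illustration only: count the satisfying assignments ("valid cases") of a
# small clause set.  A sharply constrained theory has few models; a loose one
# is 'verified' by almost any assignment.
from itertools import product

def count_models(clauses, n_vars):
    count = 0
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: bits[i] for i in range(n_vars)}
        if all(any(assign[abs(lit)] == (lit > 0) for lit in c) for c in clauses):
            count += 1
    return count

tight = [[1], [-1, 2], [-2, 3]]   # forces x1 = x2 = x3 = True: exactly one model
loose = [[1, 2, 3]]               # excludes only the all-false assignment
print(count_models(tight, 3), count_models(loose, 3))   # prints: 1 7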

Perhaps people have objected to my messages about this because I
mentioned God, or perhaps they have objected to my question because
they believe a polynomial time solution to the SAT problem is
impossible.  On the other hand, there may be another objection to the
question simply because the answer is so blatantly obvious.  Of
course a polynomial time solution to SAT would be significant in the
advancement of AI programming.

Jim Bromer

On Sun, Mar 30, 2008 at 11:47 AM, Jim Bromer <[EMAIL PROTECTED]> wrote:

>
> The issue that I am still trying to develop is whether or not a general SAT 
> solver would be useful for AGI. I believe it would be. So I am going to go on 
> with my theory about bounded logical networks.
>
> A bounded logical network is a network where simple logical theories, that is 
> logical speculations about the input output environment and about its own 
> 'thinking', could be constructed with hundreds or thousands of variants and 
> interconnections to other bounded logical theories.  These theories would not 
> be fully integrated by strong taxonomic logic, so they could be used with 
> inductive learning.  Such a system would produce som

Re: [agi] Logical Satisfiability...Get used to it.

2008-03-30 Thread Richard Loosemore

Jim Bromer wrote:

>> On the contrary, Vladimir is completely correct in requesting that the
>> discussion go elsewhere:  this has no relevance to the AGI list, and
>> there are other places where it would be pertinent.
>>
>> Richard Loosemore
>
> If Ben doesn't want me to continue, I will stop posting to this group. 
> Otherwise please try to understand what I said about the relevance of 
> SAT to AGI and try to address the specific issues that I mentioned.  On 
> the other hand, if you don't want to waste your time in this kind of 
> discussion then do just that: Stay out of it.
>
> Jim Bromer


Since diplomacy did not work, I will come to the point:  as far as I can 
see you have given no "specific issues", only content-free speculation 
on topics of no relevance.




Richard Loosemore



Re: [agi] Logical Satisfiability...Get used to it.

2008-03-30 Thread Ben Goertzel
On Sun, Mar 30, 2008 at 5:09 PM, Mark Waser <[EMAIL PROTECTED]> wrote:
> > 4) If you think some supernatural being placed an insight in your mind,
>  > you're
>  > probably better off NOT mentioning this when discussing the insight in a
>  > scientific forum, as it will just cause your idea to be taken way less
>  > seriously
>  > by a vast majority of scientific-minded people...
>
>  Awesome answer!
>
>  However, only *some* religions believe in supernatural beings and I,
>  personally, have never seen any evidence supporting such a thing.

I've got one in a jar in my basement ... but don't worry, I won't let him out
till the time is right ;-) ...

and so far, all his AI ideas have proved to be
absolute bullshit, unfortunately ... though he's done a good job of helping
me put hexes on my neighbors...


>  Have you been having such experiences and been avoiding mentioning them
>  because you're afraid for your reputation?
>
>  Ben, I'm worried about you now.;-)



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: [agi] Logical Satisfiability...Get used to it.

2008-03-30 Thread Mark Waser
4) If you think some supernatural being placed an insight in your mind, you're
probably better off NOT mentioning this when discussing the insight in a
scientific forum, as it will just cause your idea to be taken way less seriously
by a vast majority of scientific-minded people...


Awesome answer!

However, only *some* religions believe in supernatural beings and I, 
personally, have never seen any evidence supporting such a thing.


Have you been having such experiences and been avoiding mentioning them 
because you're afraid for your reputation?


Ben, I'm worried about you now.;-) 





Re: [agi] Logical Satisfiability...Get used to it.

2008-03-30 Thread Ben Goertzel
My judgment as list moderator:

1)  Discussions of particular, speculative algorithms for solving SAT
are not really germane for this list

2)  Announcements of really groundbreaking new SAT algorithms would
certainly be germane to the list

3) Discussions of issues specifically regarding the integration of SAT solvers
into AGI architectures are highly relevant to this list

4) If you think some supernatural being placed an insight in your mind, you're
probably better off NOT mentioning this when discussing the insight in a
scientific forum, as it will just cause your idea to be taken way less seriously
by a vast majority of scientific-minded people...

-- Ben G, List Owner

On Sun, Mar 30, 2008 at 4:41 PM, Mark Waser <[EMAIL PROTECTED]> wrote:
>
>
> I agree with Richard and hereby formally request that Ben chime in.
>
> It is my contention that SAT is a relatively narrow form of Narrow AI and
> not general enough to be on an AGI list.
>
> This is not meant, in any way shape or form, to denigrate the work that you
> are doing.  It is very important work.
>
> It's just that you're performing the equivalent of presenting a biology
> paper at a physics convention.:-)
>
>
>
>
> - Original Message -
> From: Jim Bromer
> To: agi@v2.listbox.com
> Sent: Sunday, March 30, 2008 11:52 AM
> Subject: **SPAM** Re: [agi] Logical Satisfiability...Get used to it.
>
>
>
>
>
> > On the contrary, Vladimir is completely correct in requesting that the
> > discussion go elsewhere:  this has no relevance to the AGI list, and
> > there are other places where it would be pertinent.
> >
> >
> > Richard Loosemore
> >
> >
>
>  If Ben doesn't want me to continue, I will stop posting to this group.
> Otherwise please try to understand what I said about the relevance of SAT to
> AGI and try to address the specific issues that I mentioned.  On the other
> hand, if you don't want to waste your time in this kind of discussion then
> do just that: Stay out of it.
> Jim Bromer
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: [agi] Logical Satisfiability...Get used to it.

2008-03-30 Thread Mark Waser
I agree with Richard and hereby formally request that Ben chime in.

It is my contention that SAT is a relatively narrow form of Narrow AI and not 
general enough to be on an AGI list.

This is not meant, in any way shape or form, to denigrate the work that you are 
doing.  It is very important work.  

It's just that you're performing the equivalent of presenting a biology paper 
at a physics convention.:-)

  - Original Message - 
  From: Jim Bromer 
  To: agi@v2.listbox.com 
  Sent: Sunday, March 30, 2008 11:52 AM
  Subject: **SPAM** Re: [agi] Logical Satisfiability...Get used to it.





On the contrary, Vladimir is completely correct in requesting that the
discussion go elsewhere:  this has no relevance to the AGI list, and
there are other places where it would be pertinent.


Richard Loosemore


  If Ben doesn't want me to continue, I will stop posting to this group. 
Otherwise please try to understand what I said about the relevance of SAT to 
AGI and try to address the specific issues that I mentioned.  On the other 
hand, if you don't want to waste your time in this kind of discussion then do 
just that: Stay out of it.
  Jim Bromer





Re: [agi] Logical Satisfiability...Get used to it.

2008-03-30 Thread Jim Bromer
> On the contrary, Vladimir is completely correct in requesting that the
> discussion go elsewhere:  this has no relevance to the AGI list, and
> there are other places where it would be pertinent.
>
>
> Richard Loosemore
>

If Ben doesn't want me to continue, I will stop posting to this group.
Otherwise please try to understand what I said about the relevance of SAT to
AGI and try to address the specific issues that I mentioned.  On the other
hand, if you don't want to waste your time in this kind of discussion then
do just that: Stay out of it.
Jim Bromer





Re: [agi] Logical Satisfiability...Get used to it.

2008-03-30 Thread Jim Bromer
I intend to ignore Vladimir's remarks about this. Many neural network
enthusiasts have expressed hostility toward discussions of logic in regards
to AI and I see no reason to let them arbitrarily rule over these discussion
groups. And many people have claimed that my messages don't make much
sense.  One person repeatedly claimed that some of my messages were just
"word salad."  Thanks for your encouragement.

I have had discussions about GA's, and I feel strongly that they contributed
to the conceptual development of AI, but do not feel that they are strong
enough for advanced AGI.

I tried the DPLL solver a long time ago, and my experience was that it did
not work on a lot of interesting logical formulas.  And as you say, if there
was one general solver, even if it was difficult, a lot of us might try to
incorporate it into whatever area of AI that we were interested in.

The issue that I am still trying to develop is whether or not a general SAT
solver would be useful for AGI. I believe it would be. So I am going to go
on with my theory about bounded logical networks.

A bounded logical network is a network where simple logical theories, that
is logical speculations about the input output environment and about its own
'thinking', could be constructed with hundreds or thousands of variants and
interconnections to other bounded logical theories.  These theories would
not be fully integrated by strong taxonomic logic, so they could be used
with inductive learning.  Such a system would produce some inconsistencies,
but any inductive system can produce inconsistencies.  I believe that
interconnected logically bounded theories could show the intuition of
network theories, the subtleties and nuances of complex integrated theories,
and the strong logical-analytical potential of logical-rational programs.
People should also realize that the bounded interconnected logical
model could be used with a variety of rational reasoning methods, not just
ideative logic.  But my personal theories do center around rational ideative
reasoning that would be capable (I believe) of using and learning general
reasoning.

Now there is one criticism to my opinion about the usefulness of a general
SAT solver in this model.  That is, since an interconnected bounded logical
network model is not a pure fully integrated logical model, then
approximations could be used effectively in the model.  For instance, if the
system was capable of creating bounded logical theories with thousands of
interconnections, even if a solution wasn't known, the program could
try millions of guesses about the logical relations to see if any worked.
These guesses would be simplifications, which would tend toward
over-generalization, but what's the problem?  The system I have in mind is
not purely logical, it is a bounded logical system which could and would
contain many inconsistencies anyway.  My contention here is that this is just
the problem that we are faced with today in rational-based AGI.  They can get so
far, but only so far.
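
A minimal sketch (added here, not Jim's code) of the guessing strategy he
describes, and of where it breaks down: random assignments are checked against
a clause set, which works when satisfying assignments are plentiful, but if a
large theory has only a handful of valid cases the chance of any one guess
hitting them is astronomically small -- exactly where a genuine solver would
matter.

# Illustration only: try random guesses against a clause set.
import random

def random_guess_sat(clauses, n_vars, tries=1000000):
    for _ in range(tries):
        assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
        if all(any(assign[abs(lit)] == (lit > 0) for lit in c) for c in clauses):
            return assign
    return None   # failure proves nothing: the formula may still be satisfiable

# This loosely constrained toy formula is found almost immediately:
print(random_guess_sat([[1, -2], [2, 3], [-1, -3]], 3, tries=1000))
# But with 1000 variables and a single satisfying assignment, each guess
# succeeds with probability 2**-1000, so no feasible number of guesses helps.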

A theory, with thousands of subtle variations and connections with other
theories, that only had one or a few correct solutions would be useful in
critical reasoning because these special theories would be critically
significant.  They would exhibit strong correlations with simple or
constrained relations that would be more like experiments that isolated
significant factors that can be tested.  And these related theories could be
examined more effectively using abstraction as well.  (There could still be
problems with the critical theory since it could contain inconsistencies,
but you are going to have that problem with any inductive system.)  If you
are going to be using a rational-based AGI method, then you are going to
want some theories that exhibit critical reasoning.  These kinds of
theories might turn out to be the keystone in developing more sophisticated
models about the world and reevaluating less sophisticated models.

Jim Bromer



On Sat, Mar 29, 2008 at 10:28 PM, David Salamon <[EMAIL PROTECTED]> wrote:

> Hey Jim,
>
> Glad to hear you're making some headway on such an important and
> challenging problem!
>
> Don't read too much into Vladimir's response... he's probably just having
> a hard day or something :p  If it's fair game to talk about all the other
> narrow-AI topics on this list, talking about SAT is fair game.
>
> Although as Vladimir notes we do have some pretty good solutions (two that
> are both powerful and easy to understand are: DPLL and WalkSAT [interestingly
> this one is by the inventor of the BitTorrent protocol :p]). It is also very
> important to remember that those engineering AGIs would actually use SAT
> more often given such a solution, so it is very hard to tell all the massive
> benefits to the AGI effort (and the rest of humanity) straight off.
>
> I for one would (pardon my freedom) cream myself with joy to have such a
> solver. Additionally my heart goes out to anyone with the driv

Re: [agi] Logical Satisfiability...Get used to it.

2008-03-30 Thread Richard Loosemore

David Salamon wrote:

Hey Jim,

Glad to hear you're making some headway on such an important and 
challenging problem!


Don't read too much into Vladimir's response... he's probably just 
having a hard day or something :p  If it's fair game to talk about all 
the other narrow-AI topics on this list, talking about SAT is fair game.


On the contrary, Vladimir is completely correct in requesting that the 
discussion go elsewhere:  this has no relevance to the AGI list, and 
there are other places where it would be pertinent.



Richard Loosemore


Although as Vladimir notes we do have some pretty good solutions (two 
that are both powerful and easy to understand are: DPLL and WalkSAT 
[interestingly this one is by the inventor of the BitTorrent protocol 
:p]). It is also very important to remember that those engineering AGIs 
would actually use SAT more often given such a solution, so it is very 
hard to tell all the massive benefits to the AGI effort (and the rest of 
humanity) straight off.


I for one would (pardon my freedom) cream myself with joy to have such a 
solver. Additionally my heart goes out to anyone with the drive and 
skill to work in areas like the ones you're in, especially one with the 
rocks to get up in front of a bunch of atheists and talk about their 
creator.


Keep us updated of this and any other AGI related areas of interest (my 
brief google stalk turned up your interest in genetic 
algorithms/programming, is that related?).


cheers,
-david salamon


On Sat, Mar 29, 2008 at 3:29 PM, Jim Bromer <[EMAIL PROTECTED]> wrote:


I have made a little progress on my SAT theory. As I said, I believe
that there is a chance that I might have gotten the word from the
Lord on my efforts (seriously), although I am not, in any way,
saying that either the project or the Lord's involvement is a sure
thing.  So I am partially going on faith, but it is not blind
faith.  I haven't come close to getting objective evidence that it
will work for all cases, but so far it is acting within the range of
expectations that I developed based on simulations that I created by
parts.  (These 'simulations' were simple and many done in my mind,
but some were done with pencil and paper, etc.)  I have examined the
problem in parts, and by looking at the parts with
different assumptions and examining the problems using positive,
critical and alternative theories, I have come to the conclusion
that it is feasible.  It will be a clunker though, no question about
that.  
 
So anyway, I cannot yet prove my theory, but I cannot disprove it either.
I have been working on the problem for three years, and I
worked on it for a few months 20 years ago.  But I have been working
on this current theory since Oct 2007.  I have had experiences
similar to the those that Ben and others have talked about, where I
too thought I solved the problem only to discover that I hadn't
a short time later, but this has been going for five months since
October, and I am not retracting anything yet. 
 
But the thing that I still want to talk about is whether or not
anyone will be able to use a polynomial time solution to advantage
if indeed I can actually do it (as I am starting to believe that I
can).  An n^4 or n^5 solution to SAT does not look so great and even
an n^3 solution is a clunker.  And I also do not believe that strict
logic is going to work for AGI.  But even so, I think I would be able
to use the theory in AGI because I believe it would be useful to use
logic in creating theories and theoretical models of whatever the
program would consider, and even though those logical theories would
have to be broken up into parts (parts that would be interconnected
and may overlap) I now suspect that if simple logical theories were
composed of hundreds of variations they could be used more
intuitively and more profoundly than if they were constrained to a
concise statement of only a few logical variables.  And an n^3 SAT
solver can easily handle a few thousand variables; a 2^n solver cannot.
 
And what most of the readers of my previous message have not
realized is that a solution to SAT will almost surely have a greater
potential effect than the solution to the simple problem of SAT.  It
will be a new way to look at logical complexity and it will
eventually lead to new ways to handle logical problems.  Imagining
how overlapping interrelated partitions of logical theories
which can handle up to a few thousand logical variables each and
which can handle a few thousand logical interconnections between
those parts, I believe that I can see how an artificial mind might be
both an intuitive network device and a strong logical analytical device.
 
Jim Bromer


Re: [agi] Logical Satisfiability...Get used to it.

2008-03-29 Thread David Salamon
Hey Jim,

Glad to hear you're making some headway on such an important and challenging
problem!

Don't read too much into Vladimir's response... he's probably just having a
hard day or something :p  If it's fair game to talk about all the other
narrow-AI topics on this list, talking about SAT is fair game.

Although as Vladimir notes we do have some pretty good solutions (two that
are both powerful and easy to understand are: DPLL and WalkSAT [interestingly
this one is by the inventor of the BitTorrent protocol :p]). It is also very
important to remember that those engineering AGIs would actually use SAT
more often given such a solution, so it is very hard to tell all the massive
benefits to the AGI effort (and the rest of humanity) straight off.
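
For readers who have not seen one, here is a bare-bones DPLL sketch in Python
(an illustration only; the real DPLL-family solvers and WalkSAT mentioned above
add unit-propagation data structures, branching heuristics, clause learning,
and much more):

def dpll(clauses, assignment=None):
    """Toy DPLL: simplify under the current assignment, propagate unit clauses,
    otherwise branch on a variable.  Returns a satisfying (partial) assignment
    as {var: bool}, or None if the clause set is unsatisfiable."""
    assignment = dict(assignment or {})
    simplified = []
    for clause in clauses:
        residual, satisfied = [], False
        for lit in clause:
            val = assignment.get(abs(lit))
            if val is None:
                residual.append(lit)
            elif val == (lit > 0):
                satisfied = True
                break
        if satisfied:
            continue
        if not residual:
            return None              # an empty clause: conflict
        simplified.append(residual)
    if not simplified:
        return assignment            # every clause satisfied
    unit = next((c[0] for c in simplified if len(c) == 1), None)
    lit = unit if unit is not None else simplified[0][0]
    values = ((lit > 0),) if unit is not None else (True, False)
    for value in values:
        assignment[abs(lit)] = value
        result = dpll(simplified, assignment)
        if result is not None:
            return result
    return None

print(dpll([[1, -2], [2, 3], [-1, -3]]))   # e.g. {1: True, 3: False, 2: True}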

I for one would (pardon my freedom) cream myself with joy to have such a
solver. Additionally my heart goes out to anyone with the drive and skill to
work in areas like the ones you're in, especially one with the rocks to get
up in front of a bunch of atheists and talk about their creator.

Keep us updated of this and any other AGI related areas of interest (my
brief google stalk turned up your interest in genetic
algorithms/programming, is that related?).

cheers,
-david salamon


On Sat, Mar 29, 2008 at 3:29 PM, Jim Bromer <[EMAIL PROTECTED]> wrote:

> I have made a little progress on my SAT theory. As I said, I believe that
> there is a chance that I might have gotten the word from the Lord on my
> efforts (seriously), although I am not, in any way, saying that either the
> project or the Lord's involvement is a sure thing.  So I am partially going
> on faith, but it is not blind faith.  I haven't come close to getting
> objective evidence that it will work for all cases, but so far it is acting
> within the range of expectations that I developed based on simulations that
> I created by parts.  (These 'simulations' were simple and many done in my
> mind, but some were done with pencil and paper, etc.)  I have examined the
> problem in parts, and by looking at the parts with different assumptions and
> examining the problems using positive, critical and alternative theories, I
> have come to the conclusion that it is feasible.  It will be a clunker
> though, no question about that.
>
> So anyway, I cannot yet prove my theory, but I cannot disprove it either.
> I have been working on the problem for three years, and I worked on it for a
> few months 20 years ago.  But I have been working on this current theory
> since Oct 2007.  I have had experiences similar to the those that Ben and
> others have talked about, where I too thought I solved the problem only to
> discover that I hadn't a short time later, but this has been going for five
> months since October, and I am not retracting anything yet.
>
> But the thing that I still want to talk about is whether or not anyone
> will be able to use a polynomial time solution to advantage if indeed I can
> actually do it (as I am starting to believe that I can).  An n^4 or n^5
> solution to SAT does not look so great and even an n^3 solution is a
> clunker.  And I also do not believe that strict logic is going to work for
> AGI.  But even so, I think I would be able to use the theory in AGI because
> I believe it would be useful to use logic in creating theories and
> theoretical models of whatever the program would consider, and even though
> those logical theories would have to be broken up into parts (parts that
> would be interconnected and may overlap) I now suspect that if simple
> logical theories were composed of hundreds of variations they could be used
> more intuitively and more profoundly than if they were constrained to a
> concise statement of only a few logical variables.  And an n^3 SAT solver
> can easily handle a few thousand variables; a 2^n solver cannot.
>
> And what most of the readers of my previous message have not realized is
> that a solution to SAT will almost surely have a greater potential effect
> than the solution to the simple problem of SAT.  It will be a new way to
> look at logical complexity and it will eventually lead to new ways to handle
> logical problems.  Imagining how overlapping interrelated partitions of
> logical theories which can handle up to a few thousand logical variables
> each and which can handle a few thousand logical interconnections between
> those parts, I believe that I can see how an artificial mind might be both an
> intuitive network device and a strong logical analytical device.
>
> Jim Bromer


Re: [agi] Logical Satisfiability...Get used to it.

2008-03-29 Thread Vladimir Nesov
Jim,

Could you keep P=NP discussion off this list? There are plenty of
powerful SAT solvers already, so if there is a path towards AGI that
needs a SAT solver, they can be used in at least small-scale
prototypes, and thus the absence of scalable SAT solver is not a
bottleneck at the moment. P=NP can have profound implications on other
issues, but it's hardly specifically relevant for AGI. If your
interest lies in AI, P=NP is not the way, and if your interest lies in
P=NP, AGI is irrelevant.
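
In that spirit, a sketch of how a small-scale prototype can lean on an existing
solver (assumptions made here: a MiniSat binary on the PATH and arbitrary file
names; any solver that reads DIMACS CNF would do):

# Sketch only: hand a CNF problem to an off-the-shelf solver via DIMACS files.
import subprocess

clauses = [[1, -2], [2, 3], [-1, -3]]

with open("problem.cnf", "w") as f:
    f.write("p cnf 3 %d\n" % len(clauses))
    for c in clauses:
        f.write(" ".join(map(str, c)) + " 0\n")

subprocess.run(["minisat", "problem.cnf", "result.txt"])
print(open("result.txt").read())   # "SAT" plus a model line, or "UNSAT"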

-- 
Vladimir Nesov
[EMAIL PROTECTED]
