"Right.  Then you use gradient ascent.  But what if you are scheduling a job 
shop for throughput when there are thousands of variables most of which have 
discrete values?"


I'd try to code it up for an SMT solver like Z3, or look for an SMT solver
with theories that closely matched the domain of the job shop.  Or try
something like this (https://arxiv.org/abs/1609.02200) on a D-Wave.
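
For concreteness, a minimal sketch of what that Z3 encoding might look like,
using the z3-solver Python bindings.  The two jobs and their durations are
made-up illustration data, and throughput is proxied here by minimizing the
makespan:

# Job-shop sketch for Z3 (pip install z3-solver).  Illustration data only.
from z3 import Int, Optimize, Or, sat

# Each job is an ordered list of (machine, duration) tasks.
jobs = [
    [(0, 3), (1, 2)],   # job 0: machine 0 for 3, then machine 1 for 2
    [(1, 4), (0, 1)],   # job 1: machine 1 for 4, then machine 0 for 1
]

opt = Optimize()
start = {(j, t): Int(f"start_{j}_{t}")
         for j, job in enumerate(jobs) for t in range(len(job))}

for j, job in enumerate(jobs):
    for t in range(len(job)):
        opt.add(start[j, t] >= 0)
        if t > 0:  # tasks within a job run in order
            opt.add(start[j, t] >= start[j, t - 1] + job[t - 1][1])

# No two tasks may overlap on the same machine (disjunctive constraints).
tasks = [(j, t, m, d) for j, job in enumerate(jobs)
         for t, (m, d) in enumerate(job)]
for i, (j1, t1, m1, d1) in enumerate(tasks):
    for (j2, t2, m2, d2) in tasks[i + 1:]:
        if m1 == m2:
            opt.add(Or(start[j1, t1] + d1 <= start[j2, t2],
                       start[j2, t2] + d2 <= start[j1, t1]))

makespan = Int("makespan")
for (j, t, m, d) in tasks:
    opt.add(makespan >= start[j, t] + d)
opt.minimize(makespan)

if opt.check() == sat:
    print(opt.model())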


Marcus


________________________________
From: Friam <friam-boun...@redfish.com> on behalf of Frank Wimberly 
<wimber...@gmail.com>
Sent: Wednesday, August 9, 2017 7:35 AM
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] Future of humans and artificial intelligence

Right.  Then you use gradient ascent.  But what if you are scheduling a job 
shop for throughput when there are thousands of variables most of which have 
discrete values?

Frank

Frank Wimberly
Phone (505) 670-9918

On Aug 8, 2017 10:41 PM, "Marcus Daniels" <mar...@snoutfarm.com> wrote:

Frank writes:


"My point was that depth-first and breadth-first can probably serve only as a 
straw-man (straw-men?)."


Unless there is a robust meta-rule (not heuristic) or a single deterministic 
search algorithm to rule them all, wouldn't those other suggestions be 
straw-men too?  If I knew that there were no noise and the domain was 
continuous and convex, then I wouldn't use a stochastic approach.
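
To make the contrast concrete, a toy of that easy case -- noiseless, smooth,
and concave -- where plain gradient ascent suffices with no stochasticity at
all.  The objective and step size are arbitrary illustration choices:

def grad(x, y):
    # Gradient of f(x, y) = -(x - 1)**2 - (y + 2)**2, concave with its
    # maximum at (1, -2).
    return (-2.0 * (x - 1.0), -2.0 * (y + 2.0))

x, y, step = 0.0, 0.0, 0.1
for _ in range(200):
    gx, gy = grad(x, y)
    x, y = x + step * gx, y + step * gy

print(x, y)  # converges to (1.0, -2.0)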


Marcus

________________________________
From: Friam <friam-boun...@redfish.com> on behalf of Frank Wimberly 
<wimber...@gmail.com>
Sent: Tuesday, August 8, 2017 10:15:05 PM
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] Future of humans and artificial intelligence

My point was that depth-first and breadth-first can probably serve only as a 
straw-man (straw-men?).

Frank Wimberly
Phone (505) 670-9918

On Aug 8, 2017 10:11 PM, "Marcus Daniels" <mar...@snoutfarm.com> wrote:

Frank writes:


"Then there's best-first search, B*, C*, constraint-directed search, etc.  And 
these are just classical search methods."


Connecting this back to evolutionary / stochastic techniques, genetic 
programming is one way to get the best of both approaches, at least in 
principle.  One can expose these human-designed algorithms as predefined 
library functions.  Typically in genetic programming, the vocabulary consists 
of simple routines (e.g. arithmetic), conditionals, and recursion.
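
A minimal, self-contained sketch of such a vocabulary (plain Python, no GP
library; all names are illustrative, and recursion is omitted for brevity):
arithmetic, a conditional, and one predefined "library function" standing in
for a human-designed algorithm:

import random

def library_search(x):
    # Placeholder for an exposed human-designed routine (e.g. a search
    # step); the computation here is a meaningless stand-in.
    return x * 0.5 + 1.0

PRIMITIVES = {
    "add":   (2, lambda a, b: a + b),
    "mul":   (2, lambda a, b: a * b),
    "ifpos": (3, lambda c, a, b: a if c > 0 else b),  # conditional
    "lib":   (1, library_search),                     # library call
}
TERMINALS = ["x", 1.0, 2.0]

def random_tree(depth=3):
    # Grow a random expression tree over the vocabulary.
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    name = random.choice(list(PRIMITIVES))
    arity, _ = PRIMITIVES[name]
    return (name, [random_tree(depth - 1) for _ in range(arity)])

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    name, children = tree
    _, fn = PRIMITIVES[name]
    return fn(*(evaluate(c, x) for c in children))

tree = random_tree()
print(tree, evaluate(tree, 3.0))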


In practice, this kind of seeding of the solution space can collapse 
diversity.  It is a drag to see tons of compute time spent on a million little 
refinements around an already good solution.  (Yes, I know that solution!)  
More fun to see a set of clumsy solutions turn into decent-performing but 
weird solutions.  I find my attention is drawn to properties of sub-populations 
and how I can keep the historically good performers _out_.  Not a pure GA, but 
a GA where communities also have fitness functions matching my heavy hand of 
justice.  (If I prove that conservatism just doesn't work, I'll be sure to 
pass it along.)
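
One rough way to read that as code: sub-populations scored on their best
member but penalized for crowding around historically good solutions, so the
most crowded community gets culled.  The representation, penalty, and weights
are all illustrative assumptions, not a recipe:

import random

def fitness(x):
    return -(x - 5.0) ** 2  # toy objective, maximum at x = 5

hall_of_fame = []  # historically good solutions seen so far

def community_score(members):
    best = max(fitness(m) for m in members)
    # Penalize closeness to past champions to push communities elsewhere.
    crowding = sum(1.0 / (1.0 + abs(m - h))
                   for m in members for h in hall_of_fame)
    return best - 0.5 * crowding

communities = [[random.uniform(-10, 10) for _ in range(8)] for _ in range(4)]
for _ in range(50):
    for com in communities:
        com.sort(key=fitness, reverse=True)
        hall_of_fame.append(com[0])
        # Replace the worst half with mutated copies of the best half.
        half = len(com) // 2
        com[half:] = [p + random.gauss(0, 1.0) for p in com[:half]]
    # Cull the community most crowded around the hall of fame.
    communities.sort(key=community_score, reverse=True)
    communities[-1] = [random.uniform(-10, 10) for _ in range(8)]

print(max((m for com in communities for m in com), key=fitness))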


Marcus


________________________________
From: Friam <friam-boun...@redfish.com> on behalf of Frank Wimberly 
<wimber...@gmail.com>
Sent: Tuesday, August 8, 2017 7:57:06 PM
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] Future of humans and artificial intelligence

Then there's best-first search, B*, C*, constraint-directed search, etc.  And 
these are just classical search methods.

Frank

Frank Wimberly
Phone (505) 670-9918

On Aug 8, 2017 7:20 PM, "Marcus Daniels" <mar...@snoutfarm.com> wrote:

"But one problem is that breadth-first and depth-first search are just fast 
ways to find answers."


Just _not_ fast -- general but not efficient.  [My dog was demanding attention!]

________________________________
From: Friam <friam-boun...@redfish.com> on behalf of Marcus Daniels 
<mar...@snoutfarm.com>
Sent: Tuesday, August 8, 2017 6:43:40 PM
To: The Friday Morning Applied Complexity Coffee Group; glen ☣
Subject: Re: [FRIAM] Future of humans and artificial intelligence


Grant writes:


"On the other hand... evolution is stochastic. (You actually did not disagree 
with me on that. You only said that the reason I was right was another one.) "


I think of logic programming systems as a traditional tool of AI research (e.g. 
Prolog, now Curry, similar capabilities implemented in Lisp) from the age 
before the AI winter.  These systems provide a very flexible way to pose 
constraint problems.  But one problem is that breadth-first and depth-first 
search are just fast ways to find answers.  Recent work seems to have shifted 
to SMT solvers and specialized constraint solving algorithms, but these have 
somewhat less expressiveness as programming languages.  Meanwhile, machine 
learning has come on the scene in a big way and tasks traditionally associated 
with old-school AI, like natural language processing, are now matched or even 
dominated using neural nets (e.g. LSTMs).  I find the range of capabilities 
provided by groups like nlp.stanford.edu really impressive -- there are 
examples there of both approaches (logic programming and machine learning), 
and they don't need to be mutually exclusive.
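
As a toy of why that flexibility is cheap but speed is not: a generic
backtracking depth-first search over a finite-domain constraint problem is
completely general -- it needs nothing but the constraints themselves -- yet
it enumerates blindly, with no heuristics or propagation.  The problem and
domains below are illustrative:

def dfs(variables, domains, constraints, assignment=None):
    # Generic backtracking DFS: works for any finite-domain CSP.
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return assignment
    var = variables[len(assignment)]
    for value in domains[var]:
        assignment[var] = value
        if all(c(assignment) for c in constraints):
            result = dfs(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]  # backtrack
    return None

# X + Y == Z over small domains; the constraint only fires once all
# three variables are assigned.
variables = ["X", "Y", "Z"]
domains = {v: range(10) for v in variables}
constraints = [lambda a: ("X" not in a or "Y" not in a or "Z" not in a)
                         or a["X"] + a["Y"] == a["Z"]]
print(dfs(variables, domains, constraints))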


Quantum annealing is one area where the two may increasingly come together, 
using physical phenomena to accelerate the rate at which high-dimensional 
discrete systems can be solved, without relying on fragile or domain-specific 
heuristics.


I often use evolutionary algorithms for hard optimization problems.  Genetic 
algorithms, for example, are robust to noise (or, if you like, ambiguity) in 
fitness functions, and they are trivial to parallelize.
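
A minimal sketch of both claims: the fitness function below is deliberately
noisy, and evaluation parallelizes with nothing more than a pool map.  The
objective, noise level, and GA parameters are illustrative:

import random
from multiprocessing import Pool

def noisy_fitness(x):
    true_value = -(x - 3.0) ** 2               # toy objective, max at x = 3
    return true_value + random.gauss(0, 0.5)   # measurement noise

def evolve(generations=100, pop_size=64):
    pop = [random.uniform(-10, 10) for _ in range(pop_size)]
    with Pool() as pool:
        for _ in range(generations):
            scores = pool.map(noisy_fitness, pop)  # embarrassingly parallel
            ranked = [x for _, x in sorted(zip(scores, pop), reverse=True)]
            parents = ranked[: pop_size // 4]      # truncation selection
            pop = [random.choice(parents) + random.gauss(0, 0.3)
                   for _ in range(pop_size)]
    return ranked[0]

if __name__ == "__main__":
    print(evolve())  # lands near 3.0 despite the noisy evaluations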


Marcus

________________________________
From: Friam <friam-boun...@redfish.com<mailto:friam-boun...@redfish.com>> on 
behalf of Grant Holland 
<grant.holland...@gmail.com<mailto:grant.holland...@gmail.com>>
Sent: Tuesday, August 8, 2017 4:51:18 PM
To: The Friday Morning Applied Complexity Coffee Group; glen ☣
Subject: Re: [FRIAM] Future of humans and artificial intelligence


Thanks for throwing in on this one, Glen. Your thoughts are ever-insightful. 
And ever-entertaining!

For example, I did not know that von Neumann put forth a set theory.

On the other hand... evolution is stochastic. (You actually did not disagree 
with me on that. You only said that the reason I was right was another one.) A 
good book on the stochasticity of evolution is "Chance and Necessity" by 
Jacques Monod. (I just finished rereading it for the second time. And that 
proved quite fruitful.)

G.

On 8/8/17 12:44 PM, glen ☣ wrote:

I'm not sure how Asimov intended them.  But the three laws are a trope that 
clearly shows the inadequacy of deontological ethics.  Rules are fine as far as 
they go.  But they don't go very far.  We can see this even in the foundations 
of mathematics, the unification of physics, and polyphenism/robustness in 
biology.  Von Neumann (Burks) said it best when he said: "But in the 
complicated parts of formal logic it is always one order of magnitude harder to 
tell what an object can do than to produce the object."  Or, if you don't like 
that, you can see the same perspective in his iterative construction of sets as 
an alternative to the classical conception.

The point being that reality, traditionally, has shown more expressiveness than 
any of our rule sets.

There are ways to handle the mismatch in expressivity between reality versus 
our rule sets.  Stochasticity is the measure of the extent to which a rule set 
matches a set of patterns.  But Grant's right to qualify that with evolution, 
not because of the way evolution is stochastic, but because evolution requires 
a unit to regularly (or sporadically) sync with its environment.

An AI (or a rule-obsessed human) that sprouts fully formed from Zeus' head will 
*always* fail.  It's guaranteed to fail because syncing with the environment 
isn't *built in*.  The sync isn't part of the AI's onto- or phylo-geny.




============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
