On Tue, Feb 4, 2020, 4:12 PM Danko Nikolic <danko.niko...@gmail.com> wrote:

> " If there was no free lunch, then all the particles in the universe would
> have the same laws or random laws. "
>
> This is not true. There is no free lunch in everyday applications of
> machine learning. When one finds that one type of model performs better
> than another (e.g., Bayes vs. Markov chain, or deep NN vs. decision
> tree), no free lunch is always involved. You pick one model because it
> is good for data set A; the same model is bad for data set B. And these
> are real data sets in real life. No free lunch hits us in daily practice
> all the time.
>
> I also cover this in my advanced machine-learning course on Udemy.
>

This is not a consequence of no free lunch, which rests on the false
assumption of a uniform prior; no uniform distribution exists over a
countably infinite set, because assigning every element the same
probability makes the total either zero or infinite, never one. It is a
consequence of Legg's proof that there is no elegant theory of
prediction: powerful predictors are necessarily complex.
https://arxiv.org/abs/cs/0606070

So trying several algorithms (decision trees, neural nets, k-means, SVM,
etc.) will outperform any one of them alone, but you have to write more
code. The same holds for good compressors: you write lots of code to
handle lots of obscure file types and special cases. Legg proves there is
no way around this.
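
For instance, here is a minimal model-selection sketch in Python
(assuming scikit-learn; the data set and the three candidate models are
just stand-ins). Every extra candidate is extra code, and the winner
changes from data set to data set:

# Minimal sketch: pick the best of several model families by
# cross-validation. Assumes scikit-learn; the data set is a stand-in.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "decision tree": DecisionTreeClassifier(),
    "SVM": SVC(),
    "neural net": MLPClassifier(max_iter=1000),
}

# Score each family; more candidates means more code but a better
# chance that one of them fits this particular data set.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "-> best:", best)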

Suppose a simple universal bit predictor existed. Then I could create a
simple sequence it cannot predict: my program simulates yours and outputs
the opposite bit at every step.
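
A toy version of that diagonal argument in Python (the majority-vote
predictor here is only a placeholder; any deterministic predictor can be
slotted in the same way):

# Toy diagonalization: the adversary simulates the predictor on the
# history so far and emits the opposite bit, so the predictor is wrong
# on every single bit of the sequence built against it.
def predict(history):
    # Placeholder predictor: guess the majority bit seen so far.
    return int(sum(history) * 2 > len(history))

def adversarial_sequence(n):
    history = []
    for _ in range(n):
        history.append(1 - predict(history))  # output the opposite bit
    return history

seq = adversarial_sequence(20)
# The predictor errs on every bit: 20 errors out of 20.
errors = sum(predict(seq[:i]) != seq[i] for i in range(len(seq)))
print(seq, f"errors: {errors}/{len(seq)}")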

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T353f2000d499d93b-Mecb6638fad8ed81bedd77506
Delivery options: https://agi.topicbox.com/groups/agi/subscription
