May I also point out the My settings button in your top right corner > My topic email subscriptions > Unsubscribe from this thread, which would have spared you the message.

On Friday, September 2, 2016 at 11:19:42 AM UTC-3, Kevin Liu wrote:
>
> Hello Chris. Have you been applying relational learning to your Neural 
> Crest Migration Patterns in Craniofacial Development research project? It 
> could enhance your insights. 
>
> On Friday, September 2, 2016 at 6:18:15 AM UTC-3, Chris Rackauckas wrote:
>>
>> This entire thread is a trip... a trip which is not really relevant to 
>> julia-users. You may want to share these musings in the form of a blog 
>> instead of posting them here.
>>
>> On Friday, September 2, 2016 at 1:41:03 AM UTC-7, Kevin Liu wrote:
>>>
>>> Princeton's post: 
>>> http://www.nytimes.com/2016/08/28/world/europe/france-burkini-bikini-ban.html?_r=1
>>>
>>> Only logic saves us from paradox. - Minsky
>>>
>>> On Thursday, August 25, 2016 at 10:18:27 PM UTC-3, Kevin Liu wrote:
>>>>
>>>> Tim Holy, I am watching your keynote speech at JuliaCon 2016 where you 
>>>> mention the best optimization is not doing the computation at all. 
>>>>
>>>> Domingos touches on that in his book: an efficient kind of learning is learning by analogy, with no model at all, and numerous scientific discoveries have been made that way, e.g. Bohr's analogy of the solar system to the atom. Analogizers learn by hypothesizing that entities with similar known properties have similar unknown ones.
>>>>
>>>> MLN can reproduce structure mapping, the more powerful type of analogy, which can carry inferences from one domain (the solar system) to another (the atom). This can be done by learning formulas that don't refer to any of the specific relations in the source domain (general formulas).
>>>>
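>>>> To make the idea concrete, here is a toy sketch of my own (not Domingos' code; the relations and entities are made up) of one such general formula, r(x, y) => s(y, x), where the relation symbols r and s are variables rather than anything specific to the source domain:
>>>>
>>>> # A domain is a set of ground facts: relation => set of (subject, object) pairs.
>>>> solar = Dict(:attracts => Set([("sun", "planet")]),
>>>>              :revolves => Set([("planet", "sun")]))
>>>> atom  = Dict(:attracts => Set([("nucleus", "electron")]),
>>>>              :revolves => Set([("electron", "nucleus")]))
>>>>
>>>> # General formula for relation variables r, s and entities x, y:  r(x, y) => s(y, x).
>>>> # Count the groundings of the rule that hold in a domain for a given binding of r and s.
>>>> satisfied(dom, r, s) = count(((x, y),) -> (y, x) in dom[s], dom[r])
>>>>
>>>> satisfied(solar, :attracts, :revolves)  # 1: the structure holds in the source domain
>>>> satisfied(atom,  :attracts, :revolves)  # 1: the same structure holds in the target
>>>>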
>>>> Seth and Tim have been helping me a lot with putting the pieces together for MLN in the repo I created <https://github.com/hpoit/Kenya.jl/issues/2>, and more help is always welcome. I would like to write MLN in idiomatic Julia. My question at the moment to you and the community is: how do I keep mappings of first-order harmonic functions type-stable in Julia? I am just getting acquainted with the type system.
>>>>
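>>>> To illustrate what I am asking, here is a minimal sketch (the HarmonicMap wrapper is hypothetical, not from any package) of the pattern I have in mind: keep the stored function as a type parameter of the struct, so mapping through it stays inferable, and check the result with @code_warntype.
>>>>
>>>> struct HarmonicMap{F<:Function}
>>>>     f::F                                # the stored function; F is a concrete type
>>>> end
>>>>
>>>> (h::HarmonicMap)(x, y) = h.f(x, y)      # calling through the typed field is inferable
>>>>
>>>> h  = HarmonicMap((x, y) -> x^2 - y^2)   # a harmonic function: its Laplacian is zero
>>>> zs = map(h, 0.0:0.1:1.0, 1.0:-0.1:0.0)  # inferred as Vector{Float64}
>>>>
>>>> # @code_warntype h(0.3, 0.7)   # should show no Any/Union; declaring the field as
>>>> # ::Function (an abstract type) would instead force dynamic dispatch.
>>>>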
>>>> On Tuesday, August 9, 2016 at 9:02:25 AM UTC-3, Kevin Liu wrote:
>>>>>
>>>>> Helping me separate the process into parts and priorities would be a lot of help.
>>>>>
>>>>> On Tuesday, August 9, 2016 at 8:41:03 AM UTC-3, Kevin Liu wrote:
>>>>>>
>>>>>> Tim Holy, what if I could tap into the well of knowledge that you are, to speed things up? Can you imagine if every learner had to start without priors?
>>>>>>
>>>>>> > On Aug 9, 2016, at 07:06, Tim Holy <tim....@gmail.com> wrote: 
>>>>>> > 
>>>>>> > I'd recommend starting by picking a very small project. For example, fix a bug
>>>>>> > or implement a small improvement in a package that you already find useful or
>>>>>> > interesting. That way you'll get some guidance while making a positive
>>>>>> > contribution; once you know more about julia, it will be easier to see your
>>>>>> > way forward.
>>>>>> > 
>>>>>> > Best, 
>>>>>> > --Tim 
>>>>>> > 
>>>>>> >> On Monday, August 8, 2016 8:22:01 PM CDT Kevin Liu wrote: 
>>>>>> >> I have no idea where to start and where to finish. Founders' help would be wonderful.
>>>>>> >> 
>>>>>> >>> On Tuesday, August 9, 2016 at 12:19:26 AM UTC-3, Kevin Liu wrote: 
>>>>>> >>> After which I have to code Felix into Julia, a relational optimizer for statistical inference with Tuffy <http://i.stanford.edu/hazy/tuffy/> inside, for enterprise settings.
>>>>>> >>> 
>>>>>> >>>> On Tuesday, August 9, 2016 at 12:07:32 AM UTC-3, Kevin Liu wrote:
>>>>>> >>>> Can I get tips on bringing Alchemy's optimized Tuffy <http://i.stanford.edu/hazy/tuffy/> in Java to Julia while showing the best of Julia? I am going for the most correct way, even if it means coding Tuffy into C and Julia.
>>>>>> >>>> 
>>>>>> >>>>> On Sunday, August 7, 2016 at 8:34:37 PM UTC-3, Kevin Liu wrote: 
>>>>>> >>>>> I'll try to build it, compare it, and show it to you guys. I offered to do this as work. I am waiting to see if they will accept it.
>>>>>> >>>>> 
>>>>>> >>>>>> On Sunday, August 7, 2016 at 6:15:50 PM UTC-3, Stefan Karpinski wrote:
>>>>>> >>>>>> Kevin, as previously requested by Isaiah, please take this to some other forum or maybe start a blog.
>>>>>> >>>>>> 
>>>>>> >>>>>>> On Sat, Aug 6, 2016 at 10:53 PM, Kevin Liu <kvt...@gmail.com> wrote:
>>>>>> >>>>>>> Symmetry-based learning, Domingos, 2014:
>>>>>> >>>>>>> https://www.microsoft.com/en-us/research/video/symmetry-based-learning/
>>>>>> >>>>>>> 
>>>>>> >>>>>>> Approach 2: Deep symmetry networks generalize convolutional neural networks by tying parameters and pooling over an arbitrary symmetry group, not just the translation group. In preliminary experiments, they outperformed convnets on a digit recognition task.
>>>>>> >>>>>>> 
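>>>>>> >>>>>>> A toy sketch of the pooling idea (my own illustration, not the paper's code): score a patch with one shared filter under each element of a small symmetry group, here the four right-angle rotations, and max-pool over the group, so the response is invariant to those rotations.
>>>>>> >>>>>>>
>>>>>> >>>>>>> rotations(p) = (p, rotl90(p), rot180(p), rotr90(p))    # the C4 rotation group
>>>>>> >>>>>>>
>>>>>> >>>>>>> score(w, p) = sum(w .* p)                              # one shared (tied) filter w
>>>>>> >>>>>>>
>>>>>> >>>>>>> group_pool(w, p) = maximum(score(w, g) for g in rotations(p))
>>>>>> >>>>>>>
>>>>>> >>>>>>> w, patch = rand(5, 5), rand(5, 5)
>>>>>> >>>>>>> group_pool(w, patch) == group_pool(w, rotl90(patch))   # true: invariant under C4
>>>>>> >>>>>>>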
>>>>>> >>>>>>>> On Friday, August 5, 2016 at 4:56:45 PM UTC-3, Kevin Liu wrote:
>>>>>> >>>>>>>> Minsky died of a cerebral hemorrhage at the age of 88.[40] <https://en.wikipedia.org/wiki/Marvin_Minsky#cite_note-40> Ray Kurzweil <https://en.wikipedia.org/wiki/Ray_Kurzweil> says he was contacted by the cryonics organization Alcor Life Extension Foundation <https://en.wikipedia.org/wiki/Alcor_Life_Extension_Foundation> seeking Minsky's body.[41] <https://en.wikipedia.org/wiki/Marvin_Minsky#cite_note-Kurzweil-41> Kurzweil believes that Minsky was cryonically preserved by Alcor and will be revived by 2045.[41] <https://en.wikipedia.org/wiki/Marvin_Minsky#cite_note-Kurzweil-41> Minsky was a member of Alcor's Scientific Advisory Board <https://en.wikipedia.org/wiki/Advisory_Board>.[42] <https://en.wikipedia.org/wiki/Marvin_Minsky#cite_note-AlcorBoard-42> In keeping with their policy of protecting privacy, Alcor will neither confirm nor deny that Alcor has cryonically preserved Minsky.[43] <https://en.wikipedia.org/wiki/Marvin_Minsky#cite_note-43>
>>>>>> >>>>>>>> 
>>>>>> >>>>>>>> We better do a good job. 
>>>>>> >>>>>>>> 
>>>>>> >>>>>>>>> On Friday, August 5, 2016 at 4:45:42 PM UTC-3, Kevin Liu wrote:
>>>>>> >>>>>>>>> *So, I think in the next 20 years (2003), if we can get rid of all of the traditional approaches to artificial intelligence, like neural nets and genetic algorithms and rule-based systems, and just turn our sights a little bit higher to say, can we make a system that can use all those things for the right kind of problem? Some problems are good for neural nets; we know that others, neural nets are hopeless on them. Genetic algorithms are great for certain things; I suspect I know what they're bad at, and I won't tell you. (Laughter)* - Minsky, founder of CSAIL MIT
>>>>>> >>>>>>>>> 
>>>>>> >>>>>>>>> *Those programmers tried to find the single best way to represent knowledge - Only Logic protects us from paradox.* - Minsky (see attachment from his lecture)
>>>>>> >>>>>>>>> 
>>>>>> >>>>>>>>>> On Friday, August 5, 2016 at 8:12:03 AM UTC-3, Kevin Liu wrote:
>>>>>> >>>>>>>>>> Markov Logic Networks are being used for the continuous development of drugs to cure cancer at MIT's CanceRX <http://cancerx.mit.edu/>, and on DARPA's largest AI project to date, Personalized Assistant that Learns (PAL) <https://pal.sri.com/>, progenitor of Siri. One of Alchemy's largest applications to date was to learn a semantic network (a knowledge graph, as Google calls it) from the web.
>>>>>> >>>>>>>>>> 
>>>>>> >>>>>>>>>> Some material on Probabilistic Inductive Logic Programming / Probabilistic Logic Programming / Statistical Relational Learning from CSAIL <http://people.csail.mit.edu/kersting/ecmlpkdd05_pilp/pilp_ida2005_tut.pdf> (my understanding is Alchemy does PILP from entailment, proofs, and interpretation).
>>>>>> >>>>>>>>>> 
>>>>>> >>>>>>>>>> The MIT Probabilistic Computing Project (where there is Picture, an extension of Julia, for computer vision; watch the video from Vikash) <http://probcomp.csail.mit.edu/index.html>
>>>>>> >>>>>>>>>> 
>>>>>> >>>>>>>>>> Probabilistic programming could do for Bayesian ML what Theano has done for neural networks. <http://www.inference.vc/deep-learning-is-easy/> - Ferenc Huszár
>>>>>> >>>>>>>>>> 
>>>>>> >>>>>>>>>> Picture doesn't appear to be open-source, even though its paper is available.
>>>>>> >>>>>>>>>> 
>>>>>> >>>>>>>>>> I'm in the process of comparing the Picture paper and the Alchemy code, and would like to have an open-source PILP in Julia that combines the best of both.
>>>>>> >>>>>>>>>> 
>>>>>> >>>>>>>>>> On Wednesday, August 3, 2016 at 5:01:02 PM UTC-3, Christof Stocker wrote:
>>>>>> >>>>>>>>>>> This sounds like it could be a great contribution. I shall keep a curious eye on your progress.
>>>>>> >>>>>>>>>>> 
>>>>>> >>>>>>>>>>> On Wednesday, August 3, 2016 at 21:53:54 UTC+2, Kevin Liu wrote:
>>>>>> >>>>>>>>>>>> Thanks for the advice, Christof. I am only interested in people wanting to code it in Julia, from the R implementation by Domingos. The algorithm has been successfully applied in many areas, even though many other areas remain.
>>>>>> >>>>>>>>>>>> 
>>>>>> >>>>>>>>>>>> On Wed, Aug 3, 2016 at 4:45 PM, Christof Stocker <stocker....@gmail.com> wrote:
>>>>>> >>>>>>>>>>>>> Hello Kevin,
>>>>>> >>>>>>>>>>>>>
>>>>>> >>>>>>>>>>>>> Enthusiasm is a good thing and you should hold on to that. But to save yourself some headache or disappointment down the road, I advise a level head. Nothing is really as obviously solved as it may seem at first glance after listening to brilliant people explain things. A physics professor of mine once told me that one of the (he thinks) most detrimental factors to his past students' progress was overstated results/conclusions by other researchers (such as premature announcements from CERN). I am no mathematician, but as far as I can judge, the no free lunch theorem is of a purely mathematical nature and not something induced empirically. Results of that kind are not easy to get rid of. If someone (especially an expert) states that such a theorem will prove wrong, I would be inclined to believe that he does not mean it literally, but is just trying to make a point about a more or less practical implication.
>>>>>> >>>>>>>>>>>>> 
>>>>>> >>>>>>>>>>>>> On Wednesday, August 3, 2016 at 21:27:05 UTC+2, Kevin Liu wrote:
>>>>>> >>>>>>>>>>>>>> The Markov logic network represents a probability distribution over the states of a complex system (e.g. a cell) comprised of entities, where logic formulas encode the dependencies between them.
>>>>>> >>>>>>>>>>>>>> 
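>>>>>> >>>>>>>>>>>>>> As a toy sketch of what that means (my own code, not Alchemy; the Smokes/Cancer atoms are the usual textbook example): each formula carries a weight, and the unnormalized probability of a state is exp of the weighted count of satisfied groundings, so formulas act as soft constraints.
>>>>>> >>>>>>>>>>>>>>
>>>>>> >>>>>>>>>>>>>> # A state ("world") is a truth assignment to ground atoms.
>>>>>> >>>>>>>>>>>>>> world = Dict("Smokes(anna)" => true, "Smokes(bob)" => true,
>>>>>> >>>>>>>>>>>>>>              "Friends(anna,bob)" => true, "Cancer(anna)" => false)
>>>>>> >>>>>>>>>>>>>>
>>>>>> >>>>>>>>>>>>>> formulas = [   # (weight, count of true groundings in a world w)
>>>>>> >>>>>>>>>>>>>>     (1.5, w -> Int(!w["Smokes(anna)"] || w["Cancer(anna)"])),
>>>>>> >>>>>>>>>>>>>>     (1.1, w -> Int(!(w["Friends(anna,bob)"] && w["Smokes(anna)"]) ||
>>>>>> >>>>>>>>>>>>>>                    w["Smokes(bob)"])),
>>>>>> >>>>>>>>>>>>>> ]
>>>>>> >>>>>>>>>>>>>>
>>>>>> >>>>>>>>>>>>>> # Unnormalized probability of the state: exp(sum of weight * count).
>>>>>> >>>>>>>>>>>>>> score(w) = exp(sum(wt * n(w) for (wt, n) in formulas))
>>>>>> >>>>>>>>>>>>>> score(world)
>>>>>> >>>>>>>>>>>>>>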
>>>>>> >>>>>>>>>>>>>> On Wednesday, August 3, 2016 at 4:19:09 PM UTC-3, Kevin Liu wrote:
>>>>>> >>>>>>>>>>>>>>> Alchemy is like an inductive Turing machine, to be programmed to learn broadly or restrictedly.
>>>>>> >>>>>>>>>>>>>>> 
>>>>>> >>>>>>>>>>>>>>> The logic formulas (rules) through which it represents knowledge can be inconsistent, incomplete, or even incorrect; the learning and probabilistic reasoning will correct them. The key point is that Alchemy doesn't have to learn from scratch, proving Wolpert and Macready's no free lunch theorem wrong by performing well on a variety of classes of problems, not just some.
>>>>>> >>>>>>>>>>>>>>> 
>>>>>> >>>>>>>>>>>>>>> On Wednesday, August 3, 2016 at 4:01:15 PM UTC-3, Kevin Liu wrote:
>>>>>> >>>>>>>>>>>>>>>> Hello Community, 
>>>>>> >>>>>>>>>>>>>>>> 
>>>>>> >>>>>>>>>>>>>>>> I'm in the last pages of Pedro Domingos' book, The Master Algorithm, one of two books recommended by Bill Gates for learning about AI.
>>>>>> >>>>>>>>>>>>>>>> 
>>>>>> >>>>>>>>>>>>>>>> From the book, I understand all learners have to represent, evaluate, and optimize. There are many types of learners that do this. What Domingos does is generalize these three parts: (1) a Markov Logic Network to represent, (2) posterior probability to evaluate, and (3) genetic search with gradient descent to optimize. The posterior can be replaced by another accuracy measure when that is easier, just as genetic search can be replaced by hill climbing. Where there are 15 popular options for representing, evaluating, and optimizing, Domingos generalized them into three. The idea is to have one unified learner for any application.
>>>>>> >>>>>>>>>>>>>>>> 
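>>>>>> >>>>>>>>>>>>>>>> To show how I picture that decomposition in Julia (my own framing and a toy example, not code from the book): a learner is three pluggable pieces, and a generic learn loop ties them together.
>>>>>> >>>>>>>>>>>>>>>>
>>>>>> >>>>>>>>>>>>>>>> struct Learner{R,E,O}
>>>>>> >>>>>>>>>>>>>>>>     represent::R   # build a model from data (MLN, net, rule set, ...)
>>>>>> >>>>>>>>>>>>>>>>     evaluate::E    # score a model (posterior, accuracy, ...)
>>>>>> >>>>>>>>>>>>>>>>     optimize::O    # search for a better model (genetic search, ...)
>>>>>> >>>>>>>>>>>>>>>> end
>>>>>> >>>>>>>>>>>>>>>>
>>>>>> >>>>>>>>>>>>>>>> learn(l::Learner, data) =
>>>>>> >>>>>>>>>>>>>>>>     l.optimize(l.represent(data), m -> l.evaluate(m, data))
>>>>>> >>>>>>>>>>>>>>>>
>>>>>> >>>>>>>>>>>>>>>> # Toy instantiation: fit a constant by hill climbing on squared error.
>>>>>> >>>>>>>>>>>>>>>> function hillclimb(model, score; steps = 2000, step = 0.01)
>>>>>> >>>>>>>>>>>>>>>>     best = model
>>>>>> >>>>>>>>>>>>>>>>     for _ in 1:steps
>>>>>> >>>>>>>>>>>>>>>>         cand = best + step * randn()
>>>>>> >>>>>>>>>>>>>>>>         best = score(cand) > score(best) ? cand : best
>>>>>> >>>>>>>>>>>>>>>>     end
>>>>>> >>>>>>>>>>>>>>>>     return best
>>>>>> >>>>>>>>>>>>>>>> end
>>>>>> >>>>>>>>>>>>>>>>
>>>>>> >>>>>>>>>>>>>>>> l = Learner(data -> 0.0,                        # represent: start at zero
>>>>>> >>>>>>>>>>>>>>>>             (m, data) -> -sum(abs2, data .- m), # evaluate: negative SSE
>>>>>> >>>>>>>>>>>>>>>>             hillclimb)                          # optimize
>>>>>> >>>>>>>>>>>>>>>> learn(l, randn(100) .+ 3)                       # ≈ 3
>>>>>> >>>>>>>>>>>>>>>>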
>>>>>> >>>>>>>>>>>>>>>> There is code already done in R: https://alchemy.cs.washington.edu/. My question: is anybody in the community interested in coding it into Julia?
>>>>>> >>>>>>>>>>>>>>>> 
>>>>>> >>>>>>>>>>>>>>>> Thanks. Kevin 
>>>>>> >>>>>>>>>>>>>>>> 
>>>>>> >>>>>>>>>>>>>>>>> On Friday, June 3, 2016 at 3:44:09 PM UTC-3, Kevin Liu wrote:
>>>>>> >>>>>>>>>>>>>>>>> https://github.com/tbreloff/OnlineAI.jl/issues/5 
>>>>>> >>>>>>>>>>>>>>>>> 
>>>>>> >>>>>>>>>>>>>>>>> On Friday, June 3, 2016 at 11:17:28 AM UTC-3, Kevin Liu wrote:
>>>>>> >>>>>>>>>>>>>>>>>> I plan to write Julia for the rest of my life... given it remains suitable. I am still reading all of Colah's material on nets. I ran Mocha.jl a couple of weeks ago and was very happy to see it work. Thanks for jumping in and telling me about OnlineAI.jl; I will look into it once I am ready. From a quick look, perhaps I could help and learn by building very clear documentation for it. I would really like to see Julia a leap ahead of other languages, and plan to contribute heavily to it, but at the moment I am still getting introduced to CS, programming, and nets at the basic level.
>>>>>> >>>>>>>>>>>>>>>>>> 
>>>>>> >>>>>>>>>>>>>>>>>> On Friday, June 3, 2016 at 10:48:15 AM UTC-3, Tom Breloff wrote:
>>>>>> >>>>>>>>>>>>>>>>>>> Kevin: computers that program themselves is a concept which is much closer to reality than most would believe, but julia-users isn't really the best place for this speculation. If you're actually interested in writing code, I'm happy to discuss in OnlineAI.jl. I was thinking about how we might tackle code generation using a neural framework I'm working on.
>>>>>> >>>>>>>>>>>>>>>>>>> 
>>>>>> >>>>>>>>>>>>>>>>>>> On Friday, June 3, 2016, Kevin Liu <kvt...@gmail.com> wrote:
>>>>>> >>>>>>>>>>>>>>>>>>>> If Andrew Ng (who cited Gates), and Gates (who cited Domingos, who did not lecture at Google with a TensorFlow question at the end), were unsuccessful penny traders, if Julia were a language for web design, and if the tribes in the video didn't actually solve problems, then perhaps this would be a wildly off-topic, speculative discussion. But these statements couldn't be farther from the truth. In fact, had I known about this video some months ago, I would have understood better how to solve a problem I was working on.
>>>>>> >>>>>>>>>>>>>>>>>>>> 
>>>>>> >>>>>>>>>>>>>>>>>>>> For the founders of Julia: I understand your tribe is mainly CS. This master algorithm, as you are aware, would require collaboration with other tribes. Just citing the obvious.
>>>>>> >>>>>>>>>>>>>>>>>>>> 
>>>>>> >>>>>>>>>>>>>>>>>>>> On Friday, June 3, 2016 at 10:21:25 AM UTC-3, Kevin Liu wrote:
>>>>>> >>>>>>>>>>>>>>>>>>>>> There could be parts missing, as Domingos mentions, but induction, backpropagation, genetic programming, probabilistic inference, ...
