Helping me separate the process into parts and priorities would be a big 
help. 

On Tuesday, August 9, 2016 at 8:41:03 AM UTC-3, Kevin Liu wrote:
>
> Tim Holy, what if I could tap into the well of knowledge that you are, to 
> speed things up? Can you imagine if every learner had to start without 
> priors? 
>
> > On Aug 9, 2016, at 07:06, Tim Holy <tim.h...@gmail.com> wrote: 
> > 
> > I'd recommend starting by picking a very small project. For example, fix 
> > a bug or implement a small improvement in a package that you already find 
> > useful or interesting. That way you'll get some guidance while making a 
> > positive contribution; once you know more about Julia, it will be easier 
> > to see your way forward. 
> > 
> > Best, 
> > --Tim 
> > 
> >> On Monday, August 8, 2016 8:22:01 PM CDT Kevin Liu wrote: 
> >> I have no idea where to start and where to finish. Founders' help would 
> >> be wonderful. 
> >> 
> >>> On Tuesday, August 9, 2016 at 12:19:26 AM UTC-3, Kevin Liu wrote: 
> >>> After which I have to code Felix into Julia, a relational optimizer for 
> >>> statistical inference with Tuffy <http://i.stanford.edu/hazy/tuffy/> 
> >>> inside, for enterprise settings. 
> >>> 
> >>>> On Tuesday, August 9, 2016 at 12:07:32 AM UTC-3, Kevin Liu wrote: 
> >>>> Can I get tips on bringing Alchemy's optimized Tuffy 
> >>>> <http://i.stanford.edu/hazy/tuffy/> in Java to Julia while showing the 
> >>>> best of Julia? I am going for the most correct way, even if it means 
> >>>> coding Tuffy into C and Julia. 
> >>>> 
> >>>>> On Sunday, August 7, 2016 at 8:34:37 PM UTC-3, Kevin Liu wrote: 
> >>>>> I'll try to build it, compare it, and show it to you guys. I offered 
> >>>>> to do this as work. I am waiting to see if they will accept it. 
> >>>>> 
> >>>>>> On Sunday, August 7, 2016 at 6:15:50 PM UTC-3, Stefan Karpinski wrote: 
> >>>>>> Kevin, as previously requested by Isaiah, please take this to some 
> >>>>>> other forum or maybe start a blog. 
> >>>>>> 
> >>>>>>> On Sat, Aug 6, 2016 at 10:53 PM, Kevin Liu <kvt...@gmail.com> wrote: 
> >>>>>>> Symmetry-based learning, Domingos, 2014 
> >>>>>>> <https://www.microsoft.com/en-us/research/video/symmetry-based-learning/> 
> >>>>>>> 
> >>>>>>> Approach 2: Deep symmetry networks generalize convolutional neural 
> >>>>>>> networks by tying parameters and pooling over an arbitrary symmetry 
> >>>>>>> group, not just the translation group. In preliminary experiments, 
> >>>>>>> they outperformed convnets on a digit recognition task. 
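> >>>>>>> 
> >>>>>>> To make the pooling idea concrete, here is a minimal Julia sketch 
> >>>>>>> (my own toy illustration, not code from the paper): one filter's 
> >>>>>>> response is pooled over the four 90-degree rotations instead of 
> >>>>>>> over translations only. 
> >>>>>>> 
> >>>>>>>     using LinearAlgebra 
> >>>>>>> 
> >>>>>>>     # Filter response on a square patch. 
> >>>>>>>     response(patch, filt) = dot(vec(patch), vec(filt)) 
> >>>>>>> 
> >>>>>>>     # The four elements of the rotation group C4. 
> >>>>>>>     group = [identity, rotr90, rot180, rotl90] 
> >>>>>>> 
> >>>>>>>     # Symmetry pooling: max response over transformed patches. 
> >>>>>>>     sympool(patch, filt) = maximum(g -> response(g(patch), filt), group) 
> >>>>>>> 
> >>>>>>>     patch = [0 1 0; 0 1 0; 0 1 0]   # vertical stroke 
> >>>>>>>     filt  = [0 0 0; 1 1 1; 0 0 0]   # horizontal-stroke detector 
> >>>>>>> 
> >>>>>>>     @show response(patch, filt)   # weak match (1) as-is 
> >>>>>>>     @show sympool(patch, filt)    # strong match (3) after rotating 
> >>>>>>> 
> >>>>>>> As I understand it, the paper's networks do this with richer 
> >>>>>>> groups and learned filters, but the pooling principle is the same. 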
> >>>>>>> 
> >>>>>>>> On Friday, August 5, 2016 at 4:56:45 PM UTC-3, Kevin Liu wrote: 
> >>>>>>>> Minsky died of a cerebral hemorrhage at the age of 88.[40] 
> >>>>>>>> <https://en.wikipedia.org/wiki/Marvin_Minsky#cite_note-40> Ray 
> >>>>>>>> Kurzweil <https://en.wikipedia.org/wiki/Ray_Kurzweil> says he was 
> >>>>>>>> contacted by the cryonics organization Alcor Life Extension 
> >>>>>>>> Foundation 
> >>>>>>>> <https://en.wikipedia.org/wiki/Alcor_Life_Extension_Foundation> 
> >>>>>>>> seeking Minsky's body.[41] 
> >>>>>>>> <https://en.wikipedia.org/wiki/Marvin_Minsky#cite_note-Kurzweil-41> 
> >>>>>>>> Kurzweil believes that Minsky was cryonically preserved by Alcor 
> >>>>>>>> and will be revived by 2045.[41] 
> >>>>>>>> <https://en.wikipedia.org/wiki/Marvin_Minsky#cite_note-Kurzweil-41> 
> >>>>>>>> Minsky was a member of Alcor's Scientific Advisory Board 
> >>>>>>>> <https://en.wikipedia.org/wiki/Advisory_Board>.[42] 
> >>>>>>>> <https://en.wikipedia.org/wiki/Marvin_Minsky#cite_note-AlcorBoard-42> 
> >>>>>>>> In keeping with their policy of protecting privacy, Alcor will 
> >>>>>>>> neither confirm nor deny that Alcor has cryonically preserved 
> >>>>>>>> Minsky.[43] 
> >>>>>>>> <https://en.wikipedia.org/wiki/Marvin_Minsky#cite_note-43> 
> >>>>>>>> 
> >>>>>>>> We better do a good job. 
> >>>>>>>> 
> >>>>>>>>> On Friday, August 5, 2016 at 4:45:42 PM UTC-3, Kevin Liu wrote: 
> >>>>>>>>> *So, I think in the next 20 years (2003), if we can get rid of 
> >>>>>>>>> all of the traditional approaches to artificial intelligence, 
> >>>>>>>>> like neural nets and genetic algorithms and rule-based systems, 
> >>>>>>>>> and just turn our sights a little bit higher to say, can we make 
> >>>>>>>>> a system that can use all those things for the right kind of 
> >>>>>>>>> problem? Some problems are good for neural nets; we know that 
> >>>>>>>>> others, neural nets are hopeless on them. Genetic algorithms are 
> >>>>>>>>> great for certain things; I suspect I know what they're bad at, 
> >>>>>>>>> and I won't tell you. (Laughter)* - Minsky, founder of CSAIL MIT 
> >>>>>>>>> 
> >>>>>>>>> *Those programmers tried to find the single best way to 
> >>>>>>>>> represent knowledge - Only Logic protects us from paradox.* - 
> >>>>>>>>> Minsky (see attachment from his lecture) 
> >>>>>>>>> 
> >>>>>>>>>> On Friday, August 5, 2016 at 8:12:03 AM UTC-3, Kevin Liu wrote: 
> >>>>>>>>>> Markov Logic Network is being used for the continuous 
> >>>>>>>>>> development of drugs to cure cancer at MIT's CanceRX 
> >>>>>>>>>> <http://cancerx.mit.edu/>, on DARPA's largest AI project to 
> >>>>>>>>>> date, Personalized Assistant that Learns (PAL) 
> >>>>>>>>>> <https://pal.sri.com/>, progenitor of Siri. One of Alchemy's 
> >>>>>>>>>> largest applications to date was to learn a semantic network 
> >>>>>>>>>> (knowledge graph, as Google calls it) from the web. 
> >>>>>>>>>> 
> >>>>>>>>>> Some on Probabilistic Inductive Logic Programming / 
> >>>>>>>>>> Probabilistic Logic Programming / Statistical Relational 
> >>>>>>>>>> Learning from CSAIL 
> >>>>>>>>>> <http://people.csail.mit.edu/kersting/ecmlpkdd05_pilp/pilp_ida2005_tut.pdf> 
> >>>>>>>>>> (my understanding is Alchemy does PILP from entailment, proofs, 
> >>>>>>>>>> and interpretation) 
> >>>>>>>>>> 
> >>>>>>>>>> The MIT Probabilistic Computing Project 
> >>>>>>>>>> <http://probcomp.csail.mit.edu/index.html> (where there is 
> >>>>>>>>>> Picture, an extension of Julia, for computer vision; watch the 
> >>>>>>>>>> video from Vikash) 
> >>>>>>>>>> 
> >>>>>>>>>> Probabilistic programming could do for Bayesian ML what Theano 
> >>>>>>>>>> has done for neural networks. 
> >>>>>>>>>> <http://www.inference.vc/deep-learning-is-easy/> - Ferenc Huszár 
> >>>>>>>>>> 
> >>>>>>>>>> Picture doesn't appear to be open-source, even though its paper 
> >>>>>>>>>> is available. 
> >>>>>>>>>> 
> >>>>>>>>>> I'm in the process of comparing the Picture paper and the 
> >>>>>>>>>> Alchemy code, and would like to have an open-source PILP in 
> >>>>>>>>>> Julia that combines the best of both. 
> >>>>>>>>>> 
> >>>>>>>>>> On Wednesday, August 3, 2016 at 5:01:02 PM UTC-3, Christof 
> >>>>>>>>>> Stocker wrote: 
> >>>>>>>>>>> This sounds like it could be a great contribution. I shall 
> >>>>>>>>>>> keep a curious eye on your progress. 
> >>>>>>>>>>> 
> >>>>>>>>>>> On Wednesday, August 3, 2016 at 9:53:54 PM UTC+2, Kevin Liu wrote: 
> >>>>>>>>>>>> Thanks for the advice, Christof. I am only interested in 
> >>>>>>>>>>>> people wanting to code it in Julia, from the R code by 
> >>>>>>>>>>>> Domingos. The algorithm has been successfully applied in many 
> >>>>>>>>>>>> areas, even though many other areas remain. 
> >>>>>>>>>>>> 
> >>>>>>>>>>>> On Wed, Aug 3, 2016 at 4:45 PM, Christof Stocker 
> >>>>>>>>>>>> <stocker....@gmail.com> wrote: 
> >>>>>>>>>>>>> Hello Kevin, 
> >>>>>>>>>>>>> 
> >>>>>>>>>>>>> Enthusiasm is a good thing and you should hold on to that. 
> >>>>>>>>>>>>> But to save yourself some headache or disappointment down 
> >>>>>>>>>>>>> the road, I advise a level head. Nothing is really as 
> >>>>>>>>>>>>> bluntly, obviously solved as it may seem at first glance 
> >>>>>>>>>>>>> after listening to brilliant people explain things. A 
> >>>>>>>>>>>>> physics professor of mine once told me that one of the (he 
> >>>>>>>>>>>>> thinks) most damaging factors to his past students' progress 
> >>>>>>>>>>>>> was overstated results/conclusions by other researchers 
> >>>>>>>>>>>>> (such as premature announcements from CERN). I am no 
> >>>>>>>>>>>>> mathematician, but as far as I can judge, the no free lunch 
> >>>>>>>>>>>>> theorem is of a purely mathematical nature and not something 
> >>>>>>>>>>>>> induced empirically. Such results are not easy to get rid 
> >>>>>>>>>>>>> of. If someone (especially an expert) states that such a 
> >>>>>>>>>>>>> theorem will prove wrong, I would be inclined to believe 
> >>>>>>>>>>>>> that he is not speaking literally, but is instead just 
> >>>>>>>>>>>>> trying to make a point about a more or less practical 
> >>>>>>>>>>>>> implication. 
> >>>>>>>>>>>>> 
> >>>>>>>>>>>>> On Wednesday, August 3, 2016 at 9:27:05 PM UTC+2, Kevin Liu wrote: 
> >>>>>>>>>>>>>> The Markov logic network represents a probability 
> >>>>>>>>>>>>>> distribution over the states of a complex system (e.g. a 
> >>>>>>>>>>>>>> cell) comprised of entities, where logic formulas encode 
> >>>>>>>>>>>>>> the dependencies between them. 
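> >>>>>>>>>>>>>> 
> >>>>>>>>>>>>>> A minimal Julia sketch of that idea (my own toy types, not 
> >>>>>>>>>>>>>> Alchemy's API): a world's unnormalized probability is 
> >>>>>>>>>>>>>> exp(sum over formulas of weight times true groundings). 
> >>>>>>>>>>>>>> 
> >>>>>>>>>>>>>>     # Weighted formula: a weight plus a function counting 
> >>>>>>>>>>>>>>     # its true groundings in a given world. 
> >>>>>>>>>>>>>>     struct WeightedFormula 
> >>>>>>>>>>>>>>         weight::Float64 
> >>>>>>>>>>>>>>         ntrue::Function 
> >>>>>>>>>>>>>>     end 
> >>>>>>>>>>>>>> 
> >>>>>>>>>>>>>>     # Unnormalized probability of a world. 
> >>>>>>>>>>>>>>     score(w, fs) = exp(sum(f.weight * f.ntrue(w) for f in fs)) 
> >>>>>>>>>>>>>> 
> >>>>>>>>>>>>>>     # Tiny world: who smokes, who is friends with whom. 
> >>>>>>>>>>>>>>     world = (smokes = Dict("a" => true, "b" => true), 
> >>>>>>>>>>>>>>              friends = [("a", "b")]) 
> >>>>>>>>>>>>>> 
> >>>>>>>>>>>>>>     # "Friends have similar smoking habits", weight 1.5. 
> >>>>>>>>>>>>>>     alike = WeightedFormula(1.5, 
> >>>>>>>>>>>>>>         w -> count(p -> w.smokes[p[1]] == w.smokes[p[2]], 
> >>>>>>>>>>>>>>                    w.friends)) 
> >>>>>>>>>>>>>> 
> >>>>>>>>>>>>>>     @show score(world, [alike])  # exp(1.5); divide by Z 
> >>>>>>>>>>>>>>                                  # to normalize 
> >>>>>>>>>>>>>> 
> >>>>>>>>>>>>>> Learning adjusts the weights; the hard part that systems 
> >>>>>>>>>>>>>> like Alchemy and Tuffy optimize is inference over worlds. 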
> >>>>>>>>>>>>>> 
> >>>>>>>>>>>>>> On Wednesday, August 3, 2016 at 4:19:09 PM UTC-3, Kevin Liu 
> >>>>>>>>>>>>>> wrote: 
> >>>>>>>>>>>>>>> Alchemy is like an inductive Turing machine, to be 
> >>>>>>>>>>>>>>> programmed to learn broadly or restrictedly. 
> >>>>>>>>>>>>>>> 
> >>>>>>>>>>>>>>> The logic formulas from rules through which it represents 
> >>>>>>>>>>>>>>> can be inconsistent, incomplete, or even incorrect; the 
> >>>>>>>>>>>>>>> learning and probabilistic reasoning will correct them. 
> >>>>>>>>>>>>>>> The key point is that Alchemy doesn't have to learn from 
> >>>>>>>>>>>>>>> scratch, proving Wolpert and Macready's no free lunch 
> >>>>>>>>>>>>>>> theorem wrong by performing well on a variety of classes 
> >>>>>>>>>>>>>>> of problems, not just some. 
> >>>>>>>>>>>>>>> 
> >>>>>>>>>>>>>>> On Wednesday, August 3, 2016 at 4:01:15 PM UTC-3, Kevin 
> >>>>>>>>>>>>>>> Liu wrote: 
> >>>>>>>>>>>>>>>> Hello Community, 
> >>>>>>>>>>>>>>>> 
> >>>>>>>>>>>>>>>> I'm in the last pages of Pedro Domingos' book, The Master 
> >>>>>>>>>>>>>>>> Algorithm, one of two recommended by Bill Gates to learn 
> >>>>>>>>>>>>>>>> about AI. 
> >>>>>>>>>>>>>>>> 
> >>>>>>>>>>>>>>>> From the book, I understand all learners have to 
> >>>>>>>>>>>>>>>> represent, evaluate, and optimize. There are many types 
> >>>>>>>>>>>>>>>> of learners that do this. What Domingos does is 
> >>>>>>>>>>>>>>>> generalize these three parts, (1) using a Markov Logic 
> >>>>>>>>>>>>>>>> Network to represent, (2) posterior probability to 
> >>>>>>>>>>>>>>>> evaluate, and (3) genetic search with gradient descent to 
> >>>>>>>>>>>>>>>> optimize. The posterior can be replaced with another 
> >>>>>>>>>>>>>>>> accuracy measure when that is easier, just as genetic 
> >>>>>>>>>>>>>>>> search can be replaced by hill climbing. Where there are 
> >>>>>>>>>>>>>>>> 15 popular options for representing, evaluating, and 
> >>>>>>>>>>>>>>>> optimizing, Domingos generalized them into three options. 
> >>>>>>>>>>>>>>>> The idea is to have one unified learner for any 
> >>>>>>>>>>>>>>>> application, as sketched below. 
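> >>>>>>>>>>>>>>>> 
> >>>>>>>>>>>>>>>> A minimal Julia sketch of that three-part decomposition 
> >>>>>>>>>>>>>>>> (the names are my own placeholders, not Domingos' code): 
> >>>>>>>>>>>>>>>> 
> >>>>>>>>>>>>>>>>     # A learner is three pluggable parts. 
> >>>>>>>>>>>>>>>>     struct Learner{R,E,O} 
> >>>>>>>>>>>>>>>>         represent::R  # initial hypothesis, e.g. an MLN 
> >>>>>>>>>>>>>>>>         evaluate::E   # score, e.g. posterior probability 
> >>>>>>>>>>>>>>>>         optimize::O   # search, e.g. hill climbing 
> >>>>>>>>>>>>>>>>     end 
> >>>>>>>>>>>>>>>> 
> >>>>>>>>>>>>>>>>     function fit(l::Learner, data; steps = 100) 
> >>>>>>>>>>>>>>>>         model = l.represent() 
> >>>>>>>>>>>>>>>>         for _ in 1:steps 
> >>>>>>>>>>>>>>>>             model = l.optimize(model, 
> >>>>>>>>>>>>>>>>                                m -> l.evaluate(m, data)) 
> >>>>>>>>>>>>>>>>         end 
> >>>>>>>>>>>>>>>>         return model 
> >>>>>>>>>>>>>>>>     end 
> >>>>>>>>>>>>>>>> 
> >>>>>>>>>>>>>>>>     # Toy instance: hill climbing on a 1-D "model". 
> >>>>>>>>>>>>>>>>     climb(m, score) = argmax(score, (m - 0.1, m, m + 0.1)) 
> >>>>>>>>>>>>>>>>     toy = Learner(() -> 0.0, 
> >>>>>>>>>>>>>>>>                   (m, xs) -> -sum((m - x)^2 for x in xs), 
> >>>>>>>>>>>>>>>>                   climb) 
> >>>>>>>>>>>>>>>>     @show fit(toy, [1.8, 2.0, 2.2])  # settles near 2.0 
> >>>>>>>>>>>>>>>> 
> >>>>>>>>>>>>>>>> Swapping in a different evaluate or optimize function 
> >>>>>>>>>>>>>>>> changes the learner without touching the rest. 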
> >>>>>>>>>>>>>>>> 
> >>>>>>>>>>>>>>>> There is code already done in R: 
> >>>>>>>>>>>>>>>> https://alchemy.cs.washington.edu/. My question: is 
> >>>>>>>>>>>>>>>> anybody in the community invested in coding it into Julia? 
> >>>>>>>>>>>>>>>> 
> >>>>>>>>>>>>>>>> Thanks. Kevin 
> >>>>>>>>>>>>>>>> 
> >>>>>>>>>>>>>>>>> On Friday, June 3, 2016 at 3:44:09 PM UTC-3, Kevin Liu wrote: 
> >>>>>>>>>>>>>>>>> https://github.com/tbreloff/OnlineAI.jl/issues/5 
> >>>>>>>>>>>>>>>>> 
> >>>>>>>>>>>>>>>>> On Friday, June 3, 2016 at 11:17:28 AM UTC-3, Kevin Liu 
> >>>>>>>>>>>>>>>>> wrote: 
> >>>>>>>>>>>>>>>>>> I plan to write Julia for the rest of my life... given 
> >>>>>>>>>>>>>>>>>> it remains suitable. I am still reading all of Colah's 
> >>>>>>>>>>>>>>>>>> material on nets. I ran Mocha.jl a couple of weeks ago 
> >>>>>>>>>>>>>>>>>> and was very happy to see it work. Thanks for jumping 
> >>>>>>>>>>>>>>>>>> in and telling me about OnlineAI.jl; I will look into 
> >>>>>>>>>>>>>>>>>> it once I am ready. From a quick look, perhaps I could 
> >>>>>>>>>>>>>>>>>> help and learn by building very clear documentation 
> >>>>>>>>>>>>>>>>>> for it. I would really like to see Julia a leap ahead 
> >>>>>>>>>>>>>>>>>> of other languages, and plan to contribute heavily to 
> >>>>>>>>>>>>>>>>>> it, but at the moment I am still getting introduced to 
> >>>>>>>>>>>>>>>>>> CS, programming, and nets at the basic level. 
> >>>>>>>>>>>>>>>>>> 
> >>>>>>>>>>>>>>>>>> On Friday, June 3, 2016 at 10:48:15 AM UTC-3, Tom 
> >>>>>>>>>>>>>>>>>> Breloff wrote: 
> >>>>>>>>>>>>>>>>>>> Kevin: computers that program themselves is a concept 
> >>>>>>>>>>>>>>>>>>> which is much closer to reality than most would 
> >>>>>>>>>>>>>>>>>>> believe, but julia-users isn't really the best place 
> >>>>>>>>>>>>>>>>>>> for this speculation. If you're actually interested 
> >>>>>>>>>>>>>>>>>>> in writing code, I'm happy to discuss in OnlineAI.jl. 
> >>>>>>>>>>>>>>>>>>> I was thinking about how we might tackle code 
> >>>>>>>>>>>>>>>>>>> generation using a neural framework I'm working on. 
> >>>>>>>>>>>>>>>>>>> 
> >>>>>>>>>>>>>>>>>>> On Friday, June 3, 2016, Kevin Liu 
> >>>>>>>>>>>>>>>>>>> <kvt...@gmail.com> wrote: 
> >>>>>>>>>>>>>>>>>>>> If Andrew Ng who cited Gates, and Gates who cited 
> >>>>>>>>>>>>>>>>>>>> Domingos (who did not lecture at Google with a 
> >>>>>>>>>>>>>>>>>>>> TensorFlow question in the end), were unsuccessful 
> >>>>>>>>>>>>>>>>>>>> penny traders, Julia was a language for web design, 
> >>>>>>>>>>>>>>>>>>>> and the tribes in the video didn't actually solve 
> >>>>>>>>>>>>>>>>>>>> problems, perhaps this would be a wildly off-topic, 
> >>>>>>>>>>>>>>>>>>>> speculative discussion. But these statements 
> >>>>>>>>>>>>>>>>>>>> couldn't be farther from the truth. In fact, if I 
> >>>>>>>>>>>>>>>>>>>> had known about this video some months ago, I 
> >>>>>>>>>>>>>>>>>>>> would've better understood how to solve a problem I 
> >>>>>>>>>>>>>>>>>>>> was working on. 
> >>>>>>>>>>>>>>>>>>>> 
> >>>>>>>>>>>>>>>>>>>> For the founders of Julia: I understand your tribe 
> >>>>>>>>>>>>>>>>>>>> is mainly CS. This master algorithm, as you are 
> >>>>>>>>>>>>>>>>>>>> aware, would require collaboration with other 
> >>>>>>>>>>>>>>>>>>>> tribes. Just citing the obvious. 
> >>>>>>>>>>>>>>>>>>>> 
> >>>>>>>>>>>>>>>>>>>> On Friday, June 3, 2016 at 10:21:25 AM UTC-3, Kevin 
> >>>>>>>>>>>>>>>>>>>> Liu wrote: 
> >>>>>>>>>>>>>>>>>>>>> There could be parts missing, as Domingos mentions, 
> >>>>>>>>>>>>>>>>>>>>> but induction, backpropagation, genetic 
> >>>>>>>>>>>>>>>>>>>>> programming, probabilistic inference, and SVMs 
> >>>>>>>>>>>>>>>>>>>>> working together-- what's speculative about the 
> >>>>>>>>>>>>>>>>>>>>> improved versions of these? 
> >>>>>>>>>>>>>>>>>>>>> 
> >>>>>>>>>>>>>>>>>>>>> Julia was made for AI. Isn't it time for a 
> >>>>>>>>>>>>>>>>>>>>> consolidated view on how to reach it? 
> >>>>>>>>>>>>>>>>>>>>> 
> >>>>>>>>>>>>>>>>>>>>> On Thursday, June 2, 2016 at 11:20:35 PM UTC-3, 
> >>>>>>>>>>>>>>>>>>>>> Isaiah wrote: 
> >>>>>>>>>>>>>>>>>>>>>> This is not a forum for wildly off-topic, 
> >>>>>>>>>>>>>>>>>>>>>> speculative discussion. 
> >>>>>>>>>>>>>>>>>>>>>> 
> >>>>>>>>>>>>>>>>>>>>>> Take this to Reddit, Hacker News, etc. 
> >>>>>>>>>>>>>>>>>>>>>> 
> >>>>>>>>>>>>>>>>>>>>>> 
> >>>>>>>>>>>>>>>>>>>>>> On Thu, Jun 2, 2016 at 10:01 PM, Kevin Liu 
> >>>>>>>>>>>>>>>>>>>>>> <kvt...@gmail.com> wrote: 
> >>>>>>>>>>>>>>>>>>>>>>> I am wondering how Julia fits in with the unified 
> >>>>>>>>>>>>>>>>>>>>>>> tribes: 
> >>>>>>>>>>>>>>>>>>>>>>> 
> >>>>>>>>>>>>>>>>>>>>>>> mashable.com/2016/06/01/bill-gates-ai-code-conference/#8VmBFjIiYOqJ 
> >>>>>>>>>>>>>>>>>>>>>>> 
> >>>>>>>>>>>>>>>>>>>>>>> https://www.youtube.com/watch?v=B8J4uefCQMc 
