good talk; i should have watched it before now.
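the "statically compiled at run time" point from the talk can be seen directly in the REPL: julia compiles a fresh, fully typed specialization of a generic method for each concrete argument-type tuple, before the call runs. a minimal sketch (all tools here are standard julia — `@code_typed` is real — but the printed IR varies by version, so no exact output is claimed):

```julia
# One generic method; Julia specializes it separately per concrete
# argument types, compiling each specialization before it first runs.
double(x) = x + x

double(1)      # forces compilation of double(::Int64)
double(1.5)    # forces a second, independent specialization for Float64

# @code_typed shows the statically inferred, specialized IR for each call.
# The Int call lowers to integer arithmetic, the Float64 call to float
# arithmetic -- two different compiled bodies from one source method.
@code_typed double(1)
@code_typed double(1.5)
```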
On Wednesday, 5 March 2014 20:31:44 UTC-3, John Myles White wrote:
>
> Stefan’s discussion of “static compilation” starts around minute 45 or so.
>
> — John
>
> On Mar 5, 2014, at 3:30 PM, andrew cooke <[email protected]> wrote:
>
>> huh. today i learned. thanks. talk running now...
>>
>> On Wednesday, 5 March 2014 20:25:06 UTC-3, John Myles White wrote:
>>>
>>> One could argue that Julia is “statically compiled at run time”. See this
>>> talk by Stefan: http://vimeo.com/84661077 for a discussion of that
>>> viewpoint, which I like.
>>>
>>> — John
>>>
>>> On Mar 5, 2014, at 3:20 PM, andrew cooke <[email protected]> wrote:
>>>
>>>> then how is it a jit? i just checked wikipedia and the definition there
>>>> is interpreter + compiler. which would give you stats from the
>>>> interpreter?
>>>>
>>>> (not trying to be confrontational, just not understanding...!)
>>>>
>>>> thanks,
>>>> andrew
>>>>
>>>> On Wednesday, 5 March 2014 20:07:12 UTC-3, Tim Holy wrote:
>>>>>
>>>>> Another factor is the following: Julia can't do the inference by
>>>>> watching what happens, because it has to compile the code before it
>>>>> runs. So it relies on static inference, or generates generic code when
>>>>> that fails.
>>>>>
>>>>> --Tim
>>>>>
>>>>> On Wednesday, March 05, 2014 02:51:21 PM andrew cooke wrote:
>>>>>> oh, i think i get it.
>>>>>>
>>>>>> you're not solving, you're just propagating.
>>>>>>
>>>>>> so you need the specified types to infer the return. and that's local
>>>>>> to the function so scales.
>>>>>>
>>>>>> ignore me :o)
>>>>>>
>>>>>> cheers,
>>>>>> andrew
>>>>>>
>>>>>> On Wednesday, 5 March 2014 19:40:30 UTC-3, andrew cooke wrote:
>>>>>>> another question here made me realise i don't understand how return
>>>>>>> types are handled in julia.
>>>>>>>
>>>>>>> after all, return types are not specified in functions (are they?).
>>>>>>> so how does the system know that get() for Dict{A,B} returns type B?
>>>>>>>
>>>>>>> i guess there has to be whole program type inference on startup? that
>>>>>>> pulls in and analyses base? or is this info cached somewhere?
>>>>>>>
>>>>>>> because if it was just the JIT seeing what happened in practice as
>>>>>>> code ran, then you wouldn't have to worry about efficiency in the
>>>>>>> memoize case (because the cache would always return the same type in
>>>>>>> practice).
>>>>>>>
>>>>>>> is this described somewhere? i thought i had read most of the docs by
>>>>>>> now (sorry if i've missed something). or am i confused (again)?
>>>>>>>
>>>>>>> thanks,
>>>>>>> andrew
>>>>>>>
>>>>>>> [if that's not clear, i think my problem is i don't understand how
>>>>>>> much the compiler relies on type inference, and how much on
>>>>>>> statistics of types of instances when running, and when inference is
>>>>>>> actually done]
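for anyone landing on this thread later: the original get()/Dict{A,B} question can be answered empirically. inference is local propagation from the argument types at the call site, not run-time observation, and you can ask the compiler directly what it concludes. a small sketch (`Base.return_types` and `@code_warntype` are real, standard tools; the exact display is version-dependent):

```julia
# Julia infers return types by propagating concrete argument types
# through the called method -- no values are observed at run time.
d = Dict{String,Int}("a" => 1)

# Ask inference what get() returns for these argument types.
# For Dict{String,Int} with an Int default, it propagates to Int.
Base.return_types(get, Tuple{Dict{String,Int}, String, Int})

# @code_warntype shows the same propagation inside a function body:
# the call to get() is annotated with its inferred return type.
f(d::Dict{String,Int}, k::String) = get(d, k, 0)
@code_warntype f(d, "a")
```

note this also explains why there is no "whole program inference on startup": each method is analysed lazily, per concrete signature, when a call first needs it, and the results are cached per specialization.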
