[julia-users] Re: deep learning for regression?

2016-02-02 Thread Phil Tomson
I'd be interested in seeing your sin-fitting network as well.

Phil

On Monday, February 1, 2016 at 9:34:16 AM UTC-8, michae...@gmail.com wrote:
>
> Thanks everyone for the comments and pointers to code. I have coded up a 
> simple example, fitting y=sin(x) + error, and the results are very good, enough 
> so that I'll certainly be investigating further with larger scale problems. 
> I may try to use one of the existing packages, but it may be convenient to 
> use the MPI package to distribute the training data, as I have a 32 core 
> machine. If I come up with some interesting code, I'll post it here.
>
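
A minimal sketch of that kind of experiment (one hidden layer, hand-written 
gradients, plain SGD; the sizes, learning rate, and iteration count below are 
guesses for illustration, not the poster's actual code):

H  = 20                     # hidden units
w1 = 0.5 * randn(H); b1 = zeros(H)
w2 = 0.5 * randn(H); b2 = 0.0
lr = 0.01

for iter in 1:200000
    x = 2pi * rand()                    # sample an input
    t = sin(x) + 0.05 * randn()         # noisy target
    z = w1 * x + b1; h = tanh(z)        # hidden layer
    y = dot(w2, h) + b2                 # network output
    e = y - t                           # d(loss)/dy for 0.5*(y - t)^2
    dz = (e * w2) .* (1 - h.^2)         # backprop through tanh
    w2 -= lr * e * h;  b2 -= lr * e
    w1 -= lr * dz * x; b1 -= lr * dz
end

println(sin(1.0), "  ", dot(w2, tanh(w1 * 1.0 + b1)) + b2)  # fit vs. truth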


Re: [julia-users] Anyone working on Julia wrappers for TensorFlow?

2015-12-14 Thread Phil Tomson


On Monday, December 14, 2015 at 11:46:33 AM UTC-8, Stefan Karpinski wrote:
>
> Once we switch to LLVM 3.7.1 
> <https://github.com/JuliaLang/julia/issues/9336>, it will no longer 
> require a special build of Julia to use Cxx.jl. That is one of the reasons 
> why that issue is a high priority – a lot of work has gone into it lately 
> and it should happen in a few weeks. Once LLVM 3.7.1 proves stable on 
> master, we could even potentially backport the change to release-0.4 so 
> that you can use Cxx with 0.4 out of the box.
>

Thanks for the status on this.
 

>
> Regarding using TensorFlow from languages other than Python, Keno and Tony 
> have looked into this in some depth and here's what they found. At first 
> glance TensorFlow's Python interface follows the design pattern that lots 
> of high quality interfaces to C++ do, which is the following:
>
> user Python code ⟺
> handwritten python API ⟺
> SWIG Python API ⟺
> C++ code
>
>
> However, if you look a bit closer, it's actually the following:
>
> user Python code ⟺
> handwritten python API ⟺
> ProtoBuf generated serialization format ⟺
> SWIG generated Python API ⟺
> Hand Written C++ Interface ⟺
> (ProtoBuf if parsing ⟺)
> C++ code
>
>
> The interaction is considerably more involved than might be naively 
> imagined. This design resembles LLVM’s approach of having a well-defined, 
> serializable intermediate format, to serve as a stable interface between 
> high-level code and low-level computational libraries and regression tests. 
> However, the downside is that each high-level language has to implement its 
> own version of this. In Julia we have both PyCall and Cxx, so we can easily 
> interact with and reuse either the Python one or the C++ one.
>
> So the suggestion on TensorFlow’s website that having SWIG support alone 
> will immediately make it possible to use TensorFlow from various languages 
> appears to be somewhat misleading: SWIG is only used to wrap ~20 functions 
> which have a simple C interface. If you look at the TensorFlow source, 
> you'll see that there's about a 2:1 ratio of C++ to Python code, so in its 
> current incarnation, using TensorFlow from any language *without* Python 
> seems unlikely. Given that you can't use TensorFlow without Python, Julia 
> is in a good position since it's very good at talking to both Python and 
> C++.
>
>
Interesting.  
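
For what it's worth, a rough sketch of the PyCall route (assuming PyCall.jl 
plus a working Python TensorFlow install; the tf.* names are TensorFlow's own 
Python API):

using PyCall
@pyimport tensorflow as tf      # wraps the existing Python module

a = tf.constant(3.0)
b = tf.constant(4.0)
c = tf.add(a, b)                # builds a node in TensorFlow's graph

sess = tf.Session()
println(sess[:run](c))          # runs the graph; prints 7.0
sess[:close]()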

>
> On Mon, Dec 14, 2015 at 2:17 PM, Phil Tomson wrote:
>
>> TensorFlow is written in C++ and they use SWIG to generate wrappers for 
>> Python. There is no Julia target for SWIG, but looking at some discussions 
>> here seems to indicate that SWIG for Julia is kind of pointless given that 
>> things like Cxx.jl exist.
>>
>> However, Cxx.jl requires special dev versions of both Julia and LLVM so 
>> it's not an "out of the box" experience just yet. 
>>
>> There was a thread on TensorFlow here a few weeks back, I'm wondering if 
>> anyone was working on Julia/TensorFlow interoperability?
>>
>> Are there alternatives to Cxx.jl for this kind of thing? Will Cxx.jl ever 
>> be at the point where we can use it with a standard release of Julia?
>>
>> Phil
>>
>
>

[julia-users] Anyone working on Julia wrappers for TensorFlow?

2015-12-14 Thread Phil Tomson
TensorFlow is written in C++ and they use SWIG to generate wrappers for 
Python. There is no Julia target for SWIG, but looking at some discussions 
here seems to indicate that SWIG for Julia is kind of pointless given that 
things like Cxx.jl exist.

However, Cxx.jl requires special dev versions of both Julia and LLVM so 
it's not an "out of the box" experience just yet. 

There was a thread on TensorFlow here a few weeks back, I'm wondering if 
anyone was working on Julia/TensorFlow interoperability?

Are there alternatives to Cxx.jl for this kind of thing? Will Cxx.jl ever 
be at the point where we can use it with a standard release of Julia?

Phil


[julia-users] Re: Ehsan Totoni on ParallelAccelerator.jl in San Francisco on thurs Dec 17

2015-12-14 Thread Phil Tomson
Will this talk be recorded?

On Sunday, December 13, 2015 at 10:58:31 AM UTC-8, Christian Peel wrote:
>
> This thursday Dec 17  Ehsan Totoni of Intel Labs will speak on 
> ParallelAccelerator.jl [1] in San Francisco to the SF Julia Users group 
> [2].  ParallelAccelerator is a compiler that performs aggressive analysis 
> and optimization on top of the Julia compiler. It can automatically 
> eliminate overheads such as array bounds checking safely, and parallelize 
> and vectorize many data-parallel operations. Ehsan will describe how it 
> works, give examples, list current limitations, and discuss future 
> directions of the package.  
>
> we'd love to have you join us
>
> [1] https://github.com/IntelLabs/ParallelAccelerator.jl
> [2] http://www.meetup.com/Bay-Area-Julia-Users/events/226531171/
>
> -- 
> chris...@ieee.org 
>
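
For anyone who hasn't seen the package, a minimal sketch of how it is invoked 
(assuming ParallelAccelerator is installed; the function body is just an 
arbitrary data-parallel example, not one from the talk):

using ParallelAccelerator

# @acc asks ParallelAccelerator to analyze the function and parallelize/fuse
# its data-parallel operations (with bounds checks removed where it is safe).
@acc function scaled_norm(x::Vector{Float64})
    y = 2.0 .* x .+ 1.0
    return sum(y .* y)
end

println(scaled_norm(rand(1000000)))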


Re: [julia-users] Re: Google releases TensorFlow as open source

2015-11-16 Thread Phil Tomson


On Monday, November 16, 2015 at 11:46:14 AM UTC-8, George Coles wrote:
>
> Does MXNet provide features that are analogous with Theano? I would rather 
> do machine learning in one language, than a mix of python + c + a DSL like 
> Theano. 


MXNet.jl is a wrapper around libmxnet, so there is C++ code in the 
background. 

MXNet.jl would be analogous to Theano in some ways. It also seems similar 
to TensorFlow. 

> It is always cool to be able to quickly wrap native libraries, but Julia 
> would really gain momentum if it could obviate Theano et al (as cool as the 
> python ecosystem is, it is all quite ungainly.)
>

You should give some of the MXNet.jl examples a try.
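
For example, a sketch along the lines of the MLP example in the MXNet.jl 
README (assumes the package and libmxnet are installed):

using MXNet

# symbolic network definition: each node is added to MXNet's graph, which the
# C++ backend then optimizes and schedules on CPU or GPU
mlp = @mx.chain mx.Variable(:data)                 =>
      mx.FullyConnected(name=:fc1, num_hidden=128) =>
      mx.Activation(name=:relu1, act_type=:relu)   =>
      mx.FullyConnected(name=:fc2, num_hidden=64)  =>
      mx.Activation(name=:relu2, act_type=:relu)   =>
      mx.FullyConnected(name=:fc3, num_hidden=10)  =>
      mx.SoftmaxOutput(name=:softmax)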

Phil 


[julia-users] Re: Google releases TensorFlow as open source

2015-11-11 Thread Phil Tomson


On Tuesday, November 10, 2015 at 9:57:21 PM UTC-8, Valentin Churavy wrote:
>
> It fits in the same niche that Mocha.jl and MXNet.jl are filling right 
> now. MXNet is a ML library that shares many of the same design ideas of 
> TensorFlow and has great Julia support https://github.com/dmlc/MXNet.jl
>

MXNet and TensorFlow look like very similar frameworks. Both use symbolic 
computation which means they both create a DAG that can be manipulated and 
optimized for the underlying hardware (CPU or GPU). It would be interesting 
to see some comparisons between the two. I've read on another forum that 
MXNet is probably faster than TensorFlow at this point, but nobody has done 
any benchmarks yet (I'd try, but I don't have a GPU available at this 
point).

This DAG optimization step is pretty much a compiler in itself; I wonder 
how many similarities there are to ParallelAccelerator.jl? One could 
imagine borrowing some of the ideas from it and taking advantage of Julia's 
macro features (which Python and C++ lack) to create a native Julia ML 
toolkit that could also have very high performance... The problem is, there are 
so many ML toolkits coming out now that the space is already getting pretty 
fragmented.
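
A toy illustration of that last point (not from any package): a Julia macro 
receives the expression tree itself, so it can hand back the un-evaluated DAG 
for later optimization or scheduling, which is roughly what TensorFlow and 
MXNet build on the Python/C++ side:

macro graph(ex)
    Expr(:quote, ex)          # return the expression tree instead of evaluating it
end

dag = @graph tanh(W*x .+ b)   # W, x, b need not exist; nothing is evaluated
dump(dag)                     # prints the nested Expr nodes of the DAG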


>
> On Wednesday, 11 November 2015 01:04:00 UTC+9, Randy Zwitch wrote:
>>
>> For me, the bigger question is how does TensorFlow fit in/fill in gaps in 
>> currently available Julia libraries? I'm not saying that someone who is 
>> sufficiently interested shouldn't wrap the library, but it'd be great to 
>> identify what major gaps remain in ML for Julia and figure out if 
>> TensorFlow is the right way to proceed. 
>>
>> We're certainly nowhere near the R duplication problem yet, but certainly 
>> we're already repeating ourselves in many areas.
>>
>> On Monday, November 9, 2015 at 4:02:36 PM UTC-5, Phil Tomson wrote:
>>>
>>> Google has released it's deep learning library called TensorFlow as open 
>>> source code:
>>>
>>> https://github.com/tensorflow/tensorflow
>>>
>>> They include Python bindings, Any ideas about how easy/difficult it 
>>> would be to create Julia bindings?
>>>
>>> Phil
>>>
>>

[julia-users] Re: Google releases TensorFlow as open source

2015-11-11 Thread Phil Tomson


On Tuesday, November 10, 2015 at 8:28:32 PM UTC-8, Alireza Nejati wrote:
>
> Randy: To answer your question, I'd reckon that the two major gaps in 
> julia that TensorFlow could fill are:
>
> 1. Lack of automatic differentiation on arbitrary graph structures.
> 2. Lack of ability to map computations across cpus and clusters.
>

From reading through some of the TensorFlow docs, it seems to currently 
only run on one machine. This is where MXNet has an advantage (and 
MXNet.jl) as it can run across multiple machines/gpus (see: 
https://mxnet.readthedocs.org/en/latest/distributed_training.html for 
example)

>
> Funny enough, I was thinking about (1) for the past few weeks and I think 
> I have an idea about how to accomplish it using existing JuliaDiff 
> libraries. About (2), I have no idea, and that's probably going to be the 
> most important aspect of TensorFlow moving forward (and also probably the 
> hardest to implement). So for the time being, I think it's definitely 
> worthwhile to just have an interface to TensorFlow. There are a few ways 
> this could be done. Some ways that I can think of:
>
> 1. Just tell people to use PyCall directly. Not an elegant solution. 
>
> 2. A more julia-integrated interface *a la* SymPy.
> 3. Using TensorFlow as the 'backend' of a novel julia-based machine 
> learning library. In this scenario, everything would be in julia, and 
> TensorFlow would only be used to map computations to hardware.
>
> I think 3 is the most attractive option, but also probably the hardest to 
> do.
>

So if I understand correctly, we need bindings to TensorFlow - they use 
SWIG to generate Python bindings, but there is no Julia backend for SWIG. 
Then using the #3 approach we'd build something more general on top of 
those bindings. Julia's macros should allow for some features that would be 
difficult in C++ or Python.

  


[julia-users] Google releases TensorFlow as open source

2015-11-09 Thread Phil Tomson
Google has released its deep learning library called TensorFlow as open 
source code:

https://github.com/tensorflow/tensorflow

They include Python bindings. Any ideas about how easy/difficult it would 
be to create Julia bindings?

Phil


[julia-users] Re: Google releases TensorFlow as open source

2015-11-09 Thread Phil Tomson
Looks like they used SWIG to create the Python bindings.  I don't see Julia 
listed as an output target for SWIG.



On Monday, November 9, 2015 at 1:02:36 PM UTC-8, Phil Tomson wrote:
>
> Google has released its deep learning library called TensorFlow as open 
> source code:
>
> https://github.com/tensorflow/tensorflow
>
> They include Python bindings. Any ideas about how easy/difficult it would 
> be to create Julia bindings?
>
> Phil
>


[julia-users] Moving from 0.3 to 0.4

2015-10-26 Thread Phil Tomson
Are there any docs on moving from 0.3 to 0.4? Or do we just look in the 
changelog?

I know some things have been deprecated and other things added. I'm also 
looking for a kind of "best practices" guideline for 0.4 - I 
suspect there are practices from 0.3 that aren't recommended now in 0.4.


[julia-users] Re: Performance compared to Matlab

2015-10-19 Thread Phil Tomson
Several comments here about the need to de-vectorize code and use for-loops 
instead. However, vectorized code is a lot more compact and generally 
easier to read than lots of for-loops. I seem to recall that there was 
discussion in the past about speeding up vectorized code in Julia so that 
it could be on par with devectorized (for-loop) performance - is this still 
something being worked on for future versions?

Otherwise, if we keep telling people that they need to convert their code 
to use for-loops, I think Julia isn't going to seem very compelling for 
people looking for alternatives to Matlab, R, etc.
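
To make the trade-off concrete, here's a toy comparison (an assumed example, 
not code from this thread): the one-liner allocates temporary arrays on every 
call, while the loop allocates nothing.

vectorized(x) = sum(2.0 .* x .+ 1.0)        # compact, but builds temporaries

function devectorized(x)
    s = 0.0
    for i in 1:length(x)
        @inbounds s += 2.0 * x[i] + 1.0     # same math, no temporaries
    end
    return s
end

x = rand(1000000)
@time vectorized(x)
@time devectorized(x)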

On Sunday, October 18, 2015 at 6:41:54 AM UTC-7, Daniel Carrera wrote:
>
> Hello,
>
> Other people have already given advice on how to speed up the code. I just 
> want to comment that Julia really is faster than Matlab, but the way that 
> you make code faster in Julia is almost the opposite of how you do it in 
> Matlab. Specifically, in Matlab the advice is that if you want the code to 
> be fast, you need to eliminate every loop you can and write vectorized code 
> instead. This is because Matlab loops are slow. But Julia loops are fast, 
> and vectorized code creates a lot of overhead in the form of temporary 
> variables, garbage collection, and extra loops. So in Julia you optimize 
> code by putting everything into loops. The upshot is that if you take a 
> Matlab-optimized program and just do a direct line-by-line conversion to 
> Julia, the Julia version can easily be slower. But by the same token, if 
> you took a Julia-optimized program and converted it line-by-line to Matlab, 
> the Matlab version would be ridiculously slow.
>
> Oh, and in Julia you also care about types. If the compiler can infer 
> correctly the types of your variables it will write more optimal code.
>
> Cheers,
> Daniel.
>
>
> On Sunday, 18 October 2015 13:17:50 UTC+2, Vishnu Raj wrote:
>>
>> Although Julia homepage shows using Julia over Matlab gains more in 
>> performance, my experience is quite opposite.
>> I was trying to simulate channel evolution using Jakes Model for wireless 
>> communication system.
>>
>> Matlab code is:
>> function [ h, tf ] = Jakes_Flat( fd, Ts, Ns, t0, E0, phi_N )
>> %JAKES_FLAT 
>> %   Inputs:
>> %   fd, Ts, Ns  : Doppler frequency, sampling time, number of samples
>> %   t0, E0  : initial time, channel power
>> %   phi_N   : initial phase of the maximum Doppler frequeny
>> %   sinusoid
>> %
>> %   Outputs:
>> %   h, tf   : complex fading vector, current time
>>
>> if nargin < 6,  phi_N = 0;  end
>> if nargin < 5,  E0 = 1; end
>> if nargin < 4,  t0 = 0; end
>> 
>> N0 = 8; % As suggested by Jakes
>> N  = 4*N0 + 2;  % an accurate approximation
>> wd = 2*pi*fd;   % Maximum Doppler frequency[rad]
>> t  = t0 + [0:Ns-1]*Ts;  % Time vector
>> tf = t(end) + Ts;   % Final time
>> coswt = [ sqrt(2)*cos(wd*t); 2*cos(wd*cos(2*pi/N*[1:N0]')*t) ];
>> h  = E0/sqrt(2*N0+1)*exp(j*[phi_N pi/(N0+1)*[1:N0]])*coswt;
>> end
>>
>> My call results in :
>> >> tic; Jakes_Flat( 926, 1E-6, 5, 0, 1, 0 ); toc
>> Elapsed time is 0.008357 seconds.
>>
>>
>> My corresponding Julia code is
>> function Jakes_Flat( fd, Ts, Ns, t0 = 0, E0 = 1, phi_N = 0 )
>> # Inputs:
>> #
>> # Outputs:
>>   N0  = 8;  # As suggested by Jakes
>>   N   = 4*N0+2; # An accurate approximation
>>   wd  = 2*pi*fd;# Maximum Doppler frequency
>>   t   = t0 + [0:Ns-1]*Ts;
>>   tf  = t[end] + Ts;
>>   coswt = [ sqrt(2)*cos(wd*t'); 2*cos(wd*cos(2*pi/N*[1:N0])*t') ]
>>   h = E0/sqrt(2*N0+1)*exp(im*[ phi_N pi/(N0+1)*[1:N0]']) * coswt
>>   return h, tf;
>> end
>> # Saved this as "jakes_model.jl"
>>
>>
>> My first call results in 
>> julia> include( "jakes_model.jl" )
>> Jakes_Flat (generic function with 4 methods)
>>
>> julia> @time Jakes_Flat( 926, 1e-6, 5, 0, 1, 0 )
>> elapsed time: 0.65922234 seconds (61018916 bytes allocated)
>>
>> julia> @time Jakes_Flat( 926, 1e-6, 5, 0, 1, 0 )
>> elapsed time: 0.042468906 seconds (17204712 bytes allocated, 63.06% gc 
>> time)
>>
>> For first execution, Julia is taking huge amount of time. On second call, 
>> even though Julia take considerably less(0.042468906 sec) than first(
>> 0.65922234 sec), it's still much higher to Matlab(0.008357 sec).
>> I'm using Matlab R2014b and Julia v0.3.10 on Mac OSX10.10.
>>
>> - vish
>>
>

Re: [julia-users] The new Dict syntax in 0.4 is very verbose

2015-09-02 Thread Phil Tomson


On Wednesday, September 2, 2015 at 11:21:35 AM UTC-7, Erik Schnetter wrote:
>
> If I recall correctly, the two sets of ASCII bracketing operators ([] and 
> {}) were deemed to be more usefully employed for arrays; 
>

How have the curly braces "{" and "}" been reused for arrays in 0.4? 
Curly braces have been used to indicate Dicts up until 0.4 (and the syntax 
was essentially borrowed from Python/Ruby/others). I have to agree with the 
OP and others here that the verbose Dict syntax in 0.4 is pretty ugly. I 
didn't realize that was the way it was headed, as I haven't done much in 0.4 
yet.
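
For what it's worth, here's a throwaway sketch of the kind of macro people 
keep suggesting (hypothetical, not from any package): it just rewrites each 
pair list into a Dict{Symbol,Any}(...) call so the terse nesting survives.

macro d(pairs...)
    # build the call Dict{Symbol,Any}(pair1, pair2, ...), escaping each pair
    # so the values are evaluated in the caller's scope
    Expr(:call, :(Dict{Symbol,Any}), map(esc, pairs)...)
end

data = @d(:displayrows => 20,
          :cols => [ @d(:col => "l1"),
                     @d(:col => "num", :display => true) ])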

 

>  
>
> On Wed, Sep 2, 2015 at 1:29 PM, Michael Francis wrote:
>
>>
>> The arguments given in the thread that Dict 'isn't special' should 
>> also apply to Vector and Array; I presume nobody wants to do away with 
>> literal syntax for them as well? 
>>
>> There are many times when having a simple terse native (code editor 
>> aware) literal syntax for structured data is very useful (in the same way 
>> that it is useful for vectors and arrays) and I second what David is 
>> saying, it feel like I'm back writing C++/C#/Java et al. 
>>
>> Using macros works, but everybody is going to have their own so there 
>> will be no consistency across the code base. Dict(...) works without the 
>> types so I guess that is the best of a bad bunch. 
>>
>> On Wednesday, September 2, 2015 at 1:07:59 PM UTC-4, Isaiah wrote:
>>>
>>> This issue was raised here:
>>> https://github.com/JuliaLang/julia/issues/6739#issuecomment-120149597
>>>
>>> I believe the consensus was that nice JSON input syntax could be handled 
>>> with a macro.
>>>
>>> Also, once the "[ a=>b, ...]" syntax deprecation goes away, I believe 
>>> this:
>>>
>>> [ :col => "l1", :col => "l2", ... ]
>>>
>>> will simply give you an array of Pair objects, which could be translated 
>>> to unitary Dicts by JSON.
>>>
>>> (FWIW, it is not necessary to specify the argument types to Dict)
>>>
>>> On Wed, Sep 2, 2015 at 12:45 PM, Michael Francis wrote:
>>>
 With the change to 0.4 happening soon I'm finding the the new Dict 
 syntax in 0.4 (removal of {}, []) is extremely verbose.

 I find myself interfacing with JSON APIs frequently, for example a 
 configuration dictionary :

 data = {
 :displayrows => 20,
 :cols => [
 { :col => "l1" },
 { :col => "l2" },
 { :col => "l3" },
 { :col => "num", :display => true },
 { :col => "sum", :display => true, :conf => { :style 
 => 1, :func => { :method => "sum", :col => "num"  } } }
 ]  
... # Lots more   
 }

 becomes -

 data = Dict{Symbol,Any}(
 :displayrows => 20,
 :cols => [
 Dict{Symbol,Any}( :col => "l1" ),
 Dict{Symbol,Any}( :col => "l2" ),
 Dict{Symbol,Any}( :col => "l3"   ),
 Dict{Symbol,Any}( :col => "num", :display => true 
 ),
 Dict{Symbol,Any}( :col => "sum", :display => true, 
 :conf => Dict{Symbol,Any}( :style => 1, 
 :func 
 => Dict{Symbol,Any}( :method => "sum", :col => "num" ) ) )
 ]  
... # Lots more
 )

 This feels like asking a person using arrays to write the following

 Array{Int64,2}( Vector{Int64}( 1,2,3), Vector{Int64}( 4,5,6) )

 vs

 [ [ 1, 2, 3] [ 4,5,6 ] ]

 Can we please reconsider ?


>>>
>
>
> -- 
> Erik Schnetter
> http://www.perimeterinstitute.ca/personal/eschnetter/
>


[julia-users] subtracting two uint8's results in a Uint64?

2015-06-17 Thread Phil Tomson
Maybe this is expected, but it was a bit of a surprise to me:

julia> function foo()
         red::Uint8 = 0x33
         blue::Uint8 = 0x36
         (red-blue)
       end

julia> foo()
0xfffffffffffffffd

julia> typeof(foo())
Uint64

The fact that it overflowed wasn't surprising, but the fact that it got 
converted to a Uint64 is a bit surprising (it ended up being a very large 
number that got used in other calculations later, which led to odd results). 
So it looks like all of the math operators will always promote to the 
largest size (but keep the same signedness).

I'm wondering if it might make more sense if:
Uint8 - Uint8 -> Uint8
Or, more generally: UintN op UintN -> UintN
and: IntN op IntN -> IntN
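
For reference, a small sketch of the behavior and one way to stay in 8 bits 
(Julia 0.3 names):

red  = 0x33                        # Uint8 literal
blue = 0x36                        # Uint8 literal
println(typeof(red - blue))        # Uint64 on 0.3: the arithmetic promotes to word size
println(convert(Uint8, (red - blue) & 0xff))   # 0xfd, i.e. -3 mod 256, staying in 8 bits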






Re: [julia-users] Direct access to fields in a type, unjulian?

2015-04-08 Thread Phil Tomson


On Wednesday, April 8, 2015 at 8:00:42 AM UTC-7, Tim Holy wrote:
>
> It's a matter of taste, really, but in general I agree that the Julian way 
> is 
> to reduce the number of accesses to fields directly. That said, I do 
> sometimes 
> access the fields. 
>
> However, your iterator example is a good opportunity to illustrate a more 
> julian approach: 
>
> mesh = mesh(...) 
> for vertex in vertices(mesh) 
> blah blah 
> end 
>
> The idea is that vertices(mesh) might return an iterator object, and then 
> you 
> write start, next, and done functions to implement iteration. Presumably 
> you 
> should build the iterators for Mesh on top of iterators for FESection, 
> etc, so 
> the whole thing is composable. You'd then have short implementations of 
> the 
> vertices function taking a Mesh, FESection, or Element. 
>
>
Tim,

Can you comment on the performance implications of directly accessing 
fields vs your approach?  I'm guessing that directly accessing the fields 
would be faster?

Phil 
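
For reference, a minimal sketch of the vertices(mesh) idea Tim describes. On 
0.3 the simplest (though probably not the fastest) way to get an iterable is a 
Task; the names below are illustrative and assume the Mesh/FESection/Element 
types from the quoted post:

function vertices(mesh::Mesh)
    @task for section in mesh.sections
        for element in section.elements
            for v in element.vertices
                produce(v)     # yield each vertex index in turn
            end
        end
    end
end

# for v in vertices(mesh)
#     # blah blah
# end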

> On Wednesday, April 08, 2015 06:35:46 AM Kristoffer Carlsson wrote: 
> > I come from a Python background where direct access to fields in for 
> > example classes with the dot notation is very common. 
> > 
> > However, from what I have seen in different conversations, accessing 
> fields 
> > directly is not really Julian. Sort of a "fields are an implementation 
> > detail" mindset, and "what is important are the functions". 
> > 
> > Here is an example of a type hierarchy that is a little bit similar to 
> > types I am working with now: 
> > 
> > type Element 
> > vertices::Vector{Int} 
> > end 
> > 
> > type Node 
> > coordinates::Vector{Float64} 
> > id::Int 
> > end 
> > 
> > type FESection 
> > elements::Vector{Element} 
> > nodes::Vector{Node} 
> > end 
> > 
> > type Mesh 
> >sections::Vector{FESection} 
> > end 
> > 
> > Now, let's say that I want to write a function to loop over all 
> vertices. 
> > One way (which I would do in Python is): 
> > 
> > mesh = Mesh(...) 
> > for section in mesh.sections 
> > for element in section.elements 
> > for vertices in element.vertices 
> >   blah bla 
> > end 
> > end 
> > end 
> > 
> > 
> > 
> > However, this accesses the fields directly. Would it be more Julian to 
> > write getters for the fields? Since Julia does not have @property like 
> > Python I realize that by accessing the fields you commit to exactly the 
> > name of the field and it's type while with a getter it would be more 
> > flexible. 
> > 
> > Best regards, 
> > Kristoffer Carlsson 
>
>

[julia-users] Re: Calling a function via a reference to a function allocates a lot of memory as compared to directly calling

2015-03-26 Thread Phil Tomson


On Thursday, March 26, 2015 at 5:51:59 PM UTC-7, colint...@gmail.com wrote:
>
> Lots of useful answers here. This is an issue for me a lot too. Here are 
> two StackOverflow links that provide some more interesting reading:
>
>
> http://stackoverflow.com/questions/26173635/performance-penalty-using-anonymous-function-in-julia
>
> http://stackoverflow.com/questions/28356437/julia-compiler-does-not-appear-to-optimize-when-a-function-is-passed-a-function
>
> Stefan Karpinski answers in one of them that the problem will be fixed in 
> an upcoming overhaul of the type system. My current understanding of the 
> roadmap is that it is definitely planned to be fixed by v1.0, but that 
> there is quite a lot of support for a fix in v0.4 (not sure yet whether it 
> will happen).
>
> Cheers,
>
> Colin
>
>
Colin, 

Thanks for the links. It is a bit encouraging that you can specify the 
return type, as shown in the second link there, and achieve a little bit of a 
speedup (still not great performance, but about a 20% speedup in the small 
testcase I tried).

 

>
>
> On Thursday, 26 March 2015 05:41:10 UTC+11, Phil Tomson wrote:
>>
>>  Maybe this is just obvious, but it's not making much sense to me.
>>
>> If I have a reference to a function (pardon if that's not the correct 
>> Julia-ish terminology - basically just a variable that holds a Function 
>> type) and call it, it runs much more slowly (persumably because it's 
>> allocating a lot more memory) than it would if I make the same call with  
>> the function directly.
>>
>> Maybe that's not so clear, so let me show an example using the abs 
>> function:
>>
>> function test_time()
>>  sum = 1.0
>>  for i in 1:100
>>sum += abs(sum)
>>  end
>>  sum
>>  end
>>
>> Run it a few times with @time:
>>
>>julia> @time test_time()
>> elapsed time: 0.007576883 seconds (96 bytes allocated)
>> Inf
>>
>>julia> @time test_time()
>> elapsed time: 0.002058207 seconds (96 bytes allocated)
>> Inf
>>
>> julia> @time test_time()
>> elapsed time: 0.005015882 seconds (96 bytes allocated)
>> Inf
>>
>> Now let's try a modified version that takes a Function on the input:
>>
>> function test_time(func::Function)
>>  sum = 1.0
>>  for i in 1:100
>>sum += func(sum)
>>  end
>>  sum
>>  end
>>
>> So essentially the same function, but this time the function is passed 
>> in. Running this version a few times:
>>
>> julia> @time test_time(abs)
>> elapsed time: 0.066612994 seconds (3280 bytes allocated, 31.05% 
>> gc time)
>> Inf
>>  
>> julia> @time test_time(abs)
>> elapsed time: 0.064705561 seconds (3280 bytes allocated, 31.16% 
>> gc time)
>> Inf
>>
>> So roughly 10X slower, probably because of the much larger amount of 
>> memory allocated (3280 bytes vs. 96 bytes)
>>
>> Why does the second version allocate so much more memory? (I'm running 
>> Julia 0.3.6 for this testcase)
>>
>> Phil
>>
>>
>>

Re: [julia-users] Calling a function via a reference to a function allocates a lot of memory as compared to directly calling

2015-03-25 Thread Phil Tomson


On Wednesday, March 25, 2015 at 12:34:47 PM UTC-7, Mauro wrote:
>
> This is a known limitation of Julia.  The trouble is that Julia cannot 
> do its type interference with the passed in function.  I don't have time 
> to search for the relevant issues but you should be able to find them. 
> Similarly, lambdas also suffer from this.  Hopefully this will be 
> resolved soon! 
>

Mauro: When you say "Hopefully this will be resolved soon! " does that mean 
this is an issue with a planned future fix?

For those of us used to programming in a very functional style, this 
limitation leads to less performant code in Julia.


> On Wed, 2015-03-25 at 19:41, Phil Tomson wrote: 
> >  Maybe this is just obvious, but it's not making much sense to me. 
> > 
> > If I have a reference to a function (pardon if that's not the correct 
> > Julia-ish terminology - basically just a variable that holds a Function 
> > type) and call it, it runs much more slowly (persumably because it's 
> > allocating a lot more memory) than it would if I make the same call with 
>   
> > the function directly. 
> > 
> > Maybe that's not so clear, so let me show an example using the abs 
> function: 
> > 
> > function test_time() 
> >  sum = 1.0 
> >  for i in 1:100 
> >sum += abs(sum) 
> >  end 
> >  sum 
> >  end 
> > 
> > Run it a few times with @time: 
> > 
> >julia> @time test_time() 
> > elapsed time: 0.007576883 seconds (96 bytes allocated) 
> > Inf 
> > 
> >julia> @time test_time() 
> > elapsed time: 0.002058207 seconds (96 bytes allocated) 
> > Inf 
> > 
> > julia> @time test_time() 
> > elapsed time: 0.005015882 seconds (96 bytes allocated) 
> > Inf 
> > 
> > Now let's try a modified version that takes a Function on the input: 
> > 
> > function test_time(func::Function) 
> >  sum = 1.0 
> >  for i in 1:100 
> >sum += func(sum) 
> >  end 
> >  sum 
> >  end 
> > 
> > So essentially the same function, but this time the function is passed 
> in. 
> > Running this version a few times: 
> > 
> > julia> @time test_time(abs) 
> > elapsed time: 0.066612994 seconds (3280 bytes allocated, 31.05% 
> > gc time) 
> > Inf 
> >   
> > julia> @time test_time(abs) 
> > elapsed time: 0.064705561 seconds (3280 bytes allocated, 31.16% 
> gc 
> > time) 
> > Inf 
> > 
> > So roughly 10X slower, probably because of the much larger amount of 
> memory 
> > allocated (3280 bytes vs. 96 bytes) 
> > 
> > Why does the second version allocate so much more memory? (I'm running 
> > Julia 0.3.6 for this testcase) 
> > 
> > Phil 
>
>

Re: [julia-users] Calling a function via a reference to a function allocates a lot of memory as compared to directly calling

2015-03-25 Thread Phil Tomson


On Wednesday, March 25, 2015 at 5:07:27 PM UTC-7, Tony Kelman wrote:
>
> The function-to-be-called is not known at compile time in Phil's 
> application, apparently.
>

Right, they come out of a JSON file. I parse the JSON and construct a list 
of processing nodes from it, and those could have one of two functions.
 

>
> Question for Phil: are there a limited set of functions that you know 
> you'll be calling here? 
>

True. Currently two. Could be more later.
 

> I was doing something similar recently, where it actually made the most 
> sense to create a fixed Dict{Symbol, UInt} of function codes, use that dict 
> as a lookup table, passing the symbol into the function and generating the 
> runtime conditionals for which function to call via a macro. I can point 
> you to some rough code if it would help and if this is at all similar to 
> what you're trying to do.
>

I would be interested in seeing your macro. 
I actually can already get the function name as a symbol (instead of having 
it be a function), and I've been trying to make a macro that applies that 
function (as defined by the symbol) to the arguments. But so far it's not 
working (I just posted a query about it).
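
In the meantime, a rough sketch of the lookup idea Tony describes 
(hypothetical code, not his actual macro): a fixed set of candidate functions 
selected by a Symbol at run time, with explicit branches so each call site 
stays type-stable.

function apply_op(op::Symbol, x::Float64)
    if op == :abs
        return abs(x)
    elseif op == :sin
        return sin(x)
    else
        error("unknown op: $op")
    end
end

function test_time(op::Symbol)
    s = 1.0
    for i in 1:1000000
        s += apply_op(op, s)
    end
    s
end

@time test_time(:abs)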



>
> On Wednesday, March 25, 2015 at 2:59:42 PM UTC-7, ele...@gmail.com wrote:
>>
>>
>>
>> On Thursday, March 26, 2015 at 8:06:41 AM UTC+11, Phil Tomson wrote:
>>>
>>>
>>>
>>> On Wednesday, March 25, 2015 at 1:52:04 PM UTC-7, Tim Holy wrote:
>>>>
>>>> No, it's 
>>>>
>>>>f = @anon x->abs(x) 
>>>>
>>>> and then pass f to test_time. Declare the function like this: 
>>>>
>>>> function test_time{F}(func::F) 
>>>>  
>>>> end 
>>>>
>>>
>>> Ok, got that working, but when I try using it inside the function (which 
>>> would be closer to what I really need to do):
>>>
>>>  function test_time2(func::Function)
>>>  fn = @anon x->func(x)
>>>
>>
>> No, as Tim said, you do @anon outside test_time with the function you 
>> want to use and pass the result as the parameter.  Note also his point of 
>> how to declare test_time as a generic.
>>
>> Cheers
>> Lex
>>
>>  
>>
>>>  sum = 1.0
>>>  for i in 1:100
>>> sum += fn(sum)
>>>  end
>>>  sum
>>>  end
>>>
>>> julia> @time test_time2(abs)
>>> ERROR: `func` has no method matching func(::Float64)
>>>  in ##26503 at 
>>> /home/phil/.julia/v0.3/FastAnonymous/src/FastAnonymous.jl:2
>>>  in test_time2 at none:5
>>>
>>>
>>>
>>>
>>>
>>>> --Tim 
>>>>
>>>> On Wednesday, March 25, 2015 01:30:28 PM Phil Tomson wrote: 
>>>> > On Wednesday, March 25, 2015 at 1:08:24 PM UTC-7, Tim Holy wrote: 
>>>> > > Don't use a macro, just use the @anon macro to create an object 
>>>> that will 
>>>> > > be 
>>>> > > fast to use as a "function." 
>>>> > 
>>>> > I guess I'm not understanding how this is used, I would have thought 
>>>> I'd 
>>>> > need to do something like: 
>>>> > 
>>>> > julia> 
>>>> > function test_time(func::Function) 
>>>> >  f = @anon func 
>>>> >  sum = 1.0 
>>>> >  for i in 1:100 
>>>> >sum += f(sum) 
>>>> >  end 
>>>> >  sum 
>>>> >  end 
>>>> > ERROR: `anonsplice` has no method matching anonsplice(::Symbol) 
>>>> > 
>>>> > 
>>>> > ... or even trying it outside of the function: 
>>>> > julia> f = @anon abs 
>>>> > ERROR: `anonsplice` has no method matching anonsplice(::Symbol) 
>>>> > 
>>>> > > --Tim 
>>>> > > 
>>>> > > On Wednesday, March 25, 2015 01:00:27 PM Phil Tomson wrote: 
>>>> > > > I have a couple of instances where a function is determined by 
>>>> some 
>>>> > > > parameters (in a JSON file in this case) and I have to call it in 
>>>> this 
>>>> > > > manner.  I'm thinking it should be possible to speed these up via 
>>>> a 
>>>> > > 
>>>> > 

[julia-users] passing in a symbol to a macro and applying it as a function to expression

2015-03-25 Thread Phil Tomson
I want to be able to pass a symbol which represents a function name into 
a macro and then have that function applied to an expression, something 
like:

  @apply_func :abs (x - y)

(where (x-y) could stand in for some expression or a single number)

I did a bit of searching here and came up with the following (posted by Tim 
Holy last year, from this post: 
https://groups.google.com/forum/#!searchin/julia-users/macro$20symbol/julia-users/lrtnyACdrxQ/5wovJmrUs0MJ
 
):

  macro apply_func(fn::Symbol, ex::Expr) 
 qex = Expr(:quote, ex) 
 quote 
   $(esc(fn))($qex) 
 end 
  end 

I've got a Symbol which represents a function name and I'd like to apply it to 
the expression, so I'd like to be able to do:
   x = 10
   y = 11 
  @apply_func :abs (x - y)
...And get: 1

But first of all, a symbol doesn't work there:
julia> macroexpand(:(@apply_func :abs 1))
:($(Expr(:error, TypeError(:anonymous,"typeassert",Symbol,:(:abs)

I think this is because the arguments to the macro are already being passed 
in as a symbol... so it becomes ::abs

Ok, so what if I go with:

julia> macroexpand(:(@apply_func abs 1+2))
  quote  # none, line 4:
  abs($(Expr(:copyast, :(:(1 + 2) 
  end

...that seems problematic because we're passing an Expr to abs then:

julia> @apply_func abs 1+2
ERROR: `abs` has no method matching abs(::Expr)

Ok, so now I'm realizing that macro isn't going to do what I want it to, so 
let's change it:

  macro apply_func(fn::Symbol, ex::Expr)
 quote
   $(esc(fn))($ex)
 end
  end

That works better:
julia> @apply_func abs 1+2
3

But It won't work if I pass in a symbol:
julia> macroexpand(:(@apply_func :abs 1+2))
:($(Expr(:error, TypeError(:anonymous,"typeassert",Symbol,:(:abs)

How would I go about getting that case to work?

Phil





Re: [julia-users] Calling a function via a reference to a function allocates a lot of memory as compared to directly calling

2015-03-25 Thread Phil Tomson


On Wednesday, March 25, 2015 at 1:52:04 PM UTC-7, Tim Holy wrote:
>
> No, it's 
>
>f = @anon x->abs(x) 
>
> and then pass f to test_time. Declare the function like this: 
>
> function test_time{F}(func::F) 
>  
> end 
>

Ok, got that working, but when I try using it inside the function (which 
would be closer to what I really need to do):

 function test_time2(func::Function)
 fn = @anon x->func(x)
 sum = 1.0
 for i in 1:100
sum += fn(sum)
 end
 sum
 end

julia> @time test_time2(abs)
ERROR: `func` has no method matching func(::Float64)
 in ##26503 at /home/phil/.julia/v0.3/FastAnonymous/src/FastAnonymous.jl:2
 in test_time2 at none:5
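
For the record, the usage Tim describes looks like this (a sketch, assuming 
the FastAnonymous package): build the @anon object at the call site and pass 
it in, letting the {F} type parameter specialize the function body for it.

using FastAnonymous

function test_time3{F}(func::F)
    s = 1.0
    for i in 1:1000000
        s += func(s)
    end
    s
end

f = @anon x->abs(x)    # create the fast "anonymous function" object outside
@time test_time3(f)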





> --Tim 
>
> On Wednesday, March 25, 2015 01:30:28 PM Phil Tomson wrote: 
> > On Wednesday, March 25, 2015 at 1:08:24 PM UTC-7, Tim Holy wrote: 
> > > Don't use a macro, just use the @anon macro to create an object that 
> will 
> > > be 
> > > fast to use as a "function." 
> > 
> > I guess I'm not understanding how this is used, I would have thought I'd 
> > need to do something like: 
> > 
> > julia> 
> > function test_time(func::Function) 
> >  f = @anon func 
> >  sum = 1.0 
> >  for i in 1:100 
> >sum += f(sum) 
> >  end 
> >  sum 
> >  end 
> > ERROR: `anonsplice` has no method matching anonsplice(::Symbol) 
> > 
> > 
> > ... or even trying it outside of the function: 
> > julia> f = @anon abs 
> > ERROR: `anonsplice` has no method matching anonsplice(::Symbol) 
> > 
> > > --Tim 
> > > 
> > > On Wednesday, March 25, 2015 01:00:27 PM Phil Tomson wrote: 
> > > > I have a couple of instances where a function is determined by some 
> > > > parameters (in a JSON file in this case) and I have to call it in 
> this 
> > > > manner.  I'm thinking it should be possible to speed these up via a 
> > > 
> > > macro, 
> > > 
> > > > but I'm a macro newbie.  I'll probably post a different question 
> related 
> > > 
> > > to 
> > > 
> > > > that, but would a macro be feasible in an instance like this? 
> > > > 
> > > > On Wednesday, March 25, 2015 at 12:35:20 PM UTC-7, Tim Holy wrote: 
> > > > > There have been many prior posts about this topic. Maybe we should 
> add 
> > > 
> > > a 
> > > 
> > > > > FAQ 
> > > > > page we can direct people to. In the mean time, your best bet is 
> to 
> > > 
> > > search 
> > > 
> > > > > (or 
> > > > > use FastAnonymous or NumericFuns). 
> > > > > 
> > > > > --Tim 
> > > > > 
> > > > > On Wednesday, March 25, 2015 11:41:10 AM Phil Tomson wrote: 
> > > > > >  Maybe this is just obvious, but it's not making much sense to 
> me. 
> > > > > > 
> > > > > > If I have a reference to a function (pardon if that's not the 
> > > 
> > > correct 
> > > 
> > > > > > Julia-ish terminology - basically just a variable that holds a 
> > > 
> > > Function 
> > > 
> > > > > > type) and call it, it runs much more slowly (persumably because 
> it's 
> > > > > > allocating a lot more memory) than it would if I make the same 
> call 
> > > 
> > > with 
> > > 
> > > > > > the function directly. 
> > > > > > 
> > > > > > Maybe that's not so clear, so let me show an example using the 
> abs 
> > > > > 
> > > > > function: 
> > > > > > function test_time() 
> > > > > > 
> > > > > >  sum = 1.0 
> > > > > >  for i in 1:100 
> > > > > >   
> > > > > >sum += abs(sum) 
> > > > > >   
> > > > > >  end 
> > > > > >  sum 
> > > > > >   
> > > > > >  end 
> > > > > > 
> > > > > > Run it a few times with @time: 
> > > > > >julia> @time test_time() 
> > > > > > 
> > > > > > elapsed time: 0.007576883 seconds (96 bytes allocated) 

Re: [julia-users] Calling a function via a reference to a function allocates a lot of memory as compared to directly calling

2015-03-25 Thread Phil Tomson


On Wednesday, March 25, 2015 at 1:08:24 PM UTC-7, Tim Holy wrote:
>
> Don't use a macro, just use the @anon macro to create an object that will 
> be 
> fast to use as a "function." 
>

I guess I'm not understanding how this is used, I would have thought I'd 
need to do something like:

julia> 
function test_time(func::Function)
 f = @anon func
 sum = 1.0
 for i in 1:100
   sum += f(sum)
 end
 sum
 end
ERROR: `anonsplice` has no method matching anonsplice(::Symbol)


... or even trying it outside of the function:
julia> f = @anon abs
ERROR: `anonsplice` has no method matching anonsplice(::Symbol)

 

>
> --Tim 
>
> On Wednesday, March 25, 2015 01:00:27 PM Phil Tomson wrote: 
> > I have a couple of instances where a function is determined by some 
> > parameters (in a JSON file in this case) and I have to call it in this 
> > manner.  I'm thinking it should be possible to speed these up via a 
> macro, 
> > but I'm a macro newbie.  I'll probably post a different question related 
> to 
> > that, but would a macro be feasible in an instance like this? 
> > 
> > On Wednesday, March 25, 2015 at 12:35:20 PM UTC-7, Tim Holy wrote: 
> > > There have been many prior posts about this topic. Maybe we should add 
> a 
> > > FAQ 
> > > page we can direct people to. In the mean time, your best bet is to 
> search 
> > > (or 
> > > use FastAnonymous or NumericFuns). 
> > > 
> > > --Tim 
> > > 
> > > On Wednesday, March 25, 2015 11:41:10 AM Phil Tomson wrote: 
> > > >  Maybe this is just obvious, but it's not making much sense to me. 
> > > > 
> > > > If I have a reference to a function (pardon if that's not the 
> correct 
> > > > Julia-ish terminology - basically just a variable that holds a 
> Function 
> > > > type) and call it, it runs much more slowly (persumably because it's 
> > > > allocating a lot more memory) than it would if I make the same call 
> with 
> > > > the function directly. 
> > > > 
> > > > Maybe that's not so clear, so let me show an example using the abs 
> > > 
> > > function: 
> > > > function test_time() 
> > > > 
> > > >  sum = 1.0 
> > > >  for i in 1:100 
> > > >   
> > > >sum += abs(sum) 
> > > >   
> > > >  end 
> > > >  sum 
> > > >   
> > > >  end 
> > > > 
> > > > Run it a few times with @time: 
> > > >julia> @time test_time() 
> > > > 
> > > > elapsed time: 0.007576883 seconds (96 bytes allocated) 
> > > > Inf 
> > > > 
> > > >julia> @time test_time() 
> > > > 
> > > > elapsed time: 0.002058207 seconds (96 bytes allocated) 
> > > > Inf 
> > > > 
> > > > julia> @time test_time() 
> > > > elapsed time: 0.005015882 seconds (96 bytes allocated) 
> > > > Inf 
> > > > 
> > > > Now let's try a modified version that takes a Function on the input: 
> > > > function test_time(func::Function) 
> > > > 
> > > >  sum = 1.0 
> > > >  for i in 1:100 
> > > >   
> > > >sum += func(sum) 
> > > >   
> > > >  end 
> > > >  sum 
> > > >   
> > > >  end 
> > > > 
> > > > So essentially the same function, but this time the function is 
> passed 
> > > 
> > > in. 
> > > 
> > > > Running this version a few times: 
> > > > julia> @time test_time(abs) 
> > > > elapsed time: 0.066612994 seconds (3280 bytes allocated, 
> 31.05% 
> > > > 
> > > > gc time) 
> > > > 
> > > > Inf 
> > > > 
> > > > julia> @time test_time(abs) 
> > > > elapsed time: 0.064705561 seconds (3280 bytes allocated, 
> 31.16% 
> > > 
> > > gc 
> > > 
> > > > time) 
> > > > 
> > > > Inf 
> > > > 
> > > > So roughly 10X slower, probably because of the much larger amount of 
> > > 
> > > memory 
> > > 
> > > > allocated (3280 bytes vs. 96 bytes) 
> > > > 
> > > > Why does the second version allocate so much more memory? (I'm 
> running 
> > > > Julia 0.3.6 for this testcase) 
> > > > 
> > > > Phil 
>
>

Re: [julia-users] Calling a function via a reference to a function allocates a lot of memory as compared to directly calling

2015-03-25 Thread Phil Tomson
I have a couple of instances where a function is determined by some 
parameters (in a JSON file in this case) and I have to call it in this 
manner.  I'm thinking it should be possible to speed these up via a macro, 
but I'm a macro newbie.  I'll probably post a different question related to 
that, but would a macro be feasible in an instance like this?

On Wednesday, March 25, 2015 at 12:35:20 PM UTC-7, Tim Holy wrote:
>
> There have been many prior posts about this topic. Maybe we should add a 
> FAQ 
> page we can direct people to. In the mean time, your best bet is to search 
> (or 
> use FastAnonymous or NumericFuns). 
>
> --Tim 
>
> On Wednesday, March 25, 2015 11:41:10 AM Phil Tomson wrote: 
> >  Maybe this is just obvious, but it's not making much sense to me. 
> > 
> > If I have a reference to a function (pardon if that's not the correct 
> > Julia-ish terminology - basically just a variable that holds a Function 
> > type) and call it, it runs much more slowly (persumably because it's 
> > allocating a lot more memory) than it would if I make the same call with 
> > the function directly. 
> > 
> > Maybe that's not so clear, so let me show an example using the abs 
> function: 
> > 
> > function test_time() 
> >  sum = 1.0 
> >  for i in 1:100 
> >sum += abs(sum) 
> >  end 
> >  sum 
> >  end 
> > 
> > Run it a few times with @time: 
> > 
> >julia> @time test_time() 
> > elapsed time: 0.007576883 seconds (96 bytes allocated) 
> > Inf 
> > 
> >julia> @time test_time() 
> > elapsed time: 0.002058207 seconds (96 bytes allocated) 
> > Inf 
> > 
> > julia> @time test_time() 
> > elapsed time: 0.005015882 seconds (96 bytes allocated) 
> > Inf 
> > 
> > Now let's try a modified version that takes a Function on the input: 
> > 
> > function test_time(func::Function) 
> >  sum = 1.0 
> >  for i in 1:100 
> >sum += func(sum) 
> >  end 
> >  sum 
> >  end 
> > 
> > So essentially the same function, but this time the function is passed 
> in. 
> > Running this version a few times: 
> > 
> > julia> @time test_time(abs) 
> > elapsed time: 0.066612994 seconds (3280 bytes allocated, 31.05% 
> > gc time) 
> > Inf 
> > 
> > julia> @time test_time(abs) 
> > elapsed time: 0.064705561 seconds (3280 bytes allocated, 31.16% 
> gc 
> > time) 
> > Inf 
> > 
> > So roughly 10X slower, probably because of the much larger amount of 
> memory 
> > allocated (3280 bytes vs. 96 bytes) 
> > 
> > Why does the second version allocate so much more memory? (I'm running 
> > Julia 0.3.6 for this testcase) 
> > 
> > Phil 
>
>

[julia-users] Calling a function via a reference to a function allocates a lot of memory as compared to directly calling

2015-03-25 Thread Phil Tomson
 Maybe this is just obvious, but it's not making much sense to me.

If I have a reference to a function (pardon if that's not the correct 
Julia-ish terminology - basically just a variable that holds a Function 
type) and call it, it runs much more slowly (presumably because it's 
allocating a lot more memory) than it would if I make the same call with 
the function directly.

Maybe that's not so clear, so let me show an example using the abs function:

function test_time()
 sum = 1.0
 for i in 1:100
   sum += abs(sum)
 end
 sum
 end

Run it a few times with @time:

   julia> @time test_time()
elapsed time: 0.007576883 seconds (96 bytes allocated)
Inf

   julia> @time test_time()
elapsed time: 0.002058207 seconds (96 bytes allocated)
Inf

julia> @time test_time()
elapsed time: 0.005015882 seconds (96 bytes allocated)
Inf

Now let's try a modified version that takes a Function on the input:

function test_time(func::Function)
 sum = 1.0
 for i in 1:100
   sum += func(sum)
 end
 sum
 end

So essentially the same function, but this time the function is passed in. 
Running this version a few times:

julia> @time test_time(abs)
elapsed time: 0.066612994 seconds (3280 bytes allocated, 31.05% 
gc time)
Inf
 
julia> @time test_time(abs)
elapsed time: 0.064705561 seconds (3280 bytes allocated, 31.16% gc 
time)
Inf

So roughly 10X slower, probably because of the much larger amount of memory 
allocated (3280 bytes vs. 96 bytes)

Why does the second version allocate so much more memory? (I'm running 
Julia 0.3.6 for this testcase)

Phil




[julia-users] Re: Best practices for migrating 0.3 code to 0.4? (specifically constructors)

2015-03-13 Thread Phil Tomson


On Thursday, March 12, 2015 at 9:00:45 PM UTC-7, Avik Sengupta wrote:
>
> I think this is simply due to your passing a UTF8String, while your 
> function is defined only for ASCIIString. Since there is no function defined 
> for UTF8String, julia falls back to the default constructor that calls 
> convert. 
>
> julia> type A
>  a::ASCIIString
>  b::Int
>end
>
> julia> function A(fn::ASCIIString)
>A(fn, length(fn)
>end
> ERROR: syntax: missing comma or ) in argument list
>
> julia> function A(fn::ASCIIString)
>A(fn, length(fn))
>end
> A
>
> julia> A("X")
> A("X",1)
>
> julia> A("∞")
> ERROR: MethodError: `convert` has no method matching convert(::Type{A}, 
> ::UTF8String)
> This may have arisen from a call to the constructor A(...),
> since type constructors fall back to convert methods.
> Closest candidates are:
>   convert{T}(::Type{T}, ::T)
>
>  in call at no file
>
>
> #Now try this: 
>
> julia> type A
>  a::AbstractString
>  b::Int
>end
>
> julia> function A(fn::AbstractString)
>A(fn, length(fn))
>end
> A
>
> julia> A("X")
> A("X",1)
>
> julia> A("∞")
> A("∞",1)
>
> Note that using AbstractString is only one way to solve this, which may or 
> may not be appropriate for your use case. The code above is simply to 
> demonstrate the issue at hand. 
>
>
Why would this have changed in 0.4, though? 

Running in Julia 0.4:
julia> typeof("string")
ASCIIString

so a quoted string still has the same type as it did before.


 

>
>
> On Thursday, 12 March 2015 23:43:58 UTC, Phil Tomson wrote:
>>
>> I thought I'd give 0.4 a spin to try out the new garbage collector.
>>
>> On my current codebase developed with 0.3 I ran into several warnings 
>> (*float32() 
>> should now be Float32()* - that sort of thing)
>>
>> And then this error:
>>
>>
>>
>>
>>
>>
>> ERROR: LoadError: LoadError: LoadError: LoadError: LoadError: MethodError: 
>> `convert` has no method matching convert(::Type{Img.ImgHSV}, ::UTF8String)
>> This may have arisen from a call to the constructor Img.ImgHSV(...),
>> since type constructors fall back to convert methods.
>> Closest candidates are:
>>   convert{T}(::Type{T}, ::T)
>> After poking around New Language Features and the list here a bit it 
>> seems that there are changes to how overloaded constructors work.
>>
>> In my case I've got:
>>
>> type ImgHSV
>>   name::ASCIIString
>>   data::Array{HSV{Float32},2}  
>>   #data::Array{IntHSV,2}  
>>   height::Int64
>>   wid::Int64
>>   h_mean::Float32
>>   s_mean::Float32
>>   v_mean::Float32
>>   h_std::Float32
>>   s_std::Float32
>>   v_std::Float32
>> end
>>
>> # Given a filename of an image file, construct an ImgHSV
>> function ImgHSV(fn::ASCIIString)
>>   name,ext = splitext(basename(fn))
>>   source_img_hsv = Images.data(convert(Image{HSV{Float64}},imread(fn)))
>>   #scale all the values up from (0->1) to (0->255)
>>   source_img_scaled = map(x-> HSV( ((x.h/360)*255),(x.s*255),(x.v*255)),
>>   source_img_hsv)
>>   img_ht  = size(source_img_hsv,2)
>>   img_wid = size(source_img_hsv,1)
>>   h_mean = (mean(map(x-> x.h,source_img_hsv)/360)*255)
>>   s_mean = (mean(map(x-> x.s,source_img_hsv))*255)
>>   v_mean = (mean(map(x-> x.v,source_img_hsv))*255)
>>   h_std  = (std(map(x-> x.h,source_img_hsv)/360)*255)
>>   s_std  = (std(map(x-> x.s,source_img_hsv))*255)
>>   v_std  = (std(map(x-> x.v,source_img_hsv))*255)
>>   ImgHSV(
>> name,
>> float32(source_img_scaled),
>> img_ht,
>> img_wid,
>> h_mean,
>> s_mean,
>> v_mean,
>> h_std,
>> s_std,
>> v_std
>>   )
>> end
>>
>> Should I rename this function to something like buildImgHSV so it's not 
>> actually a constructor and *convert* doesn't enter the picture?
>>
>> Phil
>>
>>

[julia-users] Best practices for migrating 0.3 code to 0.4? (specifically constructors)

2015-03-12 Thread Phil Tomson
I thought I'd give 0.4 a spin to try out the new garbage collector.

On my current codebase developed with 0.3 I ran into several warnings 
(*float32() should now be Float32()* - that sort of thing).

And then this error:






ERROR: LoadError: LoadError: LoadError: LoadError: LoadError: MethodError: 
`convert` has no method matching convert(::Type{Img.ImgHSV}, ::UTF8String)
This may have arisen from a call to the constructor Img.ImgHSV(...),
since type constructors fall back to convert methods.
Closest candidates are:
  convert{T}(::Type{T}, ::T)
After poking around New Language Features and the list here a bit it seems 
that there are changes to how overloaded constructors work.

In my case I've got:

type ImgHSV
  name::ASCIIString
  data::Array{HSV{Float32},2}  
  #data::Array{IntHSV,2}  
  height::Int64
  wid::Int64
  h_mean::Float32
  s_mean::Float32
  v_mean::Float32
  h_std::Float32
  s_std::Float32
  v_std::Float32
end

# Given a filename of an image file, construct an ImgHSV
function ImgHSV(fn::ASCIIString)
  name,ext = splitext(basename(fn))
  source_img_hsv = Images.data(convert(Image{HSV{Float64}},imread(fn)))
  #scale all the values up from (0->1) to (0->255)
  source_img_scaled = map(x-> HSV( ((x.h/360)*255),(x.s*255),(x.v*255)),
  source_img_hsv)
  img_ht  = size(source_img_hsv,2)
  img_wid = size(source_img_hsv,1)
  h_mean = (mean(map(x-> x.h,source_img_hsv)/360)*255)
  s_mean = (mean(map(x-> x.s,source_img_hsv))*255)
  v_mean = (mean(map(x-> x.v,source_img_hsv))*255)
  h_std  = (std(map(x-> x.h,source_img_hsv)/360)*255)
  s_std  = (std(map(x-> x.s,source_img_hsv))*255)
  v_std  = (std(map(x-> x.v,source_img_hsv))*255)
  ImgHSV(
name,
float32(source_img_scaled),
img_ht,
img_wid,
h_mean,
s_mean,
v_mean,
h_std,
s_std,
v_std
  )
end

Should I rename this function to something like buildImgHSV so it's not 
actually a constructor and *convert* doesn't enter the picture?

Phil



Re: [julia-users] Re: Memory allocation questions

2015-03-12 Thread Phil Tomson


On Thursday, March 12, 2015 at 2:14:34 AM UTC-7, Mauro wrote:
>
> Julia is not yet very good with producing fast vectorized code which 
> does not allocate temporaries.  The temporaries is what gets you here. 
>
> However, running your example, I get a slightly different 
> *.mem file (which makes more sense to me): 
>
> - function forward_propagate(nl::NeuralLayer,x::Vector{Float32}) 
> 0   nl.hx = x 
> 248832000   wx = nl.w * nl.hx 
> 348364800   nl.pa = nl.b+wx 
> 1094864752   nl.pr = tanh(nl.pa).*nl.scale 
> - end 
>
> (what version of julia are you running, me 0.3.6).  So everytime 
> forward_propagate is called some temporaries are allocated.  So in 
> performance critical code you have write loops instead: 
>
> function forward_propagate(nl::NeuralLayer,x::Vector{Float32}) 
> nl.hx = x # note: nl.hx now points to the same chunk of memory 
> for i=1:size(nl.w,1) 
> nl.pa[i] = 0.; 
> for j=1:size(nl.w,2) 
> nl.pa[i] += nl.w[i,j]*nl.hx[j] 
> end 
> nl.pa[i] += nl.b[i] 
> nl.pr[i] = tanh(nl.pa[i])*nl.scale[i] 
> end 
> end 
>
> This does not allocate any memory and runs your test case at about 2x 
> the speed. 
>

Just tried that; I'm seeing a much bigger improvement. It went from 8 seconds 
to 0.5 seconds per image evaluation. Nice improvement!

>
> Also a note on the code in your first email.  Instead of: 
>
>   for y in 1:img.height 
> @simd for x in 1:img.wid 
>   if 1 < x < img.wid 
> @inbounds left   = img.data[x-1,y] 
> @inbounds center = img.data[x,y] 
> @inbounds right  = img.data[x+1,y] 
>
> you should be able to write: 
>
>   @inbounds for y in 1:img.height 
> @simd for x in 1:img.wid 
>   if 1 < x < img.wid 
> left   = img.data[x-1,y] 
> center = img.data[x,y] 
> @inbounds right  = img.data[x+1,y] 
>
> Just curious, why did you get rid of the @inbounds on the assignments to 
left and center, but not right?
 

> Also, did you check that the @simd works?  I'm no expert on that but my 
> understanding is that most of the time it doesn't work with if-else.  If 
> that is the case, maybe special-case the first and last iteration and 
> run the loop like: @simd for x in 2:img.wid-1 . 


I just did that and I don't see a huge difference there. I'm not sure @simd 
is doing much there, in fact I took it out and nothing changed. Probably 
have to look at the LLVM IR output to see what's happening there.

> In fact that would save 
> you a comparison in each iteration irrespective of @simd. 
>

Yes, that's a good point.  I think I'll just pre-load those two columns 
(the 1st and last columns of the matrix)

>
> On Thu, 2015-03-12 at 02:17, Phil Tomson wrote: 
> > I transformed it into a single-file testcase: 
> > 
> > # 
> > type NeuralLayer 
> > w::Matrix{Float32}   # weights 
> > cm::Matrix{Float32}  # connection matrix 
> > b::Vector{Float32}   # biases 
> > scale::Vector{Float32}  # 
> > a_func::Symbol # activation function 
> > hx::Vector{Float32}  # input values 
> > pa::Vector{Float32}  # pre activation values 
> > pr::Vector{Float32}  # predictions (activation values) 
> > frozen::Bool 
> > end 
> > 
> > function forward_propagate(nl::NeuralLayer,x::Vector{Float32}) 
> >   nl.hx = x 
> >   wx = nl.w * nl.hx 
> >   nl.pa = nl.b+wx 
> >   nl.pr = tanh(nl.pa).*nl.scale 
> > end 
> > 
> > out_dim = 10 
> > in_dim = 10 
> > b = sqrt(6) / sqrt(in_dim + out_dim) 
> > 
> > nl = NeuralLayer( 
> >float32(2.0b * rand(Float32,out_dim,in_dim) - b), #setup rand 
> weights 
> >ones(Float32,out_dim,in_dim), #connection matrix 
> >  float32(map(x->x*(randbool()?-1:1),rand(out_dim)*rand(1:4))), 
> > #biases 
> >rand(Float32,out_dim),  # scale 
> >:tanh, 
> >rand(Float32,in_dim), 
> >rand(Float32,out_dim), 
> >rand(Float32,out_dim), 
> >false 
> > ) 
> > 
> > x = ones(Float32,in_dim) 
> > forward_propagate(nl,x) 
> > clear_malloc_data() 
> > for i in 1:(1920*1080) 
> >   forward_propagate(nl,x) 
> > end 
> > println("nl.pr is: $(nl.pr)") 
> > 
> # 
>
> > 
> > Now the interesting part of the  .mem file looks like this: 
> > 
> >- function forward_propagate(nl::NeuralLayer,x::Vector{Float32}) 
> > 0   nl.hx = x 
> > 0   wx = nl.w * nl.hx 
> >   348368752   nl.pa = nl.b+wx 
> > 0   nl.pr = tanh(nl.pa).*nl.scale 
> > - end 
> > 
> > I split up the matrix multiply and the addition of bias vector into two 
> > separate lines and it looks like it's the vector addition that's 
> allocating 
> > all of the memory (which seems surprising, but maybe I'm missing 
> something). 
> > 
> > Phil 
>
>

Re: [julia-users] Re: Memory allocation questions

2015-03-12 Thread Phil Tomson


On Thursday, March 12, 2015 at 2:14:34 AM UTC-7, Mauro wrote:
>
> Julia is not yet very good at producing fast vectorized code which 
> does not allocate temporaries.  The temporaries are what get you here. 
>
> However, running your example, I get a slightly different *.mem file 
> (which makes more sense to me): 
>
> - function forward_propagate(nl::NeuralLayer,x::Vector{Float32}) 
> 0   nl.hx = x 
> 248832000   wx = nl.w * nl.hx 
> 348364800   nl.pa = nl.b+wx 
> 1094864752   nl.pr = tanh(nl.pa).*nl.scale 
> - end 
>

I would have guessed it should look more like that; why would the 
multiplication not result in temporaries (in my case)? That was a bit 
mysterious. 

>
> (what version of julia are you running, me 0.3.6).  


0.3.4 in my case.

 

> So every time 
> forward_propagate is called, some temporaries are allocated.  So in 
> performance-critical code you have to write loops instead: 
>

Will this always be the case, or is this a current limitation of the Julia 
compiler? It seems like the more idiomatic, compact code should be handled 
more efficiently. Having to break this out into nested for-loops definitely 
hurts both readability and productivity.

>
> function forward_propagate(nl::NeuralLayer,x::Vector{Float32}) 
> nl.hx = x # note: nl.hx now points to the same chunk of memory 
> for i=1:size(nl.w,1) 
> nl.pa[i] = 0.; 
> for j=1:size(nl.w,2) 
> nl.pa[i] += nl.w[i,j]*nl.hx[j] 
> end 
> nl.pa[i] += nl.b[i] 
> nl.pr[i] = tanh(nl.pa[i])*nl.scale[i] 
> end 
> end 
>
>  

> This does not allocate any memory and runs your test case at about 2x 
> the speed. 
>
> Also a note on the code in your first email.  Instead of: 
>
>   for y in 1:img.height 
> @simd for x in 1:img.wid 
>   if 1 < x < img.wid 
> @inbounds left   = img.data[x-1,y] 
> @inbounds center = img.data[x,y] 
> @inbounds right  = img.data[x+1,y] 
>
> you should be able to write: 
>
>   @inbounds for y in 1:img.height 
> @simd for x in 1:img.wid 
>   if 1 < x < img.wid 
> left   = img.data[x-1,y] 
> center = img.data[x,y] 
> @inbounds right  = img.data[x+1,y] 
>
> Also, did you check that the @simd works?  I'm no expert on that but my 
> understanding is that most of the time it doesn't work with if-else.  If 
> that is the case, maybe special-case the first and last iteration and 
> run the loop like: @simd for x in 2:img.wid-1 .  In fact that would save 
> you a comparison in each iteration irrespective of @simd. 
>
> On Thu, 2015-03-12 at 02:17, Phil Tomson > 
> wrote: 
> > I transformed it into a single-file testcase: 
> > 
> > # 
> > type NeuralLayer 
> > w::Matrix{Float32}   # weights 
> > cm::Matrix{Float32}  # connection matrix 
> > b::Vector{Float32}   # biases 
> > scale::Vector{Float32}  # 
> > a_func::Symbol # activation function 
> > hx::Vector{Float32}  # input values 
> > pa::Vector{Float32}  # pre activation values 
> > pr::Vector{Float32}  # predictions (activation values) 
> > frozen::Bool 
> > end 
> > 
> > function forward_propagate(nl::NeuralLayer,x::Vector{Float32}) 
> >   nl.hx = x 
> >   wx = nl.w * nl.hx 
> >   nl.pa = nl.b+wx 
> >   nl.pr = tanh(nl.pa).*nl.scale 
> > end 
> > 
> > out_dim = 10 
> > in_dim = 10 
> > b = sqrt(6) / sqrt(in_dim + out_dim) 
> > 
> > nl = NeuralLayer( 
> >float32(2.0b * rand(Float32,out_dim,in_dim) - b), #setup rand 
> weights 
> >ones(Float32,out_dim,in_dim), #connection matrix 
> >  float32(map(x->x*(randbool()?-1:1),rand(out_dim)*rand(1:4))), 
> > #biases 
> >rand(Float32,out_dim),  # scale 
> >:tanh, 
> >rand(Float32,in_dim), 
> >rand(Float32,out_dim), 
> >rand(Float32,out_dim), 
> >false 
> > ) 
> > 
> > x = ones(Float32,in_dim) 
> > forward_propagate(nl,x) 
> > clear_malloc_data() 
> > for i in 1:(1920*1080) 
> >   forward_propagate(nl,x) 
> > end 
> > println("nl.pr is: $(nl.pr)") 
> > 
> # 
>
> > 
> > Now the interesting part of the  .mem file looks like this: 
> > 
> >- function forward_propagate(nl::NeuralLayer,x::Vector{Float32}) 
> > 0   nl.hx = x 
> > 0   wx = nl.w * nl.hx 
> >   348368752   nl.pa = nl.b+wx 
> > 0   nl.pr = tanh(nl.pa).*nl.scale 
> > - end 
> > 
> > I split up the matrix multiply and the addition of bias vector into two 
> > separate lines and it looks like it's the vector addition that's 
> allocating 
> > all of the memory (which seems surprising, but maybe I'm missing 
> something). 
> > 
> > Phil 
>
>

[julia-users] Re: Memory allocation questions

2015-03-11 Thread Phil Tomson
I transformed it into a single-file testcase:

#
type NeuralLayer
w::Matrix{Float32}   # weights 
cm::Matrix{Float32}  # connection matrix 
b::Vector{Float32}   # biases 
scale::Vector{Float32}  # 
a_func::Symbol # activation function
hx::Vector{Float32}  # input values
pa::Vector{Float32}  # pre activation values
pr::Vector{Float32}  # predictions (activation values)
frozen::Bool
end

function forward_propagate(nl::NeuralLayer,x::Vector{Float32})
  nl.hx = x 
  wx = nl.w * nl.hx
  nl.pa = nl.b+wx
  nl.pr = tanh(nl.pa).*nl.scale 
end

out_dim = 10
in_dim = 10
b = sqrt(6) / sqrt(in_dim + out_dim)

nl = NeuralLayer(
   float32(2.0b * rand(Float32,out_dim,in_dim) - b), #setup rand weights
   ones(Float32,out_dim,in_dim), #connection matrix
 float32(map(x->x*(randbool()?-1:1),rand(out_dim)*rand(1:4))), 
#biases
   rand(Float32,out_dim),  # scale 
   :tanh, 
   rand(Float32,in_dim),
   rand(Float32,out_dim),
   rand(Float32,out_dim),
   false
)

x = ones(Float32,in_dim)
forward_propagate(nl,x)
clear_malloc_data()
for i in 1:(1920*1080)
  forward_propagate(nl,x)
end
println("nl.pr is: $(nl.pr)")
#

Now the interesting part of the  .mem file looks like this:

   - function forward_propagate(nl::NeuralLayer,x::Vector{Float32})
0   nl.hx = x
0   wx = nl.w * nl.hx
  348368752   nl.pa = nl.b+wx
0   nl.pr = tanh(nl.pa).*nl.scale
- end

I split up the matrix multiply and the addition of the bias vector into two 
separate lines, and it looks like it's the vector addition that's allocating 
all of the memory (which seems surprising, but maybe I'm missing something).

Phil





[julia-users] Re: Array as function

2015-03-11 Thread Phil Tomson


On Wednesday, March 11, 2015 at 4:49:06 PM UTC-7, Diego Tapias wrote:
>
> Quick question: what does this mean
>
> julia> Array(Int,1)
>  1-element Array{Int64,1}:
>  139838919411184
>
> ?
>
> And another question:
>
> Is this just used for initializing an array? 
>

Typically you would use something like:

my_array =  Array(Int,1)

This preallocates space for your array and gives it a type (it's an array 
with only one element).

The array isn't initialized at that point; the space is just allocated. 
That's why you see:

139838919411184

in your single-element array. 

If you wanted to initialize it, you could do:

my_array = zeros(Int,1)
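
And if you want some other initial value, fill works too:

julia> fill(7, 3)
3-element Array{Int64,1}:
 7
 7
 7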

> ​
>


[julia-users] Memory allocation questions

2015-03-11 Thread Phil Tomson
I started out by putting an '@time' macro call on the function that I 
figured was taking the most time; the results looked like:
elapsed time: 8.429919506 seconds (4275452256 bytes allocated, 37.36% gc 
time)

... so lots of bytes being allocated. 

To get a better picture of where that was happening, I tried running Julia 
with --track-allocation=user.

Looking at the .mem file for the same function I had prefixed with 
@time, I see:

- function 
fcq_clust(img::ImgHSV,ann::ANN.ArtificialNeuralNetwork,blend0::Matrix{Float32})
0  img_hs_mean::Float32 = 0.5
0  left::HSV{Float32}   = 
HSV(float32(0.0),float32(0.0),float32(0.0))
0  center::HSV{Float32} = 
HSV(float32(0.0),float32(0.0),float32(0.0))
0  right::HSV{Float32}  = 
HSV(float32(0.0),float32(0.0),float32(0.0))
  768  param_array::Vector{Float32} = Array(Float32,10)
0  param_array[7] = img.s_mean 
0  param_array[8] = img.v_mean
0  param_array[9] = img.s_std  
0  param_array[10]= img.v_std
0  for y in 1:img.height
0@simd for x in 1:img.wid
0  if 1 < x < img.wid
0@inbounds left   = img.data[x-1,y]
0@inbounds center = img.data[x,y]
0@inbounds right  = img.data[x+1,y]
- 
0@inbounds param_array[1] = left.s
0@inbounds param_array[2] = center.s
0@inbounds param_array[3] = right.s
0@inbounds param_array[4] = left.v
0@inbounds param_array[5] = center.v
0@inbounds param_array[6] = right.v
- 
0ANN.predict(ann,param_array)
- 
0@inbounds blend0[x,y] = param_array[1]
-  else
0@inbounds blend0[x,y] = img_hs_mean
-  end
-end
0  end
0 end 


It looks pretty OK. But then I looked at the .mem file for the ANN.predict 
function:

- function predict(ann::ArtificialNeuralNetwork,x::Vector{Float32})
0 for i in 1:length(ann.layers)
0 x = forward_propagate(ann.layers[i], x)
- end
- end

Again, looks fine, but then I checked that forward_propagate function:

- function forward_propagate(nl::NeuralLayer,x::Vector{Float32})
0   nl.hx = x 
-1828754432   nl.pa = nl.b + nl.w * x
0   nl.pr = tanh(nl.pa).*nl.scale 
- end

Aha! Now we're getting somewhere. Apparently so much memory was allocated 
there that the counter overflowed and went negative(!)

The NeuralLayer type is defined as:

type NeuralLayer
w::Matrix{Float32}   # weights
cm::Matrix{Float32}  # connection matrix 
b::Vector{Float32}   # biases
scale::Vector{Float32}  #
a_func::Symbol # activation function
hx::Vector{Float32}  # input values
pa::Vector{Float32}  # pre activation values
pr::Vector{Float32}  # predictions (activation values)
frozen::Bool
end

Any ideas for reducing the memory allocation in:

nl.pa = nl.b + nl.w * x
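
(For reference, one allocation-free formulation of that line, sketched on 
the assumption that the Float32 BLAS wrappers are usable here: seed nl.pa 
with the bias and let gemv! accumulate the product into it.)

copy!(nl.pa, nl.b)                                   # nl.pa = nl.b, in place
Base.LinAlg.BLAS.gemv!('N', 1.0f0, nl.w, x, 1.0f0, nl.pa)  # nl.pa += nl.w * x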

Phil








[julia-users] eval in function scope (accessing function args)

2015-03-05 Thread Phil Tomson
Given:

abstract ABSGene
type NuGene <: Genetic.ABSGene
 fqnn::ANN
 dcqnn::ANN
 score::Float32
 end

 function mutate_copy{T<:ABSGene}(gene::T)
 all_fields_except_score = filter(x->x != :score, names(T))
 all_fields_except_score = map(x->("mutate_copy(gene.$x)"), all_fields_except_score)
 eval(parse("$(T)("*join(all_fields_except_score,",")*")"))
 end

 ng = NuGene()

 mutated_ng = mutate_copy(ng)


results in:
  
ERROR: gene not defined
in mutate_copy at none:4

If I just look at it as a string (prior to running parse and eval), it 
looks fine:

"NuGene(mutate_copy(gene.fqnn),mutate_copy(gene.dcqnn))"

However, eval doesn't seem to know about the gene argument that has been 
passed into the mutate_copy function.

How do I access the gene argument that's been passed into mutate_copy?

I tried this:

function mutate_copy{T<:ABSGene}(gene::T)
  all_fields_except_score = filter(x->x != :score, names(T))
  all_fields_except_score = map(x->("mutate_copy($gene.$x)"), all_fields_except_score)
  #eval(parse("$(T)("*join(all_fields_except_score,",")*")"))
  (("$(T)("*join(all_fields_except_score,",")*")"))
end

But that expands the gene in the string which is not what I want.
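
(For comparison, a sketch of an eval-free version; like the string-building 
version above, it assumes the constructor accepts the non-score fields 
positionally. getfield reads the values straight off the local gene argument, 
so there is no string building and no global eval involved:)

function mutate_copy{T<:ABSGene}(gene::T)
    flds = filter(f -> f != :score, names(T))
    T([mutate_copy(getfield(gene, f)) for f in flds]...)
end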





 


[julia-users] Any way to constrain a Function type by argument types?

2015-03-03 Thread Phil Tomson
Let's say I want to define a type that contains a Function type:

type IterFunc
  iter_trigger::Int64
  func::Function 
end

Is there any way to say that the func member in that type should be a 
function that takes arguments of a certain type?

 Something like this (not valid syntax; I tried it):

type IterFunc
  iter_trigger::Int64
  func::Function{i::Int64, something::SomeType}
end


?
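
(A sketch of one common workaround, with a hypothetical SomeType placeholder 
just to make it self-contained: keep the field a plain Function, and enforce 
the argument types at the call site.)

# Hypothetical placeholder type for the sketch.
type SomeType
    val::Float64
end

type IterFunc
    iter_trigger::Int64
    func::Function
end

# Callers must pass an Int64 and a SomeType here; if itf.func has no method
# for those argument types, the failure shows up at this call.
run_iter(itf::IterFunc, i::Int64, s::SomeType) = itf.func(i, s)

itf = IterFunc(10, (i, s) -> i * s.val)
run_iter(itf, 3, SomeType(2.5))   # => 7.5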


[julia-users] Any introspective way to get a list of functions defined in a module?

2015-02-25 Thread Phil Tomson
Just wondering if there is any way to get a list of functions defined in a 
module?
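
(A sketch of one way, assuming 0.3-era Base where names(m, true) lists a 
module's bindings; you can then filter for the ones bound to functions:)

module Demo
    f(x) = x + 1
    g(x) = 2x
    const c = 42
end

fns = filter(n -> isa(getfield(Demo, n), Function), names(Demo, true))
println(fns)   # includes :f and :g (plus module-provided names like :eval)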


[julia-users] What Julia package would you use for animations?

2015-02-06 Thread Phil Tomson
Say I want to do something like a Boids simulation in Julia 
(http://en.wikipedia.org/wiki/Boids); what packages are available to show 
the animations? These are easy(ish) things to do in Processing, but Julia 
is more capable mathematically and performance should be better. 

I know there's the Images library, but is it amenable to doing animations?




[julia-users] Re: It would be great to see some Julia talks at OSCON 2015

2015-01-30 Thread Phil Tomson
Just a reminder: the call for proposals for OSCON closes Feb 2. You've got 
one more weekend to get your Julia talk proposals in.  It's great 
visibility for the language as well as for presenters.
http://www.oscon.com/open-source-2015/public/cfp/360 

Phil


On Tuesday, January 6, 2015 at 7:16:07 AM UTC-8, Phil Tomson wrote:
>
> Hello Julia users:  
>
> I'm on the program committee for OSCON (the O'Reilly Open Source 
> Convention) and we're always looking for interesting programming talks. I'm 
> pretty sure that there hasn't been any kind of Julia talk at OSCON in the 
> past. Given the rising visibility of the language it would be great if we 
> could get some talk proposals for OSCON 2015.  This year there is a special 
> emphasis on languages for math & science (see the Call for Proposals here: 
> http://www.oscon.com/open-source-2015/public/cfp/360 
>  
> specifically the "Solve" track) and Julia really should be featured.
>
> Are you using Julia to solve some interesting problems? Please submit a 
> talk proposal.
>
> OSCON is held in beautiful Portland Oregon July 20-24 this year.  Hope to 
> see your Julia talk there.
>
> Phil
>


[julia-users] Debugging: stepping into another package

2015-01-22 Thread Phil Tomson


I'm using the Debug package. I want to set a breakpoint and then step into 
another external package (in this case Mocha). I tried something like this:

using Mocha
using Debug

# ...set up a lot of Mocha stuff ...

@debug (()-> begin
  println("break point!")
  @bp
  solve(solver,net)
end)()

It does indeed stop at the @bp:

  142    println("break point!")
-->  143    @bp
  144    solve(solver, net)

debug:143> s
at /home/phil/devel/WASP_TD/algorithm_dev/processing/depth_engine/src/learn_grad_whole.jl:144
  143    @bp
-->  144    solve(solver, net)
  145  end)()

debug:144> s

But at that point it doesn't step into the solve function (defined in the 
Mocha package). Is there any way to do that?


[julia-users] It would be great to see some Julia talks at OSCON 2015

2015-01-06 Thread Phil Tomson
Hello Julia users:  

I'm on the program committee for OSCON (the O'Reilly Open Source 
Convention) and we're always looking for interesting programming talks. I'm 
pretty sure that there hasn't been any kind of Julia talk at OSCON in the 
past. Given the rising visibility of the language it would be great if we 
could get some talk proposals for OSCON 2015.  This year there is a special 
emphasis on languages for math & science (see the Call for Proposals here: 
http://www.oscon.com/open-source-2015/public/cfp/360 specifically the 
"Solve" track) and Julia really should be featured.

Are you using Julia to solve some interesting problems? Please submit a 
talk proposal.

OSCON is held in beautiful Portland Oregon July 20-24 this year.  Hope to 
see your Julia talk there.

Phil


[julia-users] Re: Why is typeof hex or binary number Uint64, while typeof decimal number is Int64?

2014-12-07 Thread Phil Tomson


On Sunday, December 7, 2014 5:08:45 PM UTC-8, ele...@gmail.com wrote:
>
>
>
> On Monday, December 8, 2014 10:21:52 AM UTC+10, Phil Tomson wrote:
>>
>> julia> typeof(-0b111)
>> Uint64
>>
>> julia> typeof(-7)
>> Int64
>>
>> julia> typeof(-0x7)
>> Uint64
>>
>> julia> typeof(-7)
>> Int64
>>
>> I find this a bit surprising. Why does the base of the number determine 
>> signed or unsigned-ness? Is this intentional or possibly a bug?
>>
>
> This is documented behaviour 
> http://docs.julialang.org/en/latest/manual/integers-and-floating-point-numbers/#integers
>  
> based on the heuristic that using hex is "mostly" in situations where you 
> need unsigned behaviour anyway.
>

The doc says: 

> This behavior is based on the observation that when one uses *unsigned 
> hex literals* for integer values, one typically is using them to 
> represent a fixed numeric byte sequence, rather than just an integer value.
>  
>

Hmm, in the above cases they were signed hex literals. 
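
(For what it's worth, the unsigned-ness comes from the hex/binary literal 
itself; the leading minus sign doesn't turn it into a signed literal. 
Converting first gives a signed value, e.g.:)

julia> -int(0x7)
-7

julia> typeof(-int(0x7))
Int64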


[julia-users] Why is typeof hex or binary number Uint64, while typeof decimal number is Int64?

2014-12-07 Thread Phil Tomson
julia> typeof(-0b111)
Uint64

julia> typeof(-7)
Int64

julia> typeof(-0x7)
Uint64

julia> typeof(-7)
Int64

I find this a bit surprising. Why does the base of the number determine 
signed or unsigned-ness? Is this intentional or possibly a bug?