Re: [julia-users] Re: haskey for Set
That works, thank you. Freddy Chua On Thu, Nov 12, 2015 at 3:06 PM, Seth <catch...@bromberger.com> wrote: > in(el, S) or el in S. > > On Thursday, November 12, 2015 at 2:36:28 PM UTC-8, Freddy Chua wrote: >> haskey does not work for Set? It only works for Dict. Should it be that way? How do I test whether an element is in a Set? >>
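As the reply says, membership on a Set goes through `in` rather than `haskey`; a minimal sketch of both:

```julia
S = Set([1, 2, 3])

# Membership tests on a Set use `in` (infix or function form):
@show 2 in S        # true
@show in(5, S)      # false

# haskey is for key lookup in associative containers like Dict:
d = Dict("a" => 1)
@show haskey(d, "a")   # true
```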
[julia-users] haskey for Set
haskey does not work for Set? It only works for Dict. Should it be that way? How do I test whether an element is in a Set?
[julia-users] Measure Execution Time of Remote Process
If I execute the following,

    @sync begin
        @spawnat remote_process_id_1 f()
        @spawnat remote_process_id_2 f()
    end

how do I measure how much time each individual process takes? @elapsed only gives an aggregated value.
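One way to get per-task timings is to wrap @elapsed inside each spawned task, so every remote call reports its own elapsed time. A sketch using current Julia's Distributed stdlib, with process id 1 standing in for the remote worker ids:

```julia
using Distributed

f() = sum(rand(10^6))

# Each task times itself where it runs and returns its own elapsed seconds;
# process id 1 (the master, always present) stands in for remote_process_id_1/2.
r1 = @spawnat 1 (@elapsed f())
r2 = @spawnat 1 (@elapsed f())
times = fetch.([r1, r2])   # one Float64 per task
```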
[julia-users] compile errors on osx 10.9.5 for v0.3.2
    error during bootstrap: LoadError(at sysimg.jl line 230: LoadError(at linalg.jl line 216: LoadError(at linalg/umfpack.jl line 78: ErrorException(error compiling anonymous: could not load module libumfpack: dlopen(libumfpack.dylib, 1): Library not loaded: /Users/freddy/Unix/homebrew/lib/gcc/x86_64-apple-darwin13.3.0/4.9.1/libgfortran.3.dylib
      Referenced from: /Users/freddy/Unix/julia/usr/lib//libopenblas.dylib
      Reason: image not found
    Basic Block in function 'julia_anonymous;13266' does not have terminator!
    label %ifcont
    LLVM ERROR: Broken module, no Basic Block terminator!
[julia-users] Sorted Dictionary
Is there a dictionary, associative container, or tree map that is sorted on its keys?
[julia-users] Re: Sorted Dictionary
It's not OrderedDict, if that's what you are thinking of. On Thursday, July 31, 2014 1:55:35 PM UTC-7, Ivar Nesje wrote: https://github.com/JuliaLang/DataStructures.jl
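The DataStructures.jl package linked above provides a SortedDict type. With only Base, one workaround (a sketch, not the package's API) is to sort the keys on demand when iterating:

```julia
d = Dict(3 => "c", 1 => "a", 2 => "b")

# Iterate key => value pairs in key order by sorting the keys on demand.
sorted_keys = sort(collect(keys(d)))
sorted_pairs = [k => d[k] for k in sorted_keys]
```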
[julia-users] How to get text from PyObject matplotlib.text.Text object
I am using PyPlot to visualise my results. Then I used locs, labels = xticks() to get the x-axis labels, but labels[1] is a PyObject matplotlib.text.Text object instead of a Julia String. How do I get the string out? I need to reformat it. -Freddy
[julia-users] Compilation error help??
    ../kernel/x86_64/dgemm_kernel_4x4_haswell.S:2548: Error: no such instruction: `vpermpd $0xb1,%ymm3,%ymm3'
    make[4]: *** [dtrmm_kernel_RN_HASWELL.o] Error 1
    make[3]: *** [libs] Error 1
    *** Clean the OpenBLAS build with 'make -C deps clean-openblas'. Rebuild with 'make OPENBLAS_USE_THREAD=0' if OpenBLAS had trouble linking libpthread.so, and with 'make OPENBLAS_TARGET_ARCH=NEHALEM' if there were errors building SandyBridge support. Both these options can also be used simultaneously. ***
    make[2]: *** [openblas-v0.2.9/libopenblas.so] Error 1
    make[1]: *** [julia-release] Error 2
    make: *** [release] Error 2

I have already included both of these options: OPENBLAS_USE_THREAD=0 and OPENBLAS_TARGET_ARCH=NEHALEM. This is CentOS in a VM.
Re: [julia-users] Unable to compile Julia after Homebrew removed gfortran
That does not work; the compiler reverts back to using clang. On Thursday, May 29, 2014 1:57:25 PM UTC+8, Kevin Squire wrote: USEGCC = 1 USECLANG = 0 On Wed, May 28, 2014 at 10:34 PM, Freddy Chua fred...@gmail.com wrote: I am trying to benchmark GNU gcc vs Clang. How do I configure Make.user so that I compile using GNU gcc instead of clang? On Thursday, May 29, 2014 11:18:40 AM UTC+8, Jameson wrote: After changing versions of gfortran, you need to delete from deps anything built with Fortran (SuiteSparse, OpenBLAS, LAPACK, and ARPACK) to rebuild them. (Anyone have an idea of where to put this as an FAQ?) On Wed, May 28, 2014 at 10:27 PM, Freddy Chua fred...@gmail.com wrote: I am on OSX Mavericks. OSX does not come with the GCC compilers; instead it uses the Clang compilers, which do not include gfortran, so I have been using gfortran from Homebrew. Recently, Homebrew removed gfortran as a formula; gfortran is now included in the Homebrew gcc formula. After removing gfortran and installing gcc, I cannot compile Julia anymore. This is the output from running make clean, make cleanall, then make: sparse/abstractsparse.jl linalg.jl error during bootstrap: LoadError(at sysimg.jl line 222: LoadError(at linalg.jl line 195: LoadError(at linalg/matmul.jl line 206: ErrorException(error compiling blas_vendor: could not load module libopenblas: dlopen(libopenblas.dylib, 1): Library not loaded: ~/homebrew/Cellar/gfortran/4.8.2/gfortran/lib/libgfortran.3.dylib Referenced from: ~/julia/usr/lib/libopenblas.dylib Reason: image not found Basic Block in function 'julia_blas_vendor7698' does not have terminator! label %try LLVM ERROR: Broken module, no Basic Block terminator! make[1]: *** [~/julia/usr/lib/julia/sys0.o] Error 1 make: *** [release] Error 2 Can someone help me, please?
[julia-users] Unable to compile Julia after Homebrew removed gfortran
I am on OSX Mavericks. OSX does not come with the GCC compilers; instead it uses the Clang compilers, which do not include gfortran, so I have been using gfortran from Homebrew. Recently, Homebrew removed gfortran as a formula; gfortran is now included in the Homebrew gcc formula. After removing gfortran and installing gcc, I cannot compile Julia anymore. This is the output from running make clean, make cleanall, then make:

    sparse/abstractsparse.jl
    linalg.jl
    error during bootstrap: LoadError(at sysimg.jl line 222: LoadError(at linalg.jl line 195: LoadError(at linalg/matmul.jl line 206: ErrorException(error compiling blas_vendor: could not load module libopenblas: dlopen(libopenblas.dylib, 1): Library not loaded: ~/homebrew/Cellar/gfortran/4.8.2/gfortran/lib/libgfortran.3.dylib
      Referenced from: ~/julia/usr/lib/libopenblas.dylib
      Reason: image not found
    Basic Block in function 'julia_blas_vendor7698' does not have terminator!
    label %try
    LLVM ERROR: Broken module, no Basic Block terminator!
    make[1]: *** [~/julia/usr/lib/julia/sys0.o] Error 1
    make: *** [release] Error 2

Can someone help me, please?
Re: [julia-users] Unable to compile Julia after Homebrew removed gfortran
I am trying to benchmark GNU gcc vs Clang. How do I configure Make.user so that I compile using GNU gcc instead of clang? On Thursday, May 29, 2014 11:18:40 AM UTC+8, Jameson wrote: After changing versions of gfortran, you need to delete from deps anything built with Fortran (SuiteSparse, OpenBLAS, LAPACK, and ARPACK) to rebuild them. (Anyone have an idea of where to put this as an FAQ?) On Wed, May 28, 2014 at 10:27 PM, Freddy Chua fred...@gmail.com wrote: I am on OSX Mavericks. OSX does not come with the GCC compilers; instead it uses the Clang compilers, which do not include gfortran, so I have been using gfortran from Homebrew. Recently, Homebrew removed gfortran as a formula; gfortran is now included in the Homebrew gcc formula. After removing gfortran and installing gcc, I cannot compile Julia anymore. This is the output from running make clean, make cleanall, then make: sparse/abstractsparse.jl linalg.jl error during bootstrap: LoadError(at sysimg.jl line 222: LoadError(at linalg.jl line 195: LoadError(at linalg/matmul.jl line 206: ErrorException(error compiling blas_vendor: could not load module libopenblas: dlopen(libopenblas.dylib, 1): Library not loaded: ~/homebrew/Cellar/gfortran/4.8.2/gfortran/lib/libgfortran.3.dylib Referenced from: ~/julia/usr/lib/libopenblas.dylib Reason: image not found Basic Block in function 'julia_blas_vendor7698' does not have terminator! label %try LLVM ERROR: Broken module, no Basic Block terminator! make[1]: *** [~/julia/usr/lib/julia/sys0.o] Error 1 make: *** [release] Error 2 Can someone help me, please?
[julia-users] Convert Array{Array{Float64}, 1} to Array{Float64, 2}
For example,

    a = Array(Array, 0)
    push!(a, [1, 2])
    push!(a, [3, 4])

gives me an array of arrays. Can I easily get a matrix from this?
[julia-users] Re: Convert Array{Array{Float64}, 1} to Array{Float64, 2}
I mean, is there a function that takes a and returns a matrix? b = convert_to_matrix(a); b[:, 2] = [2,4] On Monday, May 26, 2014 1:36:47 AM UTC+8, Freddy Chua wrote: For example, a = Array(Array, 0); push!(a, [1, 2]); push!(a, [3, 4]) gives me an array of arrays. Can I easily get a matrix from this?
[julia-users] Re: Convert Array{Array{Float64}, 1} to Array{Float64, 2}
Hang on, what does the ... in hcat(a...) mean? On Monday, May 26, 2014 1:47:21 AM UTC+8, Ethan Anderes wrote: Right, hcat(a...) does that (up to a transpose, since Julia stores things in column-major order).

    julia> a = Array(Array, 0)
    0-element Array{Array{T,N},1}

    julia> push!(a, [1, 2])
    1-element Array{Array{T,N},1}:
     [1,2]

    julia> push!(a, [3, 4])
    2-element Array{Array{T,N},1}:
     [1,2]
     [3,4]

    julia> b = hcat(a...)
    2x2 Array{Int64,2}:
     1  3
     2  4

    julia> b[:, 2]
    2-element Array{Int64,1}:
     3
     4

On Sunday, May 25, 2014 10:42:38 AM UTC-7, Freddy Chua wrote: I mean, is there a function that takes a and returns a matrix? b = convert_to_matrix(a); b[:, 2] = [2,4]
[julia-users] Re: Convert Array{Array{Float64}, 1} to Array{Float64, 2}
Cool, thanks! On Monday, May 26, 2014 1:57:28 AM UTC+8, Ethan Anderes wrote: I love the ... notation. It splats the entries into separate arguments, separated by commas, into the function.

    julia> y = "abcd"
    "abcd"

    julia> [y...] == ['a', 'b', 'c', 'd']
    true

On Sunday, May 25, 2014 10:49:05 AM UTC-7, Freddy Chua wrote: Hang on, what does the ... in hcat(a...) mean?
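Putting the two answers together, written with current Julia's literal syntax rather than the 0.2-era Array(Array, 0):

```julia
a = [[1.0, 2.0], [3.0, 4.0]]   # Vector of Vectors

b = hcat(a...)                  # splatting: same as hcat(a[1], a[2])
c = reduce(hcat, a)             # equivalent result, and avoids splatting a
                                # huge argument list when a is long
```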
[julia-users] Re: How does pass-by-sharing work exactly?
b = b .+ 5 creates a new instance of an array, so the original array pointed to by b is not changed at all. On Thursday, May 1, 2014 7:39:14 PM UTC+8, Kaj Wiik wrote: As a new user I was surprised that even if you change the value of function arguments (inside the function), the changes are not always visible outside, but in some cases they are. Here's an example:

    function vappu!(a,b)
        a[3] = 100
        b = b .+ 5
        (a,b)
    end

    julia> c = [1:5]
    julia> d = [1:5]
    julia> vappu!(c,d)
    ([1,2,100,4,5],[6,7,8,9,10])

    julia> c
    5-element Array{Int64,1}:
       1
       2
     100
       4
       5

    julia> d
    5-element Array{Int64,1}:
     1
     2
     3
     4
     5

Should I loop over arrays explicitly? What is happening in b = b .+ 5? Thanks, Kaj
Re: [julia-users] How does pass-by-sharing work exactly?
Do this:

    b = [1:5]
    f(x) = x + 5
    map!(f, b)

On Thursday, May 1, 2014 9:03:35 PM UTC+8, Kevin Squire wrote: b[:] = b .+ 5 has the behavior that you want. However, it creates a copy, does the addition, then copies the result back into b. So, looping (aka devectorizing) would generally be faster. For simple expressions like these, though, the Devectorize.jl package should allow you to write @devec b[:] = b .+ 5. It then rewrites the expression as a loop. It isn't able to recognize some expressions, though (especially complex ones), so YMMV. (Actually, it may not work with .+, since that is a relatively new change in the language. If you check and it doesn't, try submitting a github issue, or just report back here.) Cheers! Kevin On Thursday, May 1, 2014, Kaj Wiik kaj@gmail.com wrote: OK, thanks, makes sense. But how do I change the original instance? Is looping the only way? On Thursday, May 1, 2014 3:12:51 PM UTC+3, Freddy Chua wrote: b = b .+ 5 creates a new instance of an array, so the original array pointed to by b is not changed at all.
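The two behaviors can be seen side by side. The `.=` form is current-Julia syntax for in-place assignment; the era's equivalent was b[:] = b .+ 5, as in Kevin's reply:

```julia
function rebind(v)
    v = v .+ 5       # allocates a new array; only the local name v is rebound
    return v
end

function mutate!(v)
    v .= v .+ 5      # writes the result back into the caller's array
    return v
end

c = [1, 2, 3]
rebind(c)            # c is unchanged: [1, 2, 3]
mutate!(c)           # c is now [6, 7, 8]
```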
Re: [julia-users] Re: Help me optimize Stochastic Gradient Descent of Least Squares Error
Cool, it works better now. I thought having the code inside begin and end already avoided global scoping of the variables. But I would still like to point out that Java is still twice as fast as Julia. I am not sure how Scala compares to Julia, but Julia's syntax is way easier than Java's and Scala's. Freddy Chua On Sun, Apr 27, 2014 at 2:07 PM, Elliot Saba staticfl...@gmail.com wrote: Hey there Freddy. The first thing you can do to speed up your code is to throw it inside of a function. Simply replacing your first line (which is begin) with function domytest() speeds up your code significantly. I get a runtime of about 1.5 seconds from running the function versus ~70 seconds from running the original code. I believe the reason behind this is that outside of a function, all of these variables are treated as global variables, which cannot have the same assumptions of type-stability that local variables can have, which slows down computation significantly. See this page for more info and other performance tips: http://julia.readthedocs.org/en/latest/manual/performance-tips/ -E On Sat, Apr 26, 2014 at 11:03 PM, Freddy Chua freddy...@gmail.com wrote: This code takes 60+ secs to execute on my machine. The Java equivalent takes only 0.2 secs!!! Please tell me how to optimise the following code.

    begin
        N = 1
        K = 100
        rate = 1e-2
        ITERATIONS = 1
        # generate y
        y = rand(N)
        # generate x
        x = rand(K, N)
        # generate w
        w = zeros(Float64, K)
        tic()
        for i=1:ITERATIONS
            for n=1:N
                y_hat = 0.0
                for k=1:K
                    y_hat += w[k] * x[k,n]
                end
                for k=1:K
                    w[k] += rate * (y[n] - y_hat) * x[k,n]
                end
            end
        end
        toc()
    end

Sorry for repeated posting; I did so to properly indent the code.
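The function-wrapped version Elliot describes might look like the sketch below (using current Julia's @time; tic()/toc() were the era's equivalents). Inside the function, every variable is a type-stable local:

```julia
function domytest(N, K, rate, iterations)
    y = rand(N)
    x = rand(K, N)
    w = zeros(K)
    for i in 1:iterations, n in 1:N
        y_hat = 0.0
        for k in 1:K
            y_hat += w[k] * x[k, n]     # dot product of w with column n
        end
        for k in 1:K
            w[k] += rate * (y[n] - y_hat) * x[k, n]   # gradient step
        end
    end
    return w
end

w = @time domytest(10, 100, 1e-2, 100)
```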
[julia-users] Re: Help me optimize Stochastic Gradient Descent of Least Squares Error
I just hope that Julia can be faster than Java someday... On Sunday, April 27, 2014 2:03:28 PM UTC+8, Freddy Chua wrote: This code takes 60+ secs to execute on my machine. The Java equivalent takes only 0.2 secs!!! Please tell me how to optimise the following code.
Re: [julia-users] Re: Help me optimize Stochastic Gradient Descent of Least Squares Error
Here's the Java code.

    import java.util.Random;

    public class LeastSquaresError {
        public static void main(String[] args) {
            int N = 10;
            int K = 100;
            double rate = 1e-2;
            int ITERATIONS = 100;

            double[] y = new double[N];
            double[] x = new double[N*K];
            double[] w = new double[K];

            Random rand = new Random();
            for (int n = 0; n < N; n++) {
                y[n] = rand.nextDouble();
                for (int k = 0; k < K; k++) {
                    x[n*K + k] = rand.nextDouble();
                }
            }
            for (int k = 0; k < K; k++) {
                w[k] = 0.0;
            }

            long t1 = System.currentTimeMillis();
            for (int i = 0; i < ITERATIONS; i++) {
                for (int n = 0; n < N; n++) {
                    double y_hat = 0.0;
                    for (int k = 0; k < K; k++) {
                        y_hat += w[k] * x[n*K + k];
                    }
                    for (int k = 0; k < K; k++) {
                        w[k] += rate * (y[n] - y_hat) * x[n*K + k];
                    }
                }
            }
            long t2 = System.currentTimeMillis();
            double elapsed = (double)(t2-t1)/1000.0;
            System.out.println(String.format("Time elapsed: %e", elapsed));
        }
    }

On Sunday, April 27, 2014 2:46:19 PM UTC+8, Elliot Saba wrote: It might also help to see the equivalent Java code, to make sure that we're actually doing the same things. Ivar's comment about temporaries is spot on; sometimes it's the things about the language that we take for granted that are killing us performance-wise, so it's always best to make sure we're comparing apples to apples. -E On Sat, Apr 26, 2014 at 11:44 PM, Ivar Nesje iva...@gmail.com wrote: The clue is to structure it more like a C/Java program and less like a Matlab script. Mathworks has made great efforts to be able to run poorly structured programs fast. Julia focuses on generating fast machine code, but we currently don't optimize well for the common case where global variables don't change their type, so we end up doing the slow multiple-dispatch lookup at every step of the loop, instead of only once at compile time. Solution: wrap the code in a function, so that Julia can analyze the types. To get really high performance, it is worth noting that Julia doesn't have a fast garbage collector. (Nobody really does, but many are apparently faster than ours.)
It will often be useful to reduce the number of temporarily allocated objects, so that GC kicks in less often. Solution: devectorize your code and manipulate arrays in place, to reduce the number of temporary arrays that are needed.
[julia-users] Re: Help me optimize Stochastic Gradient Descent of Least Squares Error
Stochastic Gradient Descent is one of the most important optimisation algorithms in Machine Learning, so having it perform better than Java is important for more widespread adoption. On Sunday, April 27, 2014 2:03:28 PM UTC+8, Freddy Chua wrote: This code takes 60+ secs to execute on my machine. The Java equivalent takes only 0.2 secs!!! Please tell me how to optimise the following code.
Re: [julia-users] Re: Help me optimize Stochastic Gradient Descent of Least Squares Error
Wooh, this @inbounds thing is new to me... At least it does show that Julia is comparable to Java. On Sunday, April 27, 2014 3:04:26 PM UTC+8, Elliot Saba wrote: Since we have made sure that our for loops have the right boundaries, we can assure the compiler that we're not going to step out of the bounds of an array, and surround our code in the @inbounds macro. This is not something you should do unless you're certain that you'll never try to access memory out of bounds, but it does get the runtime down to 0.23 seconds, which is on the same order as Java. Here's the full code with all the modifications made: https://gist.github.com/staticfloat/11339342 -E On Sat, Apr 26, 2014 at 11:55 PM, Freddy Chua fred...@gmail.com wrote: Stochastic Gradient Descent is one of the most important optimisation algorithms in Machine Learning, so having it perform better than Java is important for more widespread adoption.
Re: [julia-users] Re: Help me optimize Stochastic Gradient Descent of Least Squares Error
You are mistaken. The improvement is in the Julia implementation. On Sunday, April 27, 2014 11:13:12 PM UTC+8, Iain Dunning wrote: I'm very surprised that Java is that much faster than the initial implementation provided (after it's been wrapped in a function). Feels like there is something non-obvious going on... On Sunday, April 27, 2014 5:33:06 AM UTC-4, Carlos Becker wrote: I agree with Elliot; take a look at the performance tips. Also, you may want to move the tic(), toc() out of the function, make sure you compile it first, and then use @time with the function call to time it. You may also get a considerable boost by using @simd in your for loops (together with @inbounds). Let us know how it goes ;) Cheers. On Sunday, April 27, 2014 09:39:03 UTC+2, Freddy Chua wrote: Alright, thanks! All this is looking very positive for Julia. On Sunday, April 27, 2014 3:36:23 PM UTC+8, Elliot Saba wrote: I highly suggest you read through the whole Performance Tips page I linked to above (http://julia.readthedocs.org/en/latest/manual/performance-tips/); it has documentation on all these little features and stuff. I did get a small improvement (~5%) by enabling SIMD extensions on the two inner for loops, but that requires a very recent build of Julia and is a somewhat experimental feature. Neat to have though. -E On Sun, Apr 27, 2014 at 12:14 AM, Freddy Chua fred...@gmail.com wrote: Wooh, this @inbounds thing is new to me... At least it does show that Julia is comparable to Java.
Re: [julia-users] Re: Help me optimize Stochastic Gradient Descent of Least Squares Error
Yep, x never changes... On Monday, April 28, 2014 12:25:14 AM UTC+8, Jason Merrill wrote: I made this comment on the gist, and then figured I should just copy it here: Lifting the computation of the scaling constant out of the loop shaves another 13% off the runtime. So change lines 28-30 to

    c = rate * (y[n] - y_hat)
    for k=1:K
        w[k] += c * x[k,n]
    end

Ideally, the compiler might do that for you. BTW, is x really supposed to remain the same between iterations?
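Jason's hoisting tip combined with @inbounds from earlier in the thread might look like this sketch (update_weights! is a hypothetical helper name; the loop body mirrors the posted code):

```julia
function update_weights!(w, x, y_n, y_hat, rate, n)
    c = rate * (y_n - y_hat)      # scaling constant computed once per sample
    @inbounds for k in eachindex(w)
        w[k] += c * x[k, n]       # inner loop does one multiply-add per weight
    end
    return w
end

w = zeros(3)
update_weights!(w, ones(3, 1), 1.0, 0.0, 0.1, 1)   # each weight gains 0.1 * 1.0
```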
[julia-users] Re: Help me optimize Stochastic Gradient Descent of Least Squares Error
    begin
        N = 1
        K = 100
        rate = 1e-2
        ITERATIONS = 100
        # generate y
        y = rand(N)
        # generate x
        x = rand(K, N)
        # generate w
        w = zeros(Float64, K)
        tic()
        for i=1:ITERATIONS
            for n=1:N
                y_hat = 0.0
                x_n = x[:,n]
                for k=1:K
                    y_hat += w[k] * x_n[k]
                end
                for k=1:K
                    w[k] += rate * (y[n] - y_hat) * x_n[k]
                end
            end
        end
        toc()
    end
[julia-users] Re: Surprising range behavior
I think it's correct, because the next value in the range would exceed pi. If you try 0:pi/101:pi, you would get 3.14 again. On Thursday, April 24, 2014 5:59:10 AM UTC+8, Peter Simon wrote: The first three results below are what I expected. The fourth result surprised me:

    julia> (0:pi:pi)[end]
    3.141592653589793

    julia> (0:pi/2:pi)[end]
    3.141592653589793

    julia> (0:pi/3:pi)[end]
    3.141592653589793

    julia> (0:pi/100:pi)[end]
    3.1101767270538954

Is this behavior correct? Version info:

    julia> versioninfo()
    Julia Version 0.3.0-prerelease+2703
    Commit 942ae42* (2014-04-22 18:57 UTC)
    Platform Info:
      System: Windows (x86_64-w64-mingw32)
      CPU: Intel(R) Core(TM) i7 CPU 860 @ 2.80GHz
      WORD_SIZE: 64
      BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY)
      LAPACK: libopenblas
      LIBM: libopenlibm

--Peter
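Since a floating-point step rarely divides the interval exactly, the last element of a step range can stop short of the endpoint. Fixing the number of points instead pins both endpoints and derives the step (range in current Julia; linspace in the 0.3 era):

```julia
# Both endpoints are guaranteed; the step is computed to fit.
s = range(0, pi; length=101)
first(s), last(s), step(s)
```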
Re: [julia-users] How to get memory address of variable?
Found pointer_from_objref. Thanks! I am trying to fix HDF5 using this! On Sunday, April 20, 2014 6:29:16 PM UTC+8, Tim Holy wrote: You can use pointer(foo). However, note that Julia has the a === b comparison (which maps to is(a,b)), which evaluates to true only if a and b refer to the same object. --Tim On Saturday, April 19, 2014 10:37:45 PM Freddy Chua wrote: I am trying to see if I could fix some file serialization problems. Suppose I have a composite type

    type Foo
        a::Int64
    end

    foo = Foo(10)

How do I get the memory address or location of foo?
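A sketch in current syntax (mutable struct replaces the era's type keyword): pointer_from_objref works on heap-allocated mutable values, and unsafe_pointer_to_objref round-trips the pointer back to the same object:

```julia
mutable struct Foo
    a::Int64
end

foo = Foo(10)
p = pointer_from_objref(foo)           # Ptr{Cvoid} to foo's memory
same = unsafe_pointer_to_objref(p) === foo   # true: identical object
```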
[julia-users] How to get memory address of variable?
I am trying to see if I could fix some file serialization problems. Suppose I have a composite type

    type Foo
        a::Int64
    end

    foo = Foo(10)

How do I get the memory address or location of foo?
[julia-users] I noticed there is no do while loop
As stated in the question.
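There is indeed no do/while in Julia; the usual idiom is a while true loop with a trailing break, which runs the body at least once before checking the condition:

```julia
function countup(limit)
    i = 0
    while true
        i += 1            # body always executes at least once
        i < limit || break   # keep looping while the condition holds
    end
    return i
end

countup(3)   # → 3
countup(0)   # → 1: the body ran once even though the condition never held
```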
[julia-users] Pretty sure Dict (dictionary) is slow
I am using Dict to store my values. Since it is a hash table, I thought that the performance would remain fairly constant even as the dictionary grows bigger, but this is not what I am experiencing at the moment. When the size of my Dict grows, the cost of retrieval increases as well. Can someone help me here? I really need the dictionary to be fast...
[julia-users] Re: Pretty sure Dict (dictionary) is slow
Just to add: the key is an object rather than the usual ASCIIString or Int64. On Tuesday, April 1, 2014 6:34:08 PM UTC+8, Freddy Chua wrote: I am using Dict to store my values. Since it is a hash table, I thought that the performance would remain fairly constant even as the dictionary grows bigger, but this is not what I am experiencing at the moment. When the size of my Dict grows, the cost of retrieval increases as well. Can someone help me here? I really need the dictionary to be fast...
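With an object key, lookup cost depends on how expensive the key type's hash and == are. A sketch (Stop is a hypothetical key type, echoing the bus-stop data later in the thread) that hashes only a cheap identifying field:

```julia
struct Stop
    id::Int64
    name::String
end

# Assumption: id alone identifies a stop, so hashing and equality can
# skip the string field entirely.
Base.hash(s::Stop, h::UInt) = hash(s.id, h)
Base.:(==)(a::Stop, b::Stop) = a.id == b.id

d = Dict{Stop,Float64}()
d[Stop(1, "Main St")] = 2.5
d[Stop(1, "Main Street")]   # 2.5: equal ids compare (and hash) equal
```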
Re: [julia-users] Any method to save the variables in workspace to file?
Looks pretty good! On Tuesday, April 1, 2014 8:51:44 PM UTC+8, Isaiah wrote: One option is the JLD feature of the HDF5 package: https://github.com/timholy/HDF5.jl On Tue, Apr 1, 2014 at 8:41 AM, Freddy Chua fred...@gmail.com wrote: In Matlab, there's save and load; in Java, there's object serialization. Does Julia have this feature?
Re: [julia-users] Re: Pretty sure Dict (dictionary) is slow
ObjectIdDict does not allow pre-defined types... wouldn't that affect the performance too? On Tuesday, April 1, 2014 8:55:45 PM UTC+8, Isaiah wrote: You could try ObjectIdDict, which is specialized for this use case. On Tue, Apr 1, 2014 at 6:51 AM, Freddy Chua fred...@gmail.com wrote: Just to add: the key is an object rather than the usual ASCIIString or Int64.
Re: [julia-users] Re: Pretty sure Dict (dictionary) is slow
It's hard to isolate my code. It may not be the Dict, as I have also noticed abnormal behaviour with my own customised linked list. I also read in a pretty big chunk of data (1 GB). But let me describe it briefly here. My program reads in some data on the order of tens of MB. Then it reads in the 1 GB of data and starts processing it; this is the second stage of the program. I ran several tests on this part: sometimes I read in only 10 MB, sometimes 20 MB, sometimes 100 MB. Note that the data here is independent of one another; it only takes up additional memory in the OS, so reading in twice the amount of data should only require double the amount of time. What I discovered is that the time spent does not scale linearly; in fact, it increases polynomially. I suspect this is due to the way Julia handles data structures and objects with noncontiguous memory allocation. Could someone give some tips on memory management? On Tuesday, April 1, 2014 10:26:43 PM UTC+8, Iain Dunning wrote: Can you give _any_ sample code to demonstrate this behaviour?
Re: [julia-users] Re: Pretty sure Dict (dictionary) is slow
    type List_Node
        bus_stop::Bus_Stop
        bus_stops::Dict{Int64, Bus_Stop}
        next::List_Node
        prev::List_Node
        num_next::Int64
        num_prev::Int64
        distance_to_next::Float64
        distance_to_prev::Float64

        function List_Node(bus_stop::Bus_Stop)
            list_node = new()
            list_node.bus_stop = bus_stop
            list_node.bus_stops = Dict{Int64, Bus_Stop}()
            list_node.bus_stops[bus_stop.id] = bus_stop
            list_node.next = list_node
            list_node.prev = list_node
            list_node.num_next = -1
            list_node.num_prev = -1
            list_node.distance_to_next = 0.0
            list_node.distance_to_prev = 0.0
            list_node
        end
    end

    abstract EdgeAbstract

    type Bus_Stop
        id::Int64
        name::ASCIIString
        latitude::Float64
        longitude::Float64
        #edges::ObjectIdDict
        edges::Dict{Bus_Stop, EdgeAbstract}

        Bus_Stop(id::Int64) = (bs = new(); bs.id = id; bs)
    end

    type Edge <: EdgeAbstract
        src::Bus_Stop
        tar::Bus_Stop
        speed::Float64
        distance::Float64

        function Edge(src::Bus_Stop, tar::Bus_Stop, distance::Float64, speed::Float64)
            edge = new()
            edge.src = src
            edge.tar = tar
            edge.distance = distance
            edge.speed = speed
            edge
        end
    end

    function summation_time(origin_node::List_Node, destination_node::List_Node)
        tmp = 0.0
        current_node = origin_node
        while current_node != destination_node
            src_bus_stop = current_node.bus_stop
            tar_bus_stop = current_node.next.bus_stop
            edge = src_bus_stop.edges[tar_bus_stop]
            tmp += edge.distance / edge.speed
            current_node = current_node.next
        end
        return tmp
    end

I noticed that calling the summation_time function repeatedly degrades the performance... Is anything wrong with what I did? On Tuesday, April 1, 2014 10:43:52 PM UTC+8, Stefan Karpinski wrote: If you can provide some example code, lots of people here are more than happy to help performance optimize it, but without example code, it's hard to give you anything more specific than "don't allocate more than you have to" and "don't make copies of things if you don't have to". Using mutating APIs (functions with ! at the end) is helpful.
The Dict itself isn't allocating small objects on the heap, but you may very well be generating a lot of garbage along the way.
Re: [julia-users] Re: Pretty sure Dict (dictionary) is slow
A possible flaw I have is the circular dependency in the data structures between Bus_Stop and Edge.
Re: [julia-users] Re: Pretty sure Dict (dictionary) is slow
I found this: https://groups.google.com/forum/#!searchin/julia-users/garbage/julia-users/6_XvoLBzN60/EHCrT46tIQYJ I might try turning off GC to see whether performance improves; I will update here later...

On Tuesday, April 1, 2014 11:07:03 PM UTC+8, Stefan Karpinski wrote:
> This code doesn't seem to create a List, Nodes or insert them into a Dict – it just walks over a preexisting linked list.

On Tue, Apr 1, 2014 at 10:59 AM, Freddy Chua fred...@gmail.com wrote:
> A possible flaw I have is the circular dependency in the data structures between Bus_Stop and Edge.
Re: [julia-users] Re: Pretty sure Dict (dictionary) is slow
Alright, I am pretty certain that

macro nogc(ex)
    quote
        local val
        try
            gc_disable()
            val = $(esc(ex))
        finally
            gc_enable()
        end
        val
    end
end

does the trick... (`val` is declared before the `try` so that it is still in scope after the block.) My program does an iterative gradient descent, a kind of mathematical optimisation algorithm, so I go through a for loop multiple times. In each iteration nothing gets created or destroyed, so GC is not needed at all. It turns out that turning off GC improves the performance significantly, probably 100x. This issue is serious; I wonder whether there can be a better way of determining when to call GC. I guess disabling GC manually is not what the compiler designers intended..
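A usage sketch of such a macro follows. Hedged: `gc_disable`/`gc_enable` are the names from this Julia vintage; on Julia 0.7 and later the equivalent is `GC.enable(false)`/`GC.enable(true)`, so a small compat shim is included purely for illustration:

```julia
# Shim so the sketch runs on both old and new Julia (assumption: one of
# the two GC APIs is available).
if !isdefined(Base, :gc_disable)
    gc_disable() = GC.enable(false)
    gc_enable()  = GC.enable(true)
end

macro nogc(ex)
    quote
        local val
        try
            gc_disable()
            val = $(esc(ex))
        finally
            gc_enable()   # always re-enable GC, even if `ex` throws
        end
        val
    end
end

# GC stays off only for the duration of the wrapped expression.
work() = sum(1:1_000)
result = @nogc work()
@assert result == 500_500
```

The `finally` matters: without it, an exception inside the wrapped expression would leave GC disabled for the rest of the program.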
Re: [julia-users] Re: Pretty sure Dict (dictionary) is slow
Alright, these are my timings from disabling GC:

before disabling GC, each for loop takes 911.240040
after disabling GC, each for loop takes 30.351131

That's around a 30x improvement, and if my loop runs 10 times it would have been a 300x improvement... I hope the Julia devs do consider improving the GC invocation...
Re: [julia-users] Re: Pretty sure Dict (dictionary) is slow
Strange: although my for loop does not create any additional memory, the memory usage increases to 60 GB after turning off GC...
Re: [julia-users] Re: Pretty sure Dict (dictionary) is slow
There's a function here where the loop takes place: https://github.com/JuliaLang/julia/issues/6357#issuecomment-3996 I don't really allocate anything in the loop..

Freddy Chua

On Wed, Apr 2, 2014 at 12:24 AM, Stefan Karpinski ste...@karpinski.org wrote:
> You still haven't shown any code that actually allocates anything, so it's pretty hard to say why or how that's happening.
Re: [julia-users] Re: Pretty sure Dict (dictionary) is slow
abstract EdgeAbstract

type Bus_Stop
    id::Int64
    edges::Dict{Bus_Stop, EdgeAbstract}
end

type Edge <: EdgeAbstract
    src::Bus_Stop
    tar::Bus_Stop
    speed::Float64
    distance::Float64
end

I have isolated the problem. It is this circular dependency, which is currently not supported in Julia, that causes the loosely typed code and the memory allocation problem. I subsequently removed the circular dependency and the abstract type, and now the code runs fast. So I have identified two much-needed improvements: 1) circular dependency support, and 2) garbage collection improvement. Both, I believe, are already under consideration.
Re: [julia-users] Any method to save the variables in workspace to file?
Oops, I git pull Julia every day, so this would not work for me then..

On Wednesday, April 2, 2014 1:03:48 AM UTC+8, Avik Sengupta wrote:
> While HDF5 is the best option for cross-platform and long-term data storage, note that Julia does have a native serialize operation: http://docs.julialang.org/en/latest/stdlib/base/?highlight=serialize#Base.serialize However, as the documentation suggests, this reliably works only within the same Julia version. Regards - Avik

On Tuesday, 1 April 2014 13:51:44 UTC+1, Isaiah wrote:
> One option is the JLD feature of the HDF5 package: https://github.com/timholy/HDF5.jl

On Tue, Apr 1, 2014 at 8:41 AM, Freddy Chua fred...@gmail.com wrote:
> In Matlab there's save and load; in Java there's object serialization. So does Julia have this feature?
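For completeness, a minimal sketch of the built-in serializer mentioned above (the file name is made up; on Julia 0.7 and later `serialize`/`deserialize` moved from Base to the `Serialization` stdlib, hence the conditional `using`):

```julia
# Load the Serialization stdlib on newer Julia; on 0.2/0.3 these
# functions lived in Base.
if !isdefined(Base, :serialize)
    using Serialization
end

x = Dict("a" => [1.0, 2.0, 3.0], "b" => 42)

# Save. The on-disk format is tied to the Julia version, so this is
# unsuitable for long-term storage (hence the JLD/HDF5 suggestion).
open("workspace.jls", "w") do io
    serialize(io, x)
end

# Load it back.
y = open(deserialize, "workspace.jls")
@assert y["b"] == 42
@assert y["a"] == [1.0, 2.0, 3.0]
```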
Re: [julia-users] Where to download the benchmark program source codes
I did more tests and found that for larger inputs of fib, the Julia version does not perform as well as Java or C. For C, compiling with the -O3 optimisation flag makes it much, much faster than the Julia equivalent. I am not saying Julia is slow; it has lots of potential as a relatively young language.

On Sunday, March 30, 2014 10:06:20 PM UTC+8, Isaiah wrote:
> https://github.com/JuliaLang/julia/tree/master/test/perf
>
> > I also wonder why no tests were done with Java..
>
> There is an open PR for Java, which you could check out and try: https://github.com/JuliaLang/julia/pull/5260

On Sun, Mar 30, 2014 at 9:55 AM, Freddy Chua fred...@gmail.com wrote:
> Hi, I wonder where I can download the source code of these benchmarks; I want to try it on my own... I also wonder why no tests were done with Java..
[julia-users] Optimize code for memory usage or speed?
Hi, I have a question. Suppose the data I have are only small integer values in the range 1-10. Should I use Int64 or Int8? Using Int64 would be consistent with my system word size and would likely be faster, but using Int8 would definitely save more memory. So, which should I use?
[julia-users] Where to download the benchmark program source codes
Hi, I wonder where I can download the source code of these benchmarks; I want to try it on my own... I also wonder why no tests were done with Java..

benchmark      Fortran    Julia  Python  R       Matlab   Octave   Mathematica  JavaScript    Go
               gcc 4.8.1  0.2    2.7.3   3.0.2   R2012a   3.6.4    8.0          V8 3.7.12.22  go1
fib            0.26       0.91   30.37   411.36  1992.00  3211.81  64.46        2.18          1.03
parse_int      5.03       1.60   13.95   59.40   1463.16  7109.85  29.54        2.43          4.79
quicksort      1.11       1.14   31.98   524.29  101.84   1132.04  35.74        3.51          1.25
mandel         0.86       0.85   14.19   106.97  64.58    316.95   6.07         3.49          2.36
pi_sum         0.80       1.00   16.33   15.42   1.29     237.41   1.32         0.84          1.41
rand_mat_stat  0.64       1.66   13.52   10.84   6.61     14.98    4.52         3.28          8.12
rand_mat_mul   0.96       1.01   3.41    3.98    1.10     3.41     1.16         14.60         8.51
[julia-users] Re: Optimize code for memory usage or speed?
I did some tests; it turns out that in terms of speed, Uint8 is equivalent to Int64, while Uint32 is twice as slow.

On Sunday, March 30, 2014 2:27:17 PM UTC+8, Freddy Chua wrote:
> Hi, I have a question. Suppose the data I have are only small integer values in the range 1-10. Should I use Int64 or Int8? Using Int64 would be consistent with my system word size and would likely be faster, but using Int8 would definitely save more memory. So, which should I use?
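Results like this can be reproduced with a rough sketch along the following lines (sizes and element types are illustrative; exact ratios depend on whether the workload is bound by memory bandwidth, where narrow types help, or by integer conversion, where they can hurt):

```julia
# Rough timing sketch: sum a large vector of each element type.
function sum_all(v)
    s = 0              # accumulate in Int to avoid overflow of Int8
    for x in v
        s += x
    end
    s
end

for T in (Int8, Int32, Int64)
    v = ones(T, 10_000_000)
    sum_all(v)                             # warm-up call to compile
    println(T, ": ", @elapsed sum_all(v), " s")
end
```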
Re: [julia-users] Where to download the benchmark program source codes
Can't compile perf.c; it says I need a perf.h.

On Sunday, March 30, 2014 10:06:17 PM UTC+8, Stefan Karpinski wrote:
> They're linked to on the home page right above the table. There is a Java implementation in this pull request: https://github.com/JuliaLang/julia/pull/5260, which looks to be about ready to merge, actually. Then we'll have to get the Java benchmark working on our test machine and rerun all the benchmarks. I'm guessing that Java ≈ C so I'm not sure that the results will be all that interesting, but I guess it's worth having Java on there.

On Sun, Mar 30, 2014 at 9:55 AM, Freddy Chua fred...@gmail.com wrote:
> Hi, I wonder where I can download the source code of these benchmarks; I want to try it on my own... I also wonder why no tests were done with Java..
Re: [julia-users] Where to download the benchmark program source codes
I did a simple benchmark on a for loop:

a = 0
for i = 1:10
    a += 1
end

The C equivalent runs way faster... does that mean Julia is slow on loops?

On Sunday, March 30, 2014 10:06:20 PM UTC+8, Isaiah wrote:
> https://github.com/JuliaLang/julia/tree/master/test/perf
> There is an open PR for Java, which you could check out and try: https://github.com/JuliaLang/julia/pull/5260

On Sun, Mar 30, 2014 at 9:55 AM, Freddy Chua fred...@gmail.com wrote:
> Hi, I wonder where I can download the source code of these benchmarks; I want to try it on my own... I also wonder why no tests were done with Java..
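For what it's worth, the usual explanation for this result is not the loop itself but global scope: `a` is an untyped global, so every `a += 1` is a dynamic operation. Wrapping the same loop in a function lets the compiler specialize it, which typically recovers C-like speed. A sketch:

```julia
# Same loop, inside a function: `a` has a concrete inferred type here,
# so the compiler can emit a tight native loop.
function count_up(n)
    a = 0
    for i in 1:n
        a += 1
    end
    a
end

@assert count_up(10) == 10

# Timing idea (results are machine-dependent, so no numbers claimed):
count_up(10_000_000)                     # warm-up call to compile
println(@elapsed count_up(10_000_000))
```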
[julia-users] Circular Dependency in Composite Types
Hi, I believe this question has not been asked before, as I could not find anything related to circular dependencies. I have two composite types:

type A
    foo::B
end

type B
    bar::A
end

Executing this script results in an undefined error (B is not yet defined when A is declared). How do I resolve this?
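One common workaround (a sketch, not the only option) is to break the cycle with an abstract supertype: A's field is typed with the abstract type, and B subtypes it. This costs some type precision on the `foo` field. Written here in current syntax (`mutable struct`/`abstract type`; the Julia of this thread's vintage spelled these `type`/`abstract`):

```julia
# Break the A <-> B cycle via an abstract supertype.
abstract type AbstractB end

mutable struct A
    foo::AbstractB
    A() = new()        # incomplete constructor: leave `foo` unset for now
end

mutable struct B <: AbstractB
    bar::A
end

a = A()
b = B(a)
a.foo = b              # tie the knot after both objects exist

@assert a.foo === b
@assert b.bar === a
```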
[julia-users] Re: New Docs?
I like it too... On Thursday, March 13, 2014 8:49:44 AM UTC+8, andrew cooke wrote: did the docs just change style? nice!
[julia-users] How to see the list of defined variables?
In matlab, I can type who to see the defined variables in the workspace. What do I type in Julia?
[julia-users] Re: How to see the list of defined variables?
Cool thanks! On Monday, March 10, 2014 3:21:28 PM UTC+8, Andrea Pagnani wrote: whos() and not whose() sorry On Monday, March 10, 2014 8:12:39 AM UTC+1, Freddy Chua wrote: In matlab, I can type who to see the defined variables in the workspace. What do I type in Julia?
[julia-users] Julia does not show the exact line of the error
I noticed that when I have a while loop and a bug occurs somewhere within the while loop, the Julia interpreter does not show the exact line where the error occurred. I think this is definitely a missing feature, and I hope the developers of Julia implement it soon.
[julia-users] how to undefine variable in composite types
Suppose I have a type:

type Foo
    a
    b
end

f = Foo(1, 2)
f.a = 1
f.b = 2

How do I test whether f.a is defined? isdefined(f, 1) works, but isdefined(f, 'a') does not. Another question: how do I undefine f.a so that isdefined(f, 1) returns false?
Re: [julia-users] how to undefine variable in composite types
Thanks... I think that's a missing feature.

On Sunday, March 9, 2014 2:18:11 AM UTC+8, Stefan Karpinski wrote:
> 1. isdefined(f, :a)
> 2. you can't.

On Sat, Mar 8, 2014 at 12:55 PM, Freddy Chua fred...@gmail.com wrote:
> Suppose I have a type: type Foo a b end; f = Foo(1, 2); f.a = 1; f.b = 2. How do I test whether f.a is defined? isdefined(f, 1) works, but isdefined(f, 'a') does not. Another question: how do I undefine f.a so that isdefined(f, 1) returns false?
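The symbol form pairs naturally with an incomplete constructor, which is the closest thing to an "undefined" field. A small sketch (current `mutable struct` syntax; this thread's era wrote `type`); since fields can never be un-assigned, a sentinel value such as `nothing` is the usual substitute:

```julia
mutable struct Foo
    a
    b
    Foo() = new()      # leave both fields unassigned
end

f = Foo()
@assert !isdefined(f, :a)   # not yet assigned

f.a = 1
@assert isdefined(f, :a)

# Fields cannot be un-assigned afterwards; a sentinel stands in for
# "undefined".
f.a = nothing
@assert f.a === nothing
```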
[julia-users] Is it possible to create function in composite types
Is it possible to define a function inside a composite type, one with access to the composite type's fields?
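Julia types don't carry methods the way classes in other languages do; the idiomatic answer is a separate function dispatching on the type. If method-like behaviour bound to an instance is really wanted, a `Function`-typed field holding a closure over the instance works. A sketch (current `mutable struct` syntax; the type, field, and function names here are made up for illustration):

```julia
mutable struct Counter
    n::Int
    step::Function     # will hold a closure over this very instance
end

function make_counter(n)
    c = Counter(n, () -> nothing)    # placeholder closure
    c.step = () -> (c.n += 1; c.n)   # captures `c`, so it can read/write n
    c
end

c = make_counter(0)
c.step()
c.step()
@assert c.n == 2

# The more idiomatic alternative: a plain function using dispatch.
step!(c::Counter) = (c.n += 1; c.n)
@assert step!(c) == 3
```

The closure approach costs an extra field per instance and defeats some compiler optimizations (the `Function` field is abstractly typed), which is one reason plain multiple dispatch is preferred.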