Hey there Freddy. The first thing you can do to speed up your code is to
throw it inside of a function. Simply replacing your first line (which is
begin) with function domytest() speeds up your code significantly. I
get a runtime of about 1.5 seconds from running the function versus ~70
seconds
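For reference, the change looks roughly like this (a minimal sketch; domytest is just the placeholder name used above, and the body is the original script unchanged):

# Wrapping the script in a function turns every variable into a local
# that the compiler can type-infer and optimize.
function domytest()
    N = 1
    K = 100
    rate = 1e-2
    ITERATIONS = 100
    # ... rest of the original begin/end body, unchanged ...
end

domytest()  # the first call compiles; later calls reuse the fast native code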
Cool, it works better now. I thought having the code inside begin and end already avoided global scoping of the variables. But I would still like to point out that Java is still twice as fast as Julia. I am not sure how Scala compares to Julia, but Julia's syntax is way easier than Java's.
That is a great tip, thanks.
I am having similar issues. I had the program in a file and ran it with include("prog.jl") at the Julia prompt. It used to take 12 seconds per loop. After putting it into a function and also changing some variables to const, I got it down to 3 seconds per loop.
I just hope that Julia can be faster than Java someday...
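For reference, the const change is roughly this (a hypothetical sketch; RATE and update! are made-up names for illustration):

# A plain global has no fixed type, so every read from inside a function
# goes through dynamic dispatch. Declaring it const fixes its type.
const RATE = 1e-2   # was: rate = 1e-2 at global scope

function update!(w, x_n, err)
    for k = 1:length(w)
        w[k] += RATE * err * x_n[k]   # RATE is now a known constant
    end
end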
On Sunday, April 27, 2014 2:03:28 PM UTC+8, Freddy Chua wrote:
This code takes 60+ secs to execute on my machine. The Java equivalent
takes only 0.2 secs!!! Please tell me how to optimise the following
code.

begin
N = 1
The trick is to structure it more like a C/Java program and less like a MATLAB script. MathWorks has made great efforts to be able to run poorly structured programs fast. Julia focuses on generating fast machine code, but we currently don't optimize well for the common case where global variables are used.
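A minimal illustration of that point (sumup is a made-up name):

# Untyped global: each iteration reads and writes the global s,
# so every += goes through dynamic dispatch.
s = 0.0
for i = 1:10^7
    s += i
end

# The same loop inside a function: s is a typed local, and the loop
# compiles down to a few machine instructions per iteration.
function sumup()
    s = 0.0
    for i = 1:10^7
        s += i
    end
    return s
end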
I can get a further speedup by eliminating the x_n temporary variable and just using x[k,n] instead of x_n[k]. I did this because Julia's profiler (http://julia.readthedocs.org/en/latest/stdlib/profile/) showed the x_n = x[:,n] line as one of the most computationally expensive. This brings the runtime down further.
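Concretely, the change is along these lines (only the training loop shown):

# Indexing x directly avoids the per-iteration allocation of x[:,n],
# which copies a whole column into a fresh vector on every pass.
for n = 1:N
    y_hat = 0.0
    for k = 1:K
        y_hat += w[k] * x[k,n]                     # was: w[k] * x_n[k]
    end
    for k = 1:K
        w[k] += rate * (y[n] - y_hat) * x[k,n]     # was: ... * x_n[k]
    end
end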
Here's the Java code.
import java.util.Random;
public class LeastSquaresError
{
    public static void main(String[] args)
    {
        int N = 10;
        int K = 100;
        double rate = 1e-2;
        int ITERATIONS = 100;
        double[] y = new double[N];
        double[] x = new double[N * K];
        double[] w = new double[K];
        // (original post cut off here; remainder reconstructed from the Julia code)
        Random random = new Random();
        for (int n = 0; n < N; n++) y[n] = random.nextDouble();
        for (int i = 0; i < N * K; i++) x[i] = random.nextDouble();
        long start = System.currentTimeMillis();
        for (int i = 0; i < ITERATIONS; i++)
            for (int n = 0; n < N; n++) {
                double yHat = 0.0;  // prediction: dot(w, x_n)
                for (int k = 0; k < K; k++) yHat += w[k] * x[n * K + k];
                for (int k = 0; k < K; k++) w[k] += rate * (y[n] - yHat) * x[n * K + k];
            }
        System.out.println((System.currentTimeMillis() - start) / 1000.0 + " secs");
    }
}
Stochastic Gradient Descent is one of the most important optimisation algorithms in Machine Learning. So having it perform better than Java is important for more widespread adoption.
On Sunday, April 27, 2014 2:03:28 PM UTC+8, Freddy Chua wrote:
This code takes 60+ secs to execute on my
Since we have made sure that our for loops have the right boundaries, we can assure the compiler that we're not going to step out of the bounds of an array, and surround our code in the @inbounds macro. This is not something you should do unless you're certain that you'll never try to access an element outside the bounds of an array.
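Applied to the inner loop above, it looks something like this:

# @inbounds removes the array bounds checks inside the block; this is
# only safe because k stays within 1:K and w and x_n both have length K.
@inbounds for k = 1:K
    y_hat += w[k] * x_n[k]
end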
wooh, this @inbounds thing is new to me... At least it does show that Julia is comparable to Java.
On Sunday, April 27, 2014 3:04:26 PM UTC+8, Elliot Saba wrote:
Since we have made sure that our for loops have the right boundaries, we
can assure the compiler that we're not going to step out
I highly suggest you read through the whole Performance Tips page (http://julia.readthedocs.org/en/latest/manual/performance-tips/) that I linked to above; it has documentation on all these little features and stuff. I did get a small improvement (~5%) by enabling SIMD extensions on the two inner for loops.
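For anyone who wants to try it, the inner loops would look roughly like this (assuming a Julia recent enough to have @simd):

# @simd tells the compiler the iterations may be reordered (including
# the floating-point reduction), which lets it emit vector instructions.
# It is usually combined with @inbounds.
@inbounds @simd for k = 1:K
    y_hat += w[k] * x[k,n]
end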
I agree with Elliot, take a look at the performance tips.
Also, you may want to move the tic() and toc() out of the function, make sure you compile it first, and then use @time on the function call to time it.
You may also get a considerable boost by using @simd on your for loops (together with @inbounds).
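In other words, something like this (domytest being the function from earlier in the thread):

domytest()         # first call, includes JIT compilation time
@time domytest()   # second call, measures only the actual run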
I'm very surprised that Java is that much faster than the initial implementation provided (after it's been wrapped in a function). Feels like there is something non-obvious going on...
On Sunday, April 27, 2014 5:33:06 AM UTC-4, Carlos Becker wrote:
I agree with Elliot, take a look at the
You are mistaken. The improvement is in the Julia implementation.
On Sunday, April 27, 2014 11:13:12 PM UTC+8, Iain Dunning wrote:
I'm very surprised that Java is that much faster than the initial
implementation provided (after its been wrapped in a function). Feel like
there is something
On Sunday, April 27, 2014 12:04:26 AM UTC-7, Elliot Saba wrote:
Since we have made sure that our for loops have the right boundaries, we
can assure the compiler that we're not going to step out of the bounds of
an array, and surround our code in the @inbounds macro. This is not
something
yep, x never changes...
On Monday, April 28, 2014 12:25:14 AM UTC+8, Jason Merrill wrote:
On Sunday, April 27, 2014 12:04:26 AM UTC-7, Elliot Saba wrote:
Since we have made sure that our for loops have the right boundaries, we
can assure the compiler that we're not going to step out of the
begin
    N = 1
    K = 100
    rate = 1e-2
    ITERATIONS = 100

    # generate y
    y = rand(N)
    # generate x
    x = rand(K, N)
    # generate w
    w = zeros(Float64, K)

    tic()
    for i = 1:ITERATIONS
        for n = 1:N
            y_hat = 0.0
            x_n = x[:,n]
            for k = 1:K
                y_hat += w[k] * x_n[k]
            end
            for k = 1:K
                w[k] += rate * (y[n] - y_hat) * x_n[k]
            end
        end
    end
    toc()
end