This is my first Julia code. I am happy it did the right thing, but
compared with the Matlab code that does the same thing, it runs slowly:
the Matlab code takes about 90 s, while the Julia code below takes 130 s.
Potential(x::Float64, y::Float64) = -2./sqrt(x^2+1.) - 2./sqrt(y^2+1.) +
    1./sqrt((x-y)^2+1.)
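For what it's worth, the usual first culprit for a Matlab-to-Julia slowdown is running the hot loop at global scope. A minimal sketch, with a driver loop I made up for illustration (the potential is rewritten with plain `/` so it also parses on newer Julia, and the trailing `+1.` in the last term is inferred from the pattern of the other two terms):

```julia
# Soft-Coulomb-style potential from the post, written with plain `/`.
potential(x::Float64, y::Float64) =
    -2/sqrt(x^2 + 1) - 2/sqrt(y^2 + 1) + 1/sqrt((x - y)^2 + 1)

# Putting the hot loop inside a function lets the compiler specialize it;
# the same loop at global scope is typically many times slower.
function total_potential(xs::Vector{Float64}, ys::Vector{Float64})
    s = 0.0
    for x in xs, y in ys
        s += potential(x, y)
    end
    return s
end

xs = collect(-1.0:0.01:1.0)
println(total_potential(xs, xs))
```

Timing the function call (rather than top-level code) should give a much fairer comparison against Matlab.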
Answer posted: http://stackoverflow.com/a/34259207/508431
On Thu, Dec 10, 2015 at 11:08 AM, Tim Stiles
wrote:
> How does one use the --output-bc flag? Does it compile llvm byte code for
> arbitrary julia modules? I have not been able to find any documentation and
> there are some interested pe
Oh, just realized that the upper-bound on BaseTestNext is listed as “0.5”, not
“0.5-“, so currently things are actually OK and BaseTestNext is installable on
0.5 dev versions. So this will only be a problem once there’s a 0.5 release.
Carry on!
-s
> On Dec 13, 2015, at 9:53 PM, Spencer Russel
Is there a mechanism for REQUIRE entries (when developing a package) that depend
on the Julia version? For instance, I’m developing a package using the new
testing framework in Julia 0.5, which is available in the BaseTestNext package
for 0.4. Currently BaseTestNext is listed as not compatible wi
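One workaround that avoids version-conditional REQUIRE entries altogether is to keep BaseTestNext as an unconditional requirement and branch on VERSION inside the test harness. A 0.4/0.5-era sketch of that pattern (assuming only the @testset machinery is needed):

```julia
# test/runtests.jl (0.4/0.5-era sketch)
if VERSION >= v"0.5-"
    using Base.Test            # the new framework shipped in 0.5
else
    using BaseTestNext         # back-port of the same API for 0.4
    const Test = BaseTestNext
end

@testset "sanity" begin
    @test 1 + 1 == 2
end
```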
Thanks! That works fine.
I must admit, this is all a bit novel for me :) Is my understanding of the
difference between the following two pieces of code correct:
astore = []
a = [1,2]
b = similar(a)
for j = 1:5
    for i = eachindex(a)
        ai = a[i]
        b[i] = ai + 1
    end
    a = b
    push!(astore, a)
On Sun, Dec 13, 2015 at 9:03 PM, Thomas Moore wrote:
> Thanks!
>
> So just to clarify, the difference between this code:
>
> astore = []
> a = [1,2]
> b = similar(a)
> for j = 1:5
>     for i = eachindex(a)
>         ai = a[i]
>         b[i] = ai + 1
>     end
>     a = b
>     push!(astore,a);
>     println(astore)
What kind of terminal did you start Julia from? The shell mode is a bit
misleading in that it doesn't actually execute through a shell. So it can't run
shell builtins from either posix sh or windows cmd. It executes other programs
that are on your path. Some of the default path settings were cha
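A small sketch of the distinction, assuming a Unix system where `ls` exists as an external program and `cd` only as a shell builtin:

```julia
# Backtick commands (like the REPL's ; shell mode) spawn the program
# directly, with no shell in between:
run(`ls`)                      # works: ls is a real executable on PATH
# run(`cd /tmp`)               # fails: cd is a shell builtin, not a program

# To get builtin/shell semantics, invoke the shell explicitly:
run(`sh -c "cd /tmp && pwd"`)
```

The same reasoning explains why `cmd.exe` builtins like `dir` fail from shell mode on Windows while external `.exe` programs on the path run fine.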
Thanks!
So just to clarify, the difference between this code:
astore = []
a = [1,2]
b = similar(a)
for j = 1:5
    for i = eachindex(a)
        ai = a[i]
        b[i] = ai + 1
    end
    a = b
    push!(astore,a);
    println(astore)
end
and this code
astore = []
a = [1,2]
for j = 1:5
    b = similar(a)
    for
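The difference can be made concrete with a small sketch (my own minimal reduction of the two versions, wrapped in functions to keep the scoping simple). In the first, `a = b` rebinds `a` to the single buffer `b`, so every entry pushed into `astore` is a reference to that same array; in the second, a fresh `b` is allocated each pass, so the entries are distinct:

```julia
# Version 1: one shared buffer; astore ends up holding the same array 3 times.
function shared_buffer()
    astore = Any[]
    a = [1, 2]
    b = similar(a)
    for j = 1:3
        for i in eachindex(a)
            b[i] = a[i] + 1
        end
        a = b                 # a now aliases the single buffer b
        push!(astore, a)      # pushes another reference to that buffer
    end
    return astore
end

# Version 2: a fresh b each iteration; astore holds three distinct arrays.
function fresh_buffer()
    astore = Any[]
    a = [1, 2]
    for j = 1:3
        b = similar(a)
        for i in eachindex(a)
            b[i] = a[i] + 1
        end
        a = b
        push!(astore, a)
    end
    return astore
end

println(shared_buffer())   # [[4,5],[4,5],[4,5]]: all entries alias one array
println(fresh_buffer())    # [[2,3],[3,4],[4,5]]: independent snapshots
```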
On Sun, Dec 13, 2015 at 7:06 PM, Dominique Orban
wrote:
> I'm writing Julia functions to perform a sparse matrix factorization and I'm
> wondering if what I'm observing is a manifestation of type instability. I'm
> using Julia 0.4 so I can use @code_warntype, but it's surprisingly quiet and
> does
On Sun, Dec 13, 2015 at 6:59 PM, Thomas Moore wrote:
> Hi,
>
> Your explanation makes sense, but, coming from MATLAB, I must admit I'm not
> familiar with the numerical values of a particular array (in this case
> astore) changing when another variable (in this case a) is changed. I hope
> you don
I'm writing Julia functions to perform a sparse matrix factorization and
I'm wondering if what I'm observing is a manifestation of type instability.
I'm using Julia 0.4 so I can use @code_warntype, but it's surprisingly
quiet and doesn't reveal much (there are no ANYs or UNIONs; the "Variables"
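For readers following along, a toy illustration of what @code_warntype normally flags (hypothetical functions, not the factorization code in question); note that it inspects one method at a time, so an instability hidden behind a non-concrete struct field or an inner call can stay quiet:

```julia
# Type-unstable: the return type depends on the branch taken.
unstable(x) = x > 0 ? 1 : 1.0    # returns Int or Float64

# Stable rewrite: both branches return Float64.
stable(x) = x > 0 ? 1.0 : 0.0

# At the REPL:
#   @code_warntype unstable(2.0)   # return type flagged as a Union
#   @code_warntype stable(2.0)     # clean: return type Float64
```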
Hi,
Your explanation makes sense, but, coming from MATLAB, I must admit I'm not
familiar with the numerical values of a particular array (in this case
astore) changing when another variable (in this case a) is changed. I hope
you don't mind a few follow-up questions :)
- Is this behaviour ex
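The behaviour can be reproduced in a few lines; the usual fix, when independent snapshots are wanted, is to push a copy (my own minimal example):

```julia
a = [1, 2]
astore = Any[]
push!(astore, a)          # stores a reference to a, not a snapshot
a[1] = 99
println(astore[1])        # [99, 2]: the stored entry sees the mutation

b = [1, 2]
bstore = Any[]
push!(bstore, copy(b))    # copy() takes an independent snapshot
b[1] = 99
println(bstore[1])        # [1, 2]: unaffected
```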
Thanks, but I still have some trouble. Let me explain my problems in more
detail:
1. *.ji file creation*: I can create a .ji file for a module with
precompilation (e.g. with Base.compilecache()); attempting to create a .ji
file from a Julia source file using the julia executable with the command:
$JULIA
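For the first point, the 0.4-era recipe I know of is sketched below (module name hypothetical); note the `.ji` file is written into the compile cache directory, not next to the source:

```julia
# MyMod.jl -- opt the module in to precompilation (0.4/0.5 syntax;
# precompilation is on by default in later versions):
#
#     __precompile__()
#     module MyMod
#     greet() = "hello"
#     end
#
# With MyMod.jl on the load path, either of these writes the .ji cache:
#
#     using MyMod                  # compiles on first load
#     Base.compilecache("MyMod")   # explicit; 0.4-era string signature
```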
probably less about what is typical today and more about what is next ...
ARM is engineering processors for what is next:
https://en.wikipedia.org/w/index.php?title=ARM_big.LITTLE&redirect=no
newer ARM processors are designed with parallelism explicitly targeted
and floating point issues r
This Thursday, Dec 17, Ehsan Totoni of Intel Labs will speak on
ParallelAccelerator.jl [1] in San Francisco to the SF Julia Users group
[2]. ParallelAccelerator is a compiler that performs aggressive analysis
and optimization on top of the Julia compiler. It can automatically
eliminate overheads su
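For a flavour of the package, a sketch based on ParallelAccelerator's documented @acc macro (not runnable without the package installed; details may differ):

```julia
using ParallelAccelerator

# @acc asks ParallelAccelerator to compile the function with its own
# parallelizing optimizations instead of plain Julia codegen:
@acc function axpy(a::Float64, x::Vector{Float64}, y::Vector{Float64})
    return a .* x .+ y
end
```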
I suspect this is a bug introduced in 0.4.2, and an issue on Julia language
may be the right place for reporting. However, reading up on related issues
has got me quite confused and it may already be a known issue. If anybody
could briefly point to an explanation of how the shell commands are
s
AFAIK typical SBC CPUs are not heavily optimized for floating point;
there is an order of magnitude difference compared to an x86. I don't
understand how a cluster would make economic sense, even for tasks that
parallelize well (and then there is the network overhead).
Best,
Tamas
On Sun, Dec 13
while this SBC would represent a substantial improvement
over the Pi systems currently in market, i suspect that the
most notable aspect is the price ...
generally, as the price points come down, clusters become
much more feasible ...
parallelism is next.
Assuming that the comparative advantage of Julia is in scientific
programming, do people really run these on such very low-powered
hardware?
I am not questioning this, just curious what the use cases are. A
colleague built a Raspberry Pi 2 "desktop" last year to try it out and
it was barely copin
anyone in line for one of these ARM boards
http://pine64.com/
with plans to test Julia on a Ubuntu version
of the SBC ... ?
certainly looks interesting ...