`plot` creates a result in the remote process. You have to fetch this result
into the main process (the one that draws to the screen), like this:
result = @spawn plot(randn(100))
# do some stuff...
# when ready, fetch!
fetch(result)
This will display the plot...
In the first version you read a single large chunk, while in the second you
read many small chunks.
I think reading many small chunks is much slower due to how disk IO works.
First it has to seek to the data, and second, it reads only a small chunk
while it could read a larger one using the same
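The difference can be seen in a small sketch (the file name and sizes are made up; syntax is for current Julia, where `read!` fills a preallocated array in one call):

```julia
# write some test data (hypothetical file name)
open("chunks.bin", "w") do io
    write(io, collect(Int64, 1:10_000))
end

# first version: one large chunk, a single read! fills the whole array
a = open(io -> read!(io, Vector{Int64}(undef, 10_000)), "chunks.bin")

# second version: many small chunks, one read call per element
b = open("chunks.bin") do io
    [read(io, Int64) for _ in 1:10_000]
end

a == b  # same data, very different number of IO calls
```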
You can test the actual function used with the @which macro. In the first
example it calls into io.jl line 118 (at least as of version 0.3):
read{T}(s::IO, t::Type{T}, d1::Int, dims::Int...) =
read(s, t, tuple(d1,dims...))
this calls into io.jl line 123:
read{T}(s::IO, ::Type{T},
Just to clarify: there is no loop, since the array is a bits type. Otherwise
the else clause calls into io.jl line 125:
function read!{T}(s::IO, a::Array{T})
    for i = 1:length(a)
        a[i] = read(s, T)
    end
    return a
end
which does include a for loop.
So I guess it's smart to use
I've seen that many Python conferences use Next Day Video's services:
http://nextdayvideo.com/
They seem to do good work; maybe it's worth talking with them to get a
quote for next time?
I use a SharedArray for the tree and another for the particles. Each
process then traverses the tree for its share of particles. There is no
communication overhead.
As long as you can represent your data structures as a bits array, this
method is good.
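A minimal sketch of the pattern (in current Julia the relevant pieces live in the Distributed and SharedArrays standard libraries; the array here is a stand-in, not my actual tree code):

```julia
using Distributed, SharedArrays
addprocs(2)

# the particles live in shared memory, visible to all workers
particles = SharedArray{Float64}(100)
particles .= 1.0

# each worker updates its own share of the particles;
# no data is sent between processes
@sync @distributed for i in 1:length(particles)
    particles[i] *= 2    # stand-in for a tree traversal per particle
end
```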
Just got bitten by this. Due to the ccall changes:
Winston segfaults
and PyPlot doesn't compile
Gadfly still works
The root cause seems to be the changes to the ccall API:
https://groups.google.com/forum/?fromgroups=#!topic/julia-users/qi6HpMrAS_A
I opened an issue on ZMQ.jl:
https://github.com/JuliaLang/ZMQ.jl/issues/71
will try a quick fix later
(fixing a typo: the line 403 above should actually be 407)
This would explain my IJulia kernel problems (discussed in a separate
thread I opened a few minutes ago); it seems this broke ZMQ.jl.
I'll try to fix it later
OK, got it working. Just do the following in ZMQ.jl:
change line 403 from:
Message(m::ByteString) = Message(m, convert(Ptr{Uint8}, pointer(m.data)), sizeof(m))
to:
Message(m::ByteString) = Message(m, Base.unsafe_convert(Ptr{Uint8}, pointer(m.data)), sizeof(m))
change line 500 from
fixed here:
https://github.com/JuliaLang/ZMQ.jl/pull/72
hi,
I updated to the current Julia nightly, and it is now causing the IJulia
kernel to crash. Some investigation led to unmatched
convert(::Type{Ptr{Uint8}}, ::ASCIIString) etc. in ZMQ.jl.
I tried to fix these but now I'm getting segfaults.
Is anybody experiencing the same? IPython works fine with
It doesn't help for me. I read all the threads on such issues, and this one
seems different.
For example, ZMQ.jl contains the line
Message(m::ByteString) = Message(m, convert(Ptr{Uint8}, m), sizeof(m))
This actually doesn't work in current Julia, and I'm using the latest
ZMQ.jl etc.
I'm working with arrays of immutables, each containing several fields.
Creating new immutables based on old ones has become a real pain:
old = myarray[i]
myarray[i] = MyImmutable(old.foo, bar, old.x, old.y, etc.)
imagine this for 15 fields...!
So I made a macro to ease this, it can be
I'm using your second suggestion now, building special macros for each
type. This does the job.
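For reference, here is the generic idea as I understand it, sketched in current syntax (the `Point` type and the keyword interface are made up for illustration; this is the slow-but-generic version, not the per-type macro):

```julia
struct Point
    x::Int
    y::Int
    z::Int
end

# build a new immutable from an old one, replacing only the named fields
function reconstruct(p::T; kwargs...) where {T}
    names = fieldnames(T)
    vals = Any[getfield(p, f) for f in names]
    for (k, v) in pairs(kwargs)
        i = findfirst(==(k), names)
        i === nothing && error("no field $k in $T")
        vals[i] = v
    end
    T(vals...)
end

p = Point(1, 2, 3)
q = reconstruct(p; y = 20)   # Point(1, 20, 3)
```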
I think the solution is something like
https://github.com/JuliaLang/julia/pull/6122 which I hope gets merged soon.
The problem is performance -- `reconstruct` is ~100X slower than the macro
in the benchmark below. The macro solution is actually as fast as mutating
the data; too bad it is not completely generic...
a = [IM(0,0,0,0,0,0,0,0,0,0,0,0) for i in 1:100];
function frec(arr::Array{IM,1})
scatter does the job. Just answered my own question :)
In Matlab, for example:
plot( , 'markersize', 8)
Is this feature present in Winston?
Thanks!
I've updated the benchmarks, added a pooled addition, and run all in latest
Julia. The results:
Current simple plus: ~20 sec
Pooled addition:     ~1.5 sec (saturated; increasing the pool doesn't change this number)
In-place addition:   ~0.8 sec
See code below. I guess current operators
`sum` is implemented this way actually, so in the example above I could use
it to get similar results. Nevertheless this could be a good addition IMO.
Any thoughts?
I'm working on a PR based on this benchmark, see image below, comments
welcome!
https://lh6.googleusercontent.com/-oudffKuh7f4/VM9RNAiCXvI/NOY/e1_1I8ecvbE/s1600/Screenshot%2Bfrom%2B2015-02-02%2B12%3A26%3A27.png
I opened a place-holder issue for this until PR is opened:
https://github.com/JuliaLang/julia/issues/10030
I'm using sublime, considering atom
You can use functors to achieve speed and flexibility like in C++, see
here: http://numericextensionsjl.readthedocs.org/en/latest/functors.html
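For illustration, the core trick as a sketch (the `Square` type is made up; NumericExtensions itself defines functor types with an `evaluate` method, while later Julia versions let any type be callable directly):

```julia
# a functor: a type whose instances are callable, so the compiler can
# specialize and inline the call, much like a C++ functor
struct Square end
(::Square)(x) = x * x

map(Square(), [1, 2, 3])   # returns [1, 4, 9]
```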
I find that a SharedArray is often a good alternative to a multithreaded
solution. Would it work here?
great, does this mean untested functions are now actually counted in
coverage?
Tim,
in your implementation you are assuming all possible types for x implement
some operation performed inside the 'now do something with x' comment. Just
making this assumption explicit allows solving the problem in a type-stable
manner. Consider the following:
alias NumberArray
Yeah, you're right about Scala having a REPL, and I also forgot that
Haskell has one! But I still prefer Julia over those languages :)
Well, I've just opened a pull request where qrfact is type-stable (using
staged functions of course); see here:
https://github.com/JuliaLang/julia/pull/9575
The same could be done for the other functions you mentioned (e.g.
factorize, sqrtm, etc.), and I'll do it if the current one is positively
it should still work if the file name is known only at runtime; consider
the following script:
filename = "myrandomfile" * string(rand(1:5)) * ".hdf"
vecname = "myvec"
@load_vector_from_hdf5_file(filename, vecname)
w/o specifying types, w/o type instability, and file name decided at
runtime.
I have
There is type-instability only at global scope, that's the point. So when
you are using the repl (or just using the global scope) you won't have to
write types. My suggestion is only to disallow type-instability at inner
scopes so they become fully statically typed; the next step would be to
But using a function for this is wrong because macros allow you to load the
vector at runtime w/o specifying types and without inner-scope
type-instability. So why insist on using a function instead of a macro? The
only scenario I can think of is if the name of the file is only known at
Hi, I just want to discuss this idea: type instability in functions is a source
of slowness, and in fact there are several tools to catch instances of it. I
would even say that using type instability in functions is considered bad style.
The most important use case for type instability seems
There is an issue when using a module fails: I work on a computer w/o
access to GitHub, so I had to just copy Images.jl to use it. Using Images
resulted in an error as I was missing a dependency. After I fixed this,
Using Images did not complain, yet imread was not defined. Only after
restarting Julia did
in the Cython code you turned off bounds checking. This can be done for
Julia with the @inbounds macro. Just use it in your loops like this:
@inbounds for i in whatever
...
end
Also @simd may help; seems you can use it in a couple of the innermost loops.
It seems also simple to parallelize with a
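A sketch of how the two macros combine on an inner loop (a toy function of mine, not your code):

```julia
function fastsum(a::Vector{Float64})
    s = 0.0
    # @inbounds drops bounds checking; @simd allows the compiler
    # to reorder the reduction and vectorize it
    @inbounds @simd for i in eachindex(a)
        s += a[i]
    end
    s
end

a = rand(1000)
fastsum(a) ≈ sum(a)
```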
Consider the following simple example:
using HDF5, JLD
addprocs(2)
# create and save the shared array
a = SharedArray(Int64, 100)
save("example.jld", "a", a)
# clear the shared array...
a = nothing
# load what we saved...
a = load("example.jld")["a"]
# try to use it...
@parallel for i in 1:length(a)
One more issue: the example above can be fixed by reassigning `a` to a new
shared array. But in my case this won't work since I have this array within
a wrapper type... seems like another semi-related issue?
Traits/Interfaces can easily solve the redefinition of points in geometry
packages. All I want is to use something that has getx and gety; I don't
really care about its type hierarchy! See for example this:
https://github.com/mauro3/Traits.jl
About replacing BigInts with (some small number) X
maybe this can help:
https://github.com/skariel/TriangleIntersect.jl
it intersects rays with triangles in 3D...
Fast robust Voronoi and Delaunay triangulations, using
GeometricalPredicates.jl
See code here:
https://github.com/skariel/VoronoiDelaunay.jl
a PR for inclusion in METADATA.jl is open.
This includes some really basic stuff:
- Creating 2D trianglulations
- Navigating
- Iterating
-
*About the image* - it's all random points; the text just has a higher
density of them
*A comparison with Fortune's algorithm* - the Julia implementation is
somewhat faster than CGAL, and CGAL is faster than Boost Voronoi, which uses
Fortune's algorithm; see here:
Good questions, I think I have some answers:
1) The problem with using an integer lattice is that Int64s would regularly
overflow. In order to not overflow in the 2D case, you could use
at most 16 bits for coordinate information. Of course it is worse in the
3D case. So a fast and simple
just opened the PR for METADATA.jl
Fast and robust 2D and 3D geometrical predicates. For documentation see here:
https://github.com/skariel/GeometricalPredicates.jl
This is used in a Delaunay/Voronoi implementation, which I'll also package,
and which is faster than CGAL.
In addition, it could be used as the basis for a fast and robust
all of julialang.org seems down
Working on the GeometricalPredicates package, I want to generate a few mostly
similar types like this:
macro mymac(name)
    n = name.args[1]
    quote
        type $n
            x::Int64
            $n(num::Int64) = new(num)
        end
    end
end
@mymac(:mynewtype)
println(mynewtype)
It
Thanks, this solved it!
I'm using the unquoted version now; I was just testing different things and
got a bit confused by this behavior. Should I open an issue about the
(explicit) constructor-less version?
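For reference, the unquoted variant in current syntax (`struct` instead of `type`; the name is passed as a bare symbol and the whole block is escaped so the type lands in the caller's module; `MyNewType` is just an example name):

```julia
macro mymac(name)
    # escape the whole definition so `name` and the new type
    # are resolved in the calling module, not the macro's module
    esc(quote
        struct $name
            x::Int64
        end
    end)
end

@mymac MyNewType   # pass the bare, unquoted symbol
MyNewType(3).x     # 3
```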
I just did something like what you describe and the following method worked
very fast:
Don't use layers. Instead use NaNs to cut the line. So e.g. an array
[1, 2, 3, NaN, 4, 5, 6] will draw as two lines.
see here:
https://gist.github.com/skariel/3d2018f9341a058e00fc
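The trick in a small sketch (`with_nan_breaks` is a hypothetical helper name, not from the gist):

```julia
# join line segments into one array, separated by NaN, so a plotting
# package draws them as disconnected lines
function with_nan_breaks(segments::Vector{Vector{Float64}})
    out = Float64[]
    for (i, seg) in enumerate(segments)
        append!(out, seg)
        i < length(segments) && push!(out, NaN)
    end
    out
end

with_nan_breaks([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
# [1.0, 2.0, 3.0, NaN, 4.0, 5.0, 6.0]
```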
I just found this interesting article about garbage collection:
http://people.cs.umass.edu/~emery/pubs/gcvsmalloc.pdf
Turns out GC can significantly affect performance when the memory available
is ~3X the needed memory (e.g. because a GC touches more memory pages
relative to manual handling
When using hulls to calculate Delaunay you calculate everything in d+1
dimensions, which for 2D means a 4X slower predicate. Also, predicates are
mostly precalculated (already implemented in the updated gist above), so
their repeated calculation in the swap method should be fast.
Anyway, let
How close did you get to C speed?
they are MIT licensed, no need for permission :)
How efficient is Voronoi construction using conic hulls? I think Qhull,
which uses convex hulls, is way slower than what I plan with the algorithms
described here: http://arxiv.org/pdf/0901.4107v3.pdf
function f{T<:Float64}(x::T...)
    print(x)
end
function f{T<:BigInt}(x::T...)
    print(x)
end
see here:
https://gist.github.com/skariel/da85943803a6f57a52fd
it implements fast and robust 2D and 3D orientation and in-circle tests
according to the algorithms described in this paper:
http://arxiv.org/abs/0901.4107
i.e. calculate using regular Floats while constraining the error. If in
On Monday, May 12, 2014 5:16:21 PM UTC+3, Ariel Keselman wrote:
Most of the calculation text is just generated with sympy + some text
processing, so I'm not afraid of typos :)
The calculation could be organized better, resulting in many fewer terms, if
I moved the origin to overlap with one of the SphereND points.
Also I like very much the idea of having an
see simplified behavior below:
https://lh4.googleusercontent.com/-buanLj1oJlU/U2DhZ4Fo2XI/HS4/xC8WkdiahEM/s1600/Capture1.PNG
Yes, inv(rand(3,3)) crashes!
And indeed, using blas_set_num_threads(1) beforehand helps
Thanks!
Just installed Julia 64-bit for Windows (downloaded from the official site)
and added the IJulia and Gadfly packages w/o errors. When using Gadfly,
Julia just crashed w/o any message. See attached image.
Any help appreciated...
Thanks,
Ariel
attachment: Capture.PNG
from the comments:
Guido's advice has been extremely helpful, but so far we haven't been able
to get any code from him :/