Re: [julia-users] Re: Ray tracing for complex geometry

2015-11-24 Thread kleinsplash

Hi, 

@Simon Danisch. So I posted the issue as suggested: new issue. I have tried 
screenshot(window, path="screenshot.png") and get ArgumentError: function 
screenshot does not accept keyword arguments, where window = 
visualize(obj). Obviously I need to go back and look at the documentation. 

@Andre Bieler this is awesome! Thank you - just what I was looking for - 
you should add it to something, anything. Not sure whether that would be 
Meshes.jl, MeshIO.jl or Meshing.jl. 

As a side note: I was directed toward using the barycentric coordinate 
system, something like accurate-point-in-triangle-test.html. 
But I will be looking at yours first. 
@Steve Kelly thanks - needed that pep talk. If it was easy we probably 
wouldn't be doing it. 
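The barycentric point-in-triangle test mentioned above can be sketched like this (my own hedged version of the standard technique, not the code from the linked page):

```julia
using LinearAlgebra  # for dot (part of Base on Julia 0.4)

# Point-in-triangle test via barycentric coordinates: solve for (u, v) in
# p = a + u*(c - a) + v*(b - a); p lies inside iff u >= 0, v >= 0, u + v <= 1.
function in_triangle(p, a, b, c)
    v0 = c .- a
    v1 = b .- a
    v2 = p .- a
    d00 = dot(v0, v0); d01 = dot(v0, v1); d02 = dot(v0, v2)
    d11 = dot(v1, v1); d12 = dot(v1, v2)
    invden = 1.0 / (d00 * d11 - d01 * d01)
    u = (d11 * d02 - d01 * d12) * invden
    v = (d00 * d12 - d01 * d02) * invden
    return u >= 0 && v >= 0 && u + v <= 1
end
```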


On Tuesday, 24 November 2015 06:11:15 UTC+2, Andre Bieler wrote:
>
> I don't know if this is still needed, but I think I did a similar thing a 
> while ago.
>
> One difference being I used .ply files instead of .obj files.
> From what I understand it should be straightforward to transform .obj to 
> .ply with something like ParaView or MeshLab.
> (It has to be an ASCII .ply for my example to work)
>
> Attached is an example script that computes the intersection point of a 
> line of sight and a collection of triangles (your surface mesh, I assume). 
> It returns the index of the intersected triangle and the coordinates of 
> the intersection point.
>
> If multiple triangles are intersected, it returns the index and 
> coordinates of the closest triangle.
>
> If no intersection is found, it returns -1 as the index and [0,0,0] as the 
> intersection point (this is to keep the function type stable).
>
> I attached a zip archive containing the necessary julia file and an example
> .ply file of a triangulated cow :)
>
> All you have to define is the starting point of the line of sight, 
> "pStart", and the direction in which it points, "r".
> "r" has to be a unit vector.
>
>
> Any comments are welcome.
>
> Cheers
> Andre
>
>
>
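A minimal sketch of the kind of ray/triangle intersection Andre describes, using the Möller-Trumbore algorithm (the function and variable names are mine, not from the attached script; the -1/[0,0,0] sentinel convention follows his description):

```julia
using LinearAlgebra  # for dot and cross

# Möller-Trumbore ray-triangle intersection.
# Returns the distance t along the ray, or -1.0 if there is no hit
# (a sentinel, as in the script described above, to keep the return type stable).
function ray_triangle(pStart, r, a, b, c)
    e1 = b - a
    e2 = c - a
    p = cross(r, e2)
    det = dot(e1, p)
    abs(det) < 1e-12 && return -1.0          # ray parallel to the triangle
    invdet = 1.0 / det
    s = pStart - a
    u = dot(s, p) * invdet
    (u < 0.0 || u > 1.0) && return -1.0
    q = cross(s, e1)
    v = dot(r, q) * invdet
    (v < 0.0 || u + v > 1.0) && return -1.0
    t = dot(e2, q) * invdet
    return t >= 0.0 ? t : -1.0
end

# Closest hit over a collection of (a, b, c) triangles: returns the triangle
# index and the intersection point, or (-1, [0.0, 0.0, 0.0]) when nothing is hit.
function closest_hit(pStart, r, triangles)
    best_i, best_t = -1, Inf
    for (i, (a, b, c)) in enumerate(triangles)
        t = ray_triangle(pStart, r, a, b, c)
        if t >= 0.0 && t < best_t
            best_i, best_t = i, t
        end
    end
    return best_i == -1 ? (-1, [0.0, 0.0, 0.0]) : (best_i, pStart + best_t * r)
end
```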

[julia-users] Re: ERROR: LoadError: UndefVarError: mesh not defined. using MeshIO

2015-11-24 Thread kleinsplash
Thanks. Will keep those in mind. 

On Monday, 23 November 2015 20:19:15 UTC+2, Simon Danisch wrote:
>
> Well, it's load, not mesh... 
> I admit, the Readme is not very helpful. 
> By the way, the REPL with typeof(), xdump(), println(), methodswith() and 
> auto completion are your friend when it comes to figuring out how a type 
> works. 
>
> On Monday, 23 November 2015 at 15:38:44 UTC+1, kleinsplash wrote: 
> > My aim is to load an object file and access its faces, vertices and 
> normals. 
> > 
> > My current code: 
> > 
> > 
> > 
> > using FileIO 
> > using GLVisualize 
> > using MeshIO 
> > 
> > obj_file = "path_to_obj/file.obj" 
> > mesh(obj_file) 
> > I am getting the error: 
> > 
> > ERROR: LoadError: UndefVarError: mesh not defined 
> >  in include at ./boot.jl:261 
> >  in include_from_node1 at ./loading.jl:304 
> >  in process_options at ./client.jl:280 
> >  in _start at ./client.jl:378 
> > 
> > Am I calling the mesh fxn incorrectly? 
>
>

[julia-users] Re: Ray tracing for complex geometry

2015-11-24 Thread Simon Danisch
Sorry, that was the method definition, not the actual call!
So it's just an optional second argument!
Ah yeah, if implementing it from scratch is an option, ray tracing in its 
simple form is probably the most approachable (although still slow compared 
to visualizing it with OpenGL).


On Friday, 20 November 2015 at 16:18:46 UTC+1, kleinsplash wrote:
>
> I was wondering if someone could help me out with a decision/offer an 
> opinion:
>
> I need a ray tracer that deals with complex geometry (a fast ray tracer 
> that can create 1000's of point clouds in minimal time) 
> Python has methods: http://pyopengl.sourceforge.net/ that I could get to 
> grips with. But I want to stick with Julia. 
>
> I have found these resources: 
> https://github.com/JuliaGL/ModernGL.jl - not sure if this has a ray 
> tracing option
> http://www.cs.columbia.edu/~keenan/Projects/QuaternionJulia/ - looks 
> crazy complicated
>
> https://github.com/JuliaLang/julia/blob/master/test/perf/kernel/raytracer.jl 
> - looks like it only handles simple geometry
>
> Could someone point me in the right direction?
>
>  
>


[julia-users] Error while Building PyCall

2015-11-24 Thread Tony Kelman
Can you try with Julia 0.4.1? There was one backport that should hopefully make 
it a little bit less susceptible to this lingering issue with precompilation.

[julia-users] Slow startup w/--proc and --proc=80 .. can crash my machine

2015-11-24 Thread Páll Haraldsson

$ time julia --proc=auto -e ""

real 0m3.292s
user 0m15.464s
sys 0m0.593s


"auto" means 8 CPUs (+1 for the master) on my machine. Maybe it should 
mean 4 because of hyperthreading? Or some in-between number (6)? I didn't 
wait 15 sec, more like 4.


I'm not really worried about the startup wait for procs=4, 8 or 80, unless 
this indicates problems elsewhere; I'm just curious, and it seemed abnormal 
(at first) that the wait would get longer with higher numbers, or even with 
1. The point of many procs is parallel speedup; even if all the CPUs have 
to do the same work on startup, in theory it should run in parallel. I 
guess this is just too much for the [L3] cache. The wait gets really long 
with --proc=80 (which I do not have, so my worry is not making it fast, 
just having it not crash..). 


Does it ever make sense to go above the number of virtual CPUs for --proc? 
I was testing the slowdown with up to --proc=80, which crashed with VM off 
(but worked with it on) on my Ubuntu 14.04 Linux: I got a black screen with 
a brown blinking cursor, couldn't even get to a virtual terminal, and had 
to reset (not entirely unexpected..). On a second try it froze for a long 
while, I couldn't get a virtual terminal and my session was closed in the 
end, but I managed to avoid a restart..


I'm not too worried about proc=1, but should I make a PR that limits procs 
to at most the number of [virtual] CPUs? I think I could manage that - or 
maybe, if you can think of a reason to go higher, at most double the number 
of CPUs? More complicated would be to take the amount of [virtual] memory 
into account. If there is a reason to go higher, can't the number of 
workers always be changed from within the program? The programmer should 
know better and maybe have that capability, but for users it seems 
unfriendly to be able to crash the machine by invoking it from the shell 
with high numbers like --proc=80. That option seems unneeded, just waiting 
for tinkerers like me who like to try everything out.. :-/
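On the last point: yes, workers can always be added from inside a program instead of via the command line, so a cap could also be applied there. A hedged sketch (modern API; on 0.4 addprocs lives in Base and the CPU count is Sys.CPU_CORES):

```julia
using Distributed  # on Julia 0.4 these functions were in Base

# Cap a requested worker count at the machine's logical CPU count instead of
# letting an arbitrary --procs value (like 80) take the machine down.
requested = 80
n = min(requested, Sys.CPU_THREADS)
addprocs(min(2, n))  # keep it small for this sketch
println("workers: ", nworkers())
```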


$ time julia --proc=0 -e ""
ERROR: julia: -p,--procs= must be an integer >= 1

I can see with --proc=1:

$ ps aux |grep julia
qwerty8278 45.7  1.5 8720228 127244 pts/9  Sl+  17:05   0:01 julia 
--proc=1
qwerty8282 22.7  1.4 8616920 116076 ?  Ssl  17:05   0:00 
/usr/bin/julia -Cx86-64 -J/usr/lib/x86_64-linux-gnu/julia/sys.so --bind-to 
130.208.69.54 --worker

that you get one worker on top of the one master, but is it mostly a 
waste? Should it say "must be an integer >= 2 and less than the number of 
virtual processors"? Does proc=1 ever make sense? Is it for testing, or 
should it maybe do the same as if proc is skipped (1 CPU vs 1+1)?

Is this 8616920 memory use (1.4% of my 8 GB) about a constant that can't be 
reduced much? It would mean that a low-end Android phone (512 MB) would max 
out at 4.4 cores, if that (as the system must use something, and you have 
zram "compressed VM" and no "real VM"), and crash with proc=5, maybe 4 or 
lower.


Note I tested everything with virtual memory off (and then also on), as I 
have lots of VM (on an SSD), maybe too much (with some of the swap used). I 
did not expect Julia to use a constant amount of memory multiplied by 
--proc, because of copy-on-write (COW), but that is in fact what happens. 
COW probably doesn't help on fork, as most of the work Julia does is 
considered data, not (only) code..?

Compared to:

$ time julia -e ""

real 0m0.137s
user 0m0.117s
sys 0m0.038s

$ time julia --proc=1 -e ""

real 0m1.646s
user 0m2.302s
sys 0m0.213s


$ time julia --proc=2 -e ""

real 0m2.307s
user 0m4.475s
sys 0m0.317s


$ time julia --proc=3 -e ""

real 0m2.509s
user 0m6.118s
sys 0m0.437s


$ time julia --proc=4 -e ""

real 0m2.608s 
user 0m7.845s
sys 0m0.502s



$ time julia --proc=8 -e ""

real 0m3.003s
user 0m15.457s
sys 0m0.824s



$ time julia --proc=80 -e ""

real 0m26.826s
user 2m15.768s
sys 0m8.688s



I also tested with VM on:

top - 15:19:51 up 12 days, 22:55, 12 users, load average: 0,47, 3,35, 3,79
Tasks: 267 total, 1 running, 266 sleeping, 0 stopped, 0 zombie
%Cpu0 : 0,0 us, 0,0 sy, 0,0 ni,100,0 id, 0,0 wa, 0,0 hi, 0,0 si, 0,0 st
%Cpu1 : 0,3 us, 0,0 sy, 0,0 ni, 99,7 id, 0,0 wa, 0,0 hi, 0,0 si, 0,0 st
%Cpu2 : 0,3 us, 0,0 sy, 0,0 ni, 99,7 id, 0,0 wa, 0,0 hi, 0,0 si, 0,0 st
%Cpu3 : 0,0 us, 0,0 sy, 0,0 ni, 99,3 id, 0,7 wa, 0,0 hi, 0,0 si, 0,0 st
%Cpu4 : 0,0 us, 0,3 sy, 0,0 ni, 99,7 id, 0,0 wa, 0,0 hi, 0,0 si, 0,0 st
%Cpu5 : 0,0 us, 0,0 sy, 0,0 ni,100,0 id, 0,0 wa, 0,0 hi, 0,0 si, 0,0 st
%Cpu6 : 0,0 us, 0,0 sy, 0,0 ni,100,0 id, 0,0 wa, 0,0 hi, 0,0 si, 0,0 st
%Cpu7 : 0,0 us, 0,0 sy, 0,0 ni,100,0 id, 0,0 wa, 0,0 hi, 0,0 si, 0,0 st
KiB Mem: 8130224 total, 1201108 used, 6929116 free, 1212 buffers
KiB Swap: 31264764 total, 2585184 used, 28679580 free. 116152 cached Mem


top - 15:01:12 up 12 days, 22:37, 12 users, load average: 1,60, 1,72, 1,54
Tasks: 298 total, 2 running, 296 sleeping, 0 stopped, 0 zombie
%Cpu(s): 16,4 us, 1,0 sy, 0,0 ni, 82,0 id, 0,5 wa, 0,0 hi, 0,0 si, 0,0 

Re: [julia-users] Using Meshes.ji

2015-11-24 Thread kleinsplash
I know this is a *really* simple question - so forgive me for asking.. I am 
not a programmer, but I do want to make Julia the language I use - and move 
away from the others that shall not be named.. I would really like an 
explicit syntax answer to this: 


obj_file = "pathto.obj"
obj = load(obj_file)
display(obj.vertices)


which gives me:

502-element Array{FixedSizeArrays.Point{3,Float32},1}:
 Point(0.00117,-0.02631,0.03907)  
 Point(-0.00896,-0.02466,0.03908) 
 ⋮

 Point(-0.01634,-0.0178,-0.05919) 
 Point(-0.01751,-0.01913,-0.06169)

How do I access this info, so I can then do a scatter3D plot? 

I have tried:
obj.vertices[1,:][1,1]

which gets me this far

FixedSizeArrays.Point{3,Float32}((0.00117f0,-0.02631f0,0.03907f0))

and this 
obj.vertices[1,:][1,1][1]
which gets the first value

0.00117f0

But it really doesn't seem that elegant.. which typically means I am doing 
something wrong. 

I want to put all the vertices in a matrix without it looking like a hack. 
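One way to flatten a vector of fixed-size points into an n×3 matrix is a two-iterator comprehension; sketched here with plain tuples standing in for FixedSizeArrays.Point, since each point indexes like a 3-vector:

```julia
# Stand-ins for obj.vertices: each entry indexes like a 3-vector.
points = [(0.00117f0, -0.02631f0, 0.03907f0),
          (-0.00896f0, -0.02466f0, 0.03908f0)]

# A comprehension over two iterators builds a length(points) × 3 matrix.
M = [p[j] for p in points, j in 1:3]

# The columns are then the x, y, z coordinates for a scatter3D plot:
x, y, z = M[:, 1], M[:, 2], M[:, 3]
```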


On Monday, 23 November 2015 16:53:24 UTC+2, kleinsplash wrote:
>
> Thanks to group managed to plot using GLVisualize:
>
> using GLVisualize
> using FileIO
> w,r = glscreen()
> view(visualize(obj))
> r()
>
> from Simon's answer here: 
> https://groups.google.com/d/msg/julia-users/W8D6kAJhREI/N89EOObzCAAJ  
>  
>
> On Monday, 23 November 2015 13:53:50 UTC+2, kleinsplash wrote:
>>
>> Looks like accessing has changed a bit - so to get to the faces I did 
>>
>> obj = load("path.obj")
>> faces = obj.faces
>>
>> So far I haven't had much luck in either plotting this or accessing the 
>> faces and the vertices. As soon as I figure it out I will get back. This 
>> is linked to my other question; for some reason I wasn't automatically 
>> subscribed to this one. 
>>
>>
>> On Monday, 9 November 2015 18:45:01 UTC+2, Steve Kelly wrote:
>>>
>>> The faces can be accessed with faces(load("foo.obj")) or mesh.faces.
>>>
>>> Probably the easiest way to display the mesh at this point is with 
>>> ThreeJS.jl: 
>>> https://github.com/rohitvarkey/ThreeJS.jl/blob/master/examples/mesh.jl. 
>>> This approach should work in IJulia and Blink.
>>>
>>> GLVisualize has some good demos and a much more responsive backend, but 
>>> it needs some work to run on OpenGL < 3.3, and the working commits aren't 
>>> in METADATA yet. Meshes is in kind of a weird state right now, and most of 
>>> the functionality can be had with GeometryTypes, Meshing, and MeshIO. We 
>>> have been working the past few months to finish the coupling between the 
>>> data structures for geometry and visualization. It would be great to hear 
>>> about your application, and see if we could achieve something in the short 
>>> term that would work for you. Personally I use Meshlab when I do solid 
>>> modelling with Julia, which slows down my iteration time, and it would be 
>>> nice to have a mesh viewer in the workflow.
>>>
>>> Best,
>>> Steve
>>> On Nov 9, 2015 9:55 AM, "Ashley Kleinhans"  
>>> wrote:
>>>
 Hi,

 I am new at this - but have decided that Julia is my language of 
 choice. So I begin silly question stage: 

 Could someone talk me through how to access and display an .obj file? 

 I have gotten so far:

 using Meshes
 using PyPlot
 using FileIO
 using MeshIO

 obj = load(filename)
 vts = obj.vertices 


 Which gives me: 

 502-element Array{FixedSizeArrays.Point{3,Float32},1}:



 One example point being: 

 Point(0.00117,-0.02631,0.03907)


 

   

 How do I access the vertices to use them with plot? 

 -A




[julia-users] Re: ANN: Julia 0.4.1 and 0.3.12 released

2015-11-24 Thread Páll Haraldsson
On Monday, November 9, 2015 at 8:47:50 PM UTC, bernhard wrote:
>
> Out of curiosity: why is this release 54 MB (up from 40 MB for 0.4)? I am 
> referring to the file size of the Win 64-bit binaries here.
>

Another curiosity..:

The number of MB sounds about right, and when I checked it was about the 
same as the Linux version ("Generic Linux binaries"). I tried installing 
the Windows version and it takes almost half a gig..

On Linux you have dependencies, but do they make up for the difference (my 
0.4.1 from a repository is only 101 MB installed)? My guess, which seems 
wrong, is that if you do not use pure Julia code and include C (or Fortran) 
source code, then you use the C compiler you can count on being installed 
on Linux. Am I wrong, since there does not seem to be a dependency on a 
full compiler (or is there none because there should always be a compiler, 
and such dependencies are skipped)? All I found was:

libgcc1 132 kB
libgfortran3 1234 kB

Does the Windows build of Julia download more after/while running the 
self-extracting executable? Such as a C compiler..?


> On Monday, 9 November 2015 at 20:37:48 UTC+1, Tony Kelman wrote:
>>
>> Hello all! The latest bugfix releases of both 0.4.x and 0.3.x Julia lines 
>> have been released. Binaries are available from the usual place 
>>  (see Old releases 
>>  for 0.3.12), and as is 
>> typical with such things, please report all issues to either the issue 
>> tracker , or email the 
>> julia-users list. (If you respond to this message on julia-users, please do 
>> not cc julia-news which is intended to be low-volume.)
>>
>> These are bugfix releases. 0.3.12 is likely to be the last release in the 
>> 0.3.x line unless any critical bugs or regressions are identified. To see 
>> the list of bugs fixed since 0.3.11, see this commit log 
>> . Bugfix 
>> backports to the 0.4.x line will be continuing with a target of one point 
>> release per month. See this commit log 
>>  for the 
>> list of bugs fixed between 0.4.0 and 0.4.1. If you are a package author and 
>> want to rely on functionality that did not work in 0.4.0 but does work in 
>> 0.4.1 in your package, please be sure to change the minimum julia version 
>> in your REQUIRE file to 0.4.1 accordingly. If you're not sure about this, 
>> you can test your package specifically against 0.4.0 on Travis and/or 
>> locally.
>>
>> These are recommended upgrades for anyone using previous releases, and 
>> should act as drop-in replacements for their respective minor-version 
>> predecessors. If 0.3.12 introduces any regressions relative to any previous 
>> 0.3.x release, or if 0.4.1 introduces any regressions relative to 0.4.0, 
>> please let us know.
>>
>> -Tony
>>
>>

[julia-users] How to close an HttpServer?

2015-11-24 Thread Eric Forgy
Hi,

I remember reading a question similar to (or exactly like) this one, but 
can't find it again.

I can start an HttpServer easily enough, but how to close it? I can see 
that WebSockets has a "close" method, but I can't find a way to close an 
HttpServer. I am probably confused and this question makes no sense :)

Thanks


[julia-users] Re: Pre-ANN: PlotlyJS.jl

2015-11-24 Thread Eric Forgy
On Tuesday, November 24, 2015 at 2:03:20 AM UTC+8, Tony Kelman wrote:
>
> Try using dirname(@__FILE__) instead of Pkg.dir


Thanks Tony. That did the trick. I am getting closer to having a decent 
package now.

You should be able to do:


julia> Pkg.clone("https://github.com/EricForgy/PlotlyJS.jl.git")
julia> using PlotlyJS
Listening on 0.0.0.0:8000...


Then open any browser to http://localhost:8000/plot.ly and run


julia> include(Pkg.dir("plotlyJS","test","runtests.jl"))
 

You should see this nice interactive chart:



It can now handle both Plotly.plot and Plotly.newPlot. Feedback welcome.


[julia-users] Re: Using Meshes.ji

2015-11-24 Thread Simon Danisch
This is the best GLVisualize can do for you right meow:




You get this with:


using GLVisualize, FileIO, Colors, GeometryTypes, GLAbstraction
window, renderloop = glscreen()
obj = load("cat.obj")
obj_vizz = visualize(obj, color=RGBA{Float32}(0,0,0,0.1))
point3d_vizz = visualize(vertices(obj), scale=Vec3f0(0.01))
axis = visualize(boundingbox(point3d_vizz).value, :grid)
view(obj_vizz)
view(point3d_vizz)
view(axis)
renderloop()
A lot of improvements are on their way (like a better camera, a more 
consistent API and better axes), but this is not out yet.

I'm not sure what would be possible with a combination of MeshIO and e.g. 
PyPlot, but I guess it could give better axes/labels and print quality, 
while being slower ;)
You quite likely need to do something like:

points = vertices(obj) #GeometryTypes exports vertices(...)
x,y,z = [p[1] for p in points], map(p->p[2], points), map(last, points) # 
three different ways of getting the x,y,z coordinates

and then go from there with the PyPlot/matplotlib docs.


On Monday, 9 November 2015 at 15:55:57 UTC+1, kleinsplash wrote:
>
> Hi,
>
> I am new at this - but have decided that Julia is my language of choice. 
> So I begin silly question stage: 
>
> Could someone talk me through how to access and display an .obj file? 
>
> I have gotten so far:
>
> using Meshes
> using PyPlot
> using FileIO
> using MeshIO
>
> obj = load(filename)
> vts = obj.vertices 
>
>
> Which gives me: 
>
> 502-element Array{FixedSizeArrays.Point{3,Float32},1}:
>
>
>
> One example point being: 
>
> Point(0.00117,-0.02631,0.03907)
>
>   
>  
>
>   
>
> How do I access the vertices to use them with plot? 
>
> -A
>
>
>

[julia-users] still no luck with pyjulia

2015-11-24 Thread Neal Becker
Just updated to julia-0.4.1 on fedora 23.
Updated PyCall
julia> Pkg.build("PyCall")
INFO: Building PyCall
INFO: PyCall is using /usr/bin/python (Python 2.7.10) at /usr/bin/python, 
libpython = /usr/lib64/libpython2.7.so

Just for good measure, re-ran python setup.py install --user on pyjulia 
after git pull (but it was already up-to-date).

j = julia.Julia()
ERROR: UndefVarError: dlpath not defined
Traceback (most recent call last):
  File "", line 1, in 
  File "julia/core.py", line 244, in __init__
raise JuliaError('error starting up the Julia process')
julia.core.JuliaError: error starting up the Julia process

which julia
/usr/bin/julia

Any ideas?

I've been patiently waiting to explore Julia (coming from Python).  For me, 
the only reasonable path is to incrementally add Julia to my Python code, 
so I need pyjulia to work.



[julia-users] Re: How to close an HttpServer?

2015-11-24 Thread Steven Sagaert
There isn't a function for that. You can shut it down either by killing 
the process, or by building into your program a "shutdown" message: when 
it receives this HTTP request, it exits the Julia program by calling quit().

When I wrote an HttpServer-based service, I also thought this was a missing 
feature of HttpServer.

On Tuesday, November 24, 2015 at 11:42:49 AM UTC+1, Eric Forgy wrote:
>
> Hi,
>
> I remember reading a question similar to (or exactly like) this one, but 
> can't find it again.
>
> I can start an HttpServer easily enough, but how to close it? I can see 
> that WebSockets has a "close" method, but I can't find a way to close an 
> HttpServer. I am probably confused and this question makes no sense :)
>
> Thanks
>


Re: [julia-users] Re: `(,a) = (1,2)` returns an error

2015-11-24 Thread Zheng Wendell
I didn't know the convention that _ is used as any superfluous assignment.
So I can write `_ = a; a = b; b = _` for integer swap?


On Mon, Nov 23, 2015 at 7:28 PM, Seth wrote:

> Why not simply
>
> _, x, _, y = f()
>
> ? This seems easier to read and understand. _ is a valid variable but is
> typically used for superfluous assignment.
>
>
> On Monday, November 23, 2015 at 9:56:27 AM UTC-8, Sisyphuss wrote:
>>
>> OK, so if I have a function which returns a tuple (a,b,c,d),
>> but I only want the 2nd and 4th value, the most concise way is
>>
>> (x,y) = f()[[2;4]]
>>
>>


[julia-users] Re: Error while Building PyCall

2015-11-24 Thread Steven G. Johnson
Just relaunching Julia and re-running Pkg.build("PyCall") should fix the 
problem.  As Tony said, the need to relaunch Julia should go away in 0.4.1.


Re: [julia-users] Re: `(,a) = (1,2)` returns an error

2015-11-24 Thread Steven G. Johnson


On Tuesday, November 24, 2015 at 8:16:39 AM UTC-5, Sisyphuss wrote:
>
> I didn't know the convention that _ is used as any superfluous assignment. 
> So I can write `_ = a; a = b; b = _` for integer swap?
>

Yes, but you can also just do

   a,b = b,a


Re: [julia-users] Re: `(,a) = (1,2)` returns an error

2015-11-24 Thread Zheng Wendell
On Tue, Nov 24, 2015 at 2:21 PM, Steven G. Johnson  wrote:

>
>
> On Tuesday, November 24, 2015 at 8:16:39 AM UTC-5, Sisyphuss wrote:
>>
>> I didn't know the convention that _ is used as any superfluous
>> assignment. So I can write `_ = a; a = b; b = _` for integer swap?
>>
>
> Yes, but you can also just do
>
>a,b = b,a
>

Great !


[julia-users] Re: drandn sometimes crashes

2015-11-24 Thread Andreas Noack
Please file this issue at DistributedArrays.jl

On Monday, November 23, 2015 at 8:13:06 AM UTC-5, Antonio Suriano wrote:
>
> addprocs(3)
>
> @everywhere using DistributedArrays
>
> function tony(N)
> return sum(drandn(N,N))
> end
>
>
> function pesante(N)
> a=zeros(N,N)
> for i = 1:N
> for j=1:N
> a[i,j]=tony(1000)
> end
> end
> return a
> end
>
> matrice= pesante(100)
>
>
> 
> julia 0.4.1
>
> when calling pesante with N>30 sometimes it crashes
>
> LoadError: BoundsError: attempt to access 0-element Array{Any,1}
>   at index [2]
> while loading In[3], in expression starting on line 20
>
> fatal error on 2: ERROR: MethodError: `convert` has no method matching 
> convert(::Type{RemoteRef{T<:AbstractChannel}}, ::Base.RemoteDoMsg)
> This may have arisen from a call to the constructor 
> RemoteRef{T<:AbstractChannel}(...),
> since type constructors fall back to convert methods.
> Closest candidates are:
>   call{T}(::Type{T}, ::Any)
>   convert{T}(::Type{T}, !Matched::T)
>   RemoteRef()
>   ...
>  in setindex! at array.jl:313
>  in deserialize_array at serialize.jl:616
>  in deserialize_datatype at serialize.jl:651
>  in handle_deserialize at serialize.jl:477 (repeats 2 times)
>  in deserialize_datatype at serialize.jl:651
>  in message_handler_loop at multi.jl:861
>  in anonymous at task.jl:63
> Worker 2 terminated.
> ERROR (unhandled task failure): EOFError: read end of file
>  in read at stream.jl:911
>  in message_handler_loop at multi.jl:861
>  in anonymous at task.jl:63
>


[julia-users] Re: Ray tracing for complex geometry

2015-11-24 Thread Andre Bieler
@Simon
GLVisualize looks great! I'll try using it as soon as there
is an opportunity (and then bother you with questions.. ;) )

@kleinsplash
Not sure if those meshing libraries are the right place to put my stuff.

I didnt even know about all these packages until I posted my answer
here. I will get rid of my custom mesh loading prcedures and start
using whats already available.

On a somewhat related note, I do have an Octree implementation
in Julia which I use for meshing/partitioning of 3D domains; maybe
that could be interesting for someone?
It generates an octree around triangulated surface meshes.
E.g. here
https://github.com/abieler/dsmc/blob/master/doc/cow_mesh2.png

If there is interest I'd be happy to share/clean up my code.




[julia-users] Re: drandn sometimes crashes

2015-11-24 Thread Antonio Suriano
Done

https://github.com/JuliaParallel/DistributedArrays.jl/issues/50


On Tuesday, 24 November 2015 at 15:14:45 UTC+1, Andreas Noack wrote:
>
> Please file this issue at DistributedArrays.jl
>
>
>

[julia-users] Re: Error while Building PyCall

2015-11-24 Thread Vishnu Raj
Thank you both :)
Relaunching and building worked!! :)


[julia-users] Cholmod Factor re-use

2015-11-24 Thread Matthew Pearce
Hello

I was going to investigate whether re-using the structure of a sparse 
Cholesky factorization could save me some time. 
Trying it, I got this:

julia> C = cholfact(K)
Base.SparseMatrix.CHOLMOD.Factor{Float64}
type:  LLt
method: supernodal
maxnnz:  0
nnz:   4117190


julia> D = cholfact!(C, K)
ERROR: MethodError: `cholfact!` has no method matching cholfact!(::Base.
SparseMatrix.CHOLMOD.Factor{Float64}, ::SparseMatrixCSC{Float64,Int64})

help?> cholfact!
search: cholfact! cholfact

  ..  cholfact!(A [,LU=:U [,pivot=Val{false}]][;tol=-1.0]) -> Cholesky
  
  ``cholfact!`` is the same as :func:`cholfact`, but saves space by 
overwriting the input ``A``, instead of creating a copy. ``cholfact!`` can 
also reuse the symbolic factorization from a different matrix ``F`` with 
the same structure when used as: ``cholfact!(F::CholmodFactor, A)``.

julia> VERSION
v"0.5.0-dev+749"


So I have a `Factor` object, but it looks like `cholfact!` wants a 
`CholmodFactor`. Is this some kind of hangover from previous development, 
or have I missed something?

Cheers

Matthew
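For what it's worth, the reuse described in that docstring does work on current Julia, where the spelling is cholesky!(F, A). A hedged sketch, assuming both matrices share a sparsity pattern (the method names on the 0.4/0.5-dev CHOLMOD wrapper differ, which is what the question above is about):

```julia
using LinearAlgebra, SparseArrays

# A sparse symmetric positive definite matrix (tridiagonal, diagonally dominant).
K = spdiagm(0 => fill(4.0, 50), 1 => fill(1.0, 49), -1 => fill(1.0, 49))

F = cholesky(K)     # symbolic + numeric factorization
K2 = K + 0.5I       # new values, same sparsity pattern
cholesky!(F, K2)    # reuse the symbolic factorization in place

b = ones(50)
@assert F \ b ≈ Matrix(K2) \ b  # F now factors K2
```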


[julia-users] Re: Cholmod Factor re-use

2015-11-24 Thread Pieterjan Robbe
is this of any help?

https://groups.google.com/forum/#!msg/julia-users/tgO3hd238Ac/olgfSJLXvzoJ


[julia-users] Linear combination of matrices using repmat: Matlab vs Julia

2015-11-24 Thread Pieterjan Robbe
Consider the problem of taking a linear combination of m (n x n)-matrices 
stored in a (n x n x m)-array A. The weights are stored in a length-m vector 
w. 
In Matlab, we can accomplish this by

n = 100;

m = 1;


A = rand(n,n,m);

y = randn(1,m);


times = zeros(1,10);


for cntr = 1:10

   tic;

   Z = sum(repmat(reshape(y,[1 1 m]),[n n 1]).*A,3);

   times(cntr) = toc;

end


sprintf('time = %f',mean(times))

This gives time = 0.3259 on my machine using R2014b. (Note: I am aware of 
the ridiculous amount of memory allocated by this solution; however, it 
turns out to be 25% faster than using bsxfun, for instance.)

A Julia-implementation that accomplishes the same thing is

function matrixLinComb(A::Array{Float64,3},y::Array{Float64,2})
  Z = reshape(sum(reshape(y,1,1,m).*A,3),n,n)
end

n = 100
m = 1

A = rand(n,n,m)
y = randn(1,m)

times = zeros(10)

for cntr = 1:10
  times[cntr] = @elapsed matrixLinComb(A,y)
end
println("time = $(mean(times))")

This gives time = 0.5130 using v0.4.0. I've tried various other 
implementations (broadcast, reshaping A and multiplying, etc.), but this 
turned out to be the fastest method. Does anyone have an idea why Matlab's 
repmat function is superior to the Julia implementation (or, as 
my colleagues call it - why Julia doesn't work)? Thanks!



[julia-users] Re: ANN: Julia 0.4.1 and 0.3.12 released

2015-11-24 Thread Tony Kelman
No compiler. Git on Windows is several hundred MB.

[julia-users] Linear combination of matrices using repmat: Matlab vs Julia

2015-11-24 Thread Tony Kelman
Are you running Matlab single-threaded? Instead of storing a 3-dimensional 
array and doing indexing and reshape, in Julia you'd be better off storing 
an array of matrices.

[julia-users] Re: CUDART and CURAND problem on running the same "do" loop twice

2015-11-24 Thread Andrei Zh
While investigating this issue we found that CUDArt and CURAND may behave 
differently on different platforms. Could somebody from user group with 
enabled CUDA try the following code in Julia REPL and report: 

 * operating system
 * whether this code succeeded or failed; if it failed, then what the 
message was. 

The code is (just copy and paste):

Pkg.add("CUDArt")  # if it's not installed yet
Pkg.add("CURAND")  # if it's not installed yet

using CUDArt
using CURAND

rng = create_generator()
d_a = curand(rng,Float64, 1000)
destroy_generator(rng)
a = to_host(d_a)
device_reset()
result = devices(dev->capability(dev)[1]>=2) do devlist
end

rng = create_generator()
d_a = curand(rng,Float64, 1000)
destroy_generator(rng)
a = to_host(d_a)
device_reset()
result = devices(dev->capability(dev)[1]>=2) do devlist
end

The goal of this experiment is to see if cuRAND can survive 
`cudaDeviceReset()`. So far it works on one Linux machine and fails on one 
Windows machine, so some statistics will be very helpful.

Thanks. 


On Friday, November 6, 2015 at 10:46:54 PM UTC+3, Joaquim Masset Lacombe 
Dias Garcia wrote:
>
> I was playing with the GPU libraries (on both Windows and Mac) and I came 
> up with the following errors:
> The first time I execute the do loop, everything goes well.
> But if I try a second time in a row, I get this invalid pointer error.
>
> I got the same error in all my do loops, any ideas?
>
> best,
> Joaquim
>
> *My code running on terminal:*
>
> julia> using CUDArt
> julia> using CURAND
> julia> result = devices(dev->capability(dev)[1]>=2) do devlist
>d_a = curand(Float32, 1000);
>a = to_host(d_a);
>end
>
> 1000-element Array{Float64,1}:
>  0.438451
>  0.460365
>  0.250215
>  0.494744
>  0.0530111
>  0.337699
>  0.396763
>  0.874419
>  0.482167
>  0.0428398
>  ⋮
>  0.563937
>  0.80706
>  0.190015
>  0.334969
>  0.622164
>  0.710596
>  0.0125895
>  0.990388
>  0.467796
>  0.24313
>
> julia> result = devices(dev->capability(dev)[1]>=2) do devlist
>d_a = curand(Float32, 1000);
>a = to_host(d_a);
>end
> WARNING: CUDA error triggered from:
>
>  in checkerror at 
> C:\Users\joaquimgarcia\.julia\v0.4\CUDArt\src\libcudart-6.5.jl
> :15
>  in copy! at C:\Users\joaquimgarcia\.julia\v0.4\CUDArt\src\arrays.jl:152
>  in to_host at C:\Users\joaquimgarcia\.julia\v0.4\CUDArt\src\arrays.jl:87
>  in anonymous at none:3
>  in devices at 
> C:\Users\joaquimgarcia\.julia\v0.4\CUDArt\src\device.jl:61ERROR:
> Launch failed, perhaps due to an invalid pointer
>  in checkdrv at C:\Users\joaquimgarcia\.julia\v0.4\CUDArt\src\module.jl:6
>  in close at C:\Users\joaquimgarcia\.julia\v0.4\CUDArt\src\device.jl:136
>  in devices at C:\Users\joaquimgarcia\.julia\v0.4\CUDArt\src\device.jl:63
>
>

[julia-users] Re: Linear combination of matrices using repmat: Matlab vs Julia

2015-11-24 Thread Pieterjan Robbe
Matlab was running multithreaded; the single-threaded version (LASTN = 
maxNumCompThreads(1)) gives time = 0.3449, still better. The Julia version 
where I store the matrices in an Array now gives time = 0.4078, better than 
the previous Julia version but still not what I would expect.


[julia-users] Re: Linear combination of matrices using repmat: Matlab vs Julia

2015-11-24 Thread Steven G. Johnson
I don't see why you find the results surprising.  This sort of vectorized 
operation is the sort of thing where Matlab does well (all the inner loops 
are C or Fortran code), and you wouldn't expect Julia code of the same 
vectorized style to be faster.  (The Julia code is a bit slower, but by a 
small margin, only about 20%, which is well within the margin of variation 
that you would expect from different implementations of similar algorithms.)

Where Matlab falls short, and where Julia can have a huge advantage, is on 
problems that don't vectorize well, so that you have to write your own 
inner loops. 

However, even on a vectorizable problem like this, you can still often do 
far better than generic vectorized code by writing your own inner loops 
that are specialized to the problem at hand.  For example, the following 
Julia implementation of your computation is (a) about 7x faster than your 
original Julia code on my machine and (b) much more generic (it handles 
arrays of any type, and the sizes are inferred from the size of the 
arguments, rather than using the global variable "n" in your original code 
— in this way, it is very different than a similar implementation in C or 
Fortran).

function matrixLinComb2{T1,T2}(A::AbstractArray{T1,3},y::AbstractArray{T2})
m,n,p = size(A)
length(y) == p || throw(ArgumentError("y should be length $p"))
C = zeros(promote_type(T1,T2), m,n)
for k = 1:p
yk = y[k]
for j = 1:n, i = 1:m
@inbounds C[i,j] += yk * A[i,j,k]
end
end
return C
end


Of course, it's also longer than the vectorized code, but that's a typical 
tradeoff of optimization.  You only write code like this for 
performance-critical functions, and then a few straightforward extra lines 
are worth it.


[julia-users] Re: Pre-ANN: PlotlyJS.jl

2015-11-24 Thread Randy Zwitch
"The same concept should work with Vega.jl in Juno."

I don't use Juno, but I'm happy to accept a pull request to Vega.jl if 
there are people who want to integrate Juno as well. The behavior from the 
REPL is to generate a standalone webpage and open a browser, and plot 
inline in Jupyter Notebook (which is what I use).

On Monday, November 23, 2015 at 11:47:38 AM UTC-5, Eric Forgy wrote:
>
> Hi everyone,
>
>
> In this post from last Wednesday, Hans-Peter pointed out that plotly.js 
> was made open source (https://plot.ly/javascript/open-source-announcement/).
>
>
> Anyway, I tried my hands at writing a wrapper for the JS API. There 
> already exists Plotly.jl, but that is an API to interact with the company's 
> server requiring login info, etc. With the opening of the JS API, the 
> interaction between Julia and Plotly can be much simpler, I think.
>
>
> In the other post, I mused that it should be straightforward to hook up 
> Plotly.js with Juno, so here is a snapshot:
>
>
>
> [snapshot image]
>
>
> To make it interactive, I used WebSockets.jl. One neat artifact of that is 
> you can stream the charts to multiple connections. 
>
>
> Here is a silly video demo of me controlling 4 browsers (only 3 are 
> visible) including an iPhone from Juno: 
> https://www.youtube.com/watch?v=mWDyyfVNqP0
>
>
> The same concept should work with Vega.jl in Juno.
>
>
> I call this a "Pre-announcement", because it is not really ready for the 
> wild yet, but it is far enough that if some friendlies would like to have a 
> look and help me get it in shape, that would be more than welcome.
>
>
> Currently, I am struggling a bit to get the paths correct. If I "git 
> clone" it and run it from the REPL, then I can make it work, but if I 
> "Pkg.clone" it, the path to the html containing the scripts is wrong. 
>
>
> Here are the lines in question:
>
>
> # res = Response(open(readall,Pkg.dir("Plotly","src","web","plotly.html")))
> res = Response(open(readall,"web/plotly.html"))
>
> When run from the REPL (after CD'ing to the directory), the code will find 
> plotly.html. But if I Pkg.clone, it can't find it. It seems to find it when 
> I use the commented Pkg.dir, but then that doesn't work when I'm trying to 
> test before committing to Github and Pkg.checkout etc. Any ideas?
>
> Anyway, this has been kind of fun and if others can help improve it, that 
> would be a great learning experience for me.
>
> Best regards,
> Eric
>


[julia-users] Re: Linear combination of matrices using repmat: Matlab vs Julia

2015-11-24 Thread Pieterjan Robbe
Thanks for the fast response, that explains a lot. I don't see the 7x 
speedup on my machine though, but probably the problem is too small to see 
significant differences.


Re: [julia-users] Re: CUDART and CURAND problem on running the same "do" loop twice

2015-11-24 Thread Tim Holy
It didn't throw any errors for me.

julia> versioninfo()
Julia Version 0.4.2-pre+1
Commit b8b3517* (2015-11-08 10:37 UTC)
Platform Info:
  System: Linux (x86_64-linux-gnu)
  CPU: Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Sandybridge)
  LAPACK: libopenblas64_
  LIBM: libopenlibm
  LLVM: libLLVM-3.3

--Tim

On Tuesday, November 24, 2015 08:17:33 AM Andrei Zh wrote:
> While investigating this issue we found that CUDArt and CURAND may behave
> differently on different platforms. Could somebody from the user group with
> CUDA enabled try the following code in the Julia REPL and report:
> 
>  * operating system
>  * whether this code succeeded or failed; if failed, then what is the
> message.
> 
> The code is (just copy and paste):
> 
> Pkg.add("CUDArt")  # if it's not installed yet
> Pkg.add("CURAND")  # if it's not installed yet
> 
> using CUDArt
> using CURAND
> 
> rng = create_generator()
> d_a = curand(rng,Float64, 1000)
> destroy_generator(rng)
> a = to_host(d_a)
> device_reset()
> result = devices(dev->capability(dev)[1]>=2) do devlist
> end
> 
> rng = create_generator()
> d_a = curand(rng,Float64, 1000)
> destroy_generator(rng)
> a = to_host(d_a)
> device_reset()
> result = devices(dev->capability(dev)[1]>=2) do devlist
> end
> 
> The goal of this experiment is to see if cuRAND can survive after
> `cudaDeviceReset()`. So far it works on one Linux machine and fails on one
> Windows machine, so some statistics will be very helpful.
> 
> Thanks.
> 
> 
> On Friday, November 6, 2015 at 10:46:54 PM UTC+3, Joaquim Masset Lacombe
> 
> Dias Garcia wrote:
> > I was playing with the GPU (in both windows and mac) libraries and I came
> > up with the following errors:
> > The first time I execute the do loop, every thing goes well.
> > But If i try a second time in a row I get this invalid pointer error.
> > 
> > I got the same error in all my do loops, any ideas?
> > 
> > best,
> > Joaquim
> > 
> > *My code running on terminal:*
> > 
> > julia> using CUDArt
> > julia> using CURAND
> > julia> result = devices(dev->capability(dev)[1]>=2) do devlist
> > 
> >d_a = curand(Float32, 1000);
> >a = to_host(d_a);
> >
> >end
> > 
> > 1000-element Array{Float64,1}:
> >  0.438451
> >  0.460365
> >  0.250215
> >  0.494744
> >  0.0530111
> >  0.337699
> >  0.396763
> >  0.874419
> >  0.482167
> >  0.0428398
> >  ⋮
> >  0.563937
> >  0.80706
> >  0.190015
> >  0.334969
> >  0.622164
> >  0.710596
> >  0.0125895
> >  0.990388
> >  0.467796
> >  0.24313
> > 
> > julia> result = devices(dev->capability(dev)[1]>=2) do devlist
> > 
> >d_a = curand(Float32, 1000);
> >a = to_host(d_a);
> >
> >end
> > 
> > WARNING: CUDA error triggered from:
> >  in checkerror at
> > 
> > C:\Users\joaquimgarcia\.julia\v0.4\CUDArt\src\libcudart-6.5.jl
> > 
> > :15
> >  in copy! at C:\Users\joaquimgarcia\.julia\v0.4\CUDArt\src\arrays.jl:152
> >  in to_host at C:\Users\joaquimgarcia\.julia\v0.4\CUDArt\src\arrays.jl:87
> >  in anonymous at none:3
> >  in devices at
> > 
> > C:\Users\joaquimgarcia\.julia\v0.4\CUDArt\src\device.jl:61
> > ERROR: Launch failed, perhaps due to an invalid pointer
> > 
> >  in checkdrv at C:\Users\joaquimgarcia\.julia\v0.4\CUDArt\src\module.jl:6
> >  in close at C:\Users\joaquimgarcia\.julia\v0.4\CUDArt\src\device.jl:136
> >  in devices at C:\Users\joaquimgarcia\.julia\v0.4\CUDArt\src\device.jl:63



[julia-users] Re: CUDART and CURAND problem on running the same "do" loop twice

2015-11-24 Thread Kristoffer Carlsson
The original code in the OP fails for me but there is no problem with the one 
you posted. Ubuntu machine.

[julia-users] Re: pmap - intermingled output from workers on v0.4

2015-11-24 Thread bernhard
In my view it is natural that the order of the "output" (print statements) 
is intermingled, as the code runs in parallel. To my knowledge this was the 
same in 0.3. Is it possible that you had no workers at all? (I.e. nprocs() 
evaluates to 1.)
Also, I cannot see any noticeable delay...
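
To check the no-workers possibility, a quick sketch (using the `Distributed` 
stdlib of current Julia; in 0.3/0.4 these functions were built into Base):

```julia
using Distributed  # stdlib in current Julia; built into Base in 0.3/0.4

# With no worker processes, pmap runs serially on the master process,
# which would make the printed output come out in order.
nprocs()    # total process count; 1 means master only
nworkers()  # worker count; also 1 when no workers have been added
```

Workers are added with `addprocs(n)` or the `-p n` command-line flag.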


On Monday, 23 November 2015 at 22:32:51 UTC+1, Greg Plowman wrote:
>
> I should add this problem is only when using *remote* workers. (In my 
> case ssh on Windows).
>
> The following code produces intermingled output with multiple workers on 
> multiple machines (Julia v0.4)
> Output is orderly when using Julia v0.3, or with v0.4 when workers are on 
> local machine only.
>
>
> function Launch()
> @everywhere function sim(trial, numIterations)
> println("Starting trial $trial")
> s = 0.0
> for i = 1:numIterations
> s += sum(sqrt(rand(10^6)))
> end
> println("Finished trial $trial")
> s
> end
> 
> numTrials = 100
> numIterations = 100
> println("Running random simulation: $numTrials trials of 
> $numIterations iterations ... ")
> results = pmap(sim, 1:numTrials, fill(numIterations, numTrials))
> end 
>
>
>

Re: [julia-users] Re: Linear combination of matrices using repmat: Matlab vs Julia

2015-11-24 Thread Tim Holy
Did you run it twice, so you're not including JIT-compilation?

--Tim

On Tuesday, November 24, 2015 09:01:32 AM Pieterjan Robbe wrote:
> Thanks for the fast response, that explains a lot. I don't see the 7x
> speedup on my machine though, but probably the problem is too small to see
> significant differences.



Re: [julia-users] Re: Linear combination of matrices using repmat: Matlab vs Julia

2015-11-24 Thread Pieterjan Robbe
Of course :) I also increased the number of experiments (100 instead of 10) 
and discarded the first entry of the result. When using Steven's function, 
the results are more or less comparable.

          mean    min     max
Matlab    0.3955  0.3470  0.4978
Julia     0.4669  0.3109  0.5757

I'm happy with the results now.



[julia-users] Re: Problem with "using PyPlot"

2015-11-24 Thread Ilya Orson
I am on Ubuntu 14.04 and using the Julia PPA. 

The problem discussed in the issue is indeed the same one I have. 

A solution that worked for me was to uninstall anaconda, install everything 
through the pip package manager, and then set Julia's ENV["PYTHON"] to 
point at that python path.
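
The switch described above amounts to the following (a sketch; the path is 
only an example for a system python):

```julia
# Point PyCall at a specific python before (re)building it; the path below
# is an example and should be replaced with the output of `which python`.
ENV["PYTHON"] = "/usr/bin/python"
# Pkg.build("PyCall")  # then rebuild so PyCall picks up the new setting
```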

Thanks!


[julia-users] Metaprogramming and function scope

2015-11-24 Thread Pieterjan Robbe
Why can't I parse a function evaluation of a function defined within the 
scope of another function?
i.e., the following terminates with an UndefVarError: bar not defined:

function foo()
  function bar()
    x
  end
  return eval(parse("bar()"))
end

x = 7
foo()

However, I can do

function foo()
  function bar()
    x
  end
  return bar()
end

x = 7
foo()

and

function foo()
  return eval(parse("bar()"))
end

function bar()
  x
end

x = 7
foo()

Many thanks!
-Pieterjan


[julia-users] Re: Linear combination of matrices using repmat: Matlab vs Julia

2015-11-24 Thread Steven G. Johnson


On Tuesday, November 24, 2015 at 12:01:32 PM UTC-5, Pieterjan Robbe wrote:
>
> Thanks for the fast response, that explains a lot. I don't see the 7x 
> speedup on my machine though, but probably the problem is too small to see 
> significant differences.
>

That's odd, I'm benchmarking the exact same size you are, and the 
difference is very clear:

function matrixLinComb(A::Array{Float64,3},y::Array{Float64,2})
  Z = reshape(sum(reshape(y,1,1,m).*A,3),n,n)
end

function matrixLinComb2{T1,T2}(A::Array{T1,3},y::Array{T2})
m,n,p = size(A)
length(y) == p || throw(ArgumentError("y should be length $p"))
C = zeros(promote_type(T1,T2), m,n)
for k = 1:p
yk = y[k]
for j = 1:n, i = 1:m
@inbounds C[i,j] += yk * A[i,j,k]
end
end
return C
end

n = 100
m = 1

A = rand(n,n,m)
y = randn(1,m)

println("matrixLinComb: ", minimum([@elapsed(matrixLinComb(A,y)) for i = 
1:10]))
println("matrixLinComb2: ", minimum([@elapsed(matrixLinComb2(A,y)) for i = 
1:10])) 


gives

matrixLinComb: 0.633029344

matrixLinComb2: 0.084245998


Note that I'm using the minimum time here, not the mean, so it 
automatically ignores JIT times and other one-time spikes.
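
The minimum-of-several-runs pattern generalizes into a small helper 
(editor's sketch; `besttime` is a made-up name, not part of any package):

```julia
# Report the minimum elapsed time over n runs of f(); the first run pays
# JIT compilation, and taking the minimum discards that outlier.
besttime(f, n=10) = minimum([@elapsed f() for _ in 1:n])

A = rand(100, 100)
t = besttime(() -> A * A)   # seconds for the fastest of 10 runs
```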


[julia-users] Re: Linear combination of matrices using repmat: Matlab vs Julia

2015-11-24 Thread Steven G. Johnson
(Note that I'm using Julia 0.4 on a 2012 Mac.)


[julia-users] Re: Linear combination of matrices using repmat: Matlab vs Julia

2015-11-24 Thread Pieterjan Robbe
Oops looks like I was timing the wrong thing :) Sorry!
My results are similar now (on a 2014 Mac, Julia 0.4): 
time = 0.5201348707

time = 0.0634930367
Thanks a lot!


On Tuesday, 24 November 2015 at 19:20:15 UTC+1, Steven G. Johnson wrote:
>
> (Note that I'm using Julia 0.4 on a 2012 Mac.)
>


Re: [julia-users] Metaprogramming and function scope

2015-11-24 Thread Isaiah Norton
`eval` operates in global scope only:
http://docs.julialang.org/en/release-0.4/manual/metaprogramming/#eval-and-effects
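
A minimal illustration (in current Julia syntax, where the 0.4-era 
`parse(str)` is written `Meta.parse`):

```julia
x = 7

function foo()
    y = 10                       # local variable: invisible to eval
    eval(Meta.parse("x + 1"))    # evaluated in global scope, so it sees x
end

foo()  # returns 8; eval(Meta.parse("y")) here would throw UndefVarError
```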

On Tue, Nov 24, 2015 at 1:12 PM, Pieterjan Robbe 
wrote:

> Why can't I parse a function evaluation of a function defined within the
> scope of another function?
> i.e., the following terminates with an UndefVarError: bar not defined:
>
> function foo()
>   function bar()
>     x
>   end
>   return eval(parse("bar()"))
> end
>
> x = 7
> foo()
>
> However, I can do
>
> function foo()
>   function bar()
>     x
>   end
>   return bar()
> end
>
> x = 7
> foo()
>
> and
>
> function foo()
>   return eval(parse("bar()"))
> end
>
> function bar()
>   x
> end
>
> x = 7
> foo()
>
> Many thanks!
> -Pieterjan
>


[julia-users] Re: still no luck with pyjulia

2015-11-24 Thread Neal Becker
OK, I found some info here:
https://github.com/JuliaLang/pyjulia/issues/38

I tried the code suggested at the end, from ronisbr

I modified it to say:

julia = PyDLL('/usr/lib64/julia/libjulia.so', RTLD_GLOBAL)


I'm using the standard fedora 23 julia package.
rpm -qf /usr/lib64/julia/libjulia.so
julia-0.4.1-1.fc23.x86_64

Attempting to execute this on python2 gives:

[nbecker@nbecker7 pyjulia]$ python2 test1.py

signal (11): Segmentation fault
unknown function (ip: 0x7f4e710c05ef)
jl_apply_generic at /usr/lib64/julia/libjulia.so (unknown line)
jl_call2 at /usr/lib64/julia/libjulia.so (unknown line)
ffi_call_unix64 at /lib64/libffi.so.6 (unknown line)
ffi_call at /lib64/libffi.so.6 (unknown line)
_ctypes_callproc at /usr/lib64/python2.7/lib-dynload/_ctypes.so (unknown 
line)
unknown function (ip: 0x7f4e722c29d4)
PyObject_Call at /lib64/libpython2.7.so.1.0 (unknown line)
PyEval_EvalFrameEx at /lib64/libpython2.7.so.1.0 (unknown line)
PyEval_EvalCodeEx at /lib64/libpython2.7.so.1.0 (unknown line)
PyEval_EvalCode at /lib64/libpython2.7.so.1.0 (unknown line)
unknown function (ip: 0x7f4e79e12bdf)
PyRun_FileExFlags at /lib64/libpython2.7.so.1.0 (unknown line)
PyRun_SimpleFileExFlags at /lib64/libpython2.7.so.1.0 (unknown line)
Py_Main at /lib64/libpython2.7.so.1.0 (unknown line)
__libc_start_main at /lib64/libc.so.6 (unknown line)
_start at python2 (unknown line)
unknown function (ip: (nil))
Segmentation fault (core dumped)


Neal Becker wrote:

> Just updated to julia-0.4.1 on fedora 23.
> Updated PyCall
> julia> Pkg.build("PyCall")
> INFO: Building PyCall
> INFO: PyCall is using /usr/bin/python (Python 2.7.10) at /usr/bin/python,
> libpython = /usr/lib64/libpython2.7.so
> 
> Just for good measure, re-ran python setup.py install --user on pyjulia
> after git pull (but it was already up-to-date).
> 
> j = julia.Julia()
> ERROR: UndefVarError: dlpath not defined
> Traceback (most recent call last):
>   File "", line 1, in 
>   File "julia/core.py", line 244, in __init__
> raise JuliaError('error starting up the Julia process')
> julia.core.JuliaError: error starting up the Julia process
> 
> which julia
> /usr/bin/julia
> 
> Any ideas?
> 
> I've been patiently waiting to explore julia (coming from python).  For
> me, the only reasonable path is to incrementally add julia to my python,
> so I need pyjulia to work.




[julia-users] Re: ANN: Julia 0.4.1 and 0.3.12 released

2015-11-24 Thread Páll Haraldsson
On Tuesday, November 24, 2015 at 3:53:06 PM UTC, Tony Kelman wrote:
>
> No compiler. Git on Windows is several hundred MB.


Wow, thanks for answering. Are you sure? If it's ok to ask in this thread..:

["Add remove programs" shows Julia at 300 KB.. while in reality, Julia is 
460-470 MB.. at almost 6.300 files..]

I just checked here:

https://git-scm.com/downloads

and git seems about 28 MB for the Windows version. That is probably 
compressed (self-extracting archive), but I would be amazed if it would 
decompress to 400 MB, and I see "only" actually 180-190 MB, any Idea why so 
much larger than on Linux? On my Ubuntu Linux git package seems to be (not 
compressed) 21.3 MB (not sure maybe it just has dependencies.. that have 
more..).

I'm guessing git is just one of the dependencies on Windows (as on Linux), 
and that the as with "Generic Linux binaries", that is about the same size, 
all the dependencies must be provided (then the compression is rather 
good..). [I tried to reinstall Julia, to see if something would get 
downloaded. Unless Windows has some cache (but I guess not, nothing similar 
to a package manager yet..?), everything seems included.]

I assume then that the .deb (or whatever else on Linux) is much smaller 
(only with pointers to dependencies).


I was thinking if I could distribute a Julia app that starts with 
downloading (or bundled with) the Julia runtime (the Windows one if on that 
platform, see another thread I made..) and half a gig seems kind of much to 
install (or download in compressed form), even excluding the app itself..

I know about the build_executable.jl (that I didn't get running..). I 
assume that would trim this down by a lot, say for sure exclude git (seems 
only relevant for developers)?

Could you get by something close to like 191376 bytes for /usr/bin/julia 
+ 24197448 bytes for libjulia.so and not much else (except for the parts of 
the Julia standard library (source code) that you actually use, say you are 
not using BLAS etc.)?



[julia-users] Re: ANN: Julia 0.4.1 and 0.3.12 released

2015-11-24 Thread Tony Kelman
Asking questions is fine but please make an effort to be concise, and 
reduce your word count when you do so.

Parts of Git are implemented in shell, perl, and tcl/tk, in addition to C. 
These (mostly the shell script parts) require a posix environment to run on 
Windows, which Git 2.x now accomplishes by including a forked copy of an 
MSYS2 environment - a modified version of Cygwin. There are also many 
different sub-commands in git which on unix are generally just differently 
named symbolic links to the same executable, but Windows does not support 
symbolic links to files without changing a default security setting so in 
Git-for-Windows the subcommands have to be copies of the executable.

The generic Linux binaries actually do not include Git, you have to 
manually install Git through your distribution's package manager before you 
can use Julia's Pkg.


On Tuesday, November 24, 2015 at 11:20:15 AM UTC-8, Páll Haraldsson wrote:
>
> On Tuesday, November 24, 2015 at 3:53:06 PM UTC, Tony Kelman wrote:
>>
>> No compiler. Git on Windows is several hundred MB.
>
>
> Wow, thanks for answering. Are you sure? If it's ok to ask in this 
> thread..:
>
> ["Add remove programs" shows Julia at 300 KB.. while in reality, Julia is 
> 460-470 MB.. at almost 6.300 files..]
>
> I just checked here:
>
> https://git-scm.com/downloads
>
> and git seems about 28 MB for the Windows version. That is probably 
> compressed (self-extracting archive), but I would be amazed if it would 
> decompress to 400 MB, and I see "only" actually 180-190 MB, any Idea why so 
> much larger than on Linux? On my Ubuntu Linux git package seems to be (not 
> compressed) 21.3 MB (not sure maybe it just has dependencies.. that have 
> more..).
>
> I'm guessing git is just one of the dependencies on Windows (as on Linux), 
> and that the as with "Generic Linux binaries", that is about the same size, 
> all the dependencies must be provided (then the compression is rather 
> good..). [I tried to reinstall Julia, to see if something would get 
> downloaded. Unless Windows has some cache (but I guess not, nothing similar 
> to a package manager yet..?), everything seems included.]
>
> I assume then that the .deb (or whatever else on Linux) is much smaller 
> (only with pointers to dependencies).
>
>
> I was thinking if I could distribute a Julia app that starts with 
> downloading (or bundled with) the Julia runtime (the Windows one if on that 
> platform, see another thread I made..) and half a gig seems kind of much to 
> install (or download in compressed form), even excluding the app itself..
>
> I know about the build_executable.jl (that I didn't get running..). I 
> assume that would trim this down by a lot, say for sure exclude git (seems 
> only relevant for developers)?
>
> Could you get by something close to like 191376 bytes for /usr/bin/julia 
> + 24197448 bytes for libjulia.so and not much else (except for the parts of 
> the Julia standard library (source code) that you actually use, say you are 
> not using BLAS etc.)?
>
>

Re: [julia-users] Metaprogramming and function scope

2015-11-24 Thread Pieterjan Robbe
That makes sense :) Is there a workaround? I need to define some (global 
constant) variables, Z1, Z2, Z3 etc. (that's where the parsing comes from) by 
calling a function (bar) that does something with data defined in foo(). I'd 
like to keep this inside a single function, since it's the initialization step 
of a more complex simulation.
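
One possible workaround (an editor's sketch, not from the thread): skip the 
string parsing entirely and have the initialization function assign the 
globals itself with `@eval`, which always targets module scope:

```julia
# init stands in for the initialization function: it computes values from
# its local data and binds them to globals Z1, Z2, Z3, ... at module level.
function init(data)
    for i in eachindex(data)
        @eval global $(Symbol("Z", i)) = $(data[i]^2)
    end
end

init([1.0, 2.0, 3.0])
# afterwards Z1 == 1.0, Z2 == 4.0, Z3 == 9.0 at module scope
```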

[julia-users] Precompilation and functions with keyword arguments

2015-11-24 Thread Tim Holy
I've been experimenting further with SnoopCompile and Immerse/Gadfly, trying to 
shave off more of the time-to-first-plot. If I pull out all the stops (using the 
"userimg.jl" strategy), I can get it down to about 2.5 seconds. However, doing 
the same plot a second time is about 0.02s. This indicates that despite my 
efforts, there's still a lot that's not being cached.

About 0.3s of that total (not much, but it's all I have data on) can be 
observed via snooping as re-compilation of functions that you might imagine 
should have been precompiled. The biggest offenders are all functions with 
keyword arguments. In miniature, I think you can see this here:

julia> function foo(X; thin=true)
   svdfact(X)
   end
foo (generic function with 1 method)

# Before compiling this, let's make sure the work compiling foo isn't hidden
# by other compilation needs:
julia> A = rand(3,3)
3x3 Array{Float64,2}:
 0.57078    0.33557   0.56497  
 0.0679035  0.944406  0.816098 
 0.0922775  0.404697  0.0900726

julia> svdfact(A)
Base.LinAlg.SVD{Float64,Float64,Array{Float64,2}}(3x3 Array{Float64,2}:
 -0.507226   0.861331   0.0288001
 -0.825227  -0.475789  -0.304344 
 -0.248438  -0.178138   0.952127 ,
[1.4844598265207638,0.5068781079827415,0.19995120630810712],3x3 
Array{Float64,2}:
 -0.248222  -0.707397  -0.661797
  0.873751  -0.458478   0.162348
  0.418264   0.537948  -0.731893)

# OK, let's precompile foo
julia> @time precompile(foo, (Matrix{Float64},))
  0.000469 seconds (541 allocations: 35.650 KB)

julia> @time foo(A)
  0.001174 seconds (18 allocations: 3.063 KB)
Base.LinAlg.SVD{Float64,Float64,Array{Float64,2}}(3x3 Array{Float64,2}:
 -0.507226   0.861331   0.0288001
 -0.825227  -0.475789  -0.304344 
 -0.248438  -0.178138   0.952127 ,
[1.4844598265207638,0.5068781079827415,0.19995120630810712],3x3 
Array{Float64,2}:
 -0.248222  -0.707397  -0.661797
  0.873751  -0.458478   0.162348
  0.418264   0.537948  -0.731893)

# Note the 2nd call is 10x faster, despite precompilation
julia> @time foo(A)
  0.000164 seconds (18 allocations: 3.063 KB)
Base.LinAlg.SVD{Float64,Float64,Array{Float64,2}}(3x3 Array{Float64,2}:
 -0.507226   0.861331   0.0288001
 -0.825227  -0.475789  -0.304344 
 -0.248438  -0.178138   0.952127 ,
[1.4844598265207638,0.5068781079827415,0.19995120630810712],3x3 
Array{Float64,2}:
 -0.248222  -0.707397  -0.661797
  0.873751  -0.458478   0.162348
  0.418264   0.537948  -0.731893)

# Note adding a keyword argument to the call causes a further 10x slowdown...
julia> @time foo(A; thin=true)
  0.014787 seconds (3.36 k allocations: 166.622 KB)
Base.LinAlg.SVD{Float64,Float64,Array{Float64,2}}(3x3 Array{Float64,2}:
 -0.507226   0.861331   0.0288001
 -0.825227  -0.475789  -0.304344 
 -0.248438  -0.178138   0.952127 ,
[1.4844598265207638,0.5068781079827415,0.19995120630810712],3x3 
Array{Float64,2}:
 -0.248222  -0.707397  -0.661797
  0.873751  -0.458478   0.162348
  0.418264   0.537948  -0.731893)

# ...but only for the first call
julia> @time foo(A; thin=true)
  0.000209 seconds (19 allocations: 3.141 KB)
Base.LinAlg.SVD{Float64,Float64,Array{Float64,2}}(3x3 Array{Float64,2}:
 -0.507226   0.861331   0.0288001
 -0.825227  -0.475789  -0.304344 
 -0.248438  -0.178138   0.952127 ,
[1.4844598265207638,0.5068781079827415,0.19995120630810712],3x3 
Array{Float64,2}:
 -0.248222  -0.707397  -0.661797
  0.873751  -0.458478   0.162348
  0.418264   0.537948  -0.731893)


Obviously the times here don't add up to much, but for a project the size of 
Gadfly it might matter.

I should add that 
precompile(foo, (Vector{Any}, Matrix{Float64}))
doesn't seem to do anything useful.

Any ideas?
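
The keyword-argument effect can be reproduced in miniature (sketch; this 
`foo` is a stand-in, not the SVD wrapper above):

```julia
foo(X; thin=true) = thin ? sum(X) : X

A = rand(3, 3)
t1 = @elapsed foo(A; thin=true)  # first keyword call: compiles the kwarg machinery
t2 = @elapsed foo(A; thin=true)  # already compiled: typically far faster
```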

Best,
--Tim


[julia-users] Memory-efficient / in-place allocation of array subsets?

2015-11-24 Thread Gord Stephen
Hi all,

With a function like this...

function demo()
   A = ones(350,350,25000)
   return A[1:175, 1:175, :],  A[1:175, 176:end, :]
end

is there a memory-efficient (in-place I guess) way to store the return 
values while having A deallocated, if I can fit A in memory, but not both A 
and the return values?

Using subarrays achieves the desired functionality, but all of A needs to 
be kept in memory:

function demo2()
   A = ones(350,350,25000)
   return slice(A, 1:175, 1:175, :),  slice(A, 1:175, 176:350, :)
end
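
The copy/no-copy distinction can be seen directly (a sketch in current 
Julia, where 0.4's `slice` has become `view`; scaled down so it runs fast):

```julia
A = ones(10, 10, 100)

half_copy = A[1:5, 1:5, :]         # indexing allocates an independent array
half_view = view(A, 1:5, 1:5, :)   # a view only references A's memory

sizeof(half_copy)     # 20000 bytes: 5*5*100 Float64s of its own storage
half_view[1, 1, 1] = 0.0           # writes through the view...
A[1, 1, 1]            # ...so this element of A is now 0.0
```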

Thanks,
Gord


[julia-users] Re: Google releases TensorFlow as open source

2015-11-24 Thread Marcin Elantkowski
Hi, 

I'm just learning Julia, but so far it looks amazing. What I'm missing the 
most, though, is *Theano*. 

Theano provides an extremely flexible symbolic API, much more mature than 
*mxnet* (e.g.: *for* loops, *IfElse*). TensorFlow seems to replicate that, 
but it remains to be seen how it compares to other frameworks. 

Since there is not much info on the web, I'd like to ask you: 

   - How hard do you think it would be to port Theano to Julia? 
   - Or maybe you guys know of anyone working on doing that already?


On Monday, 9 November 2015 at 22:02:36 UTC+1, Phil Tomson wrote:
>
> Google has released it's deep learning library called TensorFlow as open 
> source code:
>
> https://github.com/tensorflow/tensorflow
>
> They include Python bindings, Any ideas about how easy/difficult it would 
> be to create Julia bindings?
>
> Phil
>


[julia-users] Breakpoint, stop(),pause()

2015-11-24 Thread digxx
Is there something like a breakpoint, stop, or pause function?
I have looked for either of them but couldn't find it...
I don't want to run the entire script, but stop in between to look at some 
temporary result.
Is that somehow possible?


Re: [julia-users] Re: ANN: Julia 0.4.1 and 0.3.12 released

2015-11-24 Thread Isaiah Norton
>
> ["Add remove programs" shows Julia at 300 KB.. while in reality, Julia is
> 460-470 MB.. at almost 6.300 files..]


IIRC, this number is reported by the installer, and we probably don't
explicitly report it (or just report some default).
If anyone wants to have a go at NSIS Script to fix this, PRs would be
welcome.


On Tue, Nov 24, 2015 at 2:20 PM, Páll Haraldsson 
wrote:

> On Tuesday, November 24, 2015 at 3:53:06 PM UTC, Tony Kelman wrote:
>>
>> No compiler. Git on Windows is several hundred MB.
>
>
> Wow, thanks for answering. Are you sure? If it's ok to ask in this
> thread..:
>
> ["Add remove programs" shows Julia at 300 KB.. while in reality, Julia is
> 460-470 MB.. at almost 6.300 files..]
>
> I just checked here:
>
> https://git-scm.com/downloads
>
> and git seems about 28 MB for the Windows version. That is probably
> compressed (self-extracting archive), but I would be amazed if it would
> decompress to 400 MB, and I see "only" actually 180-190 MB, any Idea why so
> much larger than on Linux? On my Ubuntu Linux git package seems to be (not
> compressed) 21.3 MB (not sure maybe it just has dependencies.. that have
> more..).
>
> I'm guessing git is just one of the dependencies on Windows (as on Linux),
> and that the as with "Generic Linux binaries", that is about the same size,
> all the dependencies must be provided (then the compression is rather
> good..). [I tried to reinstall Julia, to see if something would get
> downloaded. Unless Windows has some cache (but I guess not, nothing similar
> to a package manager yet..?), everything seems included.]
>
> I assume then that the .deb (or whatever else on Linux) is much smaller
> (only with pointers to dependencies).
>
>
> I was thinking if I could distribute a Julia app that starts with
> downloading (or bundled with) the Julia runtime (the Windows one if on that
> platform, see another thread I made..) and half a gig seems kind of much to
> install (or download in compressed form), even excluding the app itself..
>
> I know about the build_executable.jl (that I didn't get running..). I
> assume that would trim this down by a lot, say for sure exclude git (seems
> only relevant for developers)?
>
> Could you get by something close to like 191376 bytes for /usr/bin/julia
> + 24197448 bytes for libjulia.so and not much else (except for the parts of
> the Julia standard library (source code) that you actually use, say you are
> not using BLAS etc.)?
>
>


Re: [julia-users] Breakpoint, stop(),pause()

2015-11-24 Thread Isaiah Norton
Some functionality is available here:
https://github.com/toivoh/Debug.jl

A generally-usable debugger is still work-in-progress.

On Tue, Nov 24, 2015 at 3:07 PM, digxx  wrote:

> Is there sth like a breakpoint?stop or pause function?
> I have looked for either of it but couldnt find it...
> I dont want to run the entire script but stop in between to look at some
> temporary result?
> Is that somehow possible?
>


[julia-users] Re: Google releases TensorFlow as open source

2015-11-24 Thread datnamer
Question: why do you need Theano?

Aside from the benefits of symbolic graph optimization, what does Theano 
provide that Julia doesn't do? With Julia you can write normal imperative 
code that is easier to read/write than theano, and then do autodiff on that.

On Tuesday, November 24, 2015 at 2:43:46 PM UTC-5, Marcin Elantkowski wrote:
>
> Hi, 
>
> I'm just learning Julia, but so far it looks amazing. What I'm missing the 
> most, though, is *Theano*. 
>
> Theano provides an extremely flexible symbolic API, much more mature than 
> *mxnet* (e.g.: *for* loops, *IfElse*). TensorFlow seems to replicate 
> that, but it remains to be seen how it compares to other frameworks. 
>
> Since there is not much info on the web, I'd like to ask You: 
>
>- How hard do you think would it be to port theano to Julia? 
>- Or maybe you guys know of anyone working on doing that already?
>
>
> On Monday, 9 November 2015 at 22:02:36 UTC+1, Phil Tomson wrote:
>>
>> Google has released it's deep learning library called TensorFlow as open 
>> source code:
>>
>> https://github.com/tensorflow/tensorflow
>>
>> They include Python bindings, Any ideas about how easy/difficult it would 
>> be to create Julia bindings?
>>
>> Phil
>>
>

Re: [julia-users] funky ccall macro

2015-11-24 Thread Dömötör Gulyás
in case anyone is left wondering... I've found a hackish way to brute force
this, essentially summarized in
https://github.com/one-more-minute/ObjectiveC.jl/issues/8

Maybe a macro guru can find a still better way, one that doesn't require
re-parsing stringified variables :)
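
For reference, the run-time half of the problem is trivial (sketch in 
current Julia, where the thread's `Ptr{Void}` is spelled `Ptr{Cvoid}`); the 
hard part discussed in the issue is splicing the result into `ccall` as a 
syntactic literal at macro-expansion time:

```julia
U = (Ptr{UInt8},)

# Plain splatting concatenates the type tuples at run time...
argtypes = (Ptr{Cvoid}, Ptr{Cvoid}, U...)
argtypes == (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{UInt8})  # true

# ...but ccall needs this tuple written out as a literal expression, which
# is why the macro has to perform the concatenation before expansion.
```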

On 24 November 2015 at 01:34, Dömötör Gulyás  wrote:

> So apparently, this: https://github.com/JuliaLang/julia/issues/14110 is
> not a bug. Which leaves me with the question, how to accomplish
> concatenating the argument types tuple for ccall()? There doesn't seem to
> be a way to accomplish the concatenation (without also evaluating the type
> parameters) in ways that result in a valid argument type list for ccall().
> The idea is to not have to eval() at runtime for every ccall(), as
> ObjectiveC.jl currently does.
>
> A simplified core part of the above:
>
> julia> U = (Ptr{UInt8},)
>
> (Ptr{UInt8},)
>
>
> julia> :((Ptr{Void},Ptr{Void})..., $U...)
>
> :(((Ptr{Void},Ptr{Void})...,(Ptr{UInt8},)...))
>
>
> julia> Meta.show_sexpr(:((Ptr{Void},Ptr{Void})..., $U...))
>
> (:tuple, (:..., (:tuple, (:curly, :Ptr, :Void), (:curly, :Ptr, :Void))),
> (:..., (Ptr{UInt8},)))
>
>
> The suggestion was to not evaluate in the macro, which still leaves the
> unexpanded (Ptr{UInt8},) in there, and ccall() complains about the types.
> With some mucking about of objc_msgSendTupleConcat sometimes that shows up
> as (:curly, :Ptr, :UInt8) but only in ways that fail at the ccall or
> julia trying to evaluate the tuple concatenation.
>
> Is there another way of doing this? I wasn't able to get the intended
> behavior with a templated function, either, but it feels like it should be
> possible.
>


[julia-users] Re: What is the best way to delete unwanted subgraph?

2015-11-24 Thread Aleksandr Mikheev
It works! Thank you.


[julia-users] Re: Linear combination of matrices using repmat: Matlab vs Julia

2015-11-24 Thread Jutho
I don't see why in Julia or Matlab you would want to use repmat or 
broadcasting. For me, the following simple line of code:

m,n,p = size(A)
reshape(reshape(A,m*n,p)*y,(m,n))

accomplishes your task, and it has about the same speed as matrixLinComb2. It 
expresses the task as a single matrix multiplication, so it can use BLAS, vectorisation, etc. 
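As a self-contained check of the reshape trick (toy dimensions and variable names chosen for illustration): folding the slices into columns turns the slice-weighted sum into a single matrix-vector product.

```julia
# Linear combination of the p slices of A with weights y, written as one
# matrix multiplication so BLAS can do the work.
function lincomb(A, y)
    m, n, p = size(A)
    reshape(reshape(A, m*n, p) * y, (m, n))
end

A = rand(4, 3, 5)          # m x n x p stack of matrices
y = rand(5)                # combination weights
B = lincomb(A, y)

# Reference: the naive accumulation loop over slices.
C = zeros(4, 3)
for k in 1:5
    C .+= A[:, :, k] .* y[k]
end
@assert maximum(abs.(B .- C)) < 1e-10
```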

[julia-users] Proposal: NoveltyColors.jl

2015-11-24 Thread Randy Zwitch
Since the Julia ecosystem is getting bigger, I figured I'd propose this 
here first and see what people think is the right way forward (instead of 
wasting people's time at METADATA)

In the R community, they've created two packages of novelty color schemes: Wes 
Anderson  and Beyonce 
. While humorous, these color palettes are 
interesting to me and I'd like to make them available in Vega.jl (and Julia 
more broadly). Should I:

1) Not do it at all, because this is a serious, scientific community!
2) Do two separate packages, mimicking R
3) Create a single NoveltyColors.jl package, in case there are other 
palettes that come up in the future
4) Make a feature request at Colors.jl (really not my favorite choice, 
since there is so much cited research behind the palettes)

I neglected to mention ColorBrewer.jl (which Vega.jl uses), since 
ColorBrewer is a known entity in the plotting community.

What do people think? Note, I'm not looking for anyone to do the work (I'll 
do it), just looking for packaging input.


Re: [julia-users] funky ccall macro

2015-11-24 Thread Jameson
There's a bit of a mix in what you are doing, which is where it is getting 
confusing. In general, you want to make sure all of the evaluation work is 
happening at runtime and you only work with quoted expressions during macro 
evaluation. This takes a bit of practice to get used to, since you have to 
keep track of when you are intending each expression to be evaluated (and 
in what scope). (ccall argument type expansion is a little bit too strict 
about this -- most other places will accept a value spliced into the AST). 
In the issue linked below, you had P as a tuple, but have U as an 
expression.

In this case, I think you want to check that `U.head === :tuple`, then call 
`prepend!(U.args, [Ptr{Void},Ptr{Void}]); ccallTypes = U`
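Applied to the quoted example, that check might look like the following sketch (pure Expr manipulation; nothing is evaluated, and the variable names are just for illustration):

```julia
# U as it arrives in the macro: a quoted tuple of types, i.e. an
# Expr(:tuple, ...), not a runtime tuple of types.
U = :((Ptr{UInt8},))
@assert U.head === :tuple

# Splice the fixed prefix in front of the user-supplied types by editing
# the AST directly; a ccall can then consume the resulting tuple expression.
prepend!(U.args, [:(Ptr{Void}), :(Ptr{Void})])

# U is now structurally equal to :((Ptr{Void}, Ptr{Void}, Ptr{UInt8}))
@assert length(U.args) == 3
```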



On Tuesday, November 24, 2015 at 3:39:42 PM UTC-5, Dömötör Gulyás wrote:
>
> in case anyone is left wondering... I've found a hackish way to brute 
> force this, essentially summarized in 
> https://github.com/one-more-minute/ObjectiveC.jl/issues/8
>
> Maybe a macro guru can find a still better way, one that doesn't require 
> re-parsing stringified variables :)
>
> On 24 November 2015 at 01:34, Dömötör Gulyás  wrote:
>
>> So apparently, this: https://github.com/JuliaLang/julia/issues/14110 is 
>> not a bug. Which leaves me with the question, how to accomplish 
>> concatenating the argument types tuple for ccall()? There doesn't seem to 
>> be a way to accomplish the concatenation (without also evaluating the type 
>> parameters) in ways that result in a valid argument type list for ccall(). 
>> The idea is to not have to eval() at runtime for every ccall(), as 
>> ObjectiveC.jl currently does.
>>
>> A simplified core part of the above:
>>
>> julia> U = (Ptr{UInt8},)
>>
>> (Ptr{UInt8},)
>>
>>
>> julia> :((Ptr{Void},Ptr{Void})..., $U...)
>>
>> :(((Ptr{Void},Ptr{Void})...,(Ptr{UInt8},)...))
>>
>>
>> julia> Meta.show_sexpr(:((Ptr{Void},Ptr{Void})..., $U...))
>>
>> (:tuple, (:..., (:tuple, (:curly, :Ptr, :Void), (:curly, :Ptr, :Void))), 
>> (:..., (Ptr{UInt8},)))
>>
>>
>> The suggestion is not evaluate the macro, which still leaves the 
>> unexpanded (Ptr{UInt8},) in there, and ccall() complains about the 
>> types. With some mucking about of objc_msgSendTupleConcat sometimes that 
>> shows up as (:curly, :Ptr, :UInt8) but only in ways that fail at the 
>> ccall or julia trying to evaluate the tuple concatenation.
>>
>> Is there another way of doing this? I wasn't able to get the intended 
>> behavior with a templated function, either, but it feels like it should be 
>> possible.
>>
>
>

[julia-users] Re: Linear combination of matrices using repmat: Matlab vs Julia

2015-11-24 Thread Sisyphuss

On Tuesday, November 24, 2015 at 4:56:26 PM UTC+1, Tony Kelman wrote:
>
> Are you running matlab single threaded? Instead of storing a 3 dimensional 
> array and doing indexing and reshape, in Julia you'd be better off storing 
> an array of matrices.


I just realized this two weeks ago. 


[julia-users] Re: Linear combination of matrices using repmat: Matlab vs Julia

2015-11-24 Thread Pieterjan Robbe
Oh genius ;) why didn't I come up with that myself? Thanks a lot!

[julia-users] Can't understand what did I break.

2015-11-24 Thread Aleksandr Mikheev
So some time ago I wrote a little piece of code and it worked fine. Then I did some 
experiments, but I changed everything back afterwards. However, it seems 
that something is broken in my code. I have spent at least an hour trying to 
figure out what's going on, but still can't. Can someone help me? Here is 
my code:

module Geometry
> export GeometryFunc
> type pores{Float64}
> R::Float64
> coords::Vector{Float64}
> end
> function GeometryFunc(L,phi,distribution)
>   V_avg = (4*pi/3)*sum((distribution[1,:].^3).*distribution[2,:])
>   N = round(Int,(L^3*log(1/(1-phi)))/V_avg)
>   lattice = Vector{pores}(N)
>   R_max = maximum(distribution[1,:])
>   for j=1:1:N
>   lattice[j]=pores(0.0,[0.0,0.0,0.0])
>   end
> for i=1:1:N
> lattice[i].coords = rand(3)*L
> Q = rand()
> if Q < distribution[2,1]
> lattice[i].R = distribution[1,1]
> else
> tempsum1 = 0
> tempsum2 = distribution[2,1]
> for k = 2:1:length(distribution[1,:])
> tempsum1 = tempsum1 + distribution[2,k-1]
> tempsum2 = tempsum2 + distribution[2,k]
> if Q < tempsum2 && Q > tempsum1
> lattice[i].R = distribution[1,k]
> end
> end
> end
> end
> return lattice, R_max, N
> end
> end


And here is how I usually call it:

lattice=Geometry.GeometryFunc(100,0.3,[0.95 0.97 0.99 1.01 1.03 1.05; 0.07 
> 0.10 0.25 0.40 0.10 0.08])


After running the code I see constructions like:

Geometry.pores{Float64}(0.95, 
[96.62765023884106,36.948114669490664,70.08384291700011])


How is that even possible?



[julia-users] Re: Proposal: NoveltyColors.jl

2015-11-24 Thread Gabriel Gellner
As an end user that would love this, I would prefer a single package. Put 
all them tasty, wacky colors in one place!

On Tuesday, 24 November 2015 14:08:35 UTC-8, Randy Zwitch wrote:
>
> Since the Julia ecosystem is getting bigger, I figured I'd propose this 
> here first and see what people think is the right way forward (instead of 
> wasting people's time at METADATA)
>
> In the R community, they've created two packages of novelty color schemes: 
> Wes 
> Anderson  and Beyonce 
> . While humorous, these color palettes 
> are interesting to me and I'd like to make them available in Vega.jl (and 
> Julia more broadly). Should I:
>
> 1) Not do it at allbecause this is a serious, scientific community!
> 2) Do two separate packages, mimicking R
> 3) Create a single NoveltyColors.jl package, in case there are other 
> palettes that come up in the future
> 4) Make a feature request at Colors.jl (really not my favorite choice, 
> since there is so much cited research behind the palettes)
>
> I neglected to mention ColorBrewer.jl (which Vega.jl uses), since 
> ColorBrewer is a known entity in the plotting community.
>
> What do people think? Note, I'm not looking for anyone to do the work 
> (I'll do it), just looking for packaging input.
>


Re: [julia-users] funky ccall macro

2015-11-24 Thread Dömötör Gulyás
You've summarized it very nicely, I think. I did a lot of messing about
with eval() and macroexpand(), but could not find a way to have julia
"correctly" fuse the two tuples U and P without resorting to parse(). I
cannot get the U and P parts to keep the (:tuple, (:curly ...), ...)
structure, which ccall seems to need in this context, and at the same time
have the tuples concatenated. Any other way I tried, it was either
completely messed up, or either P or U was prematurely turned into
(Ptr{...}) in the AST.

Never mind that the macro errors are useless, with no real indication of
where the error is happening, except for the source call site.

On 24 November 2015 at 23:09, Jameson  wrote:

> There's a bit of a mix in what you are doing, which is where it is getting
> confusing. In general, you want to make sure all of the evaluation work is
> happening at runtime and you only work with quoted expressions during macro
> evaluation. This takes a bit of practice to get used to, since you have to
> keep track of when you are intending each expression to be evaluated (and
> in what scope). (ccall argument type expansion is a little bit too strict
> about this -- most other places will accept a value spliced into the AST).
> In the issue linked below, you had P as a tuple, but have U as an
> expression.
>
> In this case, I think you want to check that `U.head === :tuple`, then
> call `prepend!(U.args, [Ptr{Void},Ptr{Void}]); ccallTypes = U`
>
>
>
> On Tuesday, November 24, 2015 at 3:39:42 PM UTC-5, Dömötör Gulyás wrote:
>>
>> in case anyone is left wondering... I've found a hackish way to brute
>> force this, essentially summarized in
>> https://github.com/one-more-minute/ObjectiveC.jl/issues/8
>>
>> Maybe a macro guru can find a still better way, one that doesn't require
>> re-parsing stringified variables :)
>>
>> On 24 November 2015 at 01:34, Dömötör Gulyás  wrote:
>>
>>> So apparently, this: https://github.com/JuliaLang/julia/issues/14110 is
>>> not a bug. Which leaves me with the question, how to accomplish
>>> concatenating the argument types tuple for ccall()? There doesn't seem to
>>> be a way to accomplish the concatenation (without also evaluating the type
>>> parameters) in ways that result in a valid argument type list for ccall().
>>> The idea is to not have to eval() at runtime for every ccall(), as
>>> ObjectiveC.jl currently does.
>>>
>>> A simplified core part of the above:
>>>
>>> julia> U = (Ptr{UInt8},)
>>>
>>> (Ptr{UInt8},)
>>>
>>>
>>> julia> :((Ptr{Void},Ptr{Void})..., $U...)
>>>
>>> :(((Ptr{Void},Ptr{Void})...,(Ptr{UInt8},)...))
>>>
>>>
>>> julia> Meta.show_sexpr(:((Ptr{Void},Ptr{Void})..., $U...))
>>>
>>> (:tuple, (:..., (:tuple, (:curly, :Ptr, :Void), (:curly, :Ptr, :Void))),
>>> (:..., (Ptr{UInt8},)))
>>>
>>>
>>> The suggestion is not evaluate the macro, which still leaves the
>>> unexpanded (Ptr{UInt8},) in there, and ccall() complains about the
>>> types. With some mucking about of objc_msgSendTupleConcat sometimes that
>>> shows up as (:curly, :Ptr, :UInt8) but only in ways that fail at the
>>> ccall or julia trying to evaluate the tuple concatenation.
>>>
>>> Is there another way of doing this? I wasn't able to get the intended
>>> behavior with a templated function, either, but it feels like it should be
>>> possible.
>>>
>>
>>


Re: [julia-users] Re: CUDART and CURAND problem on running the same "do" loop twice

2015-11-24 Thread Andrei
On Tue, Nov 24, 2015 at 8:03 PM, Kristoffer Carlsson 
wrote:

> The original code in the OP fails for me


Yes, this is expected behavior: for convenience, CURAND.jl creates a default
random number generator, which obviously becomes invalid after
`device_reset()`. At the same time, my last code snippet creates a new and
explicit RNG, which fixes the issue on Linux.

The problem is that on some platforms even creating new generator doesn't
help, so I'm trying to understand the difference.

@Tim, @Kristoffer, could you also specify CUDA version in use, please?


Re: [julia-users] funky ccall macro

2015-11-24 Thread Jameson Nash
> to have julia "correctly" fuse the two tuples U and P

U wasn't a tuple, it was an Expr(:tuple). The two are printed similarly,
since the latter is the AST for creating the former, but you do have to
work with them slightly differently.

On Tue, Nov 24, 2015 at 5:34 PM Dömötör Gulyás  wrote:

> You've summarized it very nicely, I think. I did a lot of messing about
> with eval() and macroexpand(), but could not find a way to have julia
> "correctly" fuse the two tuples U and P without resorting to the parse(). I
> cannot get the U and P parts keep the (:tuple, (:curly ...), ...)
> structure, which ccall seems to need in this context and at the same time
> have the tuples concatenated. Any other way I tried, it was either
> completely messed up, or either P or U were prematurely turned into
> (Ptr{...}) in the AST.
>
> Nevermind that the macro errors are useless, with no real indication of
> where the error is happening, except for the source call site.
>
> On 24 November 2015 at 23:09, Jameson  wrote:
>
>> There's a bit of a mix in what you are doing, which is where it is
>> getting confusing. In general, you want to make sure all of the evaluation
>> work is happening at runtime and you only work with quoted expressions
>> during macro evaluation. This takes a bit of practice to get used to, since
>> you have to keep track of when you are intending each expression to be
>> evaluated (and in what scope). (ccall argument type expansion is a little
>> bit too strict about this -- most other places will accept a value spliced
>> into the AST). In the issue linked below, you had P as a tuple, but have U
>> as an expression.
>>
>> In this case, I think you want to check that `U.head === :tuple`, then
>> call `prepend!(U.args, [Ptr{Void},Ptr{Void}]); ccallTypes = U`
>>
>>
>>
>> On Tuesday, November 24, 2015 at 3:39:42 PM UTC-5, Dömötör Gulyás wrote:
>>>
>>> in case anyone is left wondering... I've found a hackish way to brute
>>> force this, essentially summarized in
>>> https://github.com/one-more-minute/ObjectiveC.jl/issues/8
>>>
>>> Maybe a macro guru can find a still better way, one that doesn't require
>>> re-parsing stringified variables :)
>>>
>>> On 24 November 2015 at 01:34, Dömötör Gulyás 
>>> wrote:
>>>
 So apparently, this: https://github.com/JuliaLang/julia/issues/14110
 is not a bug. Which leaves me with the question, how to accomplish
 concatenating the argument types tuple for ccall()? There doesn't seem to
 be a way to accomplish the concatenation (without also evaluating the type
 parameters) in ways that result in a valid argument type list for ccall().
 The idea is to not have to eval() at runtime for every ccall(), as
 ObjectiveC.jl currently does.

 A simplified core part of the above:

 julia> U = (Ptr{UInt8},)

 (Ptr{UInt8},)


 julia> :((Ptr{Void},Ptr{Void})..., $U...)

 :(((Ptr{Void},Ptr{Void})...,(Ptr{UInt8},)...))


 julia> Meta.show_sexpr(:((Ptr{Void},Ptr{Void})..., $U...))

 (:tuple, (:..., (:tuple, (:curly, :Ptr, :Void), (:curly, :Ptr,
 :Void))), (:..., (Ptr{UInt8},)))


 The suggestion is not evaluate the macro, which still leaves the
 unexpanded (Ptr{UInt8},) in there, and ccall() complains about the
 types. With some mucking about of objc_msgSendTupleConcat sometimes that
 shows up as (:curly, :Ptr, :UInt8) but only in ways that fail at the
 ccall or julia trying to evaluate the tuple concatenation.

 Is there another way of doing this? I wasn't able to get the intended
 behavior with a templated function, either, but it feels like it should be
 possible.

>>>
>>>
>


Re: [julia-users] Proposal: NoveltyColors.jl

2015-11-24 Thread Stefan Karpinski
I feel like sticking them in Colors.jl is fine – if at some point that
package gets too big by some metric, we can split it up, but I don't get
the impression that we're there yet.

On Tuesday, November 24, 2015, Gabriel Gellner 
wrote:

> As an end user that would love this, I would prefer a single package. Put
> all them tasty, wacky colors in one place!
>
> On Tuesday, 24 November 2015 14:08:35 UTC-8, Randy Zwitch wrote:
>>
>> Since the Julia ecosystem is getting bigger, I figured I'd propose this
>> here first and see what people think is the right way forward (instead of
>> wasting people's time at METADATA)
>>
>> In the R community, they've created two packages of novelty color
>> schemes: Wes Anderson  and
>> Beyonce . While humorous, these color
>> palettes are interesting to me and I'd like to make them available in
>> Vega.jl (and Julia more broadly). Should I:
>>
>> 1) Not do it at allbecause this is a serious, scientific community!
>> 2) Do two separate packages, mimicking R
>> 3) Create a single NoveltyColors.jl package, in case there are other
>> palettes that come up in the future
>> 4) Make a feature request at Colors.jl (really not my favorite choice,
>> since there is so much cited research behind the palettes)
>>
>> I neglected to mention ColorBrewer.jl (which Vega.jl uses), since
>> ColorBrewer is a known entity in the plotting community.
>>
>> What do people think? Note, I'm not looking for anyone to do the work
>> (I'll do it), just looking for packaging input.
>>
>


Re: [julia-users] Re: Proposal: NoveltyColors.jl

2015-11-24 Thread Tom Breloff
Single package preferred, and if possible it would be great to be fully
compatible with Colors.jl.  It might be ideal if it was part of Colors.jl,
but loaded on demand, perhaps by calling:

function i_am_feeling_wacky_today()
  @eval include("wacky.jl")
end

or some similar trickery...

On Tue, Nov 24, 2015 at 5:27 PM, Gabriel Gellner 
wrote:

> As an end user that would love this, I would prefer a single package. Put
> all them tasty, wacky colors in one place!
>
>
> On Tuesday, 24 November 2015 14:08:35 UTC-8, Randy Zwitch wrote:
>>
>> Since the Julia ecosystem is getting bigger, I figured I'd propose this
>> here first and see what people think is the right way forward (instead of
>> wasting people's time at METADATA)
>>
>> In the R community, they've created two packages of novelty color
>> schemes: Wes Anderson  and
>> Beyonce . While humorous, these color
>> palettes are interesting to me and I'd like to make them available in
>> Vega.jl (and Julia more broadly). Should I:
>>
>> 1) Not do it at allbecause this is a serious, scientific community!
>> 2) Do two separate packages, mimicking R
>> 3) Create a single NoveltyColors.jl package, in case there are other
>> palettes that come up in the future
>> 4) Make a feature request at Colors.jl (really not my favorite choice,
>> since there is so much cited research behind the palettes)
>>
>> I neglected to mention ColorBrewer.jl (which Vega.jl uses), since
>> ColorBrewer is a known entity in the plotting community.
>>
>> What do people think? Note, I'm not looking for anyone to do the work
>> (I'll do it), just looking for packaging input.
>>
>


Re: [julia-users] Proposal: NoveltyColors.jl

2015-11-24 Thread Stefan Karpinski
Why not have them available by default? Do these make loading Colors much
slower?

On Tuesday, November 24, 2015, Tom Breloff  wrote:

> Single package preferred, and if possible it would be great to be fully
> compatible with Colors.jl.  It might be ideal if it was part of Colors.jl,
> but loaded on demand, perhaps by calling:
>
> function i_am_feeling_wacky_today()
>   @eval include("wacky.jl")
> end
>
> or some similar trickery...
>
> On Tue, Nov 24, 2015 at 5:27 PM, Gabriel Gellner  > wrote:
>
>> As an end user that would love this, I would prefer a single package. Put
>> all them tasty, wacky colors in one place!
>>
>>
>> On Tuesday, 24 November 2015 14:08:35 UTC-8, Randy Zwitch wrote:
>>>
>>> Since the Julia ecosystem is getting bigger, I figured I'd propose this
>>> here first and see what people think is the right way forward (instead of
>>> wasting people's time at METADATA)
>>>
>>> In the R community, they've created two packages of novelty color
>>> schemes: Wes Anderson  and
>>> Beyonce . While humorous, these color
>>> palettes are interesting to me and I'd like to make them available in
>>> Vega.jl (and Julia more broadly). Should I:
>>>
>>> 1) Not do it at allbecause this is a serious, scientific community!
>>> 2) Do two separate packages, mimicking R
>>> 3) Create a single NoveltyColors.jl package, in case there are other
>>> palettes that come up in the future
>>> 4) Make a feature request at Colors.jl (really not my favorite choice,
>>> since there is so much cited research behind the palettes)
>>>
>>> I neglected to mention ColorBrewer.jl (which Vega.jl uses), since
>>> ColorBrewer is a known entity in the plotting community.
>>>
>>> What do people think? Note, I'm not looking for anyone to do the work
>>> (I'll do it), just looking for packaging input.
>>>
>>
>


[julia-users] [ANN] (sort of) VisualRegressionTests.jl

2015-11-24 Thread Tom Breloff
I just put this together 
today: https://github.com/tbreloff/VisualRegressionTests.jl

It's a generalized version of the regression test framework that I made for 
Plots.jl.  I was thinking that it might belong in JuliaGraphics.  For those 
involved, what do you think?  I'm happy to move it over, or not.

This isn't registered yet... I mainly wanted to get opinions on where it 
should live.

Best,
Tom


[julia-users] Re: [ANN] (sort of) VisualRegressionTests.jl

2015-11-24 Thread Simon Danisch
JuliaGraphics sounds like a good fit to me! JuliaImages 
 is probably the only other viable option.
I'd vote for JuliaGraphics, though.

Am Dienstag, 24. November 2015 23:55:10 UTC+1 schrieb Tom Breloff:
>
> I just put this together today: 
> https://github.com/tbreloff/VisualRegressionTests.jl
>
> It's a generalized version of the regression test framework that I made 
> for Plots.jl.  I was thinking that it might belong in JuliaGraphics.  For 
> those involved, what do you think?  I'm happy to move it over, or not.
>
> This isn't registered yet... I mainly wanted to get opinions on where it 
> should live.
>
> Best,
> Tom
>


Re: [julia-users] Memory-efficient / in-place allocation of array subsets?

2015-11-24 Thread Tim Holy
I'm not certain I really understand what you're asking. Definitely, SubArrays 
keep their parent array from being garbage-collected. But the memory overhead 
of a SubArray is very small (and constant, i.e., not proportional to the 
amount of data represented). So if (as you say) you can fit A in memory, 
epsilon more should not be a problem.

The alternative is to not create A in the first place, and return just what you 
need as stand-alone arrays.

Best,
--Tim

On Tuesday, November 24, 2015 10:04:56 AM Gord Stephen wrote:
> Hi all,
> 
> With a function like this...
> 
> function demo()
>A = ones(350,350,25000)
>return A[1:175, 1:175, :],  A[1:175, 176:end, :]
> end
> 
> is there a memory-efficient (in-place I guess) way to store the return
> values while having A deallocated, if I can fit A in memory, but not both A
> and the return values?
> 
> Using subarrays acheives the desired functionality, but all of A needs to
> be kept in memory:
> 
> function demo2()
>A = ones(350,350,25000)
>return slice(A, 1:175, 1:175, :),  slice(A, 1:175, 176:350, :)
> end
> 
> Thanks,
> Gord



Re: [julia-users] Proposal: NoveltyColors.jl

2015-11-24 Thread Tim Holy
NoveltyColors.jl sounds great.

--Tim

On Tuesday, November 24, 2015 02:08:35 PM Randy Zwitch wrote:
> Since the Julia ecosystem is getting bigger, I figured I'd propose this
> here first and see what people think is the right way forward (instead of
> wasting people's time at METADATA)
> 
> In the R community, they've created two packages of novelty color schemes:
> Wes Anderson  and Beyonce
> . While humorous, these color palettes are
> interesting to me and I'd like to make them available in Vega.jl (and Julia
> more broadly). Should I:
> 
> 1) Not do it at allbecause this is a serious, scientific community!
> 2) Do two separate packages, mimicking R
> 3) Create a single NoveltyColors.jl package, in case there are other
> palettes that come up in the future
> 4) Make a feature request at Colors.jl (really not my favorite choice,
> since there is so much cited research behind the palettes)
> 
> I neglected to mention ColorBrewer.jl (which Vega.jl uses), since
> ColorBrewer is a known entity in the plotting community.
> 
> What do people think? Note, I'm not looking for anyone to do the work (I'll
> do it), just looking for packaging input.



Re: [julia-users] Re: CUDART and CURAND problem on running the same "do" loop twice

2015-11-24 Thread Tim Holy
6.5

--Tim

On Wednesday, November 25, 2015 01:37:22 AM Andrei wrote:
> On Tue, Nov 24, 2015 at 8:03 PM, Kristoffer Carlsson 
> wrote:
> > The original code in the OP fails for me
> 
> Yes, this is expected behavior: for convenience, CURAND.jl creates default
> random number generator, which obviously becomes invalid after
> `device_reset()`. At the same time, my last code snippet creates new and
> explicit RNG, which fixes the issue on Linux.
> 
> The problem is that on some platforms even creating new generator doesn't
> help, so I'm trying to understand the difference.
> 
> @Tim, @Kristoffer, could you also specify CUDA version in use, please?



Re: [julia-users] Re: CUDART and CURAND problem on running the same "do" loop twice

2015-11-24 Thread Kristoffer Carlsson
7.5 here

Re: [julia-users] Re: CUDART and CURAND problem on running the same "do" loop twice

2015-11-24 Thread Joaquim Masset Lacombe Dias Garcia
Interesting: both my machines (Windows and Mac) have CUDA 7.0. Possibly 
that's the issue, since the C code in 
https://github.com/JuliaGPU/CURAND.jl/issues/3#issuecomment-159319580 fails 
in both.

If some Linux user could test this version, our statistics would be more 
complete. I will try 6.5 and 7.5.

Em terça-feira, 24 de novembro de 2015 21:21:41 UTC-2, Tim Holy escreveu:
>
> 6.5 
>
> --Tim 
>
> On Wednesday, November 25, 2015 01:37:22 AM Andrei wrote: 
> > On Tue, Nov 24, 2015 at 8:03 PM, Kristoffer Carlsson  > 
> > wrote: 
> > > The original code in the OP fails for me 
> > 
> > Yes, this is expected behavior: for convenience, CURAND.jl creates 
> default 
> > random number generator, which obviously becomes invalid after 
> > `device_reset()`. At the same time, my last code snippet creates new and 
> > explicit RNG, which fixes the issue on Linux. 
> > 
> > The problem is that on some platforms even creating new generator 
> doesn't 
> > help, so I'm trying to understand the difference. 
> > 
> > @Tim, @Kristoffer, could you also specify CUDA version in use, please? 
>
>

Re: [julia-users] Proposal: NoveltyColors.jl

2015-11-24 Thread Randy Zwitch
I can't believe that a few hundred lines of code, with 5-8 colors apiece, 
is going to do anything to load times. I was just concerned about adding 
frivolity to Colors.jl, since it has so much cited research that goes along 
with it. That, and the fact that ColorBrewer.jl exists separately from 
Colors.jl made it seem like Colors.jl might already be in a steady state.

On Tuesday, November 24, 2015 at 5:51:07 PM UTC-5, Stefan Karpinski wrote:
>
> Why not have them available by default? Do these make loading Colors much 
> slower?
>
> On Tuesday, November 24, 2015, Tom Breloff > 
> wrote:
>
>> Single package preferred, and if possible it would be great to be fully 
>> compatible with Colors.jl.  It might be ideal if it was part of Colors.jl, 
>> but loaded on demand, perhaps by calling:
>>
>> function i_am_feeling_wacky_today()
>>   @eval include("wacky.jl")
>> end
>>
>> or some similar trickery...
>>
>> On Tue, Nov 24, 2015 at 5:27 PM, Gabriel Gellner 
>>  wrote:
>>
>>> As an end user that would love this, I would prefer a single package. 
>>> Put all them tasty, wacky colors in one place!
>>>
>>>
>>> On Tuesday, 24 November 2015 14:08:35 UTC-8, Randy Zwitch wrote:

 Since the Julia ecosystem is getting bigger, I figured I'd propose this 
 here first and see what people think is the right way forward (instead of 
 wasting people's time at METADATA)

 In the R community, they've created two packages of novelty color 
 schemes: Wes Anderson  and 
 Beyonce . While humorous, these color 
 palettes are interesting to me and I'd like to make them available in 
 Vega.jl (and Julia more broadly). Should I:

 1) Not do it at allbecause this is a serious, scientific community!
 2) Do two separate packages, mimicking R
 3) Create a single NoveltyColors.jl package, in case there are other 
 palettes that come up in the future
 4) Make a feature request at Colors.jl (really not my favorite choice, 
 since there is so much cited research behind the palettes)

 I neglected to mention ColorBrewer.jl (which Vega.jl uses), since 
 ColorBrewer is a known entity in the plotting community.

 What do people think? Note, I'm not looking for anyone to do the work 
 (I'll do it), just looking for packaging input.

>>>
>>

Re: [julia-users] Proposal: NoveltyColors.jl

2015-11-24 Thread Tom Breloff
I guess I assumed it's like loading a database into memory, but maybe it's
lighter weight than I think. Either way, I'm sure I'd make use of it
someday.

On Tuesday, November 24, 2015, Tim Holy  wrote:

> NoveltyColors.jl sounds great.
>
> --Tim
>
> On Tuesday, November 24, 2015 02:08:35 PM Randy Zwitch wrote:
> > Since the Julia ecosystem is getting bigger, I figured I'd propose this
> > here first and see what people think is the right way forward (instead of
> > wasting people's time at METADATA)
> >
> > In the R community, they've created two packages of novelty color
> schemes:
> > Wes Anderson  and Beyonce
> > . While humorous, these color palettes
> are
> > interesting to me and I'd like to make them available in Vega.jl (and
> Julia
> > more broadly). Should I:
> >
> > 1) Not do it at allbecause this is a serious, scientific community!
> > 2) Do two separate packages, mimicking R
> > 3) Create a single NoveltyColors.jl package, in case there are other
> > palettes that come up in the future
> > 4) Make a feature request at Colors.jl (really not my favorite choice,
> > since there is so much cited research behind the palettes)
> >
> > I neglected to mention ColorBrewer.jl (which Vega.jl uses), since
> > ColorBrewer is a known entity in the plotting community.
> >
> > What do people think? Note, I'm not looking for anyone to do the work
> (I'll
> > do it), just looking for packaging input.
>
>


Re: [julia-users] [ANN] (sort of) VisualRegressionTests.jl

2015-11-24 Thread Tim Holy
Definitely useful, thanks!

--Tim

On Tuesday, November 24, 2015 02:55:10 PM Tom Breloff wrote:
> I just put this together
> today: https://github.com/tbreloff/VisualRegressionTests.jl
> 
> It's a generalized version of the regression test framework that I made for
> Plots.jl.  I was thinking that it might belong in JuliaGraphics.  For those
> involved, what do you think?  I'm happy to move it over, or not.
> 
> This isn't registered yet... I mainly wanted to get opinions on where it
> should live.
> 
> Best,
> Tom



[julia-users] Doc/bugfixes for new contributors

2015-11-24 Thread Katie H
Hello,

If you're looking to get started contributing to Julia, the "intro issue" 
tag on Github  is 
a good place to start. Here are a couple issues that have sat open for a 
while with no activity:

https://github.com/JuliaLang/julia/issues/13723
https://github.com/JuliaLang/julia/issues/13711
https://github.com/JuliaLang/julia/issues/13340

I (and probably other contributors as well) would be happy to help someone 
through their first PR on one of these or any other issue.

Cheers,
Katie


[julia-users] MPI-FFTW on Julia

2015-11-24 Thread Nathan Smith
Hi Folks, 

I'm interested in trying to call the MPI implementation of FFTW from Julia 
using the current MPI.jl and C-interface infrastructure. I'm somewhat new 
to Julia and was wondering whether this is feasible, whether I'm jumping 
in way over my head, or whether it isn't even possible at this point in 
time.

If it's possible, any pointers toward the ingredients I might need to make 
this happen would be greatly appreciated.

Thanks for any advice, 
Nathan

PS: This language is killer; I've never had so much fun with my simulation 
work! 


Re: [julia-users] Memory-efficient / in-place allocation of array subsets?

2015-11-24 Thread Gord Stephen
The problem arises when I need to repeat the process more than once: when 
the SubArrays go out of scope they're garbage collected, but A isn't, so a 
second call to demo2() then exceeds available memory. Manually calling gc() 
doesn't seem to eliminate A either. Is there a way to force a 
garbage-collected SubArray's parent to be deallocated (or am I missing 
something)?

Creating the stand-alone arrays would definitely be the best solution, but 
unfortunately it isn't an option in this particular context.

Thanks for the response!
Gord

On Tuesday, November 24, 2015 at 6:14:36 PM UTC-5, Tim Holy wrote:
>
> I'm not certain I really understand what you're asking. Definitely, 
> SubArrays 
> keep their parent array from being garbage-collected. But the memory 
> overhead 
> of a SubArray is very small (and constant, i.e., not proportional to the 
> amount of data represented). So if (as you say) you can fit A in memory, 
> epsilon more should not be a problem. 
>
> The alternative is to not create A in the first place, and return just 
> what you 
> need as stand-alone arrays. 
>
> Best, 
> --Tim 
>
> On Tuesday, November 24, 2015 10:04:56 AM Gord Stephen wrote: 
> > Hi all, 
> > 
> > With a function like this... 
> > 
> > function demo() 
> >A = ones(350,350,25000) 
> >return A[1:175, 1:175, :],  A[1:175, 176:end, :] 
> > end 
> > 
> > is there a memory-efficient (in-place I guess) way to store the return 
> > values while having A deallocated, if I can fit A in memory, but not 
> both A 
> > and the return values? 
> > 
> > Using subarrays achieves the desired functionality, but all of A needs to 
> > be kept in memory: 
> > 
> > function demo2() 
> >A = ones(350,350,25000) 
> >return slice(A, 1:175, 1:175, :),  slice(A, 1:175, 176:350, :) 
> > end 
> > 
> > Thanks, 
> > Gord 
>
>
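Tim's "stand-alone arrays" alternative can be sketched as follows. This is a hypothetical illustration (with deliberately tiny sizes), not code from the thread: indexing with `A[...]` copies, so dropping the last reference to `A` lets the GC reclaim the parent, unlike the `slice()`/SubArray version.

```julia
# Minimal sketch, small sizes for illustration only.
# A[...] produces independent copies; rebinding A releases the parent.
function demo3()
    A = ones(4, 4, 10)
    left  = A[1:2, 1:2, :]   # independent copy
    right = A[1:2, 3:4, :]   # independent copy
    A = nothing              # drop the only reference to the parent array
    return left, right
end

left, right = demo3()
```

Note that peak memory still briefly holds both `A` and the copies, which is exactly the constraint Gord describes.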

[julia-users] Re: Proposal: NoveltyColors.jl

2015-11-24 Thread Randy Zwitch
NoveltyColors.jl it is. Figured I can make whatever mess I want; we can 
always merge into Colors.jl later if the palettes are popular enough.

On Tuesday, November 24, 2015 at 5:08:35 PM UTC-5, Randy Zwitch wrote:
>
> Since the Julia ecosystem is getting bigger, I figured I'd propose this 
> here first and see what people think is the right way forward (instead of 
> wasting people's time at METADATA)
>
> In the R community, they've created two packages of novelty color schemes: 
> Wes 
> Anderson  and Beyonce 
> . While humorous, these color palettes 
> are interesting to me and I'd like to make them available in Vega.jl (and 
> Julia more broadly). Should I:
>
> 1) Not do it at all... because this is a serious, scientific community!
> 2) Do two separate packages, mimicking R
> 3) Create a single NoveltyColors.jl package, in case there are other 
> palettes that come up in the future
> 4) Make a feature request at Colors.jl (really not my favorite choice, 
> since there is so much cited research behind the palettes)
>
> I neglected to mention ColorBrewer.jl (which Vega.jl uses), since 
> ColorBrewer is a known entity in the plotting community.
>
> What do people think? Note, I'm not looking for anyone to do the work 
> (I'll do it), just looking for packaging input.
>


Re: [julia-users] [ANN] (sort of) VisualRegressionTests.jl

2015-11-24 Thread Tom Breloff
Tim: would you like it in JuliaGraphics? (You probably have the most say in
this issue, since half those packages are yours)

On Tuesday, November 24, 2015, Tim Holy  wrote:

> Definitely useful, thanks!
>
> --Tim
>
> On Tuesday, November 24, 2015 02:55:10 PM Tom Breloff wrote:
> > I just put this together
> > today: https://github.com/tbreloff/VisualRegressionTests.jl
> >
> > It's a generalized version of the regression test framework that I made
> for
> > Plots.jl.  I was thinking that it might belong in JuliaGraphics.  For
> those
> > involved, what do you think?  I'm happy to move it over, or not.
> >
> > This isn't registered yet... I mainly wanted to get opinions on where it
> > should live.
> >
> > Best,
> > Tom
>
>


Re: [julia-users] Memory-efficient / in-place allocation of array subsets?

2015-11-24 Thread Tim Holy
On Tuesday, November 24, 2015 04:53:11 PM Gord Stephen wrote:
> The problem arises when I need to repeat the process more than once: when
> the SubArrays go out of scope they're garbage collected, but A isn't, so a
> second call to demo2() then exceeds available memory.

That sounds like a bug. Indeed, I can (sort of) reproduce this:

julia> function foo()
   A = rand(3,5)
   B = sub(A, 1:2, 3:4)
   finalizer(A, A->@schedule(println("GCed A")))
   B
   end
foo (generic function with 1 method)

julia> B = foo()
2x2 
SubArray{Float64,2,Array{Float64,2},Tuple{UnitRange{Int64},UnitRange{Int64}},1}:
 0.583567  0.719624
 0.60663   0.739911

julia> B = 0
0

julia> gc()


But call gc() immediately again, and it gets cleaned up:

julia> gc()
GCed A

I filed an issue:
https://github.com/JuliaLang/julia/issues/14127

--Tim



Re: [julia-users] [ANN] (sort of) VisualRegressionTests.jl

2015-11-24 Thread Tim Holy
Sure, that sounds great to me.

--Tim

On Tuesday, November 24, 2015 08:42:21 PM Tom Breloff wrote:
> Tim: would you like it in JuliaGraphics? (You probably have the most say in
> this issue, since half those packages are yours)
> 
> On Tuesday, November 24, 2015, Tim Holy  wrote:
> > Definitely useful, thanks!
> > 
> > --Tim
> > 
> > On Tuesday, November 24, 2015 02:55:10 PM Tom Breloff wrote:
> > > I just put this together
> > > today: https://github.com/tbreloff/VisualRegressionTests.jl
> > > 
> > > It's a generalized version of the regression test framework that I made
> > 
> > for
> > 
> > > Plots.jl.  I was thinking that it might belong in JuliaGraphics.  For
> > 
> > those
> > 
> > > involved, what do you think?  I'm happy to move it over, or not.
> > > 
> > > This isn't registered yet... I mainly wanted to get opinions on where it
> > > should live.
> > > 
> > > Best,
> > > Tom



[julia-users] Re: How to close an HttpServer?

2015-11-24 Thread Eric Forgy
FYI, I've opened an issue and am having a go at fixing it, but could use 
some help from experts :)

https://github.com/JuliaWeb/HttpServer.jl/issues/82

On Tuesday, November 24, 2015 at 9:01:44 PM UTC+8, Steven Sagaert wrote:
>
> There isn't a function for that. You can shut it down either by killing 
> the process, or by building a "shutdown" message into your program: when it 
> receives that HTTP request, it exits Julia by calling quit()
>
> When I wrote an HttpServer-based service I also thought this was a missing 
> feature of HttpServer.
>
> On Tuesday, November 24, 2015 at 11:42:49 AM UTC+1, Eric Forgy wrote:
>>
>> Hi,
>>
>> I remember reading a question similar to (or exactly like) this one, but 
>> can't find it again.
>>
>> I can start an HttpServer easily enough, but how to close it? I can see 
>> that WebSockets has a "close" method, but I can't find a way to close an 
>> HttpServer. I am probably confused and this question makes no sense :)
>>
>> Thanks
>>
>

Re: [julia-users] Re: What is the best way to get two by two tables in Julia?

2015-11-24 Thread Arin Basu
Thanks a million, Milan and Dan. I have learned a great deal from the code 
you shared and the packages you discussed. There is a need for dedicated 
biostatistics packages in Julia. For instance, I could not find a dedicated 
package for regression diagnostics (I tried RegTools, but it did not compile 
for some reason on my machine: Mac OS X El Capitan, Julia 0.4.1). 

Best,
Arin

On Monday, 23 November 2015 04:53:46 UTC+13, Milan Bouchet-Valat wrote:
>
> As I noted just a few days ago, I have written a small package to 
> compute frequency tables from arbitrary arrays, with an optimized 
> method for pooled data arrays: 
> https://github.com/nalimilan/FreqTables.jl 
>
> I've just pushed a fix so it should now work on 0.4 (but not with 0.3). 
>
> We could easily add a method taking a DataFrame and symbol names for 
> columns to save some typing. 
>
>
> Regards 
>
> Le dimanche 22 novembre 2015 à 03:26 -0800, Dan a écrit : 
> > Hi Arin, 
> > It would be helpful to have more details about the input (a 
> > dataframe?) and output (a two-by-two table or a table indexed by 
> > categories?). Some code to give context to the question would be even 
> > more help (possibly in another language, such as R). 
> > 
> > Having said this, here is a starting point for some code: 
> > 
> > If these packages are missing Pkg.add works: 
> > 
> > using NamedArrays 
> > using DataFrames 
> > using RDatasets 
> > 
> > Gets the dataset and makes some categorical variables in DataFrames 
> > style: 
> > 
> > iris = dataset("datasets","iris") 
> > iris[:PetalWidth] = PooledDataArray(iris[:PetalWidth]) 
> > iris[:Species] = PooledDataArray(iris[:Species]) 
> > 
> > Define function for a `twobytwo` and a general categorical table 
> > `crosstable`: 
> > 
> > function twobytwo(data::DataFrame, cond1, cond2)
> >     nres = NamedArray(zeros(Int,2,2), Any[[false,true],[false,true]], ["cond1","cond2"])
> >     for i = 1:nrow(data)
> >         nres[Int(cond1(data[i,:]))+1, Int(cond2(data[i,:]))+1] += 1
> >     end
> >     nres
> > end
> > 
> > function crosstable(data::DataFrame, col1, col2)
> >     @assert isa(data[col1], PooledDataArray)
> >     @assert isa(data[col2], PooledDataArray)
> >     nres = NamedArray(zeros(Int, length(data[col1].pool), length(data[col2].pool)), Any[data[col1].pool, data[col2].pool], [col1, col2])
> >     for i = 1:nrow(data)
> >         nres[data[col1].refs[i], data[col2].refs[i]] += 1
> >     end
> >     nres
> > end
> > 
> > Finally, using the functions, make some tables: 
> > 
> > tbt = twobytwo(iris, r -> r[1,:Species] == "setosa", r -> r[1,:PetalWidth] >= 1.5)
> > ct = crosstable(iris, :PetalWidth, :Species)
> > 
> > My summary and conclusions: 
> > 1) Julia is general purpose and with a little familiarity any data 
> > handling is possible. 
> > 2) This is a basic data exploration operation and there must be some 
> > easy way to do this. 
> > 
> > Waiting for more opinions/solutions on this question, as it is also 
> > basic for my needs. 
> > 
> > Thanks for the question. 
> > 
> > On Sunday, November 22, 2015 at 3:34:56 AM UTC+2, Arin Basu wrote: 
> > > Hi All, 
> > > 
> > > Can you kindly advise how to get a simple way to do two by two 
> > > tables in Julia with two categorical variables. I have tried split 
> > > -apply-combine (by function) and it works with single variables, 
> > > but with two or more variables, I cannot get the table I want. 
> > > 
> > > This is really an issue if we need to do statistical data analysis 
> > > in Epidemiology. 
> > > 
> > > Any help or advice will be greatly appreciated. 
> > > 
> > > Arin Basu 
> > > 
>
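Dan's `twobytwo` can be boiled down to a package-free sketch over any iterable of rows. The helper name `twobytwo_counts` and the sample rows are hypothetical, not from the thread; the tally logic mirrors the quoted code.

```julia
# Package-free sketch: tally the four (cond1, cond2) combinations,
# with rows indexed [false, true] x [false, true].
function twobytwo_counts(rows, cond1, cond2)
    counts = zeros(Int, 2, 2)
    for r in rows
        counts[Int(cond1(r)) + 1, Int(cond2(r)) + 1] += 1
    end
    return counts
end

# Hypothetical sample data standing in for the iris rows.
rows = [(species = "setosa", width = 1.7),
        (species = "virginica", width = 2.1),
        (species = "setosa", width = 0.2)]
tbl = twobytwo_counts(rows, r -> r.species == "setosa", r -> r.width >= 1.5)
```

The NamedArray version in the thread adds row/column labels on top of exactly this counting loop.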


[julia-users] Re: Simple Quesiton about svd() function

2015-11-24 Thread Michael Bullman
Thank you! Good to know for sure. Unfortunately I don't get to use my 
linear algebra as much as I used to :)

On Sunday, November 22, 2015 at 6:58:31 PM UTC-8, Michael Bullman wrote:
>
> Hi All, 
>
> I have a pretty easy question about how/why the svd() behaves how it does. 
>
> Why are my U and V matrices always a factor of -1 from the textbook 
> examples? I'm just getting my feet wet with all this, so I wanted to check 
> what the function returns vs what the textbook says the answers would be, 
> and it looks like it's always off by negative one. 
>
> julia> A = [1 2 ; 2 2; 2 1]
> 3x2 Array{Int64,2}:
>  1  2
>  2  2
>  2  1
>
> julia> U, s, V = svd(A, thin=false)
> (
> 3x3 Array{Float64,2}:
>  -0.514496   0.707107   0.485071
>  -0.685994   0.0   -0.727607
>  -0.514496  -0.707107   0.485071,
>
> [4.123105625617661,1.0],
> 2x2 Array{Float64,2}:
>  -0.707107  -0.707107
>  -0.707107   0.707107)
>
>
> The textbook shows the (1,1) entry of U to be 
> julia> 3/sqrt(34)
> 0.5144957554275265
>
> without a negative sign; really, all the signs are just reversed. 
> Source: http://www.math.iit.edu/~fass/477577_Chapter_2.pdf
>
> 2nd example:
> julia> A = [3 2 -2 ; 2 3 -2]
> 2x3 Array{Int64,2}:
>  3  2  -2
>  2  3  -2
>
> julia> U, s, V = svd(A, thin=false)
> (
> 2x2 Array{Float64,2}:
>  -0.707107  -0.707107
>  -0.707107   0.707107,
>
> [5.744562646538029,1.0],
> 3x3 Array{Float64,2}:
>  -0.615457  -0.707107 0.348155
>  -0.615457   0.707107 0.348155
>   0.492366   5.55112e-17  0.870388)
>
> where the signs of U and V are likewise flipped:
> http://www.d.umn.edu/~mhampton/m4326svd_example.pdf
>
> So did I just get bad luck with the example problems? I feel like it's 
> probably just a difference in convention or something, but figured I would 
> ask for a definitive answer. Thank you for any help
>
>
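It is indeed just a convention: the SVD is only unique up to flipping the sign of matched columns of U and V, because only the product U * Diagonal(s) * V' is determined. A quick check, sketched here in current Julia syntax (the thread uses the 0.4-era `svd(A, thin=false)` call):

```julia
using LinearAlgebra

A = [1.0 2; 2 2; 2 1]
U, s, V = svd(A)               # thin SVD is enough for this check
A2 = U * Diagonal(s) * V'      # reconstructs A for either sign choice

# Flip the sign of column 1 in both U and V: the product is unchanged.
U[:, 1] *= -1
V[:, 1] *= -1
A3 = U * Diagonal(s) * V'
```

Both `A2` and `A3` reproduce `A` up to floating-point error, so textbook and LAPACK answers differing by a sign are both correct.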

[julia-users] Re: Ray tracing for complex geometry

2015-11-24 Thread kleinsplash
I think that's useful, especially if you are doing object detection and 
avoidance on a mobile platform. I won't be able to use it right this minute, 
but I'm pretty sure the guys at MoveIt! could. 

On Tuesday, 24 November 2015 16:27:05 UTC+2, Andre Bieler wrote:
>
> @Simon
> GLVisualize looks great! I'll try using it as soon as there
> is an opportunity. (And then bother you with questions... ;) )
>
> @kleinsplash
> Not sure if those meshing libraries are the right place to put my stuff.
>
> I didn't even know about all these packages until I posted my answer
> here. I will get rid of my custom mesh-loading procedures and start
> using what's already available.
>
> On a somewhat related note. I do have an Octree implementation
> in Julia which I use for meshing/partitioning of 3D domains, maybe
> that could be interesting for someone?
> It generates an octree around triangulated surface meshes.
> E.g. here
> https://github.com/abieler/dsmc/blob/master/doc/cow_mesh2.png
>
> If there is interest, I'd be happy to share/clean up my code.
>
>
>

[julia-users] Re: Using Meshes.ji

2015-11-24 Thread kleinsplash
That cat is awesome. And so is the script, thank you. The only thing is my 
screen comes up really small and I can't zoom in. This is as far as it goes: 

[screenshot of the small render window]

That is a wine glass. I want it bigger so I can fill it with virtual wine 
:) 

Thank you! I doubt I would have gotten those mapping options right. 

Side note: I am getting a warning from glscreen() saying: WARNING: 
Base.Uint8 is deprecated, use UInt8 instead.


On Tuesday, 24 November 2015 14:10:09 UTC+2, Simon Danisch wrote:
>
> This is the best GLVisualize can do for you right meow:
>
>
> 
>
>
> You get this with:
>
>
>
> using GLVisualize, FileIO, Colors, GeometryTypes, GLAbstraction
> window, renderloop = glscreen()
> obj = load("cat.obj")
> obj_vizz = visualize(obj, color=RGBA{Float32}(0,0,0,0.1))
> point3d_vizz = visualize(vertices(obj), scale=Vec3f0(0.01))
> axis = visualize(boundingbox(point3d_vizz).value, :grid)
> view(obj_vizz)
> view(point3d_vizz)
> view(axis)
> renderloop()
>
> A lot of improvements are on their way (like a better camera, a more 
> consistent API, and better axes), but none of this is out yet.
>
> I'm not sure what would be possible with a combination of MeshIO and e.g. 
> PyPlot, but I'd guess better axes/labels and print quality, while 
> being slower ;)
> You quite likely need to do something like:
>
> points = vertices(obj) #GeometryTypes exports vertices(...)
> x, y, z = [p[1] for p in points], map(p->p[2], points), map(last, points)
> # three different ways of getting the x,y,z coordinates
>
> and then go from there with the PyPlot/matplotlib docs.
>
>
> On Monday, 9 November 2015 at 15:55:57 UTC+1, kleinsplash wrote:
>>
>> Hi,
>>
>> I am new at this - but have decided that Julia is my language of choice. 
>> So I begin silly question stage: 
>>
>> Could someone talk me through how to access and display an .obj file? 
>>
>> I have gotten so far:
>>
>> using Meshes
>> using PyPlot
>> using FileIO
>> using MeshIO
>>
>> obj = load(filename)
>> vts = obj.vertices 
>>
>>
>> Which gives me: 
>>
>> 502-element Array{FixedSizeArrays.Point{3,Float32},1}:
>>
>>
>>
>> One example point being: 
>>
>> Point(0.00117,-0.02631,0.03907)
>>
>>  
>>   
>>
>>   
>>
>> How do I access the verticies to use them with plot? 
>>
>> -A
>>
>>
>>
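Simon's vertex-extraction hint can be completed into a quick PyPlot point-cloud view. This is a sketch under the assumptions that a mesh file (here the thread's "cat.obj") is on disk and that `vertices(obj)` returns 3-component points, as shown earlier in the thread:

```julia
using FileIO, MeshIO, GeometryTypes, PyPlot

obj = load("cat.obj")          # any triangulated .obj mesh (assumed on disk)
points = vertices(obj)

# Split the points into coordinate vectors for plotting.
x = [p[1] for p in points]
y = [p[2] for p in points]
z = [p[3] for p in points]

scatter3D(x, y, z, s=1)        # quick 3D scatter of the mesh vertices
```

From there the usual matplotlib calls (labels, axis limits, figure size) apply, which addresses the "better axes/labels" trade-off Simon mentions.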