Re: [julia-users] Re: SIMD multicore

2016-04-16 Thread Jiahao Chen
Yes, optimized BLAS implementations like MKL and OpenBLAS use vectorization
heavily.

  

Note that matrix addition A+B is fundamentally a very different beast from
matrix multiplication A*B. In the former you have O(N^2) work and O(N^2) data,
so the ratio of work to data is O(1). It is very likely that the operation is
memory bound, in which case there is little to gain from optimizing the
computations. In the latter you have O(N^3) work and O(N^2) data, so the ratio
of work to data is O(N). There exists a good possibility for the operation to
be compute bound, and so there is a payoff to optimize such computations.
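
A quick way to see this on your own machine (timings are machine-dependent;
only the scaling matters):

N = 2000
A = rand(N, N); B = rand(N, N)
@time A + B   # O(N^2) flops on O(N^2) data: typically memory bound
@time A * B   # O(N^3) flops on O(N^2) data: compute bound; BLAS can use all cores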

  

Thanks,

  

Jiahao Chen

Research Scientist

Julia Lab

http://jiahao.github.io

On Apr 16 2016, at 11:13 pm, Chris Rackauckas rackd...@gmail.com
wrote:  

> BLAS functions are painstakingly developed to be beautiful bastions of
> parallelism (because of how ubiquitous their use is). The closest I think you
> can get is ParallelAccelerator.jl's @acc, which does a lot of optimizations
> all together. However, it still won't match BLAS in terms of its efficiency,
> since BLAS is just really well optimized by hand. But give ParallelAccelerator
> a try; it's a great tool for getting things to run fast with little work.
  
On Saturday, April 16, 2016 at 4:50:50 PM UTC-7, Jason Eckstein wrote:

>

>> I often use Julia multicore features with pmap and @parallel for loops. So
>> the best way to achieve this is to split the array up into parts for each core
>> and then run SIMD loops on each parallel process? Will there ever be a time
>> when you can add a tag like SIMD that will have the compiler automatically
>> do this, like it does for BLAS functions?
  
On Saturday, April 16, 2016 at 3:26:22 AM UTC-6, Valentin Churavy wrote:

>>

>>> BLAS is using a combination of SIMD and multi-core processing. Multi-core
>>> (threading) is coming in Julia v0.5 as an experimental feature.
  
On Saturday, 16 April 2016 14:13:00 UTC+9, Jason Eckstein wrote:

>>>

>>>> I noticed in Julia 0.4 that if you call A+B, where A and B are matrices of
>>>> equal size, the LLVM code shows vectorization, indicating it is equivalent to
>>>> writing my own function with an @simd-tagged for loop. I still notice, though,
>>>> that it uses a single core to maximum capacity but never spreads an SIMD loop
>>>> out over multiple cores. In contrast, if I use BLAS functions like gemm! or
>>>> even just A*B, it will use every core of the processor. I'm not sure if these
>>>> linear algebra operations also use SIMD vectorization, but I imagine they do,
>>>> since BLAS is very optimized. Is there a way to write an SIMD loop that
>>>> spreads the data out across all processor cores, not just the multiple
>>>> functional units of a single core?



[julia-users] Re: SIMD multicore

2016-04-16 Thread Chris Rackauckas
BLAS functions are painstakingly developed to be beautiful bastions of 
parallelism (because of how ubiquitous their use is). The closest I think 
you can get is ParallelAccelerator.jl's @acc, which does a lot of 
optimizations all together. However, it still won't match BLAS in terms of 
its efficiency, since BLAS is just really well optimized by hand. But give 
ParallelAccelerator a try; it's a great tool for getting things to run fast 
with little work.
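
For reference, a minimal sketch of how @acc is used (the function body is
just an illustration, not ParallelAccelerator's own example):

using ParallelAccelerator

# @acc compiles the annotated function with ParallelAccelerator's
# parallelizing/fusing optimizations.
@acc function axpy(a, x, y)
    return a .* x .+ y
end

x = rand(10^6); y = rand(10^6)
axpy(2.0, x, y)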

On Saturday, April 16, 2016 at 4:50:50 PM UTC-7, Jason Eckstein wrote:
>
> I often use Julia multicore features with pmap and @parallel for loops.  So 
> the best way to achieve this is to split the array up into parts for each 
> core and then run SIMD loops on each parallel process?  Will there ever be 
> a time when you can add a tag like SIMD that will have the compiler 
> automatically do this, like it does for BLAS functions?
>
> On Saturday, April 16, 2016 at 3:26:22 AM UTC-6, Valentin Churavy wrote:
>>
>> BLAS is using a combination of SIMD and multi-core processing. Multi-core 
>> (threading) is coming in Julia v0.5 as an experimental feature. 
>>
>> On Saturday, 16 April 2016 14:13:00 UTC+9, Jason Eckstein wrote:
>>>
>>> I noticed in Julia 0.4 that if you call A+B, where A and B are matrices of 
>>> equal size, the LLVM code shows vectorization, indicating it is equivalent 
>>> to writing my own function with an @simd-tagged for loop.  I still 
>>> notice, though, that it uses a single core to maximum capacity but never 
>>> spreads an SIMD loop out over multiple cores.  In contrast, if I use BLAS 
>>> functions like gemm! or even just A*B, it will use every core of the 
>>> processor.  I'm not sure if these linear algebra operations also use SIMD 
>>> vectorization, but I imagine they do, since BLAS is very optimized.  Is there 
>>> a way to write an SIMD loop that spreads the data out across all processor 
>>> cores, not just the multiple functional units of a single core?
>>>
>>

Re: [julia-users] How to delete multiple keys from a dictionary

2016-04-16 Thread Yichao Yu
On Sat, Apr 16, 2016 at 10:19 PM, Anonymous  wrote:
> The question basically says it all, delete!(mydict, key), will delete a
> single key, but how do I pass a vector of keys to delete! in order to delete
> multiple keys at once?

Just writing a loop should work and be fast.
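
For example, a minimal sketch (names are illustrative):

function delete_keys!(d, ks)
    for k in ks
        delete!(d, k)
    end
    return d
end

mydict = Dict(:a => 1, :b => 2, :c => 3)
delete_keys!(mydict, [:a, :c])   # leaves Dict(:b => 2)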


[julia-users] How to delete multiple keys from a dictionary

2016-04-16 Thread Anonymous
The question basically says it all, delete!(mydict, key), will delete a 
single key, but how do I pass a vector of keys to delete! in order to 
delete multiple keys at once?


Re: [julia-users] What is the proper way to allocate and free C struct?

2016-04-16 Thread Andrei Zh
This is exactly the answer I hoped to see! Thanks a lot!

On Sunday, April 17, 2016 at 2:46:15 AM UTC+3, Yichao Yu wrote:
>
> On Sat, Apr 16, 2016 at 6:55 PM, Andrei Zh wrote: 
> > I have a question regarding the Struct Type correspondences section of the 
> > documentation. Let's say, in a C library we have a struct like this: 
> > 
> > typedef struct { 
> >   int rows; 
> >   int cols; 
> >   double* data; 
> > } xmatrix; 
> > 
> > In C code it can be created using the function: 
> > 
> > int xmatrix_copy(void* src, xmatrix** mat, int rows, int cols); 
> > 
> > Note pointer-to-pointer in second argument. This C function allocates a new 
> > region of memory for `xmatrix` and fills in the values (rows, cols and 
> > pointer to data). I call this function from Julia like this: 
> > 
> > type XMatrix 
> >   rows::Cint 
> >   cols::Cint 
> >   data::Ptr{Float64} 
> > end 
> > 
> > 
> > data = Float64[1, 2, 3, 4, 5, 6] 
> > matptr = Array(Ptr{XMatrix}, 1) 
> > 
> > 
> > ccall((:xmatrix_copy), Cint, 
> > (Ptr{Void}, Ptr{Ptr{XMatrix}}, Cint, Cint), 
> > pointer(data), matptr, Cint(3), Cint(2)) 
> > 
> > mat = unsafe_load(matptr[1]) 
>
> This is wrong, it leaks memory. See below. 
>
> > 
> > Details to notice in `ccall`: 
> > 
> > 1. In type tuple I set type of xmatrix argument to be a pointer-to-pointer 
> > to XMatrix. 
> > 2. As an argument, I pass a 1-element array of pointers to XMatrix and let C 
> > code fill it in. 
> > 
> > This approach works and I get an initialized `XMatrix`. Yet, I don't really 
> > understand all the details of memory management in this case. In particular: 
> > 
> > 1. Who owns memory of an instance of `XMatrix`? Is `unsafe_load` copying 
> > fields or just assigns Julia type tag to C-allocated region of memory? 
>
> The owner of the memory is determined by the C library. In this case 
> it seems that the C library owns the memory (which is generally the 
> case if the library provides new and free functions). 
> Julia never (and can never) assigns a type tag to C memory. unsafe_load does 
> exactly what the name suggests and is equivalent to `*ptr` in C. It 
> copies the content of the struct, and the return value 
> has no aliasing with the original pointer. 
>
> I assume the C library wants the original pointer, so you need to keep 
> the pointer in the wrapper type and define conversion functions to 
> pass it to C. See this[1] for an example of the proper way to do it 
> (mostly the ptr field, the cconvert and unsafe_convert definitions, and 
> the finalizer, explained below). 
>
> > 2. What would be the proper way to free allocated memory given that C 
> > library provides function `xmatrix_free(xmatrix* mat)`? 
>
> ccall the free function. You usually also want a finalizer on the 
> wrapper type mentioned above so that the GC can free the memory for 
> you when the julia object is collected. Note that you should use a 
> finalizer only for objects whose exact lifetime you don't care too 
> much about, since the time when a finalizer is called is 
> undefined. 
>
> > 3. The aforementioned section (and the next one) suggests using `Ref{XMatrix}` 
> > instead. Can somebody provide an example of the corresponding code using refs 
> > instead of wrapping into an array? 
>
> Ref on a bits type is basically a 0-dim array 
>
> matout = Ref{Ptr{XMatrix}}() 
>
> ccall((:xmatrix_copy), Cint, 
>   (Ptr{Void}, Ptr{Ptr{XMatrix}}, Cint, Cint), 
>   data, matout, Cint(3), Cint(2)) 
> # DO NOT USE `pointer(data)`. This is the single most common mistake when 
> # using ccall. It is fine in the global scope, but if this is in a function, 
> # julia is free to collect the data array when you are in/before entering 
> # the ccall. See the manual section about cconvert and unsafe_convert. 
>
> matptr = matout[]  # This is the C-managed/owned pointer that you 
> # should embed in your julia type. 
>
> > 
> > Thanks, 
> > Andrei 
>
>
> [1] 
> https://github.com/yuyichao/LibArchive.jl/blob/4112772e8d1d7124c896bf98c5baee13149d3f8d/src/writer.jl#L28-L58
>  
>


[julia-users] Re: SIMD multicore

2016-04-16 Thread Jason Eckstein
I often use Julia multicore features with pmap and @parallel for loops.  So 
the best way to achieve this is to split the array up into parts for each 
core and then run SIMD loops on each parallel process?  Will there ever be 
a time when you can add a tag like SIMD that will have the compiler 
automatically do this, like it does for BLAS functions?
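
One way to combine the two today is sketched below (SharedArrays split into
one chunk per worker, with an @simd inner loop per chunk; sizes and names
are illustrative, not a tuned implementation):

a = SharedArray(Float64, 10^6); b = SharedArray(Float64, 10^6)
c = SharedArray(Float64, 10^6)

@sync @parallel for chunk = 1:nworkers()
    n  = length(c)
    lo = div(n * (chunk - 1), nworkers()) + 1
    hi = div(n * chunk, nworkers())
    @simd for i = lo:hi            # each worker vectorizes its own chunk
        @inbounds c[i] = a[i] + b[i]
    end
end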

On Saturday, April 16, 2016 at 3:26:22 AM UTC-6, Valentin Churavy wrote:
>
> BLAS is using a combination of SIMD and multi-core processing. Multi-core 
> (threading) is coming in Julia v0.5 as an experimental feature. 
>
> On Saturday, 16 April 2016 14:13:00 UTC+9, Jason Eckstein wrote:
>>
>> I noticed in Julia 0.4 that if you call A+B, where A and B are matrices of 
>> equal size, the LLVM code shows vectorization, indicating it is equivalent 
>> to writing my own function with an @simd-tagged for loop.  I still 
>> notice, though, that it uses a single core to maximum capacity but never 
>> spreads an SIMD loop out over multiple cores.  In contrast, if I use BLAS 
>> functions like gemm! or even just A*B, it will use every core of the 
>> processor.  I'm not sure if these linear algebra operations also use SIMD 
>> vectorization, but I imagine they do, since BLAS is very optimized.  Is there 
>> a way to write an SIMD loop that spreads the data out across all processor 
>> cores, not just the multiple functional units of a single core?
>>
>

Re: [julia-users] What is the proper way to allocate and free C struct?

2016-04-16 Thread Yichao Yu
On Sat, Apr 16, 2016 at 6:55 PM, Andrei Zh  wrote:
> I have a question regarding the Struct Type correspondences section of the
> documentation. Let's say, in a C library we have a struct like this:
>
> typedef struct {
>   int rows;
>   int cols;
>   double* data;
> } xmatrix;
>
> In C code it can be created using the function:
>
> int xmatrix_copy(void* src, xmatrix** mat, int rows, int cols);
>
> Note pointer-to-pointer in second argument. This C function allocates a new
> region of memory for `xmatrix` and fills in the values (rows, cols and
> pointer to data). I call this function from Julia like this:
>
> type XMatrix
>   rows::Cint
>   cols::Cint
>   data::Ptr{Float64}
> end
>
>
> data = Float64[1, 2, 3, 4, 5, 6]
> matptr = Array(Ptr{XMatrix}, 1)
>
>
> ccall((:xmatrix_copy), Cint,
> (Ptr{Void}, Ptr{Ptr{XMatrix}}, Cint, Cint),
> pointer(data), matptr, Cint(3), Cint(2))
>
> mat = unsafe_load(matptr[1])

This is wrong, it leaks memory. See below.

>
> Details to notice in `ccall`:
>
> 1. In type tuple I set type of xmatrix argument to be a pointer-to-pointer
> to XMatrix.
> 2. As an argument, I pass a 1-element array of pointers to XMatrix and let C
> code fill it in.
>
> This approach works and I get an initialized `XMatrix`. Yet, I don't really
> understand all the details of memory management in this case. In particular:
>
> 1. Who owns memory of an instance of `XMatrix`? Is `unsafe_load` copying
> fields or just assigns Julia type tag to C-allocated region of memory?

The owner of the memory is determined by the C library. In this case
it seems that the C library owns the memory (which is generally the
case if the library provides new and free functions).
Julia never (and can never) assigns a type tag to C memory. unsafe_load does
exactly what the name suggests and is equivalent to `*ptr` in C. It
copies the content of the struct, and the return value
has no aliasing with the original pointer.

I assume the C library wants the original pointer, so you need to keep
the pointer in the wrapper type and define conversion functions to
pass it to C. See this[1] for an example of the proper way to do it
(mostly the ptr field, the cconvert and unsafe_convert definitions, and
the finalizer, explained below).

> 2. What would be the proper way to free allocated memory given that C
> library provides function `xmatrix_free(xmatrix* mat)`?

ccall the free function. You usually also want a finalizer on the
wrapper type mentioned above so that the GC can free the memory for
you when the julia object is collected. Note that you should use a
finalizer only for objects whose exact lifetime you don't care too
much about, since the time when a finalizer is called is
undefined.

> 3. The aforementioned section (and the next one) suggests using `Ref{XMatrix}`
> instead. Can somebody provide an example of the corresponding code using refs
> instead of wrapping into an array?

Ref on a bits type is basically a 0-dim array

matout = Ref{Ptr{XMatrix}}()

ccall((:xmatrix_copy), Cint,
      (Ptr{Void}, Ptr{Ptr{XMatrix}}, Cint, Cint),
      data, matout, Cint(3), Cint(2))
# DO NOT USE `pointer(data)`. This is the single most common mistake when
# using ccall. It is fine in the global scope, but if this is in a function,
# julia is free to collect the data array when you are in/before entering
# the ccall. See the manual section about cconvert and unsafe_convert.

matptr = matout[]  # This is the C-managed/owned pointer that you
# should embed in your julia type.

>
> Thanks,
> Andrei


[1] 
https://github.com/yuyichao/LibArchive.jl/blob/4112772e8d1d7124c896bf98c5baee13149d3f8d/src/writer.jl#L28-L58
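
For concreteness, a minimal sketch of the wrapper pattern described above
(Julia 0.4 syntax; the library name `libxmatrix` is an assumption, as is
the exact xmatrix_free signature):

const libx = "libxmatrix"   # assumed shared library providing xmatrix_free

type XMatrixHandle
    ptr::Ptr{XMatrix}
    function XMatrixHandle(ptr::Ptr{XMatrix})
        obj = new(ptr)
        # Free the C-owned memory when the Julia object is collected.
        finalizer(obj, o -> ccall((:xmatrix_free, libx), Void,
                                  (Ptr{XMatrix},), o.ptr))
        return obj
    end
end

# Let ccall pass the wrapped pointer directly.
Base.unsafe_convert(::Type{Ptr{XMatrix}}, m::XMatrixHandle) = m.ptr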


[julia-users] What is the proper way to allocate and free C struct?

2016-04-16 Thread Andrei Zh
I have a question regarding the "Struct Type correspondences" section of the 
documentation. Let's say, in a C library we have a struct like this:

typedef struct {
  int rows;
  int cols;
  double* data;
} xmatrix;

In C code it can be created using the function:

int xmatrix_copy(void* src, xmatrix** mat, int rows, int cols);

Note pointer-to-pointer in second argument. This C function allocates a new 
region of memory for `xmatrix` and fills in the values (rows, cols and 
pointer to data). I call this function from Julia like this:

type XMatrix
  rows::Cint
  cols::Cint
  data::Ptr{Float64}
end


data = Float64[1, 2, 3, 4, 5, 6]
matptr = Array(Ptr{XMatrix}, 1)


ccall((:xmatrix_copy), Cint, 
(Ptr{Void}, Ptr{Ptr{XMatrix}}, Cint, Cint), 
pointer(data), matptr, Cint(3), Cint(2))

mat = unsafe_load(matptr[1])

Details to notice in `ccall`:

1. In type tuple I set type of xmatrix argument to be a pointer-to-pointer 
to XMatrix.
2. As an argument, I pass a 1-element array of pointers to XMatrix and let C 
code fill it in. 

This approach works and I get an initialized `XMatrix`. Yet, I don't really 
understand all the details of memory management in this case. In particular: 

1. Who owns memory of an instance of `XMatrix`? Is `unsafe_load` copying 
fields or just assigns Julia type tag to C-allocated region of memory? 
2. What would be the proper way to free allocated memory given that C 
library provides function `xmatrix_free(xmatrix* mat)`? 
3. The aforementioned section (and the next one) suggests using `Ref{XMatrix}` 
instead. Can somebody provide an example of the corresponding code using refs 
instead of wrapping into an array?

Thanks, 
Andrei


Re: [julia-users] Re: Mathematical Computing course in Julia

2016-04-16 Thread Steven G. Johnson


On Saturday, April 16, 2016 at 12:44:54 AM UTC-4, Daniel Carrera wrote:
>
> Is there a way to try out your instructions on a computer where I have 
> previously installed IJulia and PyPlot? Or do I have to remove and re-add 
> IJulia and PyPlot?
>

Yes, just substitute Pkg.build for Pkg.add in my instructions.
 

> If I follow these instructions, will Julia also keep Jupyter and Python 
> updated? (i.e. every time I run Pkg.update()). Right now I have an eclectic 
> mix where Python is updated with apt, Jupyter with pip3, and Julia packages 
> with Pkg.update().
>

I don't think Conda auto-upgrades its installation when you do 
Pkg.update().  But if you do "import Conda; Conda.update()" then it will 
update all its packages (including Python, Jupyter, and Matplotlib) to the 
latest version. 
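
In other words (package names as in this thread):

Pkg.build("IJulia")   # instead of Pkg.add("IJulia")
Pkg.build("PyPlot")   # instead of Pkg.add("PyPlot")

import Conda
Conda.update()        # updates Conda-managed Python, Jupyter, Matplotlib, ...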


[julia-users] Re: Pkg.update() error

2016-04-16 Thread Steven G. Johnson
Looks like you are hitting #13458, but this was fixed in #13506 and should 
have been backported in Julia 0.4.1?  What version of Julia are you using?

In any case, just retrying Pkg.update() from a fresh Julia session should 
fix it.

On Saturday, April 16, 2016 at 2:03:52 PM UTC-4, digxx wrote:
>
> As a sidenote: I do have Conda installed? Should I maybe NOT use Pkg 
> anymore then?
>

No, Conda still uses Pkg.


Re: [julia-users] Re: worker waiting???

2016-04-16 Thread Yichao Yu
On Sat, Apr 16, 2016 at 5:35 PM, digxx  wrote:
> Sorry, I accidentally sent my post.
> So the above version seems to run berserk.
> If I put an @sync in front of @parallel it seems to work, but does it really?

It's not very clear from your code above what you want to do, although
there are a few things that I can say:

1. I don't think SharedArray has any automatic
locking/synchronization scheme (since it would be very expensive).
2. The @sync in front of @parallel is necessary if you want to wait
for the loop to finish. This is documented in the docs for @parallel and
@sync.
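
A minimal sketch with the arrays from this thread (@sync makes the caller
wait until all of the @parallel work has finished):

power = SharedArray(Int64, 10)
@sync @parallel for i = 1:100
    if power[1] == 0
        power[1] = i   # note: no locking, so concurrent writes can still race
    end
end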


[julia-users] Re: worker waiting???

2016-04-16 Thread digxx
Sorry, I accidentally sent my post.
So the above version seems to run berserk.
If I put an @sync in front of @parallel it seems to work, but does it really?


[julia-users] worker waiting???

2016-04-16 Thread digxx
When accessing a SharedArray, do the other workers wait when, at one moment 
in time, a specific worker is changing some entry that another worker would 
like to access too?
For example:

power = SharedArray(Int64, 10)
@parallel for i = 1:100
    if power[1] == 0
        power[1] = something
    end
end


[julia-users] Re: asynchronous reading from file

2016-04-16 Thread James Fairbanks

Hi Tomas,

tomas writes:
> the skeleton of my current implementation looks like this
>
> rr = RemoteChannel()
> @async put!(rr, remotecall_fetch(loaddata, 2))
>
> for ii in 1:maxiter
>     # do some steps of the gradient descent
>
>     # check if the data are ready and schedule the next read
>     if isready(rr)
>         append!(dss[1], take!(rr))
>         @async put!(rr, remotecall_fetch(loaddata, 2))
>     end
> end

The example of pmap shown here uses @sync around a block with multiple
@async operations.
http://docs.julialang.org/en/release-0.4/manual/parallel-computing/#synchronization-with-remote-references

My usage for stuff like this is to wrap the I/O into a task
http://docs.julialang.org/en/release-0.4/manual/control-flow/#tasks-aka-coroutines
http://docs.julialang.org/en/release-0.4/stdlib/parallel/
I think that @async is a lower-level API than using a `Task` that calls
`produce(data)` when it has the data and another Task that calls
`consume(iotask)` on the first task. This approach is similar to Python
generators.

> I start julia as julia -p 2, therefore I expect there will be a
> processor.
The `@async` and Tasks tools work in a single process. The `@spawn`
macro sends work to different processors.

> Can anyone please explain to me what I am doing wrong?
I am sure that others know better than I do.

Here is my Task based example. I am open to suggestions to make this
example clearer.

Julia code:
##
# set up a Task to do the IO in a "pseudothread"
# read from STDIN in a loop up to 20 lines.
iotask = @task begin
    info("reading from stdin")
    for i in 1:20
        s = readline(STDIN)
        produce(s)
    end
end

# our fake computation just prepends val to our input
function f(x)
    return "val:$x"
end

# a function that takes values and applies f to them in a worker Task aka
# "pseudothread"; this function uses @task instead of creating a 0-argument
# function and passing it to Task().
function work(t::Task)
    @task begin
        for i in 1:20
            s = consume(t)
            info("worker got: $s")
            produce(f(s))
        end
    end
end

# the worker needs a handle to the IO task, which is why we create it second
worktask = work(iotask)
# schedule both tasks so that they start executing
schedule(iotask)
schedule(worktask)
# this task-based computation is based on pulling data: if we don't
# ask the worker for any results, then no computation happens.
for i in 1:20
    x = consume(worktask)
    info("computed $x")
end
##

# Results
##
> for i in {0..22}; do echo $i; done | julia taskio.jl

INFO: reading from stdin
INFO: worker got: 0
INFO: computed val:0
INFO: worker got: 1
INFO: computed val:1
INFO: worker got: 2
INFO: computed val:2
INFO: worker got: 3
INFO: computed val:3
INFO: worker got: 4
INFO: computed val:4
INFO: worker got: 5
INFO: computed val:5
INFO: worker got: 6
INFO: computed val:6
INFO: worker got: 7
INFO: computed val:7
INFO: worker got: 8
INFO: computed val:8
INFO: worker got: 9
INFO: computed val:9
INFO: worker got: 10
INFO: computed val:10
INFO: worker got: 11
INFO: computed val:11
INFO: worker got: 12
INFO: computed val:12
INFO: worker got: 13
INFO: computed val:13
INFO: worker got: 14
INFO: computed val:14
INFO: worker got: 15
INFO: computed val:15
INFO: worker got: 16
INFO: computed val:16
INFO: worker got: 17
INFO: computed val:17
INFO: worker got: 18
INFO: computed val:18
INFO: worker got: 19
INFO: computed val:19
##

Notice that only 20 lines of output appear even though the input has 22
lines. Changing the loop bounds in the code is left as an exercise to
the reader.

On Friday, April 15, 2016 at 2:23:00 PM UTC-4, pev...@gmail.com wrote:
>
> Hi All,
> I would like to implement an asynchronous reading from file.
>
> I am doing stochastic gradient descend and while I am doing the 
> optimisation, I would like to load the data on the background. Since 
> reading of the data is followed by a quite complicated parsing, it is not 
> just simple IO operation that can be done without CPU cycles.
>
> the skeleton of my current implementation looks like this
>
> rr = RemoteChannel()
> @async put!(rr, remotecall_fetch(loaddata, 2))
>
> for ii in 1:maxiter
>     # do some steps of the gradient descent
>
>     # check if the data are ready and schedule the next read
>     if isready(rr)
>         append!(dss[1], take!(rr))
>         @async put!(rr, remotecall_fetch(loaddata, 2))
>     end
> end
>
>
> nevertheless isready(rr) always returns false, which makes it look like 
> the data are never loaded.
>
> I start julia as julia -p 2, therefore I expect there will be a 
> processor.
>
> Can anyone please explain to me what I am doing wrong?
> Thank you very much.
>
> Tomas
>
>

[julia-users] Re: Pkg.update() error

2016-04-16 Thread digxx
As a sidenote: I do have Conda installed? Should I maybe NOT use Pkg 
anymore then?

Am Samstag, 16. April 2016 19:59:46 UTC+2 schrieb digxx:
>
> When using Pkg.update() I had these errors. I was just wondering because I 
> didn't have these before!?
> What went wrong and what do I need to do?
>
>

[julia-users] Pkg.update() error

2016-04-16 Thread digxx
When using Pkg.update() I had these errors. I was just wondering because I 
didn't have these before!?
What went wrong and what do I need to do?

julia> Pkg.update()
INFO: Updating METADATA...
INFO: Updating cache of ColorTypes...
INFO: Updating cache of DistributedArrays...
INFO: Updating cache of Contour...
INFO: Updating cache of Plots...
INFO: Updating cache of BinDeps...
INFO: Updating cache of StatsBase...
INFO: Updating cache of URIParser...
INFO: Updating cache of Roots...
INFO: Updating cache of DataFrames...
INFO: Updating cache of Distributions...
INFO: Updating cache of NaNMath...
INFO: Updating cache of ForwardDiff...
INFO: Updating cache of FixedPointNumbers...
INFO: Updating cache of Compat...
INFO: Updating cache of Colors...
INFO: Updating cache of DataStructures...
INFO: Updating cache of PyCall...
INFO: Updating cache of PDMats...
INFO: Updating cache of WinRPM...
INFO: Updating cache of LibExpat...
INFO: Updating cache of DualNumbers...
INFO: Computing changes...
INFO: Cloning cache of FixedSizeArrays from 
git://github.com/SimonDanisch/FixedSizeArrays.jl.git
INFO: Cloning cache of MacroTools from 
git://github.com/one-more-minute/MacroTools.jl.git
INFO: Cloning cache of Requires from 
git://github.com/one-more-minute/Requires.jl.git
INFO: Upgrading BinDeps: v0.3.20 => v0.3.21
INFO: Upgrading ColorTypes: v0.2.0 => v0.2.2
INFO: Upgrading Colors: v0.6.2 => v0.6.3
INFO: Upgrading Compat: v0.7.8 => v0.7.13
INFO: Upgrading Contour: v0.0.8 => v0.1.0
INFO: Upgrading DataFrames: v0.6.10 => v0.7.0
INFO: Upgrading DataStructures: v0.4.2 => v0.4.3
INFO: Upgrading DistributedArrays: v0.1.6 => v0.2.0
INFO: Upgrading Distributions: v0.8.9 => v0.8.10
INFO: Upgrading DualNumbers: v0.2.1 => v0.2.2
INFO: Upgrading FixedPointNumbers: v0.1.1 => v0.1.2
INFO: Installing FixedSizeArrays v0.1.0
INFO: Upgrading ForwardDiff: v0.1.4 => v0.1.6
INFO: Upgrading LibExpat: v0.1.1 => v0.1.2
INFO: Installing MacroTools v0.3.0
INFO: Upgrading NaNMath: v0.2.0 => v0.2.1
INFO: Upgrading PDMats: v0.3.6 => v0.4.1
INFO: Upgrading Plots: v0.5.1 => v0.5.4
INFO: Upgrading PyCall: v1.2.0 => v1.4.0
INFO: Installing Requires v0.2.2
INFO: Upgrading Roots: v0.1.25 => v0.1.26
INFO: Upgrading StatsBase: v0.7.4 => v0.8.0
INFO: Upgrading URIParser: v0.1.2 => v0.1.3
INFO: Upgrading WinRPM: v0.1.13 => v0.1.15
INFO: Removing ImmutableArrays v0.0.11
INFO: Building PyCall
INFO: Recompiling stale cache file 
C:\cygwin64\home\Diger\.julia\lib\v0.4\URIParser.ji for module URIParser.
INFO: Recompiling stale cache file 
C:\cygwin64\home\Diger\.julia\lib\v0.4\Conda.ji for module Conda.
INFO: Recompiling stale cache file 
C:\cygwin64\home\Diger\.julia\lib\v0.4\Compat.ji for module Compat.
INFO: Recompiling stale cache file 
C:\cygwin64\home\Diger\.julia\lib\v0.4\JSON.ji for module JSON.
INFO: Recompiling stale cache file 
C:\cygwin64\home\Diger\.julia\lib\v0.4\BinDeps.ji for module BinDeps.
INFO: Recompiling stale cache file 
C:\cygwin64\home\Diger\.julia\lib\v0.4\SHA.ji for module SHA.
INFO: Recompiling stale cache file 
C:\cygwin64\home\Diger\.julia\lib\v0.4\SHA.ji for module SHA.
WARNING: Module Compat uuid did not match cache file
WARNING: deserialization checks failed while attempting to load cache from 
C:\cygwin64\home\Diger\.julia\lib\v0.4\SHA.ji
INFO: Precompiling module SHA...
INFO: Recompiling stale cache file 
C:\cygwin64\home\Diger\.julia\lib\v0.4\SHA.ji for module SHA.
WARNING: Module Compat uuid did not match cache file
===[ ERROR: PyCall ]===

LoadError: __precompile__(true) but require failed to create a precompiled 
cache file
while loading C:\cygwin64\home\Diger\.julia\v0.4\PyCall\deps\build.jl, in 
expression starting on line 12

===
INFO: Building WinRPM
INFO: Recompiling stale cache file 
C:\cygwin64\home\Diger\.julia\lib\v0.4\BinDeps.ji for module BinDeps.
WARNING: Module Compat uuid did not match cache file
WARNING: deserialization checks failed while attempting to load cache from 
C:\cygwin64\home\Diger\.julia\lib\v0.4\BinDeps.ji
INFO: Precompiling module BinDeps...
INFO: Recompiling stale cache file 
C:\cygwin64\home\Diger\.julia\lib\v0.4\SHA.ji for module SHA.
WARNING: Module Compat uuid did not match cache file
WARNING: deserialization checks failed while attempting to load cache from 
C:\cygwin64\home\Diger\.julia\lib\v0.4\SHA.ji
INFO: Precompiling module SHA...
INFO: Recompiling stale cache file 
C:\cygwin64\home\Diger\.julia\lib\v0.4\SHA.ji for module SHA.
WARNING: Module Compat uuid did not match cache file

[julia-users] Re: logic operations

2016-04-16 Thread digxx
Thx, setdiff is what I want.
symdiff seems to satisfy
symdiff(A,B) == sort(vcat(setdiff(A,B), setdiff(B,A)))
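
With the arrays from this thread (result order shown as illustrative):

A = [1,2,3,4,5,6]; B = [1,6,7]
setdiff(A, B)   # => [2,3,4,5]   (elements of A not in B)
symdiff(A, B)   # => [2,3,4,5,7] (elements in exactly one of A and B)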

Am Samstag, 16. April 2016 18:05:39 UTC+2 schrieb digxx:
>
> Say I have 2 arrays
> A=[1,2,3,4,5,6]
> B=[1,6,7]
> and the resulting array
> C=[2,3,4,5] = A NOT B results from a logical operation
> Is there already something implemented like this?
>


[julia-users] Re: logic operations

2016-04-16 Thread Lutfullah Tomak
I think setdiff does what you want.
Ref http://docs.julialang.org/en/release-0.4/stdlib/collections/#Base.setdiff

On Saturday, April 16, 2016 at 7:05:39 PM UTC+3, digxx wrote:
>
> Say I have 2 arrays
> A=[1,2,3,4,5,6]
> B=[1,6,7]
> and the resulting array
> C=[2,3,4,5] = A NOT B results from a logical operation
> Is there already something implemented like this?
>


[julia-users] Re: logic operations

2016-04-16 Thread STAR0SS
I think you are looking for symdiff :

http://docs.julialang.org/en/release-0.4/stdlib/collections/?highlight=intersect#Base.symdiff


[julia-users] Re: logic operations

2016-04-16 Thread Matt Bauman
If I understand correctly, it looks like you're describing `C = setdiff(A, 
B)`. 
 http://docs.julialang.org/en/release-0.4/stdlib/collections/#Base.setdiff

On Saturday, April 16, 2016 at 12:05:39 PM UTC-4, digxx wrote:
>
> Say I have 2 arrays
> A=[1,2,3,4,5,6]
> B=[1,6,7]
> and the resulting array
> C=[2,3,4,5] = A NOT B results from a logical operation
> Is there already something implemented like this?
>


[julia-users] logic operations

2016-04-16 Thread digxx
Say I have 2 arrays
A=[1,2,3,4,5,6]
B=[1,6,7]
and the resulting array
C=[2,3,4,5] = A NOT B results from a logical operation
Is there already something implemented like this?


[julia-users] Re: Vectorised usage of Compose

2016-04-16 Thread Christoph Ortner
so for those who are interested, I found a partial solution in how 
VoronoiDelaunay plots meshes. Basically, one can interrupt lines and 
polygons using NaNs. For example:

points = Tuple{Float64, Float64}[]
for n = 1:size(T, 2)
    p = [X[:, T[:, n]] X[:, T[1, n]]]
    for m = 1:size(p, 2)
        push!(points, tuple(p[1, m], p[2, m]))
    end
    push!(points, tuple(NaN, NaN))
end
compose(context(units=ub), polygon(points), fill(elcol), stroke(linecol),
        linewidth(lwidth))

This works well for me - for now. Though I am sure one can improve this 
further.

Christoph



Re: [julia-users] Re: Parametric types which add or delete fields.

2016-04-16 Thread Eric Forgy
Thank you Tim! That is some excellent wisdom. I appreciate that and shared 
it with my team.

On Friday, April 15, 2016 at 10:35:02 PM UTC+8, Tim Holy wrote:
>
> Your best bet is always to benchmark. Here's how I make such decisions: 
>
> # The type-based system: 
> julia> immutable Container1{T} 
>val::T 
>end 
>
> julia> inc(::Int) = 1 
> inc (generic function with 1 method) 
>
> julia> inc(::Float64) = 2 
> inc (generic function with 2 methods) 
>
> julia> inc(::UInt8) = 3 
> inc (generic function with 3 methods) 
>
> julia> vec = [Container1(1), Container1(1.0), Container1(0x01)] 
> 3-element Array{Container1{T},1}: 
>  Container1{Int64}(1) 
>  Container1{Float64}(1.0) 
>  Container1{UInt8}(0x01) 
>
> julia> function loop_inc1(vec, n) 
>s = 0 
>for k = 1:n 
>for item in vec 
>s += inc(item.val) 
>end 
>end 
>s 
>end 
> loop_inc1 (generic function with 1 method) 
>
> # The dictionary solution 
> julia> immutable Container2 
>code::Symbol 
>end 
>
> julia> vec2 = [Container2(:Int), Container2(:Float64), Container2(:UInt8)] 
> 3-element Array{Container2,1}: 
>  Container2(:Int) 
>  Container2(:Float64) 
>  Container2(:UInt8)   
>
> julia> dct = Dict(:Int=>1, :Float64=>2, :UInt8=>3) 
> Dict(:Int=>1,:UInt8=>3,:Float64=>2) 
>
> julia> function loop_inc2(vec, dct, n) 
>s = 0 
>for k = 1:n 
>for item in vec 
>s += dct[item.code] 
>end 
>end 
>s 
>end 
> loop_inc2 (generic function with 1 method) 
>
> # The switch solution 
> julia> function loop_inc3(vec, n) 
>s = 0 
>for k = 1:n 
>for item in vec 
>if item.code == :Int 
>s += 1 
>elseif item.code == :Float64 
>s += 2 
>elseif item.code == :UInt8 
>s += 3 
>else 
>error("Unrecognized code") 
>end 
>end 
>end 
>s 
>end 
>
> loop_inc3 (generic function with 1 method) 
>
> julia> loop_inc1(vec, 1) 
> 6 
>
> julia> loop_inc2(vec2, dct, 1) 
> 6 
>
> julia> loop_inc3(vec2, 1) 
> 6 
>
> julia> @time loop_inc1(vec, 10^4) 
>   0.002274 seconds (10.17 k allocations: 167.025 KB) 
> 60000 
>
> julia> @time loop_inc1(vec, 10^5) 
>   0.025834 seconds (100.01 k allocations: 1.526 MB) 
> 600000 
>
> julia> @time loop_inc2(vec2, dct, 10^5) 
>   0.010278 seconds (6 allocations: 192 bytes) 
> 600000 
>
> julia> @time loop_inc3(vec2, 10^5) 
>   0.001561 seconds (6 allocations: 192 bytes) 
> 600000 
>
>
> So in terms of run time, the bottom line is: 
> - The "switch" version is fastest (by quite a lot), but ugly. 
> - The dictionary is intermediate. You would likely be able to do even better 
>   with a "perfect hash" dictionary, see 
>   http://stackoverflow.com/questions/36385653/return-const-dictionary 
> - The type-based solution is slowest, but not much worse than the dictionary. 
>
> Note that none of this analysis includes compilation time. If you're writing 
> a large system, the type-based one in particular will require longer JIT 
> times, whereas the first two get by with only a single type and hence will 
> need much less compilation. 
>
> Of course, if `inc` were a complicated function, it might change the entire 
> calculus here. That's really the key: what's the tradeoff between the amount 
> of computation per element and the price you pay for dispatch to a 
> type-specialized method? There is no universal answer to this question. 
>
> Best, 
> --Tim 
>
>

Re: [julia-users] Re: Mathematical Computing course in Julia

2016-04-16 Thread Peter Kovesi
Thanks Tim.  That is very good news indeed!

On Saturday, April 16, 2016 at 6:57:51 PM UTC+8, Tim Holy wrote:
>
> I agree line numbers have been a persistent problem in julia. In a language 
> that inlines as aggressively as julia, and uses macros and metaprogramming 
> to generate code, it's not a trivial problem---in some cases you might even 
> wonder what "line number" really means. 
>
> Fortunately, you'll be pleased to hear that, due to some fantastic work by 
> multiple people, in the current development version of julia your example 
> gives the following: 
>
> julia> include("/tmp/linenumbers.jl") 
> ERROR: LoadError: ArgumentError: invalid type for argument a in method 
> definition for foo at /tmp/linenumbers.jl:5 
>  in include(::ASCIIString) at ./boot.jl:234 
>  in include_from_node1(::ASCIIString) at ./loading.jl:417 
>  in eval(::Module, ::Any) at ./boot.jl:237 
> while loading /tmp/linenumbers.jl, in expression starting on line 4 
>
> Not only is the error message much easier to interpret, but every one of 
> those line numbers is correct. This appears to hold generally---in recent 
> julia-0.5 builds, I have yet to notice an incorrect line number anywhere. 
> One could construct examples where code is built up by expressions that 
> (deliberately or not) get this wrong, but for any "typical" usage this 
> appears to be fixed. 
>
> Best, 
> --Tim 
>
> On Friday, April 15, 2016 07:01:05 PM Peter Kovesi wrote: 
> > Sheehan, That's a very nice looking course but I think you are very brave 
> > to use Julia at this stage. 
> > I love the language but (at this stage of the language's development) the 
> > error reporting is highly problematic.  For example this morning I made a 
> > classic mistake 
> > 
> > function foo(a::real) # Should have been:   function foo(a::Real) 
> >  ... 
> > end 
> > 
> > The function was defined at line 998, the error was reported at line 433, 
> >  565 lines away!  The message was 
> > "ERROR: LoadError: TypeError: Tuple: in parameter, expected Type{T}, got 
> > Function" 
> > 
> > Good luck to your students! 
> > 
> > Working in Julia requires a practice of defensive incremental coding in the 
> > extreme.  Every few lines of code that are added need to be tested before 
> > carrying on.  That way you know that any errors are in the few lines of 
> > code that were just added and not at whatever spurious location was being 
> > suggested. 
> > 
> > Let me say again I love the language.  However the error reporting is a 
> > source of extreme frustration to me. 
> > 
> > A key pathway to getting Julia more widely adopted would be for it to be 
> > used for teaching purposes.  However, at the moment I fear that any attempt 
> > to do so would surely end in tears. 
> > 
> > Peter Kovesi 
> > 
> > On Friday, April 15, 2016 at 10:17:40 AM UTC+8, Sheehan Olver wrote: 
> > > I'm currently lecturing the course MATH3076/3976 Mathematical Computing at 
> > > U. Sydney in Julia, and thought that others may be interested in the 
> > > resources I've provided: 
> > > 
> > > http://www.maths.usyd.edu.au/u/olver/teaching/MATH3976/ 
> > > 
> > > The lecture notes and labs are all Jupyter notebooks.  I've also included 
> > > a "cheat sheet" of Julia commands used in the course 
> > > 
> > > http://nbviewer.jupyter.org/url/www.maths.usyd.edu.au/u/olver/teaching/MATH3976/cheatsheet.ipynb 
> > > 
> > > The course is ongoing (it's about half through) and will continue to take 
> > > shape, but any feedback is of course welcome! 
> > > 
> > > 
> > > Sheehan Olver 
>
>

Re: [julia-users] Re: Mathematical Computing course in Julia

2016-04-16 Thread Tim Holy
I agree line numbers have been a persistent problem in julia. In a language 
that inlines as aggressively as julia, and uses macros and metaprogramming to 
generate code, it's not a trivial problem---in some cases you might even 
wonder what "line number" really means.

Fortunately, you'll be pleased to hear that, due to some fantastic work by 
multiple people, in the current development version of julia your example 
gives the following:

julia> include("/tmp/linenumbers.jl")
ERROR: LoadError: ArgumentError: invalid type for argument a in method 
definition for foo at /tmp/linenumbers.jl:5
 in include(::ASCIIString) at ./boot.jl:234
 in include_from_node1(::ASCIIString) at ./loading.jl:417
 in eval(::Module, ::Any) at ./boot.jl:237
while loading /tmp/linenumbers.jl, in expression starting on line 4

Not only is the error message much easier to interpret, but every one of those 
line numbers is correct. This appears to hold generally---in recent julia-0.5 
builds, I have yet to notice an incorrect line number anywhere. One could 
construct examples where code is built up by expressions that (deliberately or 
not) get this wrong, but for any "typical" usage this appears to be fixed.

Best,
--Tim

On Friday, April 15, 2016 07:01:05 PM Peter Kovesi wrote:
> Sheehan, That's a very nice looking course but I think you are very brave
> to use Julia at this stage.
> I love the language but (at this stage of the language's development) the
> error reporting is highly problematic.  For example this morning I made a
> classic mistake
> 
> function foo(a::real) # Should have been:   function foo(a::Real)
>  ...
> end
> 
> The function was defined at line 998, the error was reported at line 433,
>  565 lines away!  The message was
> "ERROR: LoadError: TypeError: Tuple: in parameter, expected Type{T}, got
> Function"
> 
> Good luck to your students!
> 
> Working in Julia requires a practice of defensive incremental coding in the
> extreme.  Every few lines of code that are added need to be tested before
> carrying on.  That way you know that any errors are in the few lines of
> code that were just added and not at whatever spurious location was being
> suggested.
> 
> Let me say again I love the language.  However the error reporting is a
> source of extreme frustration to me.
> 
> A key pathway to getting Julia more widely adopted would be for it to be
> used for teaching purposes.  However, at the moment I fear that any attempt
> to do so would surely end in tears.
> 
> Peter Kovesi
> 
> On Friday, April 15, 2016 at 10:17:40 AM UTC+8, Sheehan Olver wrote:
> > I'm currently lecturing the course MATH3076/3976 Mathematical Computing at
> > U. Sydney in Julia, and thought that others may be interested in the
> > resources I've provided:
> > 
> > http://www.maths.usyd.edu.au/u/olver/teaching/MATH3976/
> > 
> > The lecture notes and labs are all Jupyter notebooks.  I've also included
> > a "cheat sheet" of Julia commands used in the course
> > 
> > 
> > http://nbviewer.jupyter.org/url/www.maths.usyd.edu.au/u/olver/teaching/MAT
> > H3976/cheatsheet.ipynb
> > 
> > The course is ongoing (it's about half through) and will continue to take
> > shape, but any feedback is of course welcome!
> > 
> > 
> > Sheehan Olver



[julia-users] Re: Int or Int64

2016-04-16 Thread Páll Haraldsson
On Wednesday, April 13, 2016 at 9:27:00 AM UTC, Bill Hart wrote:
>
> Int is either Int32 or Int64, depending on the machine. Int64 does still 
> seem to be defined on a 32 bit machine. In fact, even Int128 is defined.
>

Yes, this is all safe when you only have one thread, but if you plan for 
the future (threads in Julia), I wonder if Int64 on 32-bit (and 
Int128, on 32- and 64-bit) is unsafe, as it is non-atomic:

http://preshing.com/20130618/atomic-vs-non-atomic-operations/

See there "torn write". I was a little surprised that all accesses in C/C++ 
can be non-safe... so maybe that also applies to Julia.

Do I worry too much, as locks (or "lock-free" techniques) would be the way 
to guard against non-atomic access?

If you need big numbers, keep BigNum in mind, which I think should always 
be safe.

> But of course it is going to have to emulate processor instructions to do 
> 64 bit arithmetic unless the machine actually has such instructions. So it 
> could well be quite a bit slower.
>
> On Wednesday, 13 April 2016 11:09:27 UTC+2, vincent leclere wrote:
>>
>> Hi all,
>>
>> quick question: I am building a package and has been defining types with 
>> Int64 or Float64 properties.
>> Is there any reason why I should be using Int and Float instead ? (Does 
>> Int64 work on 32bits processors ?)
>> Will it be at the price of efficiency loss ?
>>
>> Thanks
>>
>

Re: [julia-users] Re: Mathematical Computing course in Julia

2016-04-16 Thread Sheehan Olver
I had IJulia installed on our lab machines in a shared folder specified in 
LOAD_PATH.  Then had the students copy the Jupyter config file to their local 
directory.  This way they could each run their own Jupyter server.
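
(The shared-folder part boils down to one line in each student's
~/.juliarc.jl -- the path here is hypothetical:

push!(LOAD_PATH, "/shared/julia/packages")

so every account picks up the centrally installed packages.)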

This has worked fairly well.  If students want to install on their own 
machines, then that's their own responsibility. 

I avoided JuliaBox as I ran into issues with it in the past.



Sent from my iPhone

> On 16 Apr 2016, at 19:34, Daniel Carrera  wrote:
> 
> 
>> On 16 April 2016 at 10:33, Tamas Papp  wrote:
>> 
>> In a university you usually have an IT department who would maintain the
>> server; it is very unusual that this kind of task would be the
>> responsibility of the lecturer. Many universities already run Jupyter so
>> it is easy to add Julia, and if they don't, setting it up is not that
>> big of a deal for IT -- you should be able to convince them.
> 
> 
> To start with, I am not a lecturer (other than in the sense that I do a 
> little bit of teaching). I have never seen a university that serves a new web 
> app for 50,000 students campus-wide because one PhD student in astronomy with 
> a class of 30 doesn't like Matlab. When I am a lecturer in charge of a course 
> I would have freedom to choose the course software, and I will be able to 
> ditch Matlab because I don't like it. At that moment, I would probably tell 
> the students to use JuliaBox or to install Julia+Jupyter on their computer 
> using Steven's instructions.
> 
> Cheers,
> Daniel.
> 


[julia-users] build.jl help

2016-04-16 Thread Bart Janssens
Hi,

I use CMake to build code from the deps directory using BinDeps. An issue with 
this is that when the C++ code is changed, the package is not rebuilt because 
the existence of the library is still detected. Is there an elegant way to 
force a make each time Pkg.build is called? The only way I see now is to remove 
the library each time.

Cheers,

Bart
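
For reference, a minimal sketch of that workaround at the top of
deps/build.jl, so BinDeps no longer detects the stale library and re-runs
the build (paths and library name are hypothetical):

libname = "libmycxx." * (OS_NAME == :Darwin ? "dylib" : "so")
libpath = joinpath(dirname(@__FILE__), "usr", "lib", libname)
isfile(libpath) && rm(libpath)   # force the CMake/make step to run again

# ... the usual BinDeps/CMake build logic follows ...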

Re: [julia-users] Re: Mathematical Computing course in Julia

2016-04-16 Thread Daniel Carrera
On 16 April 2016 at 10:33, Tamas Papp  wrote:

>
> In a university you usually have an IT department who would maintain the
> server; it is very unusual that this kind of task would be the
> responsibility of the lecturer. Many universities already run Jupyter so
> it is easy to add Julia, and if they don't, setting it up is not that
> big of a deal for IT -- you should be able to convince them.
>


To start with, I am not a lecturer (other than in the sense that I do a
little bit of teaching). I have never seen a university that serves a new
web app for 50,000 students campus-wide because one PhD student in
astronomy with a class of 30 doesn't like Matlab. When I am a lecturer in
charge of a course I would have freedom to choose the course software, and
I will be able to ditch Matlab because I don't like it. At that moment, I
would probably tell the students to use JuliaBox or to install
Julia+Jupyter on their computer using Steven's instructions.

Cheers,
Daniel.


[julia-users] Re: SIMD multicore

2016-04-16 Thread Valentin Churavy
BLAS is using a combination of SIMD and multi-core processing. Multi-core 
(threading) is coming in Julia v0.5 as an experimental feature. 
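
For reference, a minimal sketch of that experimental API (start julia with,
e.g., JULIA_NUM_THREADS=4; the function itself is only an illustration):

function threaded_add!(c, a, b)
    Threads.@threads for i in eachindex(c)
        @inbounds c[i] = a[i] + b[i]
    end
    return c
end

a = rand(10^6); b = rand(10^6); c = similar(a)
threaded_add!(c, a, b)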

On Saturday, 16 April 2016 14:13:00 UTC+9, Jason Eckstein wrote:
>
> I noticed in Julia 0.4 that if you call A+B, where A and B are matrices of 
> equal size, the LLVM code shows vectorization, indicating it is equivalent 
> to writing my own function with an @simd-tagged for loop.  I still 
> notice, though, that it uses a single core to maximum capacity but never 
> spreads an SIMD loop out over multiple cores.  In contrast, if I use BLAS 
> functions like gemm! or even just A*B, it will use every core of the 
> processor.  I'm not sure if these linear algebra operations also use SIMD 
> vectorization, but I imagine they do, since BLAS is very optimized.  Is there 
> a way to write an SIMD loop that spreads the data out across all processor 
> cores, not just the multiple functional units of a single core?
>


Re: [julia-users] Re: Mathematical Computing course in Julia

2016-04-16 Thread Tamas Papp
On Fri, Apr 15 2016, Daniel Carrera wrote:

> As for jupyterhub, I honestly had not heard of it until you mentioned it
> just now. The website says that it's a multi-user server. I definitely
> don't want to go down that route. I wouldn't want to be responsible for
> running a server where students are supposed to do their work. But I think
> JuliaBox is probably a good option. I guess my thoughts of Jupyter were
> tainted by the experience with the Macbook and R.

In a university you usually have an IT department who would maintain the
server; it is very unusual that this kind of task would be the
responsibility of the lecturer. Many universities already run Jupyter so
it is easy to add Julia, and if they don't, setting it up is not that
big of a deal for IT -- you should be able to convince them.

Best,

Tamas


Re: [julia-users] Re: Mathematical Computing course in Julia

2016-04-16 Thread cdm

how does real-time collab in Jupyter (perhaps with a Julia kernel ...) with 
multiple synced cursors sound?

   https://twitter.com/sagemath/status/713399552649338880


{straight wizardry, yo ...}


Re: [julia-users] Re: Mathematical Computing course in Julia

2016-04-16 Thread cdm

it may be worth noting that the Sage Math Cloud would appear to be a fine 
environment for this sort of instruction ...


see:

   https://groups.google.com/d/msg/sage-cloud/V9QGAg-dJ3Q/8ly0Ca4OEAAJ

and/or

   https://cloud.sagemath.com

   http://www.beezers.org/blog/bb/2015/09/grading-in-sagemathcloud/

   https://github.com/sagemathinc/smc/wiki/Teaching

   https://cloud.sagemath.com/help


Julia seems to run fine for me on SMC and

W. Stein is doing simply brilliant work ...