It is my pleasure to announce that the call for proposals for JuliaCon 2017
is now open.
Please visit http://juliacon.org/2017/cfp for all the details and the
submission form.
TL;DR
All speakers welcome!
Call for Participation closes: March 25th, 11:59pm AOE (UTC-12)
Estimated
I forgot to post the link to the iOS 10 problem
https://meta.discourse.org/t/discourse-1-7-0-beta4-problems-due-to-removal-of-fixed-zoom-safari-iphone/50145
On Mon, 21 Nov 2016 at 22:39 Valentin Churavy <v.chur...@gmail.com> wrote:
You can still reply to the current threads, but you should be unable to create
new threads.
Re: Discourse mobile view. I just tested it on my Android phone and I also
cannot zoom in. Switching to Desktop mode works. On Android, Chrome has
an option in its accessibility settings to "Force enable
The group is now in read-only mode. Please continue the discussion at
discourse.julialang.org.
On Wednesday, 16 November 2016 23:48:59 UTC+9, Valentin Churavy wrote:
>
> It is similar to GitHub. You can also switch languages with ```python; Julia
> is set as the default though :)
>
C+2, Uwe Fechner wrote:
>>>
>>> Hello,
>>> how can I paste Julia code in Discourse, such that it has syntax
>>> highlighting?
>>>
>>> Uwe
>>>
>>> On Wednesday, November 16, 2016 at 3:45:55 AM UTC+1, Valentin Churavy
>
I would like to accelerate the move of `julia-users` to
https://discourse.julialang.org. In the original announcement
(https://groups.google.com/d/msg/julia-users/Ov1J6MOVly0/cD7vNKOUAgAJ) I
mentioned an evaluation period of 4 weeks, but since then members of the
community have been approaching
Welcome to Julia.
Similar to other programming languages, you can (but do not need to) use
a semicolon as a separator between statements.
So your code example is equivalent to:
tri=base=5
height=10
1/2*base*height
so there is no assignment to tri after the first line, and thus the value
is
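To make the chaining explicit, here is a minimal runnable sketch (variable names taken from the example above):

```julia
# Chained assignment binds every name to the same value, and the
# semicolon separates two statements on one line.
tri = base = 5; height = 10
area = 1/2 * base * height
println(area)          # 25.0 -- uses base and height, not tri
```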
`julia-dev` has now moved to discourse.
One possibility would be to start forwarding posts for `julia-users` from
Google Groups to Discourse, but that is a one-way street. So before I
enable that I would like to hear what everybody thinks about it.
On Wed, 9 Nov 2016 at 02:01 Tom Breloff
See the about page, https://discourse.julialang.org/about there is a
specific section about privacy https://discourse.julialang.org/privacy
Let me know if you have any questions or concerns.
On Sat, 5 Nov 2016 at 22:15 Robert DJ wrote:
> I would love to get away from GG,
The Julia community has been growing rapidly over the last few years and
discussions are happening at many different places: there are several
Google Groups (julia-users, julia-dev, ...), IRC, Gitter, and a few other
places. Sometimes packages or organisations also have their own forums and
Since KNL is a very new platform, the default version of the LLVM compiler
that Julia is based on does not support it properly.
During our testing at MIT we found that we needed to switch to the current
upstream of LLVM (or, if anybody reads this at a later time, LLVM 4.0).
You can do that by
If you want explicit SIMD, the best way right now is the great SIMD.jl
package https://github.com/eschnett/SIMD.jl; it builds on top of
VecElement.
In many cases we can perform automatic vectorisation, but you have to start
Julia with -O3.
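As a sketch of the automatic route (no extra packages; start Julia with `-O3` for best results), a reduction where `@inbounds` and `@simd` give LLVM room to vectorise:

```julia
# @inbounds drops bounds checks and @simd permits reassociation of
# the reduction; both help LLVM emit vector instructions.
function vsum(xs::Vector{Float64})
    s = 0.0
    @inbounds @simd for i in eachindex(xs)
        s += xs[i]
    end
    return s
end

vsum(ones(1000))   # 1000.0
```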
On Thursday, 13 October 2016 22:15:00 UTC+9, Florian
A great tool for figuring out what is going on in these cases is
`@code_llvm`. It shows you a representation of your code that is still
readable, but very close to the machine.
Your simple Julia code without `@simd` is nearly optimal, but does
benefit from the inclusion of `@inbounds`.
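For example (illustrative; the exact IR you see depends on your Julia and LLVM versions):

```julia
# Compare the generated IR with and without bounds checks.
f(xs) = xs[1] + xs[2]
g(xs) = @inbounds xs[1] + xs[2]

@code_llvm f([1.0, 2.0])   # typically includes branches for the bounds checks
@code_llvm g([1.0, 2.0])   # mostly straight-line loads and an fadd
```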
I would recommend that you look into
ClusterManagers.jl https://github.com/JuliaParallel/ClusterManagers.jl; it
includes a simple startup script for using it with Slurm.
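A minimal sketch of what that can look like (assuming ClusterManagers.jl's `SlurmManager`; the partition and time values are placeholders, and the keyword names should be checked against the package README):

```julia
using Distributed          # in 0.x-era Julia this lived in Base
using ClusterManagers

# Request 4 workers through Slurm; extra keywords are forwarded to srun.
addprocs(SlurmManager(4), partition="debug", t="00:10:00")

@everywhere println("hello from worker ", myid())
```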
On Monday, 16 May 2016 23:27:21 UTC+9, John Hearns wrote:
>
> I hope this is an acceptable question...
> I am starting out
Hey,
I am one of the maintainers of OpenCL.jl and I would recommend you look
at https://github.com/JuliaGPU/OpenCL.jl, https://github.com/JuliaGPU/CLFFT.jl,
and https://github.com/JuliaGPU/CLBLAS.jl.
We welcome any improvements and help. I can only speak for myself in that
most of the
BLAS uses a combination of SIMD and multi-core processing. Multi-core
(threading) is coming in Julia v0.5 as an experimental feature.
On Saturday, 16 April 2016 14:13:00 UTC+9, Jason Eckstein wrote:
>
> I noticed in Julia 4 now if you call A+B where A and B are matrices of
> equal size,
Also take a look at https://github.com/dmlc/MXNet.jl/pull/73 where I
implemented debug_str in Julia so that you can test your network on its
space requirements.
On Friday, 8 April 2016 15:54:06 UTC+9, Valentin Churavy wrote:
What happens if you set the batch_size to 1? Also take a look
at https://github.com/dmlc/mxnet/tree/master/example/memcost
Also workspace is per convolution and you should keep it small.
On Thursday, 7 April 2016 19:13:36 UTC+9, kleinsplash wrote:
>
> Hi,
>
> I have a memory error using
One could use categories as channels. Rust has two different Discourse
instances for internal and user discussions. But in practice categories can
have subcategories, so we could have a tree-like structure that also gives a
namespace for package organisations like JuliaGPU.
Congratulations Simon :) Now let's get quickly to a Vulkan backend
On Saturday, 27 February 2016 22:46:56 UTC+9, Simon Danisch wrote:
>
> Thanks :)
>
> >I wonder if it will be the Matplotlib of Julia
>
> While possible, it will need quite a bit of work :) Offering the
> flexibility and usability
Just a small idea: I would make makedict also take a type, so that
makedict(1024) also works. Alternatively, using typemax(UInt8) would remove
the dependence on the magic number.
On Saturday, 2 January 2016 03:05:19 UTC+9, Erik Schnetter wrote:
>
> Julia distinguishes between two kinds of
It fits in the same niche that Mocha.jl and MXNet.jl are filling right now.
MXNet is a ML library that shares many of the same design ideas of
TensorFlow and has great Julia support https://github.com/dmlc/MXNet.jl
On Wednesday, 11 November 2015 01:04:00 UTC+9, Randy Zwitch wrote:
>
> For me,
This looks great :) The only thing I am wondering about is the license. Is
the license compatible with MIT? And if I want to contribute do I have to
fill out a Contributor License Agreement? I would much prefer the
license to be one of the standard open-source ones, but I understand if
Hej Charles,
in the future please only post in one place. A lot of the people who answer
on SO are also here.
You can use the label_components function in
Images.jl
https://github.com/timholy/Images.jl/blob/master/doc/function_reference.md#label_components
To get the list of coordinates
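For example, a small sketch of labelling components and collecting their coordinates (the binary mask is made up for illustration; the coordinate-collection step uses plain Base functions):

```julia
using Images

bw = falses(8, 8)
bw[2:3, 2:3] .= true            # one blob
bw[6:7, 5:7] .= true            # another blob

labels = label_components(bw)    # 0 is background, 1..n are components
coords = [findall(labels .== k) for k in 1:maximum(labels)]
```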
ode_warntype not defined
>
> Do I need to update Julia or something? I have version 0.3.11.
>
> On 20 September 2015 at 16:14, Valentin Churavy <v.ch...@gmail.com
> > wrote:
>
>> take a look at
>> @code_warntype calc_net(0, 0, 0, Dict{String,Float64}(), Dict
take a look at
@code_warntype calc_net(0, 0, 0, Dict{String,Float64}(), Dict{String,Float64}())
It tells you where the compiler has problems inferring the types of the
variables.
Problematic in this case is
b_hist::Any
b_hist_col2::Any
numB::Any
b_hist_col2_A::Any
b_hist_col2_B::Any
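For a self-contained illustration of the kind of problem `@code_warntype` flags (a made-up function, not the original `calc_net`): when a container's element type is `Any`, everything computed from it is inferred as `Any` too.

```julia
# d's value type is Any, so `s` and the return value show up
# highlighted as ::Any in the @code_warntype output.
function unstable(d::Dict{String,Any})
    s = d["a"]
    return s + 1
end

@code_warntype unstable(Dict{String,Any}("a" => 1))
```

Switching to a concretely typed container such as `Dict{String,Float64}` makes the `::Any` annotations go away.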
ZeroMQ stands for zero message queue (http://zeromq.org/) and is the
protocol used by the Jupyter/IPython backends to communicate with the
Jupyter/IPython frontend. So if there are any problems with it, you cannot
run the Julia kernel (aka the Julia backend).
Also if you have problems like that
Hej,
Did anybody try to implement finite field arithmetic in Julia? I would
be particularly interested in GF(2), i.e. arithmetic modulo 2.
Best,
Valentin
/talks/2015AprilStanford_AndreasNoack.ipynb
notebook. The arithmetic definitions are simpler for GF(2), but should be
simple modifications to the definitions in the notebook.
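To make the GF(2) case concrete, a minimal self-contained sketch (the type and operator choices are mine, not taken from the notebook):

```julia
# GF(2) has elements {0, 1}; addition is XOR and multiplication is AND.
struct GF2
    x::Bool
end
Base.:+(a::GF2, b::GF2) = GF2(a.x ⊻ b.x)
Base.:*(a::GF2, b::GF2) = GF2(a.x & b.x)

GF2(true) + GF2(true)   # GF2(false), since 1 + 1 ≡ 0 (mod 2)
```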
2015-04-24 2:50 GMT-04:00 Valentin Churavy v.ch...@gmail.com
Hej,
Did anybody tried to implement finite
* for `d`, so that the
interpolation would get back its *value*: `$(matching(:d))` does the trick.
Thanks for kicking me in the right direction =)
// T
On Tuesday, December 30, 2014 10:09:01 PM UTC+1, Valentin Churavy wrote:
So I do not totally understand the goal of what you are trying to achieve,
but could you try to do this with a macro?
macro value(d)
    @show d                         # the expression, shown at expansion time
    quote
        println("d at runtime: ", $d)
    end
end
@ngenerate N T function example{T,N}(A::Array{T,N}, xs::NTuple{N,Real}...)
    @nexprs N dim -> begin
Great work!
Do you think it would make sense to extract some common functionality and
function names in a DB package? There is https://github.com/JuliaDB/DBI.jl.
I like the interface you provided and would like to use parts of it for the
Postgres driver I am working on.
- Valentin
On
Here is Keno's comment https://news.ycombinator.com/item?id=8810146 and the
general HN discussion https://news.ycombinator.com/item?id=8809422
On Monday, 29 December 2014 17:44:35 UTC+1, Keno Fischer wrote:
I've written up some of my thoughts on the issues raised in this article
in the
There is a recent and ongoing discussion on the LLVM mailing list about
exposing scatter and load operations as LLVM intrinsics:
http://thread.gmane.org/gmane.comp.compilers.llvm.devel/79936
- Valentin
On Monday, 1 December 2014 17:43:11 UTC+1, John Myles White wrote:
This is great. Thanks,
, Valentin Churavy
v.ch...@gmail.com wrote:
So I narrowed it down to combining the system's BLAS with
Julia's SuiteSparse. In the tests I made, LAPACK has no
influence. Should I file an issue against Julia or with the
Archlinux package? What
, Valentin Churavy v.ch...@gmail.com
wrote:
A fellow Arch user here. Under which circumstances does the error occur?
E.g. what code are you executing?
And what does
pacman -Qi julia blas lapack
output
On Monday, 15 December 2014 19:14:22 UTC+1, Andrei Berceanu wrote:
Where do i need to type
December 2014 16:52:18 UTC+1, Andrei Berceanu wrote:
So if your suspicion is correct, setting USE_SYSTEM_SUITESPARSE=1 should
fix this, right?
Let me know how it goes :)
On Tuesday, December 16, 2014 3:19:43 PM UTC+1, Valentin Churavy wrote:
So your system setup is exactly the same (except me
by me and Andrei, can anybody not using Arch
try that out?
On Tuesday, 16 December 2014 17:07:55 UTC+1, Valentin Churavy wrote:
So building it from the PKGBUILD leads to the same error. I am now
building it with the same make options from the tar.gz on the Julia
download page.
Andrei we
UTC+1, Valentin Churavy wrote:
So using
make \
USE_SYSTEM_LLVM=0 \
USE_SYSTEM_LIBUNWIND=1 \
USE_SYSTEM_READLINE=0 \
USE_SYSTEM_PCRE=1 \
USE_SYSTEM_LIBM=1 \
USE_SYSTEM_OPENLIBM=0 \
USE_SYSTEM_OPENSPECFUN=0 \
USE_SYSTEM_BLAS=1 \
USE_SYSTEM_LAPACK=1
Petr,
Congratulations on achieving this. Is JFinEALE available somewhere? Maybe
as a package?
Valentin
On Tuesday, 16 December 2014 17:52:29 UTC+1, Petr Krysl wrote:
I have made some progress in my effort to gain insight into Julia
performance in finite element solvers.
I have
, 16 December 2014 18:07:23 UTC+1, Valentin Churavy wrote:
Ok setting
USE_SYSTEM_BLAS=0 \
USE_SYSTEM_LAPACK=0 \
USE_SYSTEM_SUITESPARSE=0 \
makes the problem go away. So it is the interaction between the system
BLAS/LAPACK and the bundled SuiteSparse. Are there any patches that Julia
carries
For reference the discussion about slow backtraces on windows started here
https://groups.google.com/forum/#!msg/julia-users/hNEJtracCuI/bTaKlrKLIAcJ
and the PR reducing the sampling interval can be found
https://github.com/JuliaLang/julia/pull/9241
On Sunday, 14 December 2014 23:51:41 UTC+1,
What do you think? Why is the code still getting hit with a
big
performance/memory penalty?
Thanks,
Petr
On Monday, December 8, 2014 2:03:02 PM UTC-8, Valentin Churavy
wrote:
I would think that when f is a 1x1 matrix Julia is allocating a
new
I agree that runnable code widgets on their own are useless, but showing the
good integration with Jupyter/IPython via juliabox/tmpnb/SAGE is not.
It would let us demonstrate features of Julia to people without having
them install yet another programming environment, thus reducing
I like the point: Solving P=NP reminds me of rust's
* In theory. Rust is a work-in-progress and may do anything it likes up to
and including eating your laundry.
On Wednesday, 10 December 2014 19:15:05 UTC+1, Christian Peel wrote:
One thing that I would very much appreciate is some kind of
Another nice example might be the new Haskell
homepage: http://new-www.haskell.org/
For the runnable part, maybe we could use tmpnb/juliabox to host an example
notebook. We should probably use a Docker image with the user image prebuilt,
otherwise the attention span will be over before Gadfly is loaded.
I would think that when f is a 1x1 matrix Julia is allocating a new 1x1
matrix to store the result. If it is a scalar that allocation can be
skipped. When this part of the code is in a hot loop, it might happen
that you allocate millions of very small, short-lived objects, and that taxes
the
Hi Christoph,
If you pass a function in via an argument, the compiler gets relatively
little information about that function and cannot do any optimization on
it. That is part of the reason why map(f, xs) is currently slower than a
for-loop. There are ongoing discussions on how to solve that
Julia is, in this regard, not like Matlab. In Matlab a function file with
the right name is picked up by the interpreter and loaded. In Julia you
first have to include your file [1] and then you can call using A.
The profile function is a bit hidden, but an explanation can be found here
[2]
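The two steps look like this (sketch; it assumes a file A.jl next to your script that defines `module A` with an exported `f`):

```julia
# Contents of A.jl (for reference):
#   module A
#   export f
#   f() = 42
#   end

include("A.jl")   # load the file into the current session
using .A          # then bring A's exports into scope
f()               # now callable
```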
for the typos - I am normally better at testing.)
Christoph
On Sunday, 30 November 2014 20:54:40 UTC, Valentin Churavy wrote:
I found a second error in lj_cstyle
t is calculated wrongly:
t = 1./s*s*s    # parses as ((1/s)*s)*s, which is not 1/s^3
It probably should be:
t = 1.0 / (s*s*s)
E += t*t - 2
Nice work!
Regarding the pretty Julia version of Lennard-Jones MD.
You can shave off another second (on my machine) by not passing in the lj
method as a parameter, but directly calling it.
I tried to write an optimized version of your lj_pretty function by
analysing it with @profile and
So one feature-wise argument for Juno/LightTable is the good integration of
profile(): you get an inline view of how expensive each line is, and you can
jump directly from the call tree to the appropriate line.
On Sunday, 30 November 2014 19:00:07 UTC+1, Hans W Borchers wrote:
Yes, I found
On Sunday, 30 November 2014 20:54:58 UTC+1, Valentin Churavy wrote:
Nice work!
Regarding the pretty Julia version of Lennard-Jones MD.
You can shave off another second (on my machine) by not passing in the lj
method as a parameter, but directly calling it.
I tried to write an optimized
There is also https://github.com/dcjones/Compose.jl/tree/master/examples
On Saturday, 22 November 2014 10:08:41 UTC+1, ccsv.1...@gmail.com wrote:
Are there any tutorials for Compose.jl other than the documentation page?
I want to try to make pie charts and venn diagrams.
You could use an abstract type instead of a Union:

abstract Element

type Tree
    body::Element
end

type Branch <: Element
    a::Tree
    b::Tree
end

type Leaf <: Element
    a
end
so this would create a tree:

julia> Tree(Branch(
           Tree(Leaf(:a)),
After looking
at
https://github.com/JuliaLang/julia/blob/1c4e8270646f7d5cab3d259f0464e6ea66e3574a/base/interactiveutil.jl#L11
it seems that you should set JULIA_EDITOR. It might be that your VISUAL
variable is still set to Vim.
V
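Concretely, something like this in your shell profile should do it (the editor name is just an example):

```shell
# Julia's edit() consults JULIA_EDITOR before VISUAL/EDITOR.
export JULIA_EDITOR=emacs
echo "$JULIA_EDITOR"
```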
On Wednesday, 1 October 2014 15:24:30 UTC+2, Andrei Berceanu
Hej,
so from what I gather if you set JULIA_CPU_TARGET=core2, Julia restricts
itself to the cpu features defined
here:
http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86.td?view=markup
Currently the only supported and tested options are i386, core2 and native,
whereas native
What you are looking for is described
in
http://julia.readthedocs.org/en/latest/manual/modules/#relative-and-absolute-module-paths
in P.jl you include all your submodules:

module P
    include("u.jl")
    include("a.jl")
    include("b.jl")
    using .A, .B
    export f, g
end

u.jl:

module U
    g() = 5
    f() = 6
end
There is https://github.com/JuliaLang/julia/blob/v0.3.0/NEWS.md
On Saturday, 23 August 2014 15:02:56 UTC+2, Ed Scheinerman wrote:
Is there a document describing new features and significant changes
between versions 0.2 and 0.3?
One item I noticed is that in 0.2 the expression 1:5 == [1:5]
There is also a PGF backend for Compose, which is used by Gadfly and
produces quite nice plots.
On Saturday, 23 August 2014 00:22:34 UTC+2, Kaj Wiik wrote:
Hi!
I've been using Tikz, so this package is very welcome!
However,
using PGFPlots
a=plot(rand(20),rand(20))
save("test.pdf",
Hey there,
I am trying to create an asynchronous thread-safe callback following the
guidelines
here
http://julia.readthedocs.org/en/latest/manual/calling-c-and-fortran-code/#thread-safety
Trying to do one thing after another, I tried to manually call my callback
via the thread-safe and
if you are using a
0.3 snapshot then you need to change your cb function so that it only takes
one argument, data --- the status argument was eliminated.
On Tuesday, June 10, 2014 1:55:56 PM UTC-4, Valentin Churavy wrote:
Hey there,
I am trying to create an asynchronous thread-safe callback
Valentin Churavy wrote the following:
My colleagues and I are in the midst of finishing a research paper in
which the code was mostly written in Julia. Since without Julia the work on
it would have not been as much fun or as fast I would like to give credit
where credit is due.
Is there any publication
what you're working on. I'm always interested in what
other people are up to.
Best,
Jake
On Wednesday, March 19, 2014 10:58:50 AM UTC-4, Valentin Churavy wrote:
Thanks I totally didn't see that :).
I will definitely cite the first paper there and julialang.org.
Cheers,
Valentin
Hello there,
I am looking into ways to grow a multidimensional array in Julia. Essentially
I would like to do the following:
d1=10
d2=10
A=Array(Float64, d1, d2, 0)
b=ones(d1, d2)
push!(A, b)
A[:, :, 1] == b
Is there any way to do such a thing?
According to the documentation on Dequeues the
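For what it is worth, `push!` only works on one-dimensional arrays, so the snippet above will error; a common workaround is concatenation along the third dimension (sketch, written in current Julia syntax):

```julia
d1, d2 = 10, 10
A = Array{Float64}(undef, d1, d2, 0)   # `Array(Float64, d1, d2, 0)` in 0.x
b = ones(d1, d2)
A = cat(A, b; dims=3)                  # "push" one slice by concatenating
A[:, :, 1] == b                        # true
```

Since `cat` copies, for many slices it is cheaper to collect them in a `Vector` of matrices and concatenate once at the end.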