The reason this particular form of @load now shows a warning is that it
can't be made to work properly in a function. If you can't know either the
file name or the variable names ahead of time, there are two things you
could do:
- Use the load function (e.g. x = load("data_run$(run).jld"))
Yes, I get normal output in a Python notebook.
On Sunday, November 1, 2015 at 7:57:55 PM UTC-5, Yichao Yu wrote:
>
> On Sun, Nov 1, 2015 at 7:25 PM, Simon Kornblith <si...@simonster.com
> > wrote:
> > I'm trying to figure out why IJulia appears to stopped working for me
I'm trying to figure out why IJulia appears to have stopped working for me. It
executes code, but it never prints the output. It looks like the kernel is
trying to send it back, but for some reason it's not making it back to the
client. With verbose = true, for a cell that contains println("hello
\ is supposed to use pivoted QR by default, which is a standard way to solve
least squares when X is not full rank. It gives me a LAPACKException on 0.3
too, but it works for me on 0.4:
julia> X2 = [1 2 3;
1 2 5
1 2 -8
2 4 23];
julia> y = [2.3, 5.6, 9.2,
sion of Julia than
> the one I'm using, so I can't verify it myself with your code, but in 0.3 I
> found using the @inbounds tag decreased performance each time.
>
> Thanks for your attention.
>
> On Saturday, September 12, 2015 at 4:38:50 PM UTC-4, Simon Kornblith wrote:
With some tweaks I got a 12x speedup. Original (using Iain's bench with
10 iterations):
0.639475 seconds (4.10 M allocations: 279.236 MB, 1.96% gc time)
0.634781 seconds (4.10 M allocations: 279.236 MB, 1.90% gc time)
With 9ab84caa046d687928642a27c30c85336efc876c
readall(`cat test`) or similar
On Friday, September 11, 2015 at 7:56:43 PM UTC-4, J Luis wrote:
>
> Ok, I've spent about an hour around "run", "open", "run(pipeline(..." but
> no way.
> In Matlab I would do
>
> [status, cmdout] = system(cmd);
>
> but in Julia the most I can reach is to run
There is no conversion from Pair to Tuple. The construction:
(a, b, c) = d
works for any iterable collection d. The same holds for the for loop
construction.
There used to be a type inference issue, but I fixed it in
https://github.com/JuliaLang/julia/pull/12493. The output of code_warntype
ArrayViews doesn't support indexing with ranges, vectors, etc. on Julia
0.3, although this should work on Julia 0.4. (Also on 0.4, SubArrays
created with sub/slice should be equally fast as ArrayViews, and both
should be faster than on 0.3.) On 0.3 you need to write an explicit loop to
set
Yichao, Oscar, and I were unhappy with the current state of vectorization
of operations involving complex numbers and other immutables so I decided
to do something about it. I'm pleased to announce StructsOfArrays.jl
https://github.com/simonster/StructsOfArrays.jl, which performs the Array
of
julia> foo(; d...)
2-element Array{Any,1}:
(:a,97)
(:b,95)
Note the semicolon. Otherwise the Dict is splatted as positional arguments.
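A self-contained sketch of the semicolon's effect (hypothetical function names):

```julia
# A function taking keyword arguments:
f(; a=0, b=0) = a + b

d = Dict(:a => 1, :b => 2)

# With the semicolon, the Dict's pairs are splatted as keyword arguments:
@assert f(; d...) == 3

# Without it, they would be splatted as positional arguments,
# which this method does not accept.
```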
Simon
On Tuesday, March 31, 2015 at 3:29:47 PM UTC-4, Michael Turok wrote:
Is there a way to pass in a dictionary for the keyword arguments that have
a
Keyword arguments can add additional overhead when calling a function and
prevent its return type from being inferred properly, but the code that
runs is specialized for the types of the keyword arguments, so I don't
think keyword arguments alone explain this. But it looks like there is some
You can just call glm(X, y, Normal(), LogLink()) directly with a design
matrix X and response vector y, e.g.:
julia> glm(randn(100, 3), rand(100), Normal(), LogLink())
GLM.GeneralizedLinearModel:
Coefficients:
Estimate Std.Error z value Pr(>|z|)
x1 0.0179427 0.0574712 0.312202 0.7549
You probably want beginswith, which was recently renamed to startswith but
only in Julia 0.4. (Make sure you look at the appropriate docs
http://julia.readthedocs.org/en/release-0.3/stdlib/strings/ for Julia
0.3; there's a version selector in the bottom right.) You can also use the
Compat
https://github.com/stevengj/PyCall.jl/pull/110
On Sunday, January 4, 2015 9:34:14 AM UTC-5, John Zuhone wrote:
Steven,
How difficult would it be to work a way to suppress this warning message?
In general I would argue that it's best to avoid printing warnings to the
screen unless there is
This may be a bit greedy, but I'd rather that type instability were less of
a performance bottleneck. There are some optimizations that could be done
to address this:
- Specialize loops inside functions (#5654
https://github.com/JuliaLang/julia/issues/5654). This would solve most
On Friday, January 2, 2015 2:59:04 PM UTC-5, Douglas Bates wrote:
For many statistics-oriented Julia users there is a great advantage in
being able to piggy-back on R development and to use at least the data sets
from R packages. Hence the RDatasets package and the read_rda function in
You can use:
using HypothesisTests
ci(BinomialTest(x, n))
Several methods of constructing binomial confidence intervals are
implemented; see the docs
http://hypothesistestsjl.readthedocs.org/en/latest/api/ci.html#ci-binomial
.
Simon
On Wednesday, December 31, 2014 12:21:24 PM UTC-5, Jerry
In general, arrays cannot be assumed to be 16-byte aligned because it's
always possible to create an unaligned one using pointer_to_array. However,
from Intel's AVX introduction
https://software.intel.com/en-us/articles/introduction-to-intel-advanced-vector-extensions
:
Intel® AVX has relaxed
Is there an easy way to display a polygon mesh in Julia, i.e., vertices and
faces loaded from an STL file or created by marching tetrahedra using
Meshes.jl? So far, I see:
- PyPlot/matplotlib, which seems to be surprisingly difficult to
convince to do this.
- GLPlot, which doesn't
On Tuesday, October 21, 2014 6:41:24 PM UTC-4, David van Leeuwen wrote:
I replaced the `γ = broadcast()` with the lines below that. No globals,
but perhaps the field type gmm.μ is spoiling things. I am not sure if this
is a case of an abstractly typed field
type GMM{T<:FloatingPoint}
On Sunday, October 19, 2014 5:25:34 PM UTC-4, Greg Plowman wrote:
Hi,
I have several general questions that came up in my first foray into Julia.
Julia seems such a delight to work with, things seems to work magically
and lots of details are not required or implicitly assumed.
Whilst
Probably https://github.com/JuliaLang/julia/issues/8631 (although I think
this example alone runs fine; maybe you're loading Color.jl somewhere?)
Simon
On Friday, October 17, 2014 5:21:30 PM UTC-4, Spencer Lyon wrote:
Consider this very simple example
if nprocs() == 1
addprocs(2)
end
It's not really very inefficient. Compare:
julia> f(x, y, z) = return x^2, y^3, z^4;
julia> g(a, b, c) = f(a, b, c)[3];
julia> @code_llvm g(1.0, 1.0, 1.0)
define double @julia_g;767954(double, double, double) {
top:
%3 = call double @pow(double %2, double 4.00e+00), !dbg !525
ret double
As has been discussed here in the past, dimension may be an ambiguous
term. A matrix has two dimensions, so you're circularly shifting the first
dimension by 2 (which has no effect since the rows are identical) and the
second by 1 (which shifts [1 2 3 4 5] to [5 1 2 3 4]). There are no third,
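The shift described above can be checked directly; a small sketch with two identical rows:

```julia
A = [1 2 3 4 5;
     1 2 3 4 5]

# Shift dim 1 by 2 (a no-op for two identical rows) and dim 2 by 1:
B = circshift(A, (2, 1))

@assert B == [5 1 2 3 4;
              5 1 2 3 4]
```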
Or alternatively:
std(reshape(A, 10, div(length(A), 10)), 1)
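The same reshape trick in current Julia, where std lives in the Statistics stdlib and takes a dims keyword instead of a positional dimension:

```julia
using Statistics

A = collect(1.0:40.0)

# std of each consecutive block of 10 elements, via a 10x4 reshape:
s = std(reshape(A, 10, div(length(A), 10)), dims=1)

@assert size(s) == (1, 4)
# Every block is a run of 10 consecutive numbers, so each std is equal:
@assert all(x -> isapprox(x, std(1.0:10.0)), s)
```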
Simon
On Thursday, October 9, 2014 7:10:11 PM UTC-4, Patrick O'Leary wrote:
optionally *along dimensions in region* (emphasis mine). You are
attempting to read along the tenth dimension of the array.
You're trying to split the
There is most certainly a type problem. You're not getting type information
for sparse_grid.lvl_l which deoptimizes a lot of things. In your code, you
have:
type sparse_grid
    d::Int64
    q::Int64
    n::Int64
    grid::Array{Float64}
    ind::Array{Int64}
end
You're right that microbenchmarks often do not reflect real-world
performance, but in defense of using the sample mean for benchmarking, it's
a good estimator of the population mean (provided the distribution has
finite variance), and if you perform an operation n times, it will take
about nμ
But then the question is why we define specialized versions for Bool rather
than using the methods for Real in number.jl. For all but abs LLVM is smart
enough to optimize the methods for Real to no-ops.
Simon
On Thursday, September 11, 2014 5:27:10 AM UTC-4, Tim Holy wrote:
It's because
Yup. The reason this works when oVector is a Vector{Float64} is that Julia
makes a copy in the process of converting it to a Vector{Int} when you
construct the Cell.
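A sketch of the copy-on-convert behavior with a hypothetical Cell type (modern struct syntax; the era's type keyword behaves the same way here):

```julia
# Hypothetical container with a concretely typed field:
struct Cell
    v::Vector{Int}
end

x = [1.0, 2.0]
c = Cell(x)            # convert(Vector{Int}, x) allocates a new array
c.v[1] = 99
@assert x[1] == 1.0    # the original Float64 vector is untouched

y = [1, 2]
c2 = Cell(y)           # already a Vector{Int}: no copy, same array
c2.v[1] = 99
@assert y[1] == 99     # mutating the field is visible through y
```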
Simon
On Thursday, September 11, 2014 12:53:52 AM UTC-4, John Myles White wrote:
This sure looks like you're not making any
Actually, if you want this to be fast I don't think you can avoid the if y
== 2 branch, although ideally it should be outside the loop and not inside
it. LLVM will optimize x^2, but if it doesn't know what the y in x^y is at
compile time, it will just call libm. It's not going to emit a branch
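One way to hoist that branch, as a sketch (hypothetical function name):

```julia
# Hoist the y == 2 test out of the hot loop so the common case compiles
# to a multiply instead of a generic power (which may call libm).
function sumpow(xs::Vector{Float64}, y::Int)
    s = 0.0
    if y == 2
        for x in xs
            s += x * x
        end
    else
        for x in xs
            s += x^y
        end
    end
    return s
end

@assert sumpow([1.0, 2.0, 3.0], 2) == 14.0
@assert sumpow([1.0, 2.0, 3.0], 3) == 36.0
```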
This was just added to Julia 0.4 yesterday, which is why it's not defined
in Julia 0.3 :). In general if you're using Julia 0.3 you should be looking
at the release-0.3 docs and not latest (click at the lower right on
julia.readthedocs.org to change).
Simon
On Monday, September 8, 2014
I don't think there's anything wrong with specializing small powers. We
already specialize small powers in openlibm's pow and even in inference.jl.
If you want to compute x^2 at maximum speed, something, somewhere needs
that when the power is 2 it should just call mulsd instead of doing
Here's an interesting comparison:
https://gist.github.com/simonster/6195af68c6df33ca965d
idiv for 64-bit integers is one of the most expensive extant x86-64
instructions. For 32-bit integers, it is much cheaper, and this function
runs nearly twice as fast. When LLVM knows the divisor in
See my answer. If you don't mind a copy/pasting a bunch of bit twiddly code
from Base, this can be efficiently devectorized, and that turns out to be a
decent perf win. But in this case the devectorized code is quite difficult
to decipher and Stefan Schwarz's point definitely applies.
Simon
It seems to have to do with the way the unparametrized type gets
interpolated into the AST. Changing
for (fp, fpc) in [(DoublelengthFloat{Float64}, DoublelengthFloat)]
to
for (fp, fpc) in [(DoublelengthFloat{Float64}, :DoublelengthFloat)]
on line 108 makes g and h perform equivalently for
Does it still not work to use 0.4.0-dev+n as the version in the REQUIRE
file? This used to almost work, but some of the nightlies were missing the
commit number. It certainly seems easier than searching through all the
hashes, although I don't know the git command to get the commit number for
From that script it looks like it's
git rev-list commit ^v0.3.0-rc3 | wc -l
On Wednesday, August 20, 2014 9:22:36 PM UTC-4, Gray Calhoun wrote:
On Wednesday, August 20, 2014 12:56:42 PM UTC-5, Simon Kornblith wrote:
Does it still not work to use 0.4.0-dev+n as the version in the REQUIRE
unique with a dim argument actually computes this as byproduct but does not
return it. All we need is an API.
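A sketch of what such an API could return for rows, as a hypothetical helper (not the Base implementation): the unique rows plus the index of each row's first occurrence.

```julia
# Hypothetical helper: unique rows of a matrix together with the
# first-occurrence index of each unique row.
function uniquerows(A::AbstractMatrix)
    seen = Set{Vector{eltype(A)}}()
    idx = Int[]
    for i in 1:size(A, 1)
        row = A[i, :]
        if !(row in seen)
            push!(seen, row)
            push!(idx, i)
        end
    end
    return A[idx, :], idx
end

U, idx = uniquerows([1 2; 3 4; 1 2])
@assert U == [1 2; 3 4]
@assert idx == [1, 3]
```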
On Monday, August 11, 2014 1:27:57 PM UTC-4, Jacob Quinn wrote:
There's an open issue about it here:
https://github.com/JuliaLang/julia/issues/1845
I've also played around with a
) != (@nref N A i)
nowcollided[k] = true
end
end
(collided, nowcollided) = (nowcollided, collided)
end
end
sort!(uniquerows)
end
On Monday, August 11, 2014 2:43:24 PM UTC-4, Simon Kornblith wrote:
unique with a dim argument
This doesn't seem quite right: assuming the model has an intercept, SStot
is traditionally the sum of squares of the intercept-only model (i.e.,
sumabs2(y - mean(y))). You can see this if you add a constant to the first column of dd,
which should not change R^2 but instead results in an
Assuming you have enough memory to write a BitArray to the JLD file
initially, if you later open the JLD file with mmaparrays=true and read it,
JLD will mmap the underlying Vector{Uint64} so that pieces are read from
the disk as they are accessed. (The actual specifics of how this works are
up
On Friday, August 1, 2014 6:23:59 AM UTC-4, Milan Bouchet-Valat wrote:
On Thursday, July 31, 2014 at 21:19 -0700, John Myles White wrote:
To address Simon’s general points, which are really good reasons to avoid
jumping on the Option{T} bandwagon too soon:
* I agree that most
Does that mean instead of carrying around a function I will need to carry
around the Expr?
Thanks!
On Friday, August 1, 2014 11:17:50 AM UTC-4, Simon Kornblith wrote:
Assuming you're generating code that calls fn, as opposed to trying to
call it when generating code in the macro (usually
the
compiler will handle it, I admit I have only skimmed this thread.
On Fri, Aug 1, 2014 at 9:18 AM, Simon Kornblith si...@simonster.com
wrote:
On Friday, August 1, 2014 6:23:59 AM UTC-4, Milan Bouchet-Valat wrote:
On Thursday, July 31, 2014 at 21:19 -0700, John Myles White wrote
I suspect you are running out of RAM and your system is thrashing
http://en.wikipedia.org/wiki/Thrashing_%28computer_science%29.
On Thursday, July 31, 2014 3:06:31 AM UTC-4, K leo wrote:
Sorry, but really don't know what is going on. Today I had two sessions
of julia each running a program
of type T, you need to explicitly say how you're going to
handle any missingness so that the system only interacts with values of
type T.
I should note that I'm not very sure the use of Options is the right
approach: Simon Kornblith has argued very persuasively for waiting for the
compiler
Presumably we could use the same global or let-scoped array rather than
allocating a new array on each call.
On Monday, July 28, 2014 11:39:44 AM UTC-4, Simon Byrne wrote:
Yes, I would agree: as Elliot mentioned, you might get some gain by only
doing the range-reduction once.
Looking at
isdefined(myobject, :myfield)
On Tuesday, July 22, 2014 6:54:00 PM UTC-4, Ben Ward wrote:
I have a type that contains references to other instances of the same type
- like a doubly linked list or - as it's intended use, like a tree like
structure. They contain references to a parent of the
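A sketch of the isdefined idiom for self-referential types, using incomplete initialization so the root can leave its parent unset (hypothetical Node type):

```julia
# Incomplete initialization: the inner constructor leaves parent unassigned.
mutable struct Node
    value::Int
    parent::Node
    Node(v::Int) = new(v)   # parent left undefined for the root
end

root = Node(1)
@assert !isdefined(root, :parent)

child = Node(2)
child.parent = root
@assert isdefined(child, :parent)
```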
Bug filed: https://github.com/JuliaLang/julia/issues/7679
On Sunday, July 20, 2014 6:08:46 PM UTC-4, Keno Fischer wrote:
I don't recall that being changed again intentionally.
On Sun, Jul 20, 2014 at 2:18 PM, Cyril Slobin cyril@gmail.com wrote:
Sorry, I've done it wrong
The fact that append! grows the array on failure seems like a bug
nonetheless. If convert throws it seems preferable to leave the array as
is. I'll file an issue.
Simon
On Thursday, July 17, 2014 9:34:21 AM UTC-4, Jacob Quinn wrote:
Hi Jan,
You have your syntax a little mixed up. The usage
See also https://github.com/JuliaLang/julia/issues/6561
On Monday, July 14, 2014 10:48:42 AM UTC-4, Mauro wrote:
Using type parameters works for me:
julia> f{T<:Float64}(a::Array{(T,T)}) = eltype(a)
f (generic function with 1 method)
julia> f([(3.,4.), (4.,5.)])
(Float64,Float64)
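The equivalent in current syntax, where tuple types are written Tuple{T,T} and method type parameters use where:

```julia
# Same dispatch, modern spelling:
f(a::Array{Tuple{T,T}}) where {T<:Float64} = eltype(a)

@assert f([(3.0, 4.0), (4.0, 5.0)]) == Tuple{Float64,Float64}
```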
delete!(A, [:C,:D])
On Tuesday, July 1, 2014 7:38:00 PM UTC-4, Andre P. wrote:
I figured out one way to do this...
B = A[:, setdiff(names(A),[symbol(D), symbol(E)])] # removes columns C
D using column names
A less verbose way?
On Wednesday, July 2, 2014 7:01:56 AM UTC+9, Andre P.
include evaluates at top-level, so this would only work if foo were a
global variable. It is not possible to include in a function context for the
same reason it is not possible to eval in a function context.
Simon
On Thursday, June 26, 2014 1:03:00 PM UTC-4, Tomas Lycken wrote:
I have the
According to my VML.jl benchmarks
(https://github.com/simonster/VML.jl#performance), VML tanh is ~5x
faster for large arrays even with only a single core.
(VML.jl is currently only single threaded because I haven't figured out how
to get multithreading without ccalling into MKL, which could lead
vec should have minimal overhead. (Unlike [:], it doesn't make a copy of
the underlying data.)
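A quick sketch of the aliasing difference:

```julia
A = [1 2; 3 4]

v = vec(A)        # same memory as A, just reshaped to a vector
v[1] = 99
@assert A[1, 1] == 99   # the write is visible through A

w = A[:]          # indexing with a colon makes a copy
w[2] = -1
@assert A[2, 1] == 3    # A is unaffected
```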
On Sunday, May 11, 2014 6:00:10 PM UTC-4, Ethan Anderes wrote:
Thanks for the response. Still not sure I understand what you mean.
readcsv returns Array{T, 2} where T is determined from the file
It seems to me that the massive difference in performance between
homogeneous and heterogeneous arrays is at least in part a characteristic
of the implementation and not the language. We currently store
heterogeneous arrays as arrays of boxed pointers and perform function calls
on values taken
If diag is passed a vector rather than a matrix, we already give a good
error message:
julia> diag([1, 2, 3, 4])
ERROR: use diagm instead of diag to construct a diagonal matrix
in diag at linalg/generic.jl:49
It wouldn't hurt to have this in the docs, though.
On Sunday, April 27, 2014 4:07:52
You can do:
[1, 2, (flag ? 3 : [])]
or:
tuple(1, 2, (flag ? (3,) : ())...)
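The tuple version behaves like this (hypothetical wrapper function for illustration):

```julia
# Splatting an empty tuple contributes nothing, so the element is optional:
build(flag) = tuple(1, 2, (flag ? (3,) : ())...)

@assert build(true)  == (1, 2, 3)
@assert build(false) == (1, 2)
```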
On Friday, April 25, 2014 7:35:49 PM UTC-4, andrew cooke wrote:
really i'm asking if there's an idiomatic way to do the kind of thing you
do with linked lists (usually, in functional languages) in julia...
On
They still look like function calls to me, but given the performance
difference, I would be surprised if they are as accurate. sin is 4x faster
for Float64, which is on par with VML with 1 ulp accuracy, but VML has the
benefit of vectorization. Indeed, if
There was a missing method for ApproximateMannWhitneyUTest. If you run
Pkg.update(), this should be fixed. (I also added some tests to make sure
that show() isn't completely broken for any of the types.)
Simon
On Saturday, April 12, 2014 12:29:22 PM UTC-4, Iain Gallagher wrote:
Hi
I'm new
Assuming avconv or ffmpeg is available on your system, you can open a pipe
to it:
pipe, process = writesto(`avconv -y -f rawvideo -pix_fmt gray -s 100x100
-r 30 -i - -an -c:v libx264 -pix_fmt yuv420p movie.mp4`)
The options are detailed in the docs; -s is the movie size and -r is the
frame
It's rarely used, but | may be what you're looking for:
julia> X = zeros(50, 50);
julia> X|size
(50,50)
julia> X|length
2500
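In later Julia versions the function-application pipe is spelled |> rather than |; the same idea looks like:

```julia
X = zeros(50, 50)

@assert (X |> size)   == (50, 50)
@assert (X |> length) == 2500
```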
On Friday, April 11, 2014 6:49:16 PM UTC-4, Ben Racine wrote:
Hi all,
I understand most of the mental maps between the common object-oriented
systems (Python,
I'm not a lawyer either, but I don't think this is any more problematic
than the current situation with inclusion of Rmath etc. The combined work
is GPL (or LGPL, once we get rid of the GPL parts), but the vast majority
of the source files are (also) MIT. This seems fine since MIT gives all of
Is there a way to get the least squares residuals for qrfact(X)\y without
having to compute X*beta, as is given by gels? I believe this should be
||Q2'y||; Q2'y appears to be computed but explicitly zeroed in A_ldiv_B!.
Simon
You could write a C interface to the C++ spline library and then ccall that
from Julia. That's probably not too much work, but I can't promise that the
interface plus the port to Julia would be less work than fixing the
segfaults.
Simon
On Wednesday, March 26, 2014 10:30:52 AM UTC-4, Tomas
Your algorithm looks fine. The problems are entirely in your testing
script. The first issue is that JSON.parse returns a Vector{Any}, which
deoptimizes everything. Try:
f = open("test.json")
data = convert(Vector{Float64}, JSON.parse(f))
The second issue is that you're including compilation
If you have e.g. a function that always returns Int, then you can specify
the associative type as Dict{Any,Int}, which allows type inference to
determine that anything pulled out of it is an Int, so the return type of
the memoized function can be inferred:
julia> using Memoize
julia> @memoize
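The effect can be sketched without the Memoize package: a hand-rolled cache whose value type is concrete lets inference conclude that everything pulled out of it is an Int (hypothetical fib example).

```julia
# A cache declared Dict{Any,Int}: lookups are inferred as Int.
const cache = Dict{Any,Int}()

function fib(n::Int)
    get!(cache, n) do
        n <= 1 ? n : fib(n - 1) + fib(n - 2)
    end
end

@assert fib(10) == 55
```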
It seems like you don't have a GLMNet binary. This was supposed to be built
when you ran Pkg.add(GLMNet); maybe you got an error message? Make sure
you have gfortran installed and then run Pkg.build(GLMNet).
Simon
On Sunday, March 2, 2014 10:23:40 PM UTC-5, Bina Caroline wrote:
I run the
codes.
- Dahua
On Thursday, February 27, 2014 11:58:20 PM UTC-6, Stefan Karpinski wrote:
We really need to stop using libm for those.
On Fri, Feb 28, 2014 at 12:40 AM, Simon Kornblith si...@simonster.com wrote:
Some of the poorest performers here are trunc, ceil, floor, and round
(as of LLVM 3.4). We currently call out to libm
I created a package that makes Julia use Intel's Vector Math Library for
operations on arrays of Float32/Float64. VML provides vectorized versions
of many of the functions in openlibm with equivalent (1 ulp) accuracy (in
VML_HA mode). The speedup is often quite stunning; see benchmarks at
.
Simon
To make type inference for memoized functions suck less, all we'd need is a
way to get a function's inferred return type for a given set of inputs in a
way that can be used by type inference, and then we could use that to put a
typeassert on the result. This doesn't actually seem that hard
Looks like https://github.com/JuliaLang/julia/issues/5750. Deleting
sys.dylib as described there should fix this, but it will make Julia
startup slower. If you compile yourself, it should fix this and startup
should be fast. I don't think there's a major reason to run the prerelease
binaries
,
or if a google group should be started up.
-Jim
On Friday, February 21, 2014 5:40:52 PM UTC-6, Simon Kornblith wrote:
This is great! ss2tf and potentially other functionality is also relevant
to DSP more generally. We currently have conversions between most standard
filter
As you have surmised, the b=5 kw argument is not actually the same as
:(b=5) (you can see this using xdump(ex.args)). Instead of :(b=7), use
Expr(:kw, :b, 7).
On Friday, February 21, 2014 6:17:48 PM UTC-5, Bassem Youssef wrote:
Suppose i have a simple function as follows:
julia> func(a; b=3)
This is great! ss2tf and potentially other functionality is also relevant
to DSP more generally. We currently have conversions between most standard
filter representations in DSP.jl (see
https://github.com/JuliaDSP/DSP.jl/blob/master/src/filter_design.jl) but no
state space representation.
There are also a couple of more concise options in Base.
For multiplication by a constant, you can also use scale!(b, 2), which will
basically do the same thing as Tim's loop for SharedArrays, but may be
faster for arrays of BLAS types since it calls BLAS scal!.
For elementwise operations on
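In current Julia scale! has been renamed; the in-place scalar multiply lives in the LinearAlgebra stdlib as rmul!. A minimal sketch:

```julia
using LinearAlgebra

b = [1.0, 2.0, 3.0]
rmul!(b, 2)    # multiplies b by 2 in place (no new array allocated)
@assert b == [2.0, 4.0, 6.0]
```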
ccall automatically calls convert(Ptr{Uint8}, fname), which fails if the
string is a non-terminal SubString not because the conversion is
impossible, but because it would need to return a pointer to a copy
without returning the copy itself, and there's no guarantee that the copy
wouldn't be garbage-collected
Done.
On Wednesday, January 29, 2014 10:03:39 PM UTC-5, John Myles White wrote:
Please go ahead and add deprecation warnings.
— John
On Jan 29, 2014, at 6:51 PM, Simon Kornblith si...@simonster.com wrote:
I believe two identical symbols are the same object, which implies
With a 64-bit build, Julia integers are 64-bit unless otherwise specified.
In C, you use ints, which are 32-bit. Changing them to long long makes the
C code perform similarly to the Julia code on my system. Unfortunately,
it's hard to operate on 32-bit integers in Julia, since + promotes to
someone can
explain?
All best,
Przemyslaw Szufel
On Tuesday, 14 January 2014 23:29:40 UTC+1, Simon Kornblith wrote:
With a 64-bit build, Julia integers are 64-bit unless otherwise
specified. In C, you use ints, which are 32-bit. Changing them to long long
makes the C code perform similarly
, 14 January 2014 23:55:12 UTC+1, Simon Kornblith wrote:
In C long is only guaranteed to be at least 32 bits (IIRC it's 64 bits
on 64-bit *nix but 32-bit on 64-bit Windows). long long is guaranteed
to be at least 64 bits (and is 64 bits on all systems I know of).
Simon
On Tuesday, January 14