For functions like dot and norm, it's also good to check out all the existing
methods in Base, via e.g. methods(norm). You'll get this response:
# 9 methods for generic function "norm":
Did you check axpy! which originates from the BLAS tradition?
Looks OK, but I think your functions could generically allow less restricted
types: e.g.
4 in BasicInterval(3.1,4.9)
won't work, nor will you be able to construct the interval BasicInterval(3,4.5)
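A minimal sketch of what the remark is suggesting. `BasicInterval` here is a made-up stand-in for the poster's type (its real fields are not shown in the excerpt); the point is that a promoting constructor and a `Real`-typed `in` method make both failing calls work:

```julia
# Hypothetical reconstruction of the poster's type; field names are assumptions.
struct BasicInterval{T<:Real}
    lo::T
    hi::T
end

# Promote mixed argument types so BasicInterval(3, 4.5) constructs a
# BasicInterval{Float64} instead of throwing a MethodError.
BasicInterval(lo::Real, hi::Real) = BasicInterval(promote(lo, hi)...)

# Accept any Real, not just the interval's own element type T,
# so that `4 in BasicInterval(3.1, 4.9)` works.
Base.in(x::Real, iv::BasicInterval) = iv.lo <= x <= iv.hi
```

With these definitions, `4 in BasicInterval(3.1, 4.9)` returns `true` and `BasicInterval(3, 4.5)` constructs a `BasicInterval{Float64}`.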
(on Mac): what seems to work is to first type \approx + tab to get ≈, and then
go back one character and type \not + tab. If you first use \not + tab and
then start typing \approx, the \not acts only on the backslash. If you write
\not\approx + tab, the \not is not substituted.
https://github.com/ahwillia/Einsum.jl
or
https://github.com/Jutho/TensorOperations.jl
might also be of interest.
On Friday, May 20, 2016 at 17:17:08 UTC+2, Matt Bauman wrote:
>
> I believe something just like that is already implemented in
> Devectorize.jl's @devec macro. Try: `@devec
Transposing coord (i.e. letting the column index correspond to x, y, z and the
row index to the different particles) helps a little bit (but not much).
Julia uses column-major order; Python is different, I believe. How big is
the difference with Numba?
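A small sketch of the layout point, with made-up data (`coord` and `sumnorms` are illustrative names, not from the thread). Julia stores arrays column-major, so elements within a column are contiguous; with particles along the rows and x, y, z along the columns, each coordinate component occupies one contiguous column:

```julia
# Column-major layout: coord[:, 1] (all x's) is contiguous in memory.
nparticles = 1000
coord = rand(nparticles, 3)   # row i = position of particle i

function sumnorms(coord)
    s = 0.0
    for i in axes(coord, 1)
        s += sqrt(coord[i, 1]^2 + coord[i, 2]^2 + coord[i, 3]^2)
    end
    return s
end
```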
On Friday, January 29, 2016 at 21:26:07 UTC+1
I don't see why, in Julia or Matlab, you would want to use repmat or
broadcasting here. For me, the following simple line of code:
m,n,p = size(A)
reshape(reshape(A,m*n,p)*y,(m,n))
accomplishes your task, and it has about the same speed as matrixLinComb2. It
writes it as a simple multiplication, so it
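A runnable version of the reshape trick above, with made-up example data: the linear combination of the slices of a 3-D array, sum over k of y[k] * A[:,:,k], is computed as a single matrix-vector product.

```julia
m, n, p = 4, 5, 3
A = rand(m, n, p)
y = rand(p)

# Flatten the first two dimensions, multiply once, reshape back.
B = reshape(reshape(A, m*n, p) * y, (m, n))

# Reference: the same thing by explicit summation over slices.
C = sum(y[k] * A[:, :, k] for k in 1:p)
```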
That's true, and therefore also not in Julia, unless you use some way to
inline assembly. However, in C it might be possible to get within a factor 2
of BLAS speed. This might be sufficient if you want to implement something
slightly different from matrix multiplication (like maybe this case) and
If X can be Y, then you should of course not use scale!.
X'*scale(w,Y) should do the job.
Or if you can afford to destroy Y (or, more accurately, replace it by
diag(w)*Y), you could do X'*scale!(w,Y), which should allocate less (and
therefore be slightly faster).
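A sketch of the same computation in current Julia, where `scale`/`scale!` no longer exist: `Diagonal` gives the non-destructive version, and `lmul!` is the in-place analogue of `scale!` (the example data is made up).

```julia
using LinearAlgebra

X = rand(5, 3); Y = rand(5, 4); w = rand(5)

# Non-destructive, like X'*scale(w, Y): allocates a scaled copy of Y.
Z = X' * (Diagonal(w) * Y)

# In-place, like X'*scale!(w, Y): overwrite Y with diag(w)*Y first,
# saving one temporary (Y is destroyed).
lmul!(Diagonal(w), Y)
Z2 = X' * Y
```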
Depending on how much more complicated your actual use case is, could you not
just write f(a,b,c) = a*b + a*c instead of a*(b+c)? I guess the former would be
evaluated immediately at compile time if a is _zero?
in those arguments.
On 22 May 2015 10:27, Jutho wrote:
Thanks for the detailed response. The MathConst idea is possibly a great
suggestion which I will look into. With respect to the calling the
low-level function, I don't think there is a big difference between your
suggestion and my one-liner, as long as the return type of the function is
the
Dear Julia users,
When looking at e.g. BLAS functions, they have a general format of adding a
scaled result array to an existing array, which itself can also be scaled;
e.g. AB could be the result of a matrix multiplication of matrices A and B,
and the BLAS gemm! function (using Julia's name)
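A minimal sketch of the BLAS-style update being described, C ← α·A·B + β·C, using Julia's `BLAS.gemm!` wrapper; the five-argument `mul!` (available in later Julia versions) is the generic spelling of the same update. The matrices here are made-up example data.

```julia
using LinearAlgebra

A = rand(3, 4); B = rand(4, 2); C0 = rand(3, 2)
α, β = 2.0, 0.5

# BLAS call: C is overwritten with α*A*B + β*C.
C = copy(C0)
BLAS.gemm!('N', 'N', α, A, B, β, C)

# Generic equivalent (works for any element type, not just BLAS floats).
C2 = copy(C0)
mul!(C2, A, B, α, β)
```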
Regression.jl seems to have a sweet implementation of gradient-based
optimization algorithms. How does this compare to the work in Optim.jl?
Would it be useful to join these efforts?
On Thursday, April 23, 2015 at 11:12:58 UTC+2, Dahua Lin wrote:
Hi,
I am happy to announce three packages
When the matrix is real and symmetric, ARPACK does resort to Lanczos, or at
least the implicitly restarted version thereof. Straight from the homepage:
When the matrix A is symmetric it reduces to a variant of the Lanczos
process called the Implicitly Restarted Lanczos Method (IRLM).
I
But feel free to discuss the need for a convenient syntax for this
at https://github.com/JuliaLang/julia/issues/10338
On Wednesday, March 4, 2015 at 10:32:12 UTC+1, Tamas Papp wrote:
Assuming that you want to fill it with a sequence of integers as your
examples suggest, something like
[i for
On Wednesday, March 4, 2015 at 11:54:27 UTC+1, Tamas Papp wrote:
Surely I am missing something, but how would this relate to
concatenation?
The point of proposal 2 in issue 10338 would be that concatenation would
receive a new pair of brackets and that spaces and semicolons would be used
as
The rationale would be that people often want to construct matrices (i.e.
even where the elements are themselves arrays and they do not want
concatenation) and that there is no syntax for this. All the tricks with
reshape or transposing are not very future proof, as e.g. transposing
might
Or in this particular case, maybe there should be some functionality like
that in Base, or at least in Base.LinAlg, where it is often necessary to mix
complex variables and real variables of the same type used to build the
complex variables.
On Thursday, February 26, 2015 at 08:10:35 UTC+1,
this.
2015-02-27 15:02 GMT-05:00 Jutho:
Having type in the name would be a bit like having `fabs(x::Float64)`.
2015-02-27 15:21 GMT-05:00 Jutho:
But I wouldn't overload real; real is for the real part of a value, not for
the real type. Maybe something like realtype, or typereal if we want to go
with the other
but probably some would oppose that.
2015-02-27 15:42 GMT-05:00 Jutho Haegeman:
I am not opposed to that, but the same could be said for typemin and typemax.
Sent from my iPhone
On 27 Feb 2015 at 21:27, Andreas Noack
I opened an issue to further discuss some related questions regarding
concatenation and construction of matrices:
https://github.com/JuliaLang/julia/issues/10338
As long as not all parameters of a parametric concrete type are fully
specified, the type is treated as abstract. So in both cases your collection
would have elements of abstract type, and they would not be stored packed in
memory. I don't think what you are requesting is possible, but I might be
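The packed-vs-abstract distinction can be checked directly. A sketch using `Complex` (any parametric type would do): with the parameter specified the elements are stored inline; with the parameter left free, the element type is abstract and the vector stores boxed references.

```julia
a = Complex{Float64}[1 + 2im, 3 + 4im]   # concrete eltype: packed storage
b = Complex[1 + 2im, 3.0 + 4im]          # abstract eltype: boxed elements

isbitstype(eltype(a))   # true: Complex{Float64} elements are stored inline
isbitstype(eltype(b))   # false: Complex without parameters is abstract
```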
Not sure what is causing the slowness, but you could avoid creating a
diagonal matrix with diagm(expr), which will be treated as a full matrix,
and then doing the matrix multiplication.
Instead of shrt*diagm(expr), which is interpreted as the multiplication of
two full matrices, try
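The excerpt cuts off before the suggested replacement. A standard alternative, assuming `shrt` is a matrix and `expr` a vector of diagonal entries (names taken from the excerpt, data made up), is to scale the columns by broadcasting, which never materializes the dense diagonal matrix:

```julia
using LinearAlgebra

shrt = rand(4, 3)
expr = rand(3)

dense  = shrt * diagm(expr)   # builds a full 3×3 matrix, then multiplies
scaled = shrt .* expr'        # scales the columns in one pass, no diagm
```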
I remember reading somewhere that Codebox might support Julia in the near
future. Does anybody have any comments or information about this?
On Friday, November 28, 2014 at 17:39:43 UTC+1, Daniel Carrera wrote:
Hi everyone,
Can anyone here comment or share opinions on the newer text editors --
This only happens in global scope, not inside a function. If you define
f(list) = [g(x) for x in list]
then f(xs) will return an Array{Float64,1}.
On Tuesday, November 4, 2014 at 03:23:36 UTC+1, K leo wrote:
I found that I often have to force this conversion, which is not too
difficult.
Some more comments / answers:
I too was confused by the precise use of ! when I first started using Julia
in cases like reshape. However, it is very clear that reshape should not
have an exclamation mark.
! is used in a Julia function name whenever the function modifies one of
its input
On Monday, October 13, 2014 at 10:33:26 UTC+2, Jutho wrote:
I don't see the problem regarding point one.
A type with parameters, such as your SortedDict{D,K}, becomes an abstract
type when the parameters are unspecified, e.g. SortedDict, but this is indeed
printed/formatted with the unspecified parameters put back (I guess with the
names as you defined them),
, 2014 2:15:01 AM UTC-4, Jutho wrote:
Note that
Vector
Vector{Belief}
Vector{Belief{DistributionType}}
Vector{Belief{D}} # for some D <: DistributionType
are all different types. Your z is a Vector{Belief}, where Belief, without
parameters, is an abstract type.
For any D <: DistributionType, Belief{D} is a concrete type, even if D
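A self-contained sketch of the distinction, with made-up stand-ins for the thread's `Belief`/`DistributionType` hierarchy (the real field layout is not shown in the excerpt):

```julia
abstract type DistributionType end
struct Gaussian <: DistributionType end

struct Belief{D<:DistributionType}
    d::D
end

isconcretetype(Belief)             # false: parameter left unspecified
isconcretetype(Belief{Gaussian})   # true: fully parametrized

z = Belief[Belief(Gaussian())]     # a Vector{Belief}: abstract element type
```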
While it is certainly expected that a custom type will be more performant,
you cannot really trust your timings: you should not time with variables
defined in global scope, and you should add a warm-up phase to compile your
functions.
Defining all of this:
# Method 1: Type
type parameters
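The timing advice above can be sketched as follows; `work` is a made-up stand-in for whatever is being benchmarked. The work goes inside a function (so no untyped globals are involved), and a warm-up call triggers compilation before the measurement:

```julia
work(x) = sum(abs2, x)   # hypothetical function under test

x = rand(10^6)
work(x)                  # warm-up: compiles work for Vector{Float64}
t = @elapsed work(x)     # this timing now excludes compilation
```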
because it is not recognized/parsed as a literal but as the application of a
unary minus, which has lower precedence than ^.
I guess it is not possible to give binary minus a lower precedence than ^
and unary minus a higher precedence, since these are just different
methods of the same
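The precedence point in one line: `^` binds tighter than unary minus, so a negated power parses as the negation of the power, not a power of the negation.

```julia
-2^2      # parses as -(2^2), i.e. -4
(-2)^2    # explicit parentheses give 4
```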
https://github.com/JuliaLang/julia/pull/7814
I will try to add clear (hopefully) documentation for these methods and get
this pull request merged next week.
Jutho
On Wednesday, September 17, 2014 at 20:43:41 UTC+2, Tim Holy wrote:
To add to Doug's point, permutedims! is non-allocating, but it's
I have just reopened an old issue here:
https://github.com/JuliaLang/julia/issues/6965
We hoped that everything was fixed, but apparently not.
What are the dimensions (i.e. sizes) of these 9 dimensions? You might be
interested in trying the tensorcontract routine from TensorOperations.jl
and comparing the method=:BLAS vs method=:native approach. Although I do
assume that for a specific case like this (where basically every dimension
is
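The exact tensorcontract call signature isn't shown in the excerpt, but the idea behind its method=:BLAS path can be sketched in plain Julia (made-up tensors and index labels): permute the contracted indices together, reshape, and do one matrix multiplication.

```julia
A = rand(2, 3, 4)   # indices (a, b, c)
B = rand(4, 3, 5)   # indices (c, b, d)

# Contract over b and c: C[a,d] = sum over b,c of A[a,b,c] * B[c,b,d].
Amat = reshape(A, 2, 3*4)                          # rows (a), columns (b,c)
Bmat = reshape(permutedims(B, (2, 1, 3)), 3*4, 5)  # rows (b,c), columns (d)
C = Amat * Bmat                                    # one BLAS gemm
```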
Suppose I have some type T{P1,P2,P3} depending on some parameters. I don't
know which type exactly, except that it originates from a type hierarchy
which has 3 parameters. What is the best way, given e.g. a
variable of type T{P1,P2,P3} with specific values for P1, P2, P3, to
+1 for this quote of yours:
The algorithm Julia uses for type inference works by walking through a
program, starting with the types of its input values, and abstractly
interpreting the code. Instead of applying the code to values, it applies
the code to types, following all branches
I don't know about the zeros, but one issue with your timings is certainly
that you also measure the time to generate the random numbers, which is
most probably not negligible.
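A sketch of that point (made-up sizes): generate the random matrices outside the timed region, so the RNG cost does not pollute the multiplication measurement, and warm up first.

```julia
n = 200
A = rand(n, n); B = rand(n, n)   # random-number generation, not timed
A * B                            # warm-up (compilation)
t = @elapsed A * B               # times only the matrix multiplication
```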
On Thursday, July 17, 2014 at 13:54:54 UTC+2, Andrei Zh wrote:
I continue investigating matrix multiplication timings.
On 17 Jul 2014, at 15:54, Tomas Lycken wrote:
@Jutho: My gut reaction was the same thing, but then I should be able to
reproduce the results, right? All three invocations take about 1.2-1.5
seconds on my machine.
// T
On Thursday, July 17, 2014 3:06:08 PM
Having played a little bit with parameter sweeps like these a while ago, I
was also troubled by the best way to do this. Note that @parallel for will
immediately partition your parameter set (in your case: doset ) and assign
different partitions to different processors. This means that, if the
Using random permutations of your original parameter set is a clever idea. It
never even occurred to me when I was trying to find a workaround :-).
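A sketch of the shuffling workaround, in current syntax (`@parallel` from the thread is now `@distributed`; `doset` and `runcase` are stand-ins for the thread's parameter set and per-parameter work). The macro splits the range into contiguous chunks up front, so shuffling first keeps clusters of hard cases from all landing on one worker:

```julia
using Distributed, Random

doset = 1:100
runcase(p) = p^2   # hypothetical per-parameter computation

# Shuffle before distributing so the static partition is load-balanced.
total = @distributed (+) for p in shuffle(collect(doset))
    runcase(p)
end
```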
On 10 Jul 2014, at 23:03, Thomas Covert wrote:
Jutho, I was also worried about this. For that reason, “doset” is a random
Fully realising that this discussion has been settled and the convention is
here to stay, I nevertheless feel compelled to remark that there would have
been more elegant solutions. Other languages have been able to come up with
acceptable operators for a binary 'min' or 'max':
Since I just read that these operators were later removed from gcc again,
they must not have been perfect either :D.
On 08 Jul 2014, at 16:04, Jutho wrote:
I've also encountered this problem and did indeed solve it by implementing
a method that throws an error. But it would be nice to hear if a better,
more julian, approach exists or could be made available.
On Friday, July 4, 2014 at 10:50:21 UTC+2, Magnus Lie Hetland wrote:
Just a quick followup:
This is Matlab's code:
[Q,S] = svd(A,'econ'); %S is always square.
if ~isempty(S)
S = diag(S);
tol = max(size(A)) * S(1) * eps(class(A));
r = sum(S > tol);
Q = Q(:,1:r);
end
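A hedged Julia translation of the Matlab snippet above (a rank-revealing orthonormal basis, as in Matlab's orth); `orthbasis` is a made-up name, and the code assumes a nonempty input matrix:

```julia
using LinearAlgebra

# Threshold the singular values and keep the corresponding left singular
# vectors, mirroring the Matlab tolerance max(size(A)) * S(1) * eps(class(A)).
function orthbasis(A)
    F = svd(A)                    # thin SVD; F.S is always a vector
    tol = maximum(size(A)) * F.S[1] * eps(real(eltype(A)))
    r = count(s -> s > tol, F.S)  # numerical rank
    return F.U[:, 1:r]
end

Q = orthbasis(rand(6, 3))   # columns: orthonormal basis for the range
```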
On Monday, June 30, 2014 at 07:53:19 UTC+2, Andre P. wrote:
I have some Matlab code I'm porting to
Good point, I am not used to having to deal with licensing issues so I was
probably a bit careless. My apologies for that.
I removed my message, unfortunately it is still contained in your response
:-).
On Monday, June 30, 2014 at 15:40:03 UTC+2, Matt Bauman wrote:
Careful! That is Copyright
://github.com/Jutho/LinearMaps.jl . Comments and
suggestions are appreciated.
Jutho
That's a strange line of reasoning. Why not contribute to IterativeSolvers
with the things you're interested in and skilled at (linear solvers using
Krylov methods)?
It's of course up to you, but I would certainly prefer a less general name
than KrylovSolvers.jl for a package that contains linear solvers
Dear Julia users,
If you have an AbstractSupertype and a concrete Subtype, then dispatch
evidently works great to make sure that if you define
f(x::AbstractSupertype) and f(x::Subtype), the latter will be called if f
is called with an argument of type Subtype.
I am writing some code in
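The dispatch behavior described above in a self-contained sketch (the type and function names are illustrative): the most specific applicable method wins.

```julia
abstract type AbstractSupertype end
struct Subtype <: AbstractSupertype end

f(x::AbstractSupertype) = "generic"
f(x::Subtype) = "specific"

f(Subtype())   # selects the more specific Subtype method
```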
, running at 100% for a time that could be compatible with
processing a single iteration. But then about half of them fall back to 0%
CPU, and even more fall back to 0% later. Am I doing something wrong?
Best regards,
Jutho
I probably already have an idea of what's going on. How are the different
tasks distributed over the different Julia processes? Is the for loop
immediately cut into pieces where e.g. process 1 will handle the cases
iter=1:10, process 2 handles the cases iter=11:20 and so on? For different
values
segfault.
Simon
On Monday, June 2, 2014 11:34:13 PM UTC+2, Jutho wrote:
Dear Julia users,
I often need to use large temporary multidimensional arrays within loops.
These temporary arrays don't necessarily have a fixed size, so allocating
them just once before the start of the loop
Starting with the following definitions
```julia
abstract Vartype
type VT1 <: Vartype
end
abstract Graph{T<:Vartype}
abstract Subgraph{T<:Vartype} <: Graph{T}
type Vertex{T<:Vartype} <: Subgraph{T}
end
type Block{T<:Vartype,N} <: Subgraph{T}
end
abstract GraphObject{T<:Vartype}
type
Dear Julia users,
I would like to present TensorOperations.jl v0.0.1, a package for
performing tensor contractions, traces and related operations. It can be
found here:
https://github.com/Jutho/TensorOperations.jl
and can now be installed using Pkg.add("TensorOperations")
Before creating
://nbviewer.ipython.org/gist/Jutho/8934314
I repeated 5 timings for a bunch of values of the matrix dimension (all
square matrices). First two plots show the Julia timings (respectively BLAS
timings) divided by matrix size to the power 3, which should stagnate to a
constant. This is the case for BLAS
, and the Julia speed (100 times BLAS speed) with
the C speed (4 to 5 times BLAS speed) in the second regime. Any idea on
where the big difference between Julia and C is coming from?
Best regards,
Jutho
Thanks for all the responses. It was never my intention to write
sophisticated code that can compete with BLAS; we have BLAS for that. I
just wanted to see how much you lose with a simple code. I guess
generic_matmatmul is at the level of simplicity that I am still willing to
consider,