http://docs.julialang.org/en/release-0.3/manual/types/#parametric-composite-types
Use foo{V<:VecOrMat}(X::Vector{V})
--Tim
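As a minimal sketch of that suggestion (Julia 0.3 syntax; the function name and body here are placeholders, not from the thread):

```julia
# Julia 0.3 syntax. One method covers both a Vector of Vectors and a
# Vector of Matrices, because V ranges over the concrete element type.
function foo{V<:VecOrMat}(X::Vector{V})
    for x in X
        println(size(x))  # each x is itself a Vector or a Matrix
    end
end

foo([rand(3), rand(3)])      # Vector{Vector{Float64}} matches
foo([rand(3,2), rand(2,2)])  # Vector{Matrix{Float64}} matches
```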
On Tuesday, April 28, 2015 02:40:41 AM Ján Dolinský wrote:
Hi guys,
I am trying to write a function which accepts as an input either a vector
of vectors or a vector of
The previous picture was only an example; in the end I need to solve a
nonlinear system of dimension 500. I expect NLsolve to work with one
dimension too.
I have not figured out how to use an anonymous function within NLsolve. I
don't understand which README Tim Holy is talking about. I went
I think one should go over all the names in Base and see if there are some
rules that can be applied sanely to come up with a better naming scheme.
If you introduce factorize(MyType,...) and want to be consistent about this
kind of thing, you might end up changing a lot of the functions in Base.
Both ideas you have given work. Wonderful! I just need to figure out
which one is the fastest, probably the one that keeps the @gensym out of
the creation of the function.
Thank you very much!
Antoine.
On Monday, April 27, 2015 at 15:49:56 UTC+1, Antoine Messager wrote:
Dear all,
I need to create a lot
Intel compilers won't help, because your julia code is being compiled by LLVM.
It's still hard to tell what's up from what you've shown us. When you run
@time, does it allocate any memory? (You still have global variables in there,
but maybe you made them const.)
But you can save yourself two
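A small self-contained illustration of the const-global point (the names here are invented for the example, not from the code under discussion):

```julia
# Non-const global: its type can change, so every use is boxed/dynamic.
scale = 2.0
# Const global: the compiler can rely on its type.
const SCALE = 2.0

f(x) = x * scale   # reads the non-const global
g(x) = x * SCALE   # reads the const global

function sum_slow(n)
    s = 0.0
    for i in 1:n
        s += f(1.0)   # allocates on each call via the boxed global
    end
    s
end

function sum_fast(n)
    s = 0.0
    for i in 1:n
        s += g(1.0)   # no allocation in the loop
    end
    s
end

sum_slow(10); sum_fast(10)   # warm up so compilation is not timed
@time sum_slow(10^6)         # reports many allocations
@time sum_fast(10^6)         # should report (near) zero allocations
```

The `@time` allocation counts are what to look at: if the second number is large and grows with n, some variable in the hot path is still untyped, typically a non-const global.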
Hi,
Ángel de Vicente angel.vicente.garr...@gmail.com writes:
Now I have two more questions, to see if I can get better performance:
1) I'm just running the Julia distribution that came with my Ubuntu
distro. I don't know how this was compiled. Is there a way to see
which optimization level
Hi Tim,
On Tuesday, April 28, 2015 at 2:53:45 PM UTC+1, Tim Holy wrote:
Before deciding that the compiler is the answer...profile. Where is the
bottleneck?
well, the code now runs quite fast (double the time it takes for my Fortran
version), after following the suggestions made in this
On Tuesday, April 28, 2015 05:45:10 AM 'Antoine Messager' via julia-users
wrote:
The previous picture was only an example; in the end I need to solve a
nonlinear system of dimension 500. I expect NLsolve to work with one
dimension too.
I have not figured out how to use an anonymous function
Re: implementing such a merge function.
My first instinct would be to create a list of methods from each function,
find the intersection, then return a function with methods determined by
the methods from each input function, with methods in the intersection
going to the value of
Hi Tim,
Thanks for the tip. Very interesting. In a function definition it works. I
read the parametric-composite-types manual. I am still puzzled, however.
Consider the example below which works as I expect:
a = rand(10)
b = rand(10,2)
julia> a::VecOrMat{Float64}
10-element Array{Float64,1}:
Thanks for the clarification.
If my function foo has more parameters, do I just go like this?
function foo{V<:VecOrMat}(X::Vector{V}, param1::Int, param2::String)
...
end
Regards,
Jan
On Tuesday, April 28, 2015 at 16:31:37 UTC+2, Tom Breloff wrote:
The reason is a little subtle, but it's
Agreed that gensym may be unnecessary. Do you really need access to all
those functions? If you redefine the same function every time, is that
faster?
On Tuesday, April 28, 2015 at 9:17:16 AM UTC-4, Yuuki Soho wrote:
The README.md is just the default page shown on github,
The reason is a little subtle, but it's because you have an abstract type
inside a parametric type, which confuses Julia. When you annotate
a::MyAbstractType, julia understands what to do with it (i.e. compiles
functions for each concrete subtype). When you annotate
Is it possible to use Matplotlib style commands in PyPlot?
From http://matplotlib.org/users/style_sheets.html I get the impression
that I can quickly switch to a 'ggplot' style interface. Translating the
commands on that page to Julia I thought I could do something like
Thanks, but I think the if iter > 2 (line 21) makes sure that x_previous is
defined in the previous iteration. Just to be clear, the condition to check
here was g_norm > g_norm_old, but I changed it to get there as early as
the second iteration.
On Tuesday, April 28, 2015 at 9:13:49 PM UTC-4, Avik
Sorry for being a pain, but shouldn't LinAlg be LinearAlgebra? What's the
point of issuing a naming convention if it is not even respected by the main
developers?
I'm glad you are apologizing, because I find the way you are expressing
yourself borderline insulting to the hard work of
I think it's time for this thread to stop. People are already upset and
things could easily get much worse.
Let's all go back to our core work: writing packages, building
infrastructure and improving Base Julia's functionality. We can discuss
naming conventions when we've got the functionality
Hi,
I have a big numerical problem that julia is nice for.
But I really want to farm it out over a few hundred cores.
I know my local research supercomputing provider (iVec since I am in
Western Australia),
prefers it if you are running programs in C or Fortran.
But I know they have run things
Hi all,
I have a problem that has made me scratch my head for many hours now! It
might be something obvious that I am missing. I have a Newton-Raphson code
to solve a system of nonlinear equations. The error that I get here does
not have anything to do with the algorithm, but just to be clear,
As I'm writing this, I'm running Julia on a pretty new 90-node cluster. I
don't know if that counts as a medium-size cluster, but recently it was
reported on the mailing list that Julia was running on
http://www.top500.org/system/178451
which I think counts as a supercomputer.
2015-04-28 19:58
If you comment out lines 42-49, you will see that it works fine!
On Tuesday, April 28, 2015 at 9:20:49 PM UTC-4, Pooya wrote:
Thanks, but I think the if iter > 2 (line 21) makes sure that x_previous is
defined in the previous iteration. Just to be clear, the condition to check
here was g_norm
Yes, sorry I jumped the gun. Thanks for clarifying.
But it still does not have anything to do with Optim :)
The problem is due to defining an inline function (line 43) that creates a
closure over the x_previous variable. To test this, just comment out that
line (and adjust the Optim.optimize
Yes, in general you can do anything from PyPlot that you can do from
Matplotlib, because PyPlot is just a thin wrapper around Matplotlib using
PyCall, and PyCall lets you call arbitrary Python code.
The pyplot.style module is not currently exported by PyPlot; you can
access it via plt.style:
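For instance (a hedged sketch, not from the thread: style sheets need matplotlib 1.4 or newer, and the exact property syntax depends on your PyCall version):

```julia
using PyPlot

# 2015-era PyCall spells Python attribute access with symbols:
plt[:style][:use]("ggplot")
# on newer PyCall versions plain dot access also works:
# plt.style.use("ggplot")

plot(rand(10))   # drawn with the ggplot style
```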
Ian. I am really sorry if I hurt people. I really respect what has been done
with Julia. I kind of like it when people push me into a corner because it helps
me build better tools. That's why I might act this way, and I am sorry if it
hurts people.
I've expressed my ideas that I would like to
I've also tried to help by proposing solutions to those problems, such as using
multiple dispatch. But I understand that my tone was not appropriate.
I was also puzzled to hear from Stefan that there is nothing wrong with
sprandn, whereas the coding guidelines say otherwise. It just feels
The DecFP package
https://github.com/stevengj/DecFP.jl
provides 32-bit, 64-bit, and 128-bit binary-encoded decimal floating-point
types following the IEEE 754-2008 standard, implemented as a wrapper around
the (BSD-licensed) Intel Decimal Floating-Point Math Library
With all due respect, talk is cheap. If anyone really wants to help, submit
a pull request with your proposed changes.
On Tue, Apr 28, 2015, 20:03 François Fayard francois.fay...@gmail.com
wrote:
I've also tried to help by proposing solutions to those problems, such as
using multiple dispatch.
I'm seeing the error in line 22 of your gist where you are trying to print
the current value of x_previous. However, x_previous is first defined in
line 38 of your gist, and so the error is correct and doesn't have anything
to do with Optim, as far as I can see.
On Wednesday, 29 April 2015
Would it be possible to use a pointer to allocate space for the creation of
my function every time at the same location?
On Tuesday, April 28, 2015 at 13:45:11 UTC+1, Antoine Messager wrote:
The previous picture was only an example; in the end I need to solve a
nonlinear system of dimension 500. I
You seem to be passing nlsolve a one-argument anonymous function, whereas
you are generating two-argument functions above.
On Tuesday, April 28, 2015, 'Antoine Messager' via julia-users
julia-users@googlegroups.com wrote:
Is there any other possibility? Because, I need to use NLsolve, as it is
the
Do you really need to use a different name for your function each time?
You could just use the same name, it seems. I'm not sure it would solve the
problem, though.
Also, on 0.3 Gtk loads just fine for me. Not sure why it's not working on
PkgEvaluator.
--Tim
On Tuesday, April 28, 2015 12:46:52 AM Andreas Lobinger wrote:
Hello colleagues,
what is the status of availability and use cases for GUI toolkits?
I see Tk and Gtk on pkg.julialang.org. Gtk has
On Tuesday, April 28, 2015 02:45:59 AM 'Antoine Messager' via julia-users
wrote:
I would love to, but it seems that NLsolve does not accept anonymous
functions.
I'd be really surprised if this were true. Search the README for ->.
--Tim
The README.md is just the default page shown on github,
https://github.com/EconForge/NLsolve.jl/blob/master/README.md
but there's no example of anonymous function use there. I think you need to
do something of the sort:
(x, fvec) -> begin
fvec[1] = (x[1]+3)*(x[2]^3-7)+18
fvec[2] =
+1 for factorize(MyType, ...), Sparse(MyDist, ...) and other similar
examples that have been suggested. It's only a very slight hardship for
those copying their code directly from matlab, but for everyone else I
think it's a big win for readability and type safety. It's also likely
easier to
On Monday, April 27, 2015 at 6:40:50 PM UTC-4, ele...@gmail.com wrote:
On Sunday, April 26, 2015 at 8:24:15 PM UTC+10, Scott Jones wrote:
Yes, precisely... and I *do* want Julia to protect the user *in that
case*.
If a module has functions that are potentially ambiguous, then 1) if the
I would check out PySide.jl. I'm not sure of the current package status,
but I have used Qt from both C++ and Python to do fairly intensive gui
work, and it's a very good framework. IMO, the only potential downside is
the license, but you'd have to evaluate that yourself.
On Tuesday, April
Hi,
Ángel de Vicente writes:
Now I have two more questions, to see if I can get better performance:
1) I'm just running the Julia distribution that came with my Ubuntu
distro. I don't know how this was compiled. Is there a way to see
which optimization level and which compiler options were
I like the idea of something like factorize(MyType,...), but it is not
without problems for generic programming. Right now cholfact(Matrix) and
cholfact(SparseMatrixCSC) return different types, i.e. LinAlg.Cholesky and
SparseMatrix.CHOLMOD.Factor. The reason is that internally, they are very
Great, thanks!
On Tuesday, April 28, 2015 at 17:07:27 UTC+2, Tom Breloff wrote:
Yes that's fine
On Tuesday, April 28, 2015 at 10:41:15 AM UTC-4, Ján Dolinský wrote:
Thanks for the clarification.
If my function foo has more parameters I just go like this ?
function
Distributions is an awesome example of a package that shows what I was
trying to say about using multiple dispatch instead of compound function
names -- a work of art. I hope to use it in the future. Have you had an
uproar from the community that the names don't follow the MATLAB de facto
Thank you very much Duane! It will help me a lot.
Right now, I found something that was really slowing down my algorithm.
The original function checkDominance has the following signature:
function checkDominance(mgeoData::MGEOStructure,
candidatePoint::ParetoPoint,
The code allocates only 432 bytes on my computer once I removed all global
variables, and it's pretty fast.
Multiplying by the inverse of dx2 ... instead of dividing also makes quite a
difference, 2-3x.
http://pastebin.com/PSZyLXJX
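The division-versus-reciprocal point can be sketched like this (the function names are invented for illustration, not taken from the pastebin):

```julia
# Dividing by dx2 inside the inner loop:
function laplacian_div!(out, u, dx2)
    for i in 2:length(u)-1
        out[i] = (u[i-1] - 2u[i] + u[i+1]) / dx2
    end
    out
end

# Same computation, multiplying by the precomputed reciprocal instead;
# floating-point division is several times slower than multiplication.
function laplacian_mul!(out, u, dx2)
    invdx2 = 1 / dx2   # hoisted out of the loop
    for i in 2:length(u)-1
        out[i] = (u[i-1] - 2u[i] + u[i+1]) * invdx2
    end
    out
end
```

Note that x * (1/c) is not always bit-identical to x / c, so results can differ in the last ulp; for most stencil codes that is acceptable.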
NLsolve seems to work fine with anonymous functions:
julia> using NLsolve
julia> f! = function (x, fvec)
           fvec[1] = (x[1]+3)*(x[2]^3-7)+18
           fvec[2] = sin(x[2]*exp(x[1])-1)
       end
(anonymous function)
julia> g! = function (x, fjac)
           fjac[1, 1] = x[2]^3-7
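Completing that into a full call might look like this (a hedged sketch of NLsolve's 2015-era API, untested here; the starting point is arbitrary):

```julia
using NLsolve

f! = (x, fvec) -> begin
    fvec[1] = (x[1]+3)*(x[2]^3-7)+18
    fvec[2] = sin(x[2]*exp(x[1])-1)
end

# With no Jacobian supplied, NLsolve falls back to finite differences.
results = nlsolve(f!, [0.1; 1.2])
```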
Yes that's fine
On Tuesday, April 28, 2015 at 10:41:15 AM UTC-4, Ján Dolinský wrote:
Thanks for the clarification.
If my function foo has more parameters, do I just go like this?
function foo{V<:VecOrMat}(X::Vector{V}, param1::Int, param2::String)
...
end
Regards,
Jan
On Tuesday,
See https://github.com/JuliaLang/julia/pull/11043
I implemented a program in Fortran and Julia for a time comparison while
learning the language.
This was very helpful for finding problems in how I was learning Julia. Maybe
I did not read carefully enough,
but I would compile the Fortran with the Intel compilers (not MKL) instead
of gcc as
Ah! Thank you. I had not heard of closures before. Now I have heard of them,
but am not sure I completely understand! I guess this might be worth
explaining in the manual. One thing that is still kind of confusing is that
in your example, z is defined after the closure is
created
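The point about z being assigned after the closure is created can be seen in a tiny example (names invented): the closure captures the variable itself, not its value at creation time.

```julia
function make_closure()
    local z
    f = () -> z + 1   # closes over the variable z, which has no value yet
    z = 41            # assigning afterwards is visible inside f
    return f
end

make_closure()()   # returns 42
```

Calling f before z is assigned would throw an UndefVarError, which is exactly the kind of error a closure over a not-yet-defined variable produces.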
I can see that this issue is convoluted. There appear to be competing
requirements, and getting things to start humming is non-trivial.
Instead of dealing with what if-s... I want to start with more concrete
what does...
*Transgressions.sin*
First, I don't fully understand Jeff's talk about
Nice summary, it shows that, in the case where the module developer knows
about an existing module and intends to extend its functions, Julia just
works.
But it misses the actual problem case, where two modules are developed in
isolation and each exports an original sin (sorry couldn't
Thanks. The main problem I had was that I was using an older version of
Matplotlib; I've upgraded it and used the ggplot style. Thanks for your
help.
On Tue, Apr 28, 2015 at 3:44 PM, Steven G. Johnson stevenj@gmail.com
wrote:
Yes, in general you can do anything from PyPlot that you can
People who, after becoming acquainted with Julia, might use it for general
computing, or maybe for some light usage of the math packages, might much
rather have understandable names available, so they don't always have to
run to the manual...
With decent code completion in your editor, I don't
On Tuesday, April 28, 2015 at 11:17:55 AM UTC-5, Ronan Chagas wrote:
Sorry, my mistake. Every problem is gone when I change
nf::Integer
to
nf::Int64
in type MGEOStructure.
I didn't know that such thing would affect the performance this much...
Sorry about that,
Ronan
No problem.
On Tuesday, April 28, 2015 at 11:56:29 AM UTC-5, Scott Jones wrote:
People who, after becoming acquainted with Julia, might use it for
general computing, or maybe for some light usage of the math packages,
might much rather have understandable names available, so they don't always
have to
Integer is an abstract type and thus kills performance if you use it in a
type and need to access it frequently. There was discussion somewhere about
renaming abstract types like Integer and FloatingPoint to include the name
Abstract in them to avoid accidents like this. I guess this shows that
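A sketch of the difference (Julia 0.3 syntax; the type names are invented for illustration):

```julia
# Abstract field type: the compiler cannot store nf inline, so every
# access goes through a boxed value and costs allocation/dispatch.
type SlowConf
    nf::Integer
end

# Concrete field type: nf is stored inline and accesses are fast.
type FastConf
    nf::Int64
end

function total(c, n)
    s = 0
    for i in 1:n
        s += c.nf
    end
    s
end

# @time total(SlowConf(1), 10^6)  # allocates heavily
# @time total(FastConf(1), 10^6)  # near-zero allocations
```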
Sorry, my mistake. Every problem is gone when I change
nf::Integer
to
nf::Int64
in type MGEOStructure.
I didn't know that such thing would affect the performance this much...
Sorry about that,
Ronan
Sorry for being a pain, but shouldn't LinAlg be LinearAlgebra? What's the point
of issuing a naming convention if it is not even respected by the main developers?
Besides, I really find that Julia underuses multiple dispatch. It's a big
selling point of the language and it's not even used that much
I am trying to publish my package, AbstractDomains
https://github.com/zenna/AbstractDomains.jl and having some issues. I
already registered AbstractDomains before, but failed to tag it. I am
attempting to both tag it and update with recent changes.
Currently, if I run
git tag
I get back
I don't know why it hasn't been mentioned (it was hinted at by Tamas) but
it seems to me the clear solution is for most of Base to actually be moved
into submodules like 'LinAlg'. Then to use those names, people need to call
'using LinAlg' or 'using Sparse', etc... Somebody mentioned how
Hi,
On Tuesday, April 28, 2015 at 3:36:48 PM UTC+1, Tim Holy wrote:
Intel compilers won't help, because your julia code is being compiled by
LLVM.
But I saw a discussion about using Intel's MKL for greater performance and
the Make.user options to use Intel compilers are meant to be
I'm fitting a complex cognitive model to data. Because the model does not
have closed-form solution for the likelihood, computationally intensive
simulation is required to generate the model predictions for fitting. My
fitting routine involves two steps: (1) a brute force search of the
Yuuki,
thanks a lot, this was what I was missing!
On Tuesday, April 28, 2015 at 3:49:58 PM UTC+1, Yuuki Soho wrote:
The code allocates only 432 bytes on my computer once I removed all global
variables, and it's pretty fast.
Multiplying by the inverse of dx2 ... instead of dividing also makes
But I saw a discussion about using Intel's MKL for greater performance, and
the Make.user options to use Intel compilers are meant to be supported by
Julia. Why, if there is no advantage in using them?
Intel MKL only helps with faster linear algebra than the default OpenBLAS
(in some cases).
Having using LinAlg would help to clean up the namespace of available
functions for people who don't need it, but what I was proposing is that
doing using LinAlg shouldn't change the API that much (if at all). You still
want to factorize numbers and polynomials (which are not part of LinAlg).
The
Renaming the sparse matrix functions should absolutely be done. And this is
coming from a long-time former user of Matlab who works with sparse
matrices much more often than I work with strings. SparseMatrixCSC is far
from the only sparse matrix format in the world when you venture beyond
what
In ODE.jl, I would like to be able to take the norm of a Vector
containing other things than just numbers:
type Point
    x::Float64
    y::Float64
end
Base.norm(pt::Point, p=2) = norm([pt.x, pt.y], p)
norm(Point(3,4)) # == 5
norm([Point(3,4), Point(3,1)]) # does not work:
# ERROR: `abs` has
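One hedged workaround for that error, assuming the 0.3-era generic vector norm reduces over abs of the elements (whether abs of a Point should mean its 2-norm is a modeling choice, not something settled in the thread):

```julia
# Give Point an abs method so the generic vector norm can use it.
Base.abs(pt::Point) = norm(pt)

norm([Point(3,4), Point(3,1)])  # now works; equals sqrt(5^2 + sqrt(10)^2)
```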
I am using
autocmd FileType julia set commentstring=#\ %s
together with https://github.com/tpope/vim-commentary (I don't have experience
with tComment).
On 28.04.2015 at 11:14, Magnus Lie Hetland m...@idi.ntnu.no wrote:
I'm using vim for Julia editing, and the Julia mode is great; for
Here's one vote for Gtk. Currently it might need some love to fix it up for
recent Julia changes; presumably you (or someone) could fix it up in a couple
of hours.
Once I finish with my array work in base julia, I'm hoping to get back into
graphics this summer and switch all of my packages to
Hi guys,
I am trying to write a function which accepts as an input either a vector
of vectors or a vector of matrices e.g.
function foo(X::Vector{VecOrMat{Float64}})
When running the function with a vector of matrices I get the following
error: 'foo' has no method matching
I would love to, but it seems that NLsolve does not accept anonymous
functions.
https://lh3.googleusercontent.com/-spe5mTRqJDQ/VT9WpGNgDEI/ABg/_caaIfVwkec/s1600/Capture%2Bd%E2%80%99e%CC%81cran%2B2015-04-28%2Ba%CC%80%2B10.44.04.png
On Monday, April 27, 2015 at 18:38:30 UTC+1, Tim Holy wrote:
I guess using
setlocal commentstring=#\ %s
rather than the
setlocal commentstring=#=%s=#
from julia.vim works. Not sure if it's the right way, though.
Is there any other possibility? Because I need to use NLsolve, as it is
the fastest nonlinear solver I have found for my problem.
On Tuesday, April 28, 2015 at 10:45:59 UTC+1, Antoine Messager wrote:
I would love to, but it seems that NLsolve does not accept anonymous
functions.
I'm using vim for Julia editing, and the Julia mode is great; for some
reason, though, either it or tComment
https://github.com/tomtom/tcomment_vim (or their interaction ;-)) uses the
multiline comment style when commenting individual lines. So I end up with
#= for i=1:n =#
#= println(i) =#
correction: map(LogLikelihood,RangeVar)
On Tuesday, April 28, 2015 at 3:39:34 PM UTC-4, Christopher Fisher wrote:
I forgot to add that when I tried to set up simpler code with only one
array (and also no data passed through), it produced an error because it
read the distributed array,
On Tuesday, April 28, 2015 at 1:38:02 PM UTC-5, Ángel de Vicente wrote:
Fortran code: http://pastebin.com/nHn44fBa
Julia code: http://pastebin.com/Q8uc0maL
We typically share snippets of code using https://gist.github.com/. It
provides syntax highlighting for Julia code, integrated
I forgot to add that when I tried to set up simpler code with only one
array (and also no data passed through), it produced an error because it
read the distributed array, RangeVar, as a one dimensional array instead of
a two dimensional array.
map(LogLikelihood,DensityVar)
Turning named functions into methods of a generic function is definitely
something Julia should take maximum advantage of. It does look like it
should work well in the case of 'factorize'. But I think it's a bit
unrelated to the naming convention issue. Most, if not the large
majority, of
Hello colleagues,
what is the status of availability and use cases for GUI toolkits?
I see Tk and Gtk on pkg.julialang.org. Gtk has the tag 'doesn't load'
from testing; Tk seems OK.
In a recent discussion here, Tim Holy mentioned himself testing Qwt, and Qt
in general seems to be a testcase for
On Tuesday, April 28, 2015 at 3:19:26 AM UTC-4, Mauro wrote:
approach? Would it not make sense to use `norm` instead of `abs` inside
vecnorm, as abs == norm for numbers? (If so, what should p be for the
inside norms?)
Yes, abs -> norm in these functions would make sense, and shouldn't