Hi Raniere,
Are there specific dates mentors and students have to do this by?
-viral
On Tuesday, March 24, 2015 at 10:25:22 PM UTC+1, Raniere Silva wrote:
Hi,
there was a communication problem,
and I didn't announce that NumFOCUS, http://numfocus.org/,
was selected for Google Summer
How about we aim for 5pm in that case? I think I can make it by then. Does
that work for others?
-viral
On Tuesday, March 24, 2015 at 11:07:40 AM UTC+1, Simon Danisch wrote:
My train leaves at 9pm (at least the train station is close), so I'd
probably go there 1-2 hours early and see who
Both times are fine with me, I just need to change the reservation if we go
with that.
By my count, from the thread above the following people are probably coming:
Viral Shah
Simon Danisch
Felix Schueler
David Higgins
Felix Jung? (wow, cool stuff :) )
Fabian Gans?? (Jena)
One other person
Yes, writing to a file is one of the slower things you can do. So if that's in
a performance-critical loop it will very much slow things down. But that would
be true for Python and PyPy as well. Are you doing the same thing in that code?
On Mar 25, 2015, at 4:00 AM, Michael Bullman
Ok, glad to hear that it seems that you got it working!
I am interested in profiling some julia code, but a substantial fraction of
the time and memory usage will be due to functions from an external
library, called with ccall. Should I be able to collect data about time
spent and memory resources used in this case?
Are there specific dates mentors and students have to do this by?
Before March 27th 19:00 UTC.
That does seem to be the issue. It's tricky to fix since you can't evaluate
sizeof(Ptr) unless the condition is true.
On Tue, Mar 24, 2015 at 7:13 PM, Stefan Karpinski ste...@karpinski.org
wrote:
There's a branch in eltype, which is probably causing this difference.
On Tue, Mar 24, 2015 at
I am interested in profiling some julia code, but a substantial fraction of
the time and memory usage will be due to functions from an external
library, called with ccall. Should I be able to collect data about time
spent and memory resources used in this case?
Yes: if you call Profile.print(C=true) you'll see C stack frames as well.
On Wed, Mar 25, 2015 at 11:29 AM, Patrick Sanan patrick.sa...@gmail.com
wrote:
I am interested in profiling some julia code, but a substantial fraction
of the time and memory usage will be due to functions from an
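To make that concrete, here is a minimal sketch (the `:sin` ccall is just a stand-in for whatever your external library does; on the 0.3/0.4 Julia of this thread the profiler lives in Base, while later versions need `using Profile` first):

```julia
# Sketch: profile a function dominated by an external C call,
# then print the report with C stack frames included.
function work()
    s = 0.0
    for i in 1:10^6
        s += ccall(:sin, Float64, (Float64,), Float64(i))  # external C call
    end
    s
end

work()                  # run once so compilation isn't in the profile
@profile work()
Profile.print(C=true)   # C=true interleaves C frames (libm's sin, etc.)
```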
I won’t make it either, but I hope that I can join in on some other day.
Cheers,
Keyan
On 25 Mar 2015, at 11:54, Felix Jung fe...@jung.fm wrote:
Sorry guys. Would have loved to come but can't make it on that date. If we
make this a regular thing I'd be happy to participate in an active
Given the performance difference and the different behavior, I'm tempted to
just deprecate the two-argument form of pointer.
On Wed, Mar 25, 2015 at 12:53 PM, Sebastian Good
sebast...@palladiumconsulting.com wrote:
I guess what I find most confusing is that there would be a difference,
since
I'm sure a pull request would be appreciated. Alternatively, SubArrays do work
the way you are hoping for.
--Tim
On Wednesday, March 25, 2015 07:19:50 AM Neal Becker wrote:
I can assign a single element of a view:
julia> view(a,:,:)[1,1] = 2
2
julia> a
10x10 Array{Int64,2}:
2 5 5 5
The benefit of the semantics of the two argument pointer function is that it
preserves intuitive pointer arithmetic. As a new (yet happy!) Julia programmer,
I certainly don’t know what the deprecation implications of changing pointer
arithmetic are (vast, sadly, I imagine), but their behavior
There is also http://www.reddit.com/r/Julia/
On Wednesday, March 25, 2015 at 7:57:20 AM UTC+2, cdm wrote:
these twitter feeds:
https://twitter.com/JuliaLanguage
https://twitter.com/ProjectJupyter
https://twitter.com/julialang_news
in addition to searching twitter for
using Interact, Reactive
α = Input(2)
display(togglebuttons(["one" => 1, "two" => 2], signal=α))
signal(α)
results in two being selected initially. If you want to set which label
is selected initially, you can use the value_label keyword argument.
If you want the selection to change wrt another signal, you
On Wednesday, March 25, 2015 at 7:20:05 AM UTC-4, Neal Becker wrote:
So ArrayView is not a 1st-class array?
There's not really such a thing as a 1st-class array. Every array type
needs to define its own indexing methods… and there are a lot of them!
It's very tough to cover them all.
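For what it's worth, the basic view semantics are easy to demonstrate with the 0.3/0.4-era `sub` (later Julia spells this `view`); this is a sketch of the general idea, not ArrayViews-specific code:

```julia
A = fill(5, 3, 3)
S = sub(A, 1:3, 1:3)   # a SubArray sharing A's memory
S[1, 1] = 2            # writes through to the parent array
A[1, 1]                # now 2: the view mutated A in place
```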
Here is some code I wrote for completely pivoted LU factorizations.
Can you make it even faster?
Anyone who can demonstrate verifiable speedups (or find bugs relative
to the textbook description) while sticking to pure Julia code wins an
acknowledgment in an upcoming paper I'm writing about
On Wednesday, March 25, 2015 at 07:55 -0700, Matt Bauman wrote:
See https://github.com/JuliaLang/julia/issues/6219#issuecomment-38117402
This looks like a case where, as discussed for string indexing, writing
something like p + 5bytes could make sense. Then the default behavior
could follow the
I hope to look at this when I get some time, but as a preliminary note,
merely applying the @inbounds and @simd macros to the main for loop yields
an increase in performance of about 15-20% on my machine.
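As an illustration of the kind of change meant here (a hypothetical loop, not the actual factorization code), the macros simply annotate the hot inner loop:

```julia
# @inbounds elides bounds checks; @simd tells the compiler the
# iterations are independent and safe to vectorize.
function scale_by!(v::Vector{Float64}, s::Float64)
    @inbounds @simd for i in 1:length(v)
        v[i] *= s
    end
    v
end
```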
Ah, I see it’s been discussed and even documented. FWIW, documenting this
behavior in the pointer function would be useful for newbies like myself. I
agree with Stefan that the two-argument pointer function should be deprecated,
as its C-like behavior is inconsistent. If Julia pointer
See https://github.com/JuliaLang/julia/issues/6219#issuecomment-38117402
On Wednesday, March 25, 2015 at 9:58:46 AM UTC-4, Sebastian Good wrote:
The benefit of the semantics of the two argument pointer function is that
it preserves intuitive pointer arithmetic. As a new (yet happy!) Julia
Reservation changed:
Thursday, 26th March, *5pm* at St. Oberholz, Rosenthaler Straße 72A
It's still in my name (Higgins).
Looking forward to seeing you then,
David.
On Wednesday, 25 March 2015 15:05:07 UTC+1, Keyan wrote:
I won’t make it either, but I hope that I can join in on some other
Thanks all for the suggestions so far. Yes, I'm using julia 0.4-dev for the
basis of this discussion.
Hello guys!
I just had someone ask me this question and I didn't know what to answer
him, example:
julia> using Base.Test
julia> @test 1 == 1
julia> @test 1 == 3
ERROR: test failed: 1 == 3
in error at error.jl:21 (repeats 2 times)
julia> @assert 1 == 1
julia> @assert 1 == 3
ERROR: assertion
On Wed, Mar 25, 2015 at 1:13 PM, Jason Riedy ja...@lovesgoodfood.com wrote:
Similarly for moving the row scaling and next pivot search into
the loop.
I tried to manually inline idxmaxabs. It made absolutely no difference
on my machine. The row scaling takes ~0.05% of total execution time.
The swap could be done without temporaries, but I assume you're also
trying to match the look of the pseudocode?
It would be interesting to see how fast the code can get without
significantly altering its look, or alternatively how much one would have
to change to achieve speedups.
I
Great! I will experiment further. I am hoping that this will also apply
to external fortran routines, and that I'll be able to monitor memory
allocation in these external functions.
On Wednesday, March 25, 2015 at 11:38:00 AM UTC+1, Stefan Karpinski wrote:
Yes: if you call
The swap could be done without temporaries, but I assume you're also trying
to match the look of the pseudocode?
On Wednesday, March 25, 2015 at 11:22:41 AM UTC-4, Jiahao Chen wrote:
Here is some code I wrote for completely pivoted LU factorizations.
Can you make it even faster?
Anyone
Also, Andreas just pointed out the loop in idxmaxabs traverses the matrix
in row major order, not column major. (for j in s, i in r is faster)
If you want it to look nice and are running on 0.4, just switching to
slice(A, 1:n, k) ↔ slice(A, 1:n, λ)
should also get you a performance boost (especially for large matrices).
Obviously you could do even better by devectorizing, but it wouldn't be as
pretty.
Off-topic, but your use of
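A column-major version of the pivot search might look like this (a hypothetical reconstruction, since the original isn't quoted here; `r` and `s` stand for the remaining row and column ranges):

```julia
# With j in the outer loop and i in the inner loop, each column of A
# is walked contiguously in memory, matching Julia's column-major layout.
function idxmaxabs(A, r, s)
    themax = -Inf
    imax, jmax = first(r), first(s)
    for j in s, i in r
        a = abs(A[i, j])
        if a > themax
            themax, imax, jmax = a, i, j
        end
    end
    imax, jmax
end
```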
And Tim Holy writes:
Obviously you could do even better by devectorizing, but it
wouldn't be as pretty.
Similarly for moving the row scaling and next pivot search into
the loop.
I was surprised by two things in the SubArray implementation
1) They are big! About 175 bytes for a simple subset from a 1D array from
my naive measurement.[*]
2) They are not flat. That is, they seem to get heap allocated and have
indirections in them.
I'm guessing this is because SubArrays
That helps a bit; I am indeed working on v0.4. A zero-allocation SubArray would
be a phenomenal achievement. I guess it’s at that point that getindex with
ranges will return SubArrays, i.e. mutable views, instead of copies? Is that
still targeted for v0.4?
On March 25, 2015 at 3:30:03 PM, Tim
Actually, didn't the original implementation have a couple of bugs?
- A[1:n, k] makes a copy, so I'm not sure you were actually swapping elements
in the original A
- If A[i,j] < 0, you're storing a negative value in themax, making it easy for
the next nonnegative value to beat it. You presumably
This is a known limitation of Julia. The trouble is that Julia cannot
do its type inference with the passed-in function. I don't have time
to search for the relevant issues but you should be able to find them.
Similarly, lambdas also suffer from this. Hopefully this will be
resolved soon!
There have been many prior posts about this topic. Maybe we should add a FAQ
page we can direct people to. In the mean time, your best bet is to search (or
use FastAnonymous or NumericFuns).
--Tim
On Wednesday, March 25, 2015 11:41:10 AM Phil Tomson wrote:
Maybe this is just obvious, but
Hi everyone,
I recently started using Julia for my projects and I'm currently quite
stuck by how to parallelize things.
I've got the two following functions:
@everywhere pixel(p) = [p.r, p.g, p.b];
which takes a RGB pixel (as defined in the Images module) and converts it
into a vector of
Others are more qualified to answer the specific question about SubArrays,
but you might check out the ArrayViews package. For your test, it
allocates a little under half the memory and is a little over twice as fast
(after warmup);
julia> const b = [1:5;];
julia> function f()
for i in
Maybe this is just obvious, but it's not making much sense to me.
If I have a reference to a function (pardon if that's not the correct
Julia-ish terminology - basically just a variable that holds a Function
type) and call it, it runs much more slowly (presumably because it's
allocating a lot
Thanks to all who contributed to v0.3.7.
Unless further testing is going on, this milestone can now be closed at
https://github.com/JuliaLang/julia/milestones
It would be helpful if the v0.4.0 milestone due date was updated to provide
a more realistic projection.
SubArrays are immutable on 0.4. But tuples aren't inlined, which is going to
force allocation.
Assuming you're using 0.3, there's a second problem: the code in the
constructor is not type-stable, and that makes construction slow and memory-
hungry. Compare the following on 0.3 and 0.4:
julia> A
I have a couple of instances where a function is determined by some
parameters (in a JSON file in this case) and I have to call it in this
manner. I'm thinking it should be possible to speed these up via a macro,
but I'm a macro newbie. I'll probably post a different question related to
The question says it all. I wonder if one would get any benefits from keeping
small things in small containers: Uint8 instead of Int64 on an x64 OS?
Thanks.
On Wed, Mar 25, 2015 at 11:31 PM, Ivar Nesje iva...@gmail.com wrote:
If you store millions of them, you can use only 1/8 of the space, and get
better memory efficiency.
On Wednesday, March 25, 2015 at 21:11:05 UTC+1, Boris Kheyfets wrote:
The question says it all. I wonder if one would
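The space difference is easy to check (a quick sketch; `sizeof` on an array reports the size of its data in bytes):

```julia
sizeof(Int64)                 # 8 bytes per element
sizeof(UInt8)                 # 1 byte per element
sizeof(zeros(Int64, 10^6))    # 8000000 bytes for a million elements
sizeof(zeros(UInt8, 10^6))    # 1000000 bytes for the same count
```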
And Jiahao Chen writes:
I tried to manually inline idxmaxabs. It made absolutely no difference
on my machine. The row scaling takes ~0.05% of total execution time.
Simply inlining, sure, but you could scale inside the outer loop
and find the next pivot in the inner loop. Making only a single
Good question!
In 0.4 the printing for @test has been improved quite significantly to
display the values of variables.
julia> a,b = 1,2
julia> @test a==b
ERROR: test failed: (1 == 2)
in expression: a == b
in error at error.jl:19
in default_handler at test.jl:27
in do_test at test.jl:50
On Wednesday, March 25, 2015 at 1:08:24 PM UTC-7, Tim Holy wrote:
Don't use a macro, just use the @anon macro to create an object that will
be
fast to use as a function.
I guess I'm not understanding how this is used, I would have thought I'd
need to do something like:
julia> function
On Wednesday, March 25, 2015 at 1:52:04 PM UTC-7, Tim Holy wrote:
No, it's
f = @anon x->abs(x)
and then pass f to test_time. Declare the function like this:
function test_time{F}(func::F)
end
Ok, got that working, but when I try using it inside the function (which
Don't use a macro, just use the @anon macro to create an object that will be
fast to use as a function.
--Tim
On Wednesday, March 25, 2015 01:00:27 PM Phil Tomson wrote:
I have a couple of instances where a function is determined by some
parameters (in a JSON file in this case) and I have to
If you store millions of them, you can use only 1/8 of the space, and get
better memory efficiency.
On Wednesday, March 25, 2015 at 21:11:05 UTC+1, Boris Kheyfets wrote:
The question says it all. I wonder if one would get any benefits from keeping
small things in small containers: Uint8 instead of
No, it's
f = @anon x->abs(x)
and then pass f to test_time. Declare the function like this:
function test_time{F}(func::F)
end
--Tim
On Wednesday, March 25, 2015 01:30:28 PM Phil Tomson wrote:
On Wednesday, March 25, 2015 at 1:08:24 PM UTC-7, Tim Holy wrote:
Don't use a macro,
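The reason the `{F}` signature matters: it forces a specialized method to be compiled for the concrete type that @anon produces, so the inner call is direct rather than a dynamic call through the abstract Function type. Roughly (0.3/0.4 syntax; later Julia writes this with `where F`):

```julia
# Sketch: test_time specializes on the concrete functor type F,
# so `func(i)` can be inlined instead of dispatched dynamically.
function test_time{F}(func::F)
    s = 0.0
    for i in 1:10^5
        s += func(i)
    end
    s
end
```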
The function-to-be-called is not known at compile time in Phil's
application, apparently.
Question for Phil: are there a limited set of functions that you know
you'll be calling here? I was doing something similar recently, where it
actually made the most sense to create a fixed Dict{Symbol,
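The Dict-based lookup Tony describes might be sketched like this (the function names here are illustrative, not from Phil's JSON file):

```julia
# Hypothetical: map symbols parsed from a config file to known functions.
const FUNCS = Dict{Symbol,Function}(:abs => abs, :sin => sin, :sqrt => sqrt)

apply_named(name::Symbol, x) = FUNCS[name](x)

apply_named(:abs, -3)   # returns 3
```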
On Thursday, March 26, 2015 at 8:06:41 AM UTC+11, Phil Tomson wrote:
On Wednesday, March 25, 2015 at 1:52:04 PM UTC-7, Tim Holy wrote:
No, it's
f = @anon x->abs(x)
and then pass f to test_time. Declare the function like this:
function test_time{F}(func::F)
end
I want to be able to pass in a symbol which represents a function name into
a macro and then have that function applied to an expression, something
like:
@apply_func :abs (x - y)
(where (x-y) could stand in for some expression or a single number)
I did a bit of searching here and came up
On Wednesday, March 25, 2015 at 5:07:27 PM UTC-7, Tony Kelman wrote:
The function-to-be-called is not known at compile time in Phil's
application, apparently.
Right, they come out of a JSON file. I parse the JSON and construct a list
of processing nodes from it and those could have 1 of
Here's the code I was referring to
- https://github.com/tkelman/BLOM.jl/blob/master/src/functioncodes.jl
In my case I'm using Float64 function codes for other reasons, created by
reinterpreting a UInt64 with a few bits flipped. Using UInts directly,
probably from the object_id of the symbol,
I guess what I find most confusing is that there would be a difference, since
adding 1 to a pointer only adds one byte, not one element size.
julia> p1 = pointer(zeros(UInt64))
Ptr{UInt64} @0x00010b28c360
julia> p1 + 1
Ptr{UInt64} @0x00010b28c361
I would have expected the latter to end in 68.
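In other words, under the current byte-wise semantics, stepping by one element has to be spelled out with sizeof; a small sketch:

```julia
a = zeros(UInt64, 4)
p = pointer(a)
p + 1                 # one *byte* past p
p + sizeof(UInt64)    # one *element* (8 bytes) past p, i.e. a[2]
# so element k lives at p + (k - 1) * sizeof(UInt64)
```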
On Wednesday, March 25, 2015 at 12:34:47 PM UTC-7, Mauro wrote:
This is a known limitation of Julia. The trouble is that Julia cannot
do its type inference with the passed-in function. I don't have time
to search for the relevant issues but you should be able to find them.
Hi,
I have an array of 100 elements. I want to split the array to 70 (test set)
and 30 (train set) randomly.
N=100
A = rand(N);
n = convert(Int, ceil(N*0.7))
testindex = sample(1:size(A,1), n, replace=false)
testA = A[testindex];
How can I get the train set?
I could loop through testA and A to
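One way to get the complement without a loop is setdiff on the index range (a sketch; `sample` here is assumed to come from the StatsBase package, which takes the draw count as a positional argument and `replace` as a keyword):

```julia
using StatsBase

N = 100
A = rand(N)
n = ceil(Int, N * 0.7)
testindex  = sample(1:N, n, replace=false)   # 70 distinct indices
trainindex = setdiff(1:N, testindex)         # the 30 indices not drawn
testA  = A[testindex]
trainA = A[trainindex]
```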
Hi again,
I found a workaround by transforming the images into an array before (with
separate(data(img))). However I still don't understand why I can't
parallelize directly using the image.
Any idea why?
Thanks in advance :)
On Wednesday, 25 March 2015 19:33:02 UTC+1, Archibald Pontier
Sorry guys. Would have loved to come but can't make it on that date. If we make
this a regular thing I'd be happy to participate in an active manner.
Have fun,
Felix
On 25 Mar 2015, at 09:37, David Higgins daithiohuig...@gmail.com wrote:
Both times are fine with me, I just need to change
What platform is this? Are you building from a tarball or a git clone? What
version of git do you have installed?
On Tuesday, March 24, 2015 at 11:16:01 AM UTC-7, Neal Becker wrote:
after git clone, and
make OPENBLAS_TARGET_ARCH=NEHALEM
I see a lot of messages like:
fatal: Needed a
Hello all! The latest bugfix release of the 0.3.x Julia line has been
released. Binaries are available from the usual place
http://julialang.org/downloads/, and as is typical with such things,
please report all issues to either the issue tracker
https://github.com/JuliaLang/julia/issues, or
I can assign a single element of a view:
julia> view(a,:,:)[1,1] = 2
2
julia> a
10x10 Array{Int64,2}:
2 5 5 5 5 5 5 5 5 5
5 5 5 5 5 5 5 5 5 5
5 5 5 5 5 5 5 5 5 5
1 2 3 4 5 6 7 8 9 10
1 2 3 4 5 6 7 8 9 10
1 2 3 4 5 6 7 8 9 10
1 2 3
Given the performance difference and the different behavior, I'm tempted
to just deprecate the two-argument form of pointer.
let's try to be aware of the fact that there is no performance
difference, before we throw out any wild claims about function calls being
problematic or slow:
julia>
You could remove the type assertion on `fn`, and then pull the symbol out
with `fn.args[1]` if it is an expression. I don't see much benefit to
setting things up this way, though.
On Wed, Mar 25, 2015 at 8:58 PM, Phil Tomson philtom...@gmail.com wrote:
I want to be able to pass in a symbol