Do note that SharedArrays will only work with bitstype arrays, not with,
for example, an array of Strings.
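A minimal sketch of that restriction, using current Julia syntax (the `SharedArrays` stdlib; in older versions `SharedArray` lived in Base):

```julia
using SharedArrays

ok = SharedArray{Float64}(4)   # Float64 is a bits type: construction works
ok[1] = 3.14

# String is not a bits type, so construction is expected to fail
bad = try
    SharedArray{String}(4)
    "constructed"
catch err
    "rejected"
end
```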
Assuming that you are parsing these files and generating a large number of
small strings, I would suspect that the time is being spent serializing
and deserializing this large number of
The top-level module is called Main, so Main.start(g2, s)
On Mon, Jun 2, 2014 at 9:40 AM, Robert Feldt robert.fe...@gmail.com wrote:
Hi,
I want to be able to call out from a function (defined within a module) to
a method with a specific name but that is defined outside the module. Let
me
Is there a reason you don't just pass the `start` function as an argument
to the module function?
function g(g::G, start::Function)
    s = new_s(g)
    println("start(", typeof(g), ", ", typeof(s), ")")
    start(g, s)
end
M.g(g2, (g::G2, s::M.DefaultS) -> 1)
Since anonymous functions are quite slow in some
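A self-contained sketch of this pattern, with hypothetical placeholder types `G` and `DefaultS` and a trivial `new_s` standing in for the real ones (written in current `struct` syntax):

```julia
module M
    struct G end
    struct DefaultS end

    new_s(g::G) = DefaultS()

    # Accept the start function explicitly as an argument
    function g(gg::G, start::Function)
        s = new_s(gg)
        println("start(", typeof(gg), ", ", typeof(s), ")")
        start(gg, s)
    end
end

result = M.g(M.G(), (g, s) -> 1)
```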
Thanks!
It worked with ODBC.jl
On Friday, May 30, 2014 8:20:12 AM UTC-7, John Myles White wrote:
That package isn’t finished yet. I’d start with
https://github.com/karbarcca/ODBC.jl
— John
On May 30, 2014, at 8:07 AM, Douglas Teixeira Goncalves dt...@mst.edu wrote:
Thanks to both of you. I have now rewritten it as:
function g(g::G; startFunc = Main.start)
    s = new_s(g)
    startFunc(g, s)
end
and it works as expected. In the end I would actually want the user to
supply the name of the startFunc as a symbol rather than caring about
grabbing the function
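One way to take the name as a symbol is to look the function up with `getfield`. A hypothetical sketch (the `start` method and the `"state"` placeholder are stand-ins for the user's real top-level function and `new_s(g)`):

```julia
# Stand-in for the user's top-level start method defined in Main
start(g, s) = (g, s)

# Caller passes :start (a Symbol); we resolve it in Main at call time
function g_by_name(g; startsym::Symbol = :start)
    startFunc = getfield(Main, startsym)
    s = "state"               # placeholder for new_s(g)
    startFunc(g, s)
end

out = g_by_name(42)
```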
Last night I got julia into a state where
julia> typeof([i for i = 1:10])
Array{Any,1}
restarting julia cured this (it went back to being an Int array).
Does anyone have any idea what would cause this?
You should be able to do just
function g(g::G; start = :start)
    # there is no problem in having the same name on a kwarg and the
    # variable you pass, so calling as M.g(g2; start = start) is OK
    if start == :start
        startFunc = Main.start
    else
        startFunc = start
    end
    startFunc::Function # don't
Hi!
I want to iterate over a collection of dicts and evaluate a function that
takes one Dict at a time. In R-speak I have a list of lists and want to
lapply my function - which takes a list as input - for each sublist:
function dfun(d::Dict)
    println(collect(keys(d)))
end
The iterator for Dict iterates through (key, value) pairs as tuples. Try:
map(dfun, values(d))
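For example, with a hypothetical Dict of Dicts:

```julia
# A function that takes one Dict at a time
dfun(d::Dict) = collect(keys(d))

# d is a Dict whose values are themselves Dicts (the "list of lists")
d = Dict(:a => Dict("x" => 1), :b => Dict("y" => 2, "z" => 3))

# map over values(d) so dfun sees each inner Dict, not (key, value) tuples
results = map(dfun, values(d))
```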
On Monday, June 2, 2014 10:25:35 AM UTC-5, Florian Oswald wrote:
Hi!
I want to iterate over a collection of dicts and evaluate a function that
takes one Dict at a time. In R-speak I have a list
gotcha.
thanks!
On 2 June 2014 16:44, Patrick O'Leary patrick.ole...@gmail.com wrote:
The iterator for Dict iterates through (key, value) pairs as tuples. Try:
map(dfun, values(d))
On Monday, June 2, 2014 10:25:35 AM UTC-5, Florian Oswald wrote:
Hi!
I want to iterate over a collection
I had been doing everything in modules, (I don't like restarting.)
After attempting to recreate it, I think the cause is aborting out of an
infinite recursion in typechecking.
I get a stack trace that looks like
^C ^CERROR: interrupt
in typeinf at inference.jl:1357
in typeinf at
On Monday, June 2, 2014 1:01:25 AM UTC-4, Jameson wrote:
Therefore, for benchmarks, you should execute your code in a loop enough
times that the measurement error (of the hardware and OS) is not too
significant.
You can also often benchmark multiple times and take the minimum (not the
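The loop-and-minimum pattern above can be hand-rolled as follows (a sketch; the BenchmarkTools.jl package automates this properly):

```julia
# Run f() n times and report the minimum wall-clock time in seconds,
# which filters out one-sided noise from the OS and hardware.
function bench(f, n)
    times = Float64[]
    for _ in 1:n
        t0 = time_ns()
        f()
        push!(times, (time_ns() - t0) / 1e9)
    end
    minimum(times)
end

t = bench(() -> sum(rand(1000)), 100)
```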
On May 30, 2014, at 5:55 PM, Jameson Nash wrote:
I don't see anything wrong.
The T in your method declaration is not the same as the T in your function
declaration
What? I don't understand. Might be a typo in your sentence.
I know that there are four different type variables in the
I also have a similar kind of problem when doing some unrelated xml parsing
tests using LightXML today. My code hasn't changed. The stack trace
suggests it's not LightXML related but more how the type inference got
confused.
Strangely enough, as long as I repeat the function call with the same
output of methods()
If I wasn't typing on my phone, I would have been clearer. methods() has a
two-argument form which will specialize
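For illustration, the two-argument form takes a tuple of argument types and restricts the listing to methods applicable to them:

```julia
# All methods of + ...
all_ms = methods(+)

# ... versus only those applicable to (Int, Int)
int_ms = methods(+, (Int, Int))
```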
On Monday, June 2, 2014, David Moon dave_m...@alum.mit.edu wrote:
On May 30, 2014, at 5:55 PM, Jameson Nash wrote:
I don't see anything wrong.
The T
I actually just started collaborating with these
guys http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6546013 who
recently moved from MIT to Berkeley. They're using Chisel, Bluespec, and a
custom compiler that takes a graph representation of an algorithm and
determines a hardware layout
Thanks, I will try that and see what I find.
/ Toivo
On Sunday, 1 June 2014 21:55:09 UTC+2, Elliot Saba wrote:
Try running it inside an OpenGL debugger. (On Mac, you can install the
OpenGL debugger from Apple's Xcode utilities download; I believe it's
available as a separate download on
Does Julia have a package or library to compute the exponential integral
http://en.wikipedia.org/wiki/Exponential_integral
for real positive arguments in arbitrary precision? I am new to Julia, so
please explain appropriately. Thanks.
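For moderate positive x, E1 can be sketched from its standard power series, E1(x) = -γ - log(x) - Σ_{k≥1} (-x)^k/(k·k!); this is only an illustration, and a maintained implementation is the better choice (SpecialFunctions.jl is the usual home for such functions):

```julia
const EULER_GAMMA = 0.5772156649015329   # Euler–Mascheroni constant

# Series evaluation of E1(x); accurate for moderate x > 0
function E1(x::Float64)
    s = 0.0
    term = 1.0
    for k in 1:60
        term *= -x / k       # term is now (-x)^k / k!
        s += term / k
    end
    -EULER_GAMMA - log(x) - s
end

v = E1(1.0)    # E1(1) ≈ 0.2193839...
```

The same series works with `BigFloat` arguments (and a `big` γ) if arbitrary precision is needed, though more terms and range reduction are required for larger x.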
Hi Thomas,
This is definitely a reasonable thing to want to do, and there are a number
of methods to differentiate through a matrix factorization. Julia doesn't
yet have a generic chol() method
(see https://github.com/JuliaLang/julia/issues/1629), but it does have a
generic LU decomposition
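As a small illustration of the generic LU path (current syntax), `lu` falls back to a generic algorithm for non-BLAS element types, e.g. exact rationals:

```julia
using LinearAlgebra

# Rational entries force the generic (non-LAPACK) LU code path
A = Rational{Int}[4 3; 6 3]
F = lu(A)

# The factorization is exact: L*U reproduces the row-permuted A
check = F.L * F.U == A[F.p, :]
```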
Thanks for the quick response. I'll check and see if an LU decomposition
will do the trick
As far as my own cholesky code, it is quite a bit slower, even after your
edits. For a 1000x1000 Float64 matrix (i.e., X = randn(1000,1000); X = X'
* X), here are the timing results I get:
3 Calls of
Thanks John. His argument definitely makes sense (that algorithms that
cause more garbage collection won't get penalized by median, unless, of
course, they cause gc() to occur more than 50% of the time).
Most benchmarks of Julia code that I've done (or seen) have made some
attempt to take gc()
I feel that ignoring gc can be a bit of a cheat since it does happen and
it's quite expensive – and other systems may be better or worse at it. Of
course, it can still be good to separate the cause of slowness explicitly
into execution time and overhead for things like gc.
On Mon, Jun 2, 2014 at
random, possibly clueless thoughts as i look at this:
yes, transform rules by type would be nice! not sure what that means about
having to generate a module within a macro, though (for namespacing).
do you parse strings or streams? (or both?) i know nothing about julia
streams, yet, but i
On Tue, Jun 3, 2014 at 4:43 AM, Thomas Covert thom.cov...@gmail.com wrote:
I was hoping to find some neat linear algebra trick that would let me
compute a DualNumber cholesky factorization without having to resort to
non-LAPACK code, but I haven't found it yet. That is, I figured that I
could
Thanks for that, Chris. I also worked out a similar derivation - though I
wasn't able to prove to myself that the equation B = L*M' + M*L' has a
unique solution for M.
This feels similar to the Sylvester Equation, but it's not quite the same...
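For what it's worth, if M is required to be lower triangular, the solution is unique and has a known closed form: writing C = L⁻¹M (lower triangular), B = L M' + M L' becomes L⁻¹ B L⁻ᵀ = C + C', which determines C uniquely from the symmetric left-hand side. A numerical sketch (the Φ map keeps the strict lower triangle and halves the diagonal):

```julia
using LinearAlgebra

# Φ(X): lower triangle of X with the diagonal halved
Φ(X) = LowerTriangular(X) - Diagonal(X) / 2

Σ = [4.0 2.0; 2.0 3.0]        # positive definite
B = [1.0 0.5; 0.5 2.0]        # symmetric perturbation dΣ
L = cholesky(Σ).L

# M = L * Φ(L⁻¹ B L⁻ᵀ) solves B = L*M' + M*L' with M lower triangular
M = L * Φ(L \ B / L')
residual = norm(L * M' + M * L' - B)
```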
On Mon, Jun 2, 2014 at 10:13 PM, Chris Foster
Ah good hint. Apparently a slightly more general version is called
the *-Sylvester Equation
http://gauss.uc3m.es/web/personal_web/fteran/papers/equation_lama_rev.pdf
and quite some work has been done, though I haven't time to dig
through the references. The big difference in your case is that
By the way, using your notation, do we want the traditional transpose of M,
or the conjugate transpose of M (i.e., -1 * M')? Wikipedia seems to
suggest that matrix operations on Dual Numbers require care similar to
complex matrices. I wonder if this makes the problem more clear (or not)?
On