>
> Not sure what the problem was exactly with yours. Maybe the
> multi-index versions of aget.
>
> On Sep 21, 7:43 pm, Ranjit wrote:
>
> > I was thinking I needed the Java arrays for interop. At one step in my
> > simulation I need to take an FFT of a 2d array,
help,
-Ranjit
On Sep 21, 5:48 pm, Jason Wolfe wrote:
> I think you're still missing some type hints. I think there are
> varying degrees of reflective code the compiler can emit, and it
> doesn't always warn for the intermediate cases.
>
> Do you need to do all this array
> java.util.Random. I
> think there are faster (and higher-quality) drop-in replacements for
> java.util.Random, if that is really important to you. I seem to
> recall there being a good Mersenne Twister implementation around,
> which might fit your bill.
>
> -Jason
>
> On Sep 21, 6:34
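[A sketch of the drop-in idea Jason mentions. The class name XorShiftRandom and the seed are illustrative, and this uses a simple xorshift step rather than an actual Mersenne Twister; the point is the pattern: any subclass of java.util.Random that overrides next(int bits) changes nextDouble(), nextGaussian(), etc., since they are all built on next().]

```java
import java.util.Random;

// Sketch only: a hypothetical drop-in replacement for java.util.Random.
// Overriding the protected next(int bits) method swaps the underlying
// generator for every derived method (nextGaussian, nextDouble, ...).
// This uses Marsaglia's xorshift64 step, not a Mersenne Twister.
public class XorShiftRandom extends Random {
    private long state;

    public XorShiftRandom(long seed) {
        // xorshift can never leave the all-zero state, so avoid it.
        this.state = (seed == 0) ? 0x9E3779B97F4A7C15L : seed;
    }

    @Override
    protected int next(int bits) {
        state ^= state << 13;
        state ^= state >>> 7;
        state ^= state << 17;
        return (int) (state >>> (64 - bits));
    }

    public static void main(String[] args) {
        Random r = new XorShiftRandom(42L);
        double g = r.nextGaussian(); // uses the overridden next()
        System.out.println(Double.isFinite(g));
    }
}
```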
1000^2 Gaussians.
But numpy only takes about 100 msecs to do the same thing on my
computer. I'm surprised we can't beat that, or at least get close. But
maybe next-gaussian is the bottleneck, as you say.
On Sep 21, 12:20 am, Jason Wolfe wrote:
> On Sep 20, 4:43 pm, Ranjit wrote:
Actually it turns out the type hinting in gaussian-matrix-final isn't
even necessary. I just took it out and the speed doesn't seem to
change.
On Sep 20, 7:43 pm, Ranjit wrote:
> I'm glad you think partition is the problem, because that was my guess
> too. But I think I hav
and what's going on now, then it looks like the only way
to make this any faster is if next-gaussian could return primitives.
The for and doseq macros seem like they're pretty slow.
-Ranjit
On Sep 20, 3:30 pm, Jason Wolfe wrote:
> I think partition is slowing you down (but haven
> like you should be able to just use aset-double with multiple
> indices (in place of aset-double2), but I can't seem to get the type
> hints right.
>
> -Jason
>
> On Sep 20, 7:36 am, Ranjit wrote:
>
> > Thanks Jason, this is great.
>
> > I was confused
What is "aset-double2"? Is that a macro that has a type hint for an
array of doubles?
Thanks,
-Ranjit
On Sep 19, 5:37 pm, Jason Wolfe wrote:
> Hi Ranjit,
>
> The big perf differences you're seeing are due to reflective calls.
> Getting the Java array bits properly type-hin
I'm seeing a big difference in speed for each function run only once,
so I guess any HotSpot optimization isn't happening, right?
But is the way Clojure works so opaque that we need to look at the
bytecode? I was hoping someone on the list would have some intuition
about how the expressions get implemented. I figured it would be
better to do everything in Java arrays rather than copy a Clojure
vector into a 2d array and back every time I needed to do an FFT. But
am I wrong in thinking that?
Thanks,
Ranjit
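[As a sketch of the trade-off Ranjit is weighing: the round trip per FFT is just a copy between a flat layout and the double[][] an FFT library typically wants. The helper names below are illustrative, and the FFT call itself is not shown; this only prices the copy being avoided.]

```java
import java.util.Arrays;

// Illustrative sketch of the copy step the thread wants to avoid:
// moving data between a flat array and a double[][] before/after
// each FFT call. toMatrix/toFlat are hypothetical helper names.
public class CopyCost {
    static double[][] toMatrix(double[] flat, int rows, int cols) {
        double[][] m = new double[rows][cols];
        for (int i = 0; i < rows; i++)
            System.arraycopy(flat, i * cols, m[i], 0, cols);
        return m;
    }

    static double[] toFlat(double[][] m) {
        int rows = m.length, cols = m[0].length;
        double[] flat = new double[rows * cols];
        for (int i = 0; i < rows; i++)
            System.arraycopy(m[i], 0, flat, i * cols, cols);
        return flat;
    }

    public static void main(String[] args) {
        double[] flat = {1, 2, 3, 4, 5, 6};
        double[][] m = toMatrix(flat, 2, 3);
        System.out.println(m[1][0]);                       // 4.0
        System.out.println(Arrays.equals(flat, toFlat(m))); // true
    }
}
```

Staying in double[][] end to end skips both copies, at the cost of losing Clojure's persistent-vector API for that data.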
On Sep 19, 11:20 am, Nicolas Oury wrote:
> A first good start is to put
> (set! *warn-on-reflection* true)
I'm not sure why. And it's still slower than it should be, I think. Is
there anything I can do to speed this up still further?
Thanks,
-Ranjit
(import java.util.Random)
(def r (Random.))
(defn next-gaussian [] (.nextGaussian r))
;; the unhinted aset on a 2-d array is reflective, hence the slowdown
(defn gaussian-matrix1 [arr L]
  (doseq [x (range L) y (range L)]
    (aset arr x y (next-gaussian))))
Thanks for clearing that up for me everyone. So the REPL itself acts
like a consumer of lazy sequences? Is there some logic behind that? I
guess I would have expected that the REPL would just return a
reference to a lazy expression rather than evaluate it.
Thanks,
-Ranjit
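[The behavior being asked about has a close Java analogy (shown here in Java Streams rather than Clojure): side effects buried in a lazy pipeline do not run until something consumes the pipeline. Printing a lazy seq at the REPL is exactly such a consumer, which is why the asets appear to "work" interactively.]

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.IntStream;

// Analogy for lazy-sequence consumption: peek() registers a side
// effect lazily; nothing runs until a terminal operation consumes
// the stream, just as nothing in a Clojure lazy seq runs until the
// REPL (or doall, etc.) realizes it.
public class LazyDemo {
    public static void main(String[] args) {
        AtomicInteger sideEffects = new AtomicInteger();

        IntStream pipeline = IntStream.range(0, 5)
                .peek(i -> sideEffects.incrementAndGet()); // not run yet

        System.out.println(sideEffects.get()); // 0 -- nothing consumed
        pipeline.sum();                        // terminal op forces it
        System.out.println(sideEffects.get()); // 5
    }
}
```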
On Sep 13, 2:06 pm
Is the call to aset consuming the lazy sequence?
Thanks.
On Sep 13, 11:00 am, Meikel Brandmeyer wrote:
> Hi,
>
> On 13 Sep., 15:07, Ranjit Chacko wrote:
>
> > (for [x (range 3) y (range 3)] (aset xt x y 1))
>
> Note that this will not do what you think it does. for creates a
> lazy sequence, so the asets only run when something consumes it.
float[]%29
I'm pretty new to Clojure so I'm not sure what's going on here, but I
guess this has something to do with reflection? Is this going to be a
general problem with passing multidimensional arrays to Java methods
from Clojure?
Thanks,
-Ranjit
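[A rough illustration of the mechanism behind those reflection warnings: without a type hint, each call is resolved at runtime, much like Method.invoke below, instead of being compiled to a direct invocation. This shows the general JVM mechanism, not Clojure's exact codepath; the method and class names are illustrative.]

```java
import java.lang.reflect.Method;

// Direct vs. reflective invocation of a method taking a double[][].
// The reflective path does a lookup and boxed invoke on every call,
// which is the kind of overhead Clojure's type hints eliminate.
public class ReflectDemo {
    public static double sumFirstRow(double[][] m) {
        double s = 0;
        for (double v : m[0]) s += v;
        return s;
    }

    public static void main(String[] args) throws Exception {
        double[][] m = {{1.0, 2.0, 3.0}, {4.0, 5.0, 6.0}};

        // Statically resolved call:
        double direct = sumFirstRow(m);

        // Runtime-resolved call: find the method by name and
        // parameter types, then invoke through a boxed interface.
        Method meth = ReflectDemo.class
                .getMethod("sumFirstRow", double[][].class);
        double reflective = (Double) meth.invoke(null, (Object) m);

        System.out.println(direct == reflective); // both 6.0
    }
}
```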