Re: [julia-users] Initializing members of composite types

2015-07-13 Thread Mauro
Maybe
https://github.com/mauro3/Parameters.jl
could help, it provides a way to set defaults and keyword constructors.
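For instance, a minimal sketch (the type and its fields are made up for illustration, and this assumes a reasonably recent Parameters.jl):

```julia
using Parameters

# Hypothetical type; @with_kw records the defaults and generates a keyword constructor.
@with_kw type Something
    a::Int     = 1
    b::Float64 = 2.0
    flag::Bool = false
end

s = Something()         # all fields take their defaults
t = Something(b = 4.5)  # override only the fields you care about
```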

On Mon, 2015-07-13 at 13:08, Ranjan Anantharaman benditlikeran...@gmail.com 
wrote:
  

 Suppose we had a type with 100 members

 To initialise the data members, we'd have to write a constructor 
 s = something(1,2, )

 This could turn out to be an extremely long line.
 Is there a better way of doing this?



Re: [julia-users] Re: Strange performance problem for array scaling

2015-07-13 Thread Yichao Yu
On Mon, Jul 13, 2015 at 2:20 AM, Jeffrey Sarnoff
jeffrey.sarn...@gmail.com wrote:
 and this: Cleve Moler tries to see it your way
 Moler on floating point denormals


 On Monday, July 13, 2015 at 2:11:22 AM UTC-4, Jeffrey Sarnoff wrote:

 Denormals were made part of the IEEE Floating Point standard after some
 very careful numerical analysis showed that accommodating them would
 substantively improve the quality of floating point results and this would
 lift the quality of all floating point work. Surprising as it may be,
 nonetheless you (and if not you today, you tomorrow and one of your
 neighbors today) really do care about those unusual, and often rarely
 observed values.

 fyi
 William Kahan on the introduction of denormals to the standard
 and an early, important paper on this
 Effects of Underflow on Solving Linear Systems - J.Demmel 1981


Thank you very much for the references.

Yes, I definitely believe every part of the IEEE Floating Point
standard has its reason to be there, and I'm more wondering in which
cases they are significant. OTOH, I also believe there are certain
kinds of computation that do not care about them, which is why
-ffast-math is there.

From the MathWorks blog reference:

 Double precision denormals are so tiny that they are rarely numerically 
 significant, but single precision denormals can be in the range where they 
 affect some otherwise unremarkable computations.

For my computation, I'm currently using double, but I've already
checked that switching to single precision still gives me enough
precision. Based on this, can I say that I can ignore them if I use
double precision and may need to keep them if I switch to single
precision? Using Float64 takes twice as long as using Float32, while
keeping denormals seems to take 10x the time.
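As a toy illustration (not the original benchmark; the sizes, constants and
function name are arbitrary), running the same loop on normal-range values and
on subnormal values shows this kind of value-dependent slowdown:

```julia
function scale_down!(a, factor, n)
    for k in 1:n
        for i in 1:length(a)
            a[i] *= factor
        end
    end
    return a
end

a = fill(1.0, 10^5)       # normal-range values
b = fill(5.0e-309, 10^5)  # already subnormal (below ~2.2e-308)
@time scale_down!(a, 0.999, 100)
@time scale_down!(b, 0.999, 100)  # usually much slower: subnormal arithmetic takes a slow path on most x86 CPUs
```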



As for doing it in julia, I found @simonbyrne's mxcsr.jl[1]. However,
I couldn't get it working without #11604[2]. Inline assembly in
llvmcall is working on LLVM 3.6 though[3], in case it's useful for
others.


[1] https://gist.github.com/simonbyrne/9c1e4704be46b66b1485
[2] https://github.com/JuliaLang/julia/pull/11604
[3] 
https://github.com/yuyichao/explore/blob/a47cef8c84ad3f43b18e0fd797dca9debccdd250/julia/array_prop/array_prop.jl#L3


 On Monday, July 13, 2015 at 12:35:24 AM UTC-4, Yichao Yu wrote:

 On Sun, Jul 12, 2015 at 10:30 PM, Yichao Yu yyc...@gmail.com wrote:
  Further update:
 
  I made a c++ version[1] and see a similar effect (depending on
  optimization levels) so it's not a julia issue (not that I think it
  really was to begin with...).

 After investigating the C++ version more, I find that the difference
 between the fast_math and the non-fast_math version is that the
 compiler emits a function called `set_fast_math` (see below).

 From what I can tell, the function sets bit 6 and bit 15 of the MXCSR
 register (for SSE), and according to this page[1] these are the DAZ and FZ
 bits (both related to underflow). It also describes denormals as taking
 considerably longer to process. Since the operation I have keeps
 decreasing the value, I guess it makes sense that there's a value-dependent
 performance (and it kind of makes sense that fft also
 suffers from these values)

 So now the question is:

 1. How important are underflow and denormal values? Note that I'm not
 catching underflow explicitly anyway, and I don't really care about
 values that are really small compared to 1.

 2. Is there a way to set up the SSE registers as done by the C
 compilers? @fastmath does not seem to be doing this.

 05b0 <set_fast_math>:
  5b0:   0f ae 5c 24 fc          stmxcsr -0x4(%rsp)
  5b5:   81 4c 24 fc 40 80 00    orl    $0x8040,-0x4(%rsp)
  5bc:   00
  5bd:   0f ae 54 24 fc          ldmxcsr -0x4(%rsp)
  5c2:   c3                      retq
  5c3:   66 2e 0f 1f 84 00 00    nopw   %cs:0x0(%rax,%rax,1)
  5ca:   00 00 00
  5cd:   0f 1f 00                nopl   (%rax)

 [1] http://softpixel.com/~cwright/programming/simd/sse.php

 
  The slowdown is present in the C++ version for all optimization
  levels except -Ofast and -ffast-math. The julia version is faster than
  the default performance for both gcc and clang but is slower in the
  fast case for higher optimization levels. For -O2 and higher, the C++

 The slowness of the julia version seems to be due to multidimensional
 arrays. Using a 1d array yields performance similar to C.

  version shows a ~100x slow down for the slow case.
 
  @fast_math in julia doesn't seem to have an effect for this although
  it does for clang and gcc...
 
  [1]
  https://github.com/yuyichao/explore/blob/5a644cd46dc6f8056cee69f508f9e995b5839a01/julia/array_prop/propagate.cpp
 
  On Sun, Jul 12, 2015 at 9:23 PM, Yichao Yu yyc...@gmail.com wrote:
  Update:
 
  I've just got an even simpler version without any complex numbers and
  only has Float64. The two loops are as small as the following LLVM-IR
  now and there's only simple arithmetics in the loop body.
 
  ```llvm
  L9.preheader: 

Re: [julia-users] Re: Strange performance problem for array scaling

2015-07-13 Thread Yichao Yu
 As for doing it in julia, I found @simonbyrne's mxcsr.jl[1]. However,
 I couldn't get it working without #11604[2]. Inline assembly in
 llvmcall is working on LLVM 3.6 though[3], in case it's useful for
 others.


And for future reference, I found #789, which is not documented
anywhere AFAICT (I will probably file a doc issue...).
It also supports runtime detection of CPU features, so it should be much
more portable.

[1] https://github.com/JuliaLang/julia/pull/789


 [1] https://gist.github.com/simonbyrne/9c1e4704be46b66b1485
 [2] https://github.com/JuliaLang/julia/pull/11604
 [3] 
 https://github.com/yuyichao/explore/blob/a47cef8c84ad3f43b18e0fd797dca9debccdd250/julia/array_prop/array_prop.jl#L3



Re: [julia-users] Re: MongoDB and Julia

2015-07-13 Thread Stefan Karpinski
Have you tried opening issues on the relevant packages? Most people here 
(myself included) won't know much about mongoDB or these packages.


 On Jul 13, 2015, at 12:27 AM, Kevin Liu kevinliu2...@gmail.com wrote:
 
 Any help would be greatly appreciated. I am even debating over the idea of 
 contributing to the development of this package because I believe so much in 
 the language and need to use MongoDB. 
 
 On Sunday, July 12, 2015 at 4:17:44 AM UTC-3, Kevin Liu wrote:
 Hi, 
 
 I have Julia 0.3, Mongodb-osx-x86_64-3.0.4, and Mongo-c-driver-1.1.9 
 installed, but can't get Julia to access the Mongo Client through this 
 'untestable' package https://github.com/pzion/Mongo.jl, according to  
 http://pkg.julialang.org/. 
 
 I have tried Lytol/Mongo.jl and the command require("Mongo.jl") can't open 
 file "Mongo.jl", or the auto-generated deps.jl. 
 
 Is anyone having similar problems trying to make Julia work with Mongo? 
 
 Thank you
 
 Kevin


[julia-users] Initializing members of composite types

2015-07-13 Thread Ranjan Anantharaman
 

Suppose we had a type with 100 members

To initialise the data members, we'd have to write a constructor 
s = something(1,2, )

This could turn out to be an extremely long line.
Is there a better way of doing this?


[julia-users] Re: Embedding Julia with C++

2015-07-13 Thread Kostas Tavlaridis-Gyparakis
Hello, thanks a lot for your answer.
You assumed correctly, I do use version 0.4.
If I got your proposal right, the fix for the linking error is to use
the command jl_init(JULIA_INIT_DIR) as shown in this example
(http://julia.readthedocs.org/en/latest/manual/embedding/#example)
and use a makefile where I just copy-paste the
commands shown in the example, so the makefile looks like this:

JL_SHARE = $(shell julia -e 'print(joinpath(JULIA_HOME,Base.DATAROOTDIR,"julia"))')
CFLAGS   += $(shell $(JL_SHARE)/julia-config.jl --cflags)
CXXFLAGS += $(shell $(JL_SHARE)/julia-config.jl --cflags)
LDFLAGS  += $(shell $(JL_SHARE)/julia-config.jl --ldflags)
LDLIBS   += $(shell $(JL_SHARE)/julia-config.jl --ldlibs)

all: main


and my source code looks like that:

#include <julia.h>

int main(int argc, char *argv[])
{
/* required: setup the julia context */
jl_init(JULIA_INIT_DIR);

/* run julia commands */
jl_eval_string("print(sqrt(2.0))");

/* strongly recommended: notify julia that the
 program is about to terminate. this allows
 julia time to cleanup pending write requests
 and run all finalizers
*/
jl_atexit_hook();
return 0;
}

Yet again when I try to compile I receive the following errors:

make: /home/kostav/julia/usr/bin/../share/julia/julia-config.jl: Command 
not found
make: /home/kostav/julia/usr/bin/../share/julia/julia-config.jl: Command 
not found
make: /home/kostav/julia/usr/bin/../share/julia/julia-config.jl: Command 
not found
cc main.c   -o main
main.c:1:19: fatal error: julia.h: No such file or directory
 #include <julia.h>
   ^
compilation terminated.
<builtin>: recipe for target 'main' failed
make: *** [main] Error 1


Note that I have really no experience with makefiles, so I can not really
handle them properly, but I assumed that it should be working if I did
everything the example was saying.
Yet again I receive the above errors.





On Monday, July 13, 2015 at 9:30:33 AM UTC+2, Jeff Waller wrote:

 It's not clear to me which version you are using; depending on the
 version, it is referred to in the URL you linked...

 I'll just cut to the chase: use 0.4 and julia-config.jl as described in the
 doc, create a Makefile, just cut-and-paste the example, and augment with
 your source.  All but
 one of your errors is a result of the wrong compile/link flags.

 The last error is that main() is either not being compiled or linked, 
 that's just straight-up
 C programming, and has nothing to do with Julia.

 As far as eclipse goes, I'm confident it's possible, I can't imagine 
 eclipse not supporting
 compilation using Makefiles, but even if it doesn't you can still automate 
 things, but just
 get something working first and you can embellish later.

 TL;DR

 0.4,  julia_config, cut-paste Makefile, add your source, done



Re: [julia-users] Initializing members of composite types

2015-07-13 Thread Tamas Papp
Maybe you could provide more context for the problem. Are the 
fields computed, or do you have them in some other data structure 
(eg a Dict or a vector)? See


https://julia.readthedocs.org/en/latest/manual/constructors/

Best,

Tamas

On Mon, Jul 13 2015, Ranjan Anantharaman 
benditlikeran...@gmail.com wrote:


 


Suppose we had a type with 100 members

To initialise the data members, we'd have to write a constructor 
s = something(1,2, )


This could turn out to be an extremely long line.  Is there a 
better way of doing this?




[julia-users] Re: Too many packages?

2015-07-13 Thread yuuki
On Sunday, 12 July 2015 22:47:42 UTC+2, Tony Kelman wrote:

  I think there's a big difference between developing core features in 
 packages and shipping them with the default version, and having optional 
 third-party packages implementing core features.

 Like what, exactly? If the complaint is about ease of installation of 
 packages, then that's a known and acknowledged bug (set of bugs) that 
 people are thinking about how to do a better job of. We could always use 
 more help making things better.


If there's a bunch of official packages that are shipped with the default
version, it's like having no packages; it's just a way for the devs to
organize their work internally that doesn't concern the user too much.

On the other hand, for third-party packages the user has to find them,
install them, debug them, worry about long-term maintenance, etc. In
reality it's a bit more fuzzy than that, so maybe my distinction isn't so
relevant.


For plotting I think it would be better to have any plotting than none,
even though not everybody will agree on the best choice for the one. The
fewest dependencies seems the most important criterion to me, as long as you
can draw lines, points and surfaces with decent performance. The high-level
interface doesn't matter that much in my opinion. 


[julia-users] Re: Too many packages?

2015-07-13 Thread Tom Breloff
Packages:  I fall into the camp of tiny base, curated packages.  It would 
be great to have an absolutely minimal core Julia, and then lots of build 
recipes for those people who want a MATLAB-like experience.  For example, 
JuliaStats could be responsible for compiling a best-of-breed list of key 
stats and plotting packages for the stats recipe, etc.  Then on 
installation you could say which recipes you'd like to include (maybe it 
defaults to most/all of the major recipes so people don't even need to 
think about it?).

Plotting:  I agree that simple plotting with minimal dependencies is the 
way to go for a standard package.  I don't think Gadfly fits that very well 
in its current form... there need to be simple ways to plot, and there 
should not be a strong (or any?) dependency on DataFrames.  I'm certainly 
not recommending that this become standard, but look at the readme 
of https://github.com/tbreloff/Qwt.jl for what I think the basic plotting 
interface should look like.

On Monday, July 13, 2015 at 5:12:53 AM UTC-4, yu...@altern.org wrote:

 On Sunday, 12 July 2015 22:47:42 UTC+2, Tony Kelman wrote:

  I think there's a big differences between developing core features in 
 packages and shipping them with the default version and having optional 
 third party packages implementing core features.

 Like what, exactly? If the complaint is about ease of installation of 
 packages, then that's a known and acknowledged bug (set of bugs) that 
 people are thinking about how to do a better job of. We could always use 
 more help making things better.


 If there's a bunch of official packages that are shipped with default 
 version it's like having no packages, it's just a way for the devs to 
 organize their work internally that doesn't concern the user too much.

 On the other hand for third party packages the user has to find them, 
 install them, debug them, worry about long term maintenance, etc. In 
 reality it's a bit more fuzzy than that, so maybe my distinction isn't so 
 relevant.


 For plotting I think it would be better to have any plotting than none, 
 even though not everybody will agree on the best choice for the one. The 
 least dependencies seems the most important criteria to me, as long as you 
 can draw lines, points and surfaces with decent performances. The high 
 level interface doesn't matter that much in my opinion. 



Re: [julia-users] Re: Strange performance problem for array scaling

2015-07-13 Thread Jeffrey Sarnoff
Cleve Moler's discussion is not quite as contextually invariant as are
William Kahan's and James Demmel's.
In fact the numerical analysis community has made an overwhelmingly
strong case that, roughly speaking, one is substantively better situated
where denormalized floating point values will be used whenever they may
arise than being spared those extra cycles at the mercy of an absent
smoothness shoving those values to zero.
And this holds widely for floating-point-centered applications and libraries.

If the world were remade with each sunrise by fixed-bitwidth floating point
computations, supporting denormals is to have made house calls with a few
numerical vaccines to everyone who will be relying on those computations
to inform expectations about non-trivial work with fixed-bitwidth floating
point types.  It does not wipe out all forms of numerical untowardness, and
some will find the vaccines more prophylactic than others; still, the
analogy holds.

We vaccinate many babies against measles even though there are some who
would never have become exposed to that disease .. and for those who forgot
why, not long ago the news was about a Disney vacation disease nexus and how
far it spread -- then California changed its law to make it more difficult
to opt out of childhood vaccination.
Having denormals there when the values they cover arise brings a benefit
that parallels the good in that law change.
The larger social environment gets better by growing stronger, and that can
happen because something that had been bringing weakness (disease, or bad
consequences from subtle numerical misadventures) no longer operates.

There is another way denormals have been shown to matter -- the way above
ought to help you feel at ease with deciding not to move your work from
Float64 to Float32 for the purpose of avoiding values that hover around the
smaller magnitudes realizable with Float64s.  That sounds like a headache,
and you would not have changed the theory in a way that makes things work
better (or at all).  Recasting the approach to the solving or transforming
at hand to work with integer values would move the work away from any cost
and benefit that accompany denormals.
Other than that, thank your favorite floating point microarchitect for
giving you greater throughput with denormals than everyone had a few design
cycles ago.

I would like their presence without measurable cost .. just not enough to
dislike their availability.

On Monday, July 13, 2015 at 8:02:13 AM UTC-4, Yichao Yu wrote:

  As for doing it in julia, I found @simonbyrne's mxcsr.jl[1]. However, 
  I couldn't get it working without #11604[2]. Inline assembly in 
  llvmcall is working on LLVM 3.6 though[3], in case it's useful for 
  others. 
  

 And for future references I find #789, which is not documented 
 anywhere AFAICT (will probably file a doc issue...) 
 It also supports runtime detection of cpu feature so it should be much 
 more portable. 

 [1] https://github.com/JuliaLang/julia/pull/789 

  
  [1] https://gist.github.com/simonbyrne/9c1e4704be46b66b1485 
  [2] https://github.com/JuliaLang/julia/pull/11604 
  [3] 
 https://github.com/yuyichao/explore/blob/a47cef8c84ad3f43b18e0fd797dca9debccdd250/julia/array_prop/array_prop.jl#L3
  
  



Re: [julia-users] Re: Embedding Julia with C++

2015-07-13 Thread Jeff Waller


 This is the problem, or part of it anyway. That file lives under 
 `julia/contrib` in a source build.


Hmm  this should be bundled into all 0.4, oh oh.


Re: [julia-users] Re: Embedding Julia with C++

2015-07-13 Thread Kostas Tavlaridis-Gyparakis
Any ideas on how to fix the compile error: "julia.h: No such file or 
directory  #include <julia.h>  compilation terminated."?


This is the problem, or part of it anyway. That file lives under 
 `julia/contrib` in a source build.


 Hmm  this should be bundled into all 0.4, oh oh.



Re: [julia-users] Re: Embedding Julia with C++

2015-07-13 Thread Isaiah Norton

 make: /home/kostav/julia/usr/bin/../share/julia/julia-config.jl: Command
 not found


This is the problem, or part of it anyway. That file lives under
`julia/contrib` in a source build.

The shell is your friend:
http://www.cyberciti.biz/faq/search-for-files-in-bash/

On Mon, Jul 13, 2015 at 6:46 AM, Kostas Tavlaridis-Gyparakis 
kostas.tavlari...@gmail.com wrote:

 Hello thanks a lot for your answer.
 You assumed correctly I do use version 0.4.
 If I got right your proposal about the linking error is to use
 the command jl_inti(JULIA_INIT_DIR) as shown in this
 http://julia.readthedocs.org/en/latest/manual/embedding/#example
 example and use a makefile where I just copy paste the
 commands shown in the example, so the makefile looks like that:

 JL_SHARE = $(shell julia -e
 'print(joinpath(JULIA_HOME,Base.DATAROOTDIR,julia))')
 CFLAGS   += $(shell $(JL_SHARE)/julia-config.jl --cflags)
 CXXFLAGS += $(shell $(JL_SHARE)/julia-config.jl --cflags)
 LDFLAGS  += $(shell $(JL_SHARE)/julia-config.jl --ldflags)
 LDLIBS   += $(shell $(JL_SHARE)/julia-config.jl --ldlibs)

 all: main


 and my source code looks like that:

 #include julia.h

 int main(int argc, char *argv[])
 {
 /* required: setup the julia context */
 jl_init(JULIA_INIT_DIR);

 /* run julia commands */
 jl_eval_string(print(sqrt(2.0)));

 /* strongly recommended: notify julia that the
  program is about to terminate. this allows
  julia time to cleanup pending write requests
  and run all finalizers
 */
 jl_atexit_hook();
 return 0;
 }

 Yet again when I try to compile I receive the following errors:

 make: /home/kostav/julia/usr/bin/../share/julia/julia-config.jl: Command
 not found
 make: /home/kostav/julia/usr/bin/../share/julia/julia-config.jl: Command
 not found
 make: /home/kostav/julia/usr/bin/../share/julia/julia-config.jl: Command
 not found
 cc main.c   -o main
 main.c:1:19: fatal error: julia.h: No such file or directory
  #include julia.h
^
 compilation terminated.
 builtin: recipe for target 'main' failed
 make: *** [main] Error 1


 Note that I have really no experience with makefiles so I can not really
 handle them
 properly, but I assumed that I should be working if I did all it was
 saying in the example.
 Yet again I receive the above errors.





 On Monday, July 13, 2015 at 9:30:33 AM UTC+2, Jeff Waller wrote:

 It's not clear to me which version you are using?  Depending on the
 version, It is referred
 to in the URL you linked...

 I'll just cut to the chase use 0.4 and julia_config.jl as described in
 the doc, create a
 Makefile just cut-and-paste the example, and augment with your source.
 All but
 one of your errors is a result of the wrong compile/link flags.

 The last error is that main() is either not being compiled or linked,
 that's just straight-up
 C programming, and has nothing to do with Julia.

 As far as eclipse goes, I'm confident it's possible, I can't imagine
 eclipse not supporting
 compilation using Makefiles, but even if it doesn't you can still
 automate things, but just
 get something working first and you can embellish later.

 TL;DR

 0.4,  julia_config, cut-paste Makefile, add your source, done




Re: [julia-users] Re: Embedding Julia with C++

2015-07-13 Thread Jeff Waller
Ok this is turning into kind of a debugging session, but 
/home/kostav/julia/contrib/julia-config.jl --cflags

just run it on the command line.  What's the output?

What version of 0.4?  Is it current as of a few days ago?  In the version 
I'm using
all is well. I will obtain/compile the latest


Re: [julia-users] Re: Embedding Julia with C++

2015-07-13 Thread Jeff Waller


 CFLAGS   += $(shell $/home/kostav/julia/contrib/julia-config.jl --cflags)


take out the 2nd $

CFLAGS   += $(shell /home/kostav/julia/contrib/julia-config.jl --cflags) 

what results?


Re: [julia-users] Re: Embedding Julia with C++

2015-07-13 Thread Kostas Tavlaridis-Gyparakis
Thanks a lot, that fixed this error, but now it returns me the same error 
that I get when I just run the compile command
in terminal:

fatal error: julia.h: No such file or directory  #include <julia.h> 
compilation terminated.

On Monday, July 13, 2015 at 4:34:24 PM UTC+2, Jeff Waller wrote:

 CFLAGS   += $(shell $/home/kostav/julia/contrib/julia-config.jl --cflags)


 take out the 2nd $

 CFLAGS   += $(shell /home/kostav/julia/contrib/julia-config.jl --cflags) 

 what results?



Re: [julia-users] Re: Help to eliminate Calendar.jl dependence

2015-07-13 Thread Jeffrey Sarnoff
Hello Avik.  That is fact.

As you know, the local offset is often obtainable through calls to the C or
C++ standard libraries.
Unfortunately OSs vary greatly in the care and feeding of flexible and
accurate support for time.

Some OSs may need us to provide portable run-alikes (or not). Is Julia
distributed with unexposed but exposable low-level time functions?

I copied stuff from Libc.jl and have a variant that (should work) just as
it has.  There are a few functions where adding `true` as the final argument
for a call will change the time being processed, and UT- or UTC-dependent
information is obtained.

I prefer not to use them directly -- the C functions are methodological
fragments by design.



On Monday, July 13, 2015 at 8:18:16 PM UTC-4, Avik Sengupta wrote:

 now() - now(Dates.UTC) does not actually return the correct current local 
 offset, since the two now() invocations happen a few milliseconds apart, 
 and thus do not denote the exact same time. Hence their difference then is 
 not the exact local difference. So unfortunately, as a hack, it does not 
 quite seem to work. 

 For example, with GMT-1, on a mac:

 julia> now() - now(Dates.UTC)
 3599430 milliseconds

 julia> Dates.Second(div(Dates.value(now() - now(Dates.UTC)),1000))
 3599 seconds

 On Friday, 10 July 2015 17:49:45 UTC+1, Jacob Quinn wrote:


 On Fri, Jul 10, 2015 at 8:11 AM, Tom Breloff t...@breloff.com wrote:

 as


 Tom,

 Yes, the method I proposed won't work retroactively since the method for 
 getting the current local offset from GMT is `now() - now(Dates.UTC)`. If 
 you were to run that every second crossing over the daylight savings 
 moment, you'd see that it correctly adjusts for daylight savings, but it's 
 only going to give you the *current* offset from GMT. Something more 
 elaborate will require tapping into the tzinfo database (which TimeZones.jl 
 will do).

 -Jacob
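One possible mitigation (a suggestion, not something from the thread): since
real timezone offsets are whole minutes, rounding the millisecond difference
hides the few milliseconds that elapse between the two now() calls:

```julia
# Assumes a positive (east-of-GMT) offset, as in the example above.
ms     = Dates.value(now() - now(Dates.UTC))   # e.g. 3599430
offset = Dates.Minute(div(ms + 30000, 60000))  # rounds to the nearest minute -> 60 minutes
```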



Re: [julia-users] Re: MongoDB and Julia

2015-07-13 Thread Jacob Quinn
You may also try Pkg.add("ODBC") if you can find a working ODBC driver for
Mongo. I feel like I've heard of people going this route.

-Jacob

On Mon, Jul 13, 2015 at 9:23 AM, Kevin Liu kevinliu2...@gmail.com wrote:

 Hey Stefan, thanks for replying. I have not opened an issue on Github's
 pzion/Mongo.jl. I will, and I will attempt to debug it. Thank you. Kevin

 On Monday, July 13, 2015 at 9:02:30 AM UTC-3, Stefan Karpinski wrote:

 Have you tried opening issues on the relevant packages? Most people here
 (myself included) won't know much about mongoDB or these packages.


 On Jul 13, 2015, at 12:27 AM, Kevin Liu kevinl...@gmail.com wrote:

 Any help would be greatly appreciated. I am even debating over the idea
 of contributing to the development of this package because I believe so
 much in the language and need to use MongoDB.

 On Sunday, July 12, 2015 at 4:17:44 AM UTC-3, Kevin Liu wrote:

 Hi,

 I have Julia 0.3, Mongodb-osx-x86_64-3.0.4, and Mongo-c-driver-1.1.9
 installed, but can't get Julia to access the Mongo Client through this
 'untestable' package https://github.com/pzion/Mongo.jl, according to
 http://pkg.julialang.org/.

 I have tried Lytol/Mongo.jl and the command require(Mongo.jl) can't
 open file Mongo.jl, or the auto-generated deps.jl.

 Is anyone having similar problems trying to make Julia work with Mongo?

 Thank you

 Kevin




Re: [julia-users] Re: Too many packages?

2015-07-13 Thread Jacob Quinn
Note there's also an open issue for requiring a higher overall standard for
officially registered packages in the JuliaLang/METADATA.jl package
repository. It's a big issue with a lot of work required to get to the
proposal, but it would lead to (hopefully) instilling more confidence in
users knowing that anything they add through `Pkg.add()` would meet some
acceptable level of quality and robustness.

-Jacob

On Mon, Jul 13, 2015 at 11:11 AM, Christoph Ortner 
christophortn...@gmail.com wrote:

 I seem to be in the minority too many packages camp. I would prefer
 stable updates of julia version which means that key functionality should
 be included in core, e.g. BLAS, sparse solvers, eig, eigs, basic plotting
 and so on and so forth. But at some point there was an idea of having core
 and Stdlib, which I think is equally acceptable.
 Christoph


[julia-users] Re: Too many packages?

2015-07-13 Thread Christoph Ortner
I seem to be in the minority "too many packages" camp. I would prefer stable 
updates of the julia version, which means that key functionality should be included 
in core, e.g. BLAS, sparse solvers, eig, eigs, basic plotting and so on and so 
forth. But at some point there was an idea of having core and a Stdlib, which I 
think is equally acceptable.
Christoph

Re: [julia-users] Re: Strange performance problem for array scaling

2015-07-13 Thread Yichao Yu
On Mon, Jul 13, 2015 at 11:39 AM, Jeffrey Sarnoff
jeffrey.sarn...@gmail.com wrote:

Thanks for sharing your view about denormal values. I hope what I said
didn't make it sound like I want to get rid of them completely (and if it
did sound like that, I didn't mean it...). I haven't read the more detailed
analysis of their impact, but I believe you that they are
important in general.

For my specific application, I'm doing time propagation on a
wavefunction (that can decay). For my purpose, there are many other
sources of uncertainty and I'm mainly interested in how the majority
of the wavefunction behaves. Therefore I don't really care about the
actual value of anything smaller than 10^-10, but I do want it to
run fast. Since this is a linear problem, I can also scale the values
by a constant factor to make underflow less of a problem.

 I have not looked at the specifics of what is going on ...
 Dismissing denormals is particularly dicey when your functional data flow is
 generating many denormalized values.

 Do you what it is causing many values of very small magnitude to occur as
 you run this?

 Is the data holding them explicitly?  If so, and you have access to
 preprocess the data, and you are sure that software
 cannot accumulate or reciprocate or exp etc them, clamp those values to zero
 and then use the data.

 Does the code operate as a denormalized value generator? If so, a small
 alteration to the order of operations may help.



 On Monday, July 13, 2015 at 9:45:59 AM UTC-4, Jeffrey Sarnoff wrote:

 Cleve Moler's discussion is not quite as contextually invariant as are
 William Kahan's and James Demmel's.
 In fact the numerical analysis community has made an overwhelmingly
 strong case that, roughly speaking,
 one is substantively better situated where denormalized floating point
 values will be used whenever they may
 arise than being free of those extra cycles at the mercy of an absent
 smoothness shoving those values to zero.
 And this holds widely for floating point centered applications or
 libraries.

 If the world were remade with each sunrise by fixed bitwidth floating
 point computations, supporting denormals
 is to have made house-calls with few numerical vaccines to everyone who
 will be relying on those computations
 to inform expectations about non-trivial work with fixed bitwdith floating
 point types.  It does not wipe out all forms
 of numerical untowardness, and some will find the vaccinces more
 prophylatic than others; still, the analogy holds.

 We vaccinate many babies against measles even though there are some who
 would never have become exposed
 to that disease .. and for those who forgot why, not long ago the news was
 about a Disney vaction disease nexus
 and how far it spread -- then California changed its law to make it more
 difficult to opt-out of childhood vaccination.
 Having denormals there when the values they cover arise brings benifit
 that parallels the good in that law change.
 The larger social environment  gets better by growing stronger and that
 can happen because somethat that had
 been bringing weakness (disease or bad consequences from subtile numbery
 misadventures) no longer operates.

 There is another way denormals have been shown to be matter -- the way
 above ought to help you feel at ease
 with deciding not to move your work from Float64 to Float32 for the
 purpose of avoiding values that hover around
 smaller magnitudes realizable with Float64s.  That sounds like a headache,
 and you would not have changed
 the theory in a way that makes things work  (or at all).  Recasting the
 approch to solving ot transforming at hand
 to work with integer values would move the work away from any cost and
 benefit that accompany denormals.
 Other that that, thank your favorite floating point microarchitect for
 giving you greater throughput with denormals
 than everyone had a few design cycles ago.

 I would like their presence without measureable cost .. just not enough to
 dislike their availability.

 On Monday, July 13, 2015 at 8:02:13 AM UTC-4, Yichao Yu wrote:

  As for doing it in julia, I found @simonbyrne's mxcsr.jl[1]. However,
  I couldn't get it working without #11604[2]. Inline assembly in
  llvmcall is working on LLVM 3.6 though[3], in case it's useful for
  others.
 

 And for future references I find #789, which is not documented
 anywhere AFAICT (will probably file a doc issue...)
 It also supports runtime detection of cpu feature so it should be much
 more portable.

 [1] https://github.com/JuliaLang/julia/pull/789

 
  [1] https://gist.github.com/simonbyrne/9c1e4704be46b66b1485
  [2] https://github.com/JuliaLang/julia/pull/11604
  [3]
  https://github.com/yuyichao/explore/blob/a47cef8c84ad3f43b18e0fd797dca9debccdd250/julia/array_prop/array_prop.jl#L3
 


Re: [julia-users] Re: Embedding Julia with C++

2015-07-13 Thread Jeff Waller


On Monday, July 13, 2015 at 11:36:34 AM UTC-4, Kostas Tavlaridis-Gyparakis 
wrote:

 Ok, my current Julia version (that I installed via running the source 
 code) is: 
 *Version 0.4.0-dev+5841 (2015-07-07 14:58 UTC)*So, what should I do 
 different so that -I flag gets the proper value?


Could you run /home/kostav/julia/contrib/julia-config.jl  --ldlibs and tell 
me what it returns?

Did you do a make install of Julia itself and are running out of that
installed directory or are you running out of where it compiled or possibly
simply copied the directories? That script assumes the former and I think
maybe you're doing the latter.

If so, you can still use it, but you have to specify which julia, so 
something like this:

/home/kostav/julia/bin/julia /home/kostav/julia/contrib/julia-config.jl --ldlibs

Is there any difference?  To get something working, cut and paste the output 
into the semi-working step above.

To use it in a Makefile, modify it to 

$(shell /home/kostav/julia/bin/julia /home/kostav/julia/contrib/julia-config.jl --cflags) 

etc


[julia-users] Where do they come from: jl_uv_writecb() ERROR: bad file descriptor EBADF

2015-07-13 Thread Andreas Lobinger
Hello colleagues,

I'm working on a library adaptation, and if I close Julia, I get a list of 
these:
these:
julia 
jl_uv_writecb() ERROR: bad file descriptor EBADF
jl_uv_writecb() ERROR: bad file descriptor EBADF
jl_uv_writecb() ERROR: bad file descriptor EBADF
jl_uv_writecb() ERROR: bad file descriptor EBADF
jl_uv_writecb() ERROR: bad file descriptor EBADF
jl_uv_writecb() ERROR: bad file descriptor EBADF
jl_uv_writecb() ERROR: bad file descriptor EBADF
jl_uv_writecb() ERROR: bad file descriptor EBADF

Can someone point me in the direction of how to get information about 
the file descriptors which cause the problem? AFAICS I have only 1 file 
pending, opened by the underlying library.

Wishing a happy day,
   Andreas


Re: [julia-users] Where do they come from: jl_uv_writecb() ERROR: bad file descriptor EBADF

2015-07-13 Thread Yichao Yu
On Mon, Jul 13, 2015 at 11:31 AM, Andreas Lobinger lobing...@gmail.com wrote:
 Hello colleagues,

 i'm working on a library adaptation and if i close julia, i get a list of
 these:
 julia
 jl_uv_writecb() ERROR: bad file descriptor EBADF
 jl_uv_writecb() ERROR: bad file descriptor EBADF
 jl_uv_writecb() ERROR: bad file descriptor EBADF
 jl_uv_writecb() ERROR: bad file descriptor EBADF
 jl_uv_writecb() ERROR: bad file descriptor EBADF
 jl_uv_writecb() ERROR: bad file descriptor EBADF
 jl_uv_writecb() ERROR: bad file descriptor EBADF
 jl_uv_writecb() ERROR: bad file descriptor EBADF

 Can i someone point me into the direction, how to get the information about
 these file descriptors which cause the problem? Afaics i have only 1 file
 pending, opened by the underlying library.

https://github.com/JuliaLang/julia/issues/11958


 Wishing a happy day,
Andreas


Re: [julia-users] Re: Strange performance problem for array scaling

2015-07-13 Thread Jeffrey Sarnoff
it is a fairer test to work from a second copy of the data that has been 
prescaled: prescale(x::Float64) = x * 2.0^64

On Monday, July 13, 2015 at 12:51:33 PM UTC-4, Jeffrey Sarnoff wrote:

 Staying with Float64, see if the runtime comes way down when you prescale 
 the data using prescale(x) = x *  2.0^64


 Guessing your values to be less than 10^15,  and assuming the worst case 
 smallest magnitude  
   the base10 exponent of your largest data value is below  70 scale by 
 constant is a good strategy when the largest of the data values is not large

 On Monday, July 13, 2015 at 12:04:32 PM UTC-4, Yichao Yu wrote:

 On Mon, Jul 13, 2015 at 11:39 AM, Jeffrey Sarnoff 
 jeffrey...@gmail.com wrote: 

 Thanks for sharing your view about denormal values. I hope what I said 
 doesn't seem that I want to get rid of them completely (and if it did 
 sound like that, I didn't meant it...). I didn't read the more detail 
 analysis of their impact but I would believe you that they are 
 important in general. 

 For my specific application, I'm doing time propagation on a 
 wavefunction (that can decay). For my purpose, there are many other 
 sources of uncertainty and I'm mainly interested in how the majority 
 of the wavefunction behave. Therefore I don't really care about the 
 actually value of something smaller than 10^-10 but I do want it to 
 run fast. Since this is a linear problem, I can also scale the values 
 by a constant factor to make underflow less of a problem. 

  I have not looked at the specifics of what is going on ... 
  Dismissing denormals is particularly dicey when your functional data 
 flow is 
  generating many denormalized values. 
  
  Do you what it is causing many values of very small magnitude to occur 
 as 
  you run this? 
  
  Is the data holding them explicitly?  If so, and you have access to 
  preprocess the data, and you are sure that software 
  cannot accumulate or reciprocate or exp etc them, clamp those values to 
 zero 
  and then use the data. 
  
  Does the code operate as a denormalized value generator? If so, a small 
  alteration to the order of operations may help. 
  
  
  
  On Monday, July 13, 2015 at 9:45:59 AM UTC-4, Jeffrey Sarnoff wrote: 
  
  Cleve Moler's discussion is not quite as contextually invariant as 
 are 
  William Kahan's and James Demmel's. 
  In fact the numerical analysis community has made an overwhelmingly 
  strong case that, roughly speaking, 
  one is substantively better situated where denormalized floating point 
  values will be used whenever they may 
  arise than being free of those extra cycles at the mercy of an absent 
  smoothness shoving those values to zero. 
  And this holds widely for floating point centered applications or 
  libraries. 
  
  If the world were remade with each sunrise by fixed bitwidth floating 
  point computations, supporting denormals 
  is to have made house-calls with few numerical vaccines to everyone 
 who 
  will be relying on those computations 
  to inform expectations about non-trivial work with fixed bitwdith 
 floating 
  point types.  It does not wipe out all forms 
  of numerical untowardness, and some will find the vaccinces more 
  prophylatic than others; still, the analogy holds. 
  
  We vaccinate many babies against measles even though there are some 
 who 
  would never have become exposed 
  to that disease .. and for those who forgot why, not long ago the news 
 was 
  about a Disney vaction disease nexus 
  and how far it spread -- then California changed its law to make it 
 more 
  difficult to opt-out of childhood vaccination. 
  Having denormals there when the values they cover arise brings benifit 
  that parallels the good in that law change. 
  The larger social environment  gets better by growing stronger and 
 that 
  can happen because somethat that had 
  been bringing weakness (disease or bad consequences from subtile 
 numbery 
  misadventures) no longer operates. 
  
  There is another way denormals have been shown to be matter -- the way 
  above ought to help you feel at ease 
  with deciding not to move your work from Float64 to Float32 for the 
  purpose of avoiding values that hover around 
  smaller magnitudes realizable with Float64s.  That sounds like a 
 headache, 
  and you would not have changed 
  the theory in a way that makes things work  (or at all).  Recasting 
 the 
  approch to solving ot transforming at hand 
  to work with integer values would move the work away from any cost and 
  benefit that accompany denormals. 
  Other that that, thank your favorite floating point microarchitect for 
  giving you greater throughput with denormals 
  than everyone had a few design cycles ago. 
  
  I would like their presence without measureable cost .. just not 
 enough to 
  dislike their availability. 
  
  On Monday, July 13, 2015 at 8:02:13 AM UTC-4, Yichao Yu wrote: 
  
   As for doing it in julia, I found @simonbyrne's mxcsr.jl[1]. 
 However, 
   I couldn't get it 

Re: [julia-users] Re: Embedding Julia with C++

2015-07-13 Thread Kostas Tavlaridis-Gyparakis
Ok, my current Julia version (that I installed by building from source) is
Version 0.4.0-dev+5841 (2015-07-07 14:58 UTC). So, what should I do 
differently so that the -I flag gets the proper value?





On Monday, July 13, 2015 at 5:15:04 PM UTC+2, Jeff Waller wrote:



 On Monday, July 13, 2015 at 10:54:57 AM UTC-4, Kostas Tavlaridis-Gyparakis 
 wrote:

 Any ideas on how to fix the compile error: julia.h: No such file or 
 directory  #include julia.h compilation terminated.?


 Yea of course.  This is a result of the -I flag having the wrong value.  

 There was a span of time when this command (julia-config.jl) was not working,
 as a result of the new use of sys.so instead of sys.ji, but that has
 since been fixed; that's why I'm asking what version it was.  It would be
 working in the newest version, but I am verifying that now.

 The cause of "libjulia.so not found" is that the link step is missing
 -Wl,-rpath, which julia-config gives you; that's why I keep coming back to it.



Re: [julia-users] Re: Strange performance problem for array scaling

2015-07-13 Thread Jeffrey Sarnoff
I have not looked at the specifics of what is going on ...
Dismissing denormals is particularly dicey when your functional data flow 
is generating many denormalized values.

Do you know what is causing many values of very small magnitude to occur as 
you run this?  

Is the data holding them explicitly?  If so, and you have access to 
preprocess the data, and you are sure that the software
cannot accumulate or reciprocate or exp etc. them, clamp those values to 
zero and then use the data.

Does the code operate as a denormalized-value generator? If so, a small 
alteration to the order of operations may help.
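A minimal sketch of that clamping idea (the threshold and the sample values
are made up; they would have to match the application's own tolerance):

```julia
clamp_tiny(x, tol = 1e-300) = abs(x) < tol ? zero(x) : x

data = [1.0, 1e-305, -3.0e-310, 0.5]   # hypothetical input, two tiny values
map(clamp_tiny, data)                  # -> [1.0, 0.0, 0.0, 0.5]
```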



On Monday, July 13, 2015 at 9:45:59 AM UTC-4, Jeffrey Sarnoff wrote:

 Cleve Moler's discussion is not quite as contextually invariant as are 
 William Kahan's and James Demmel's.
 In fact the numerical analysis community has made an overwhelmingly 
 strong case that, roughly speaking,
 one is substantively better situated where denormalized floating point 
 values will be used whenever they may
 arise than being free of those extra cycles at the mercy of an absent 
 smoothness shoving those values to zero.
 And this holds widely for floating point centered applications or 
 libraries. 

 If the world were remade with each sunrise by fixed bitwidth floating 
 point computations, supporting denormals
 is to have made house-calls with few numerical vaccines to everyone who 
 will be relying on those computations
 to inform expectations about non-trivial work with fixed bitwdith floating 
 point types.  It does not wipe out all forms
 of numerical untowardness, and some will find the vaccinces more 
 prophylatic than others; still, the analogy holds.

 We vaccinate many babies against measles even though there are some who 
 would never have become exposed
 to that disease .. and for those who forgot why, not long ago the news was 
 about a Disney vaction disease nexus
 and how far it spread -- then California changed its law to make it more 
 difficult to opt-out of childhood vaccination.
 Having denormals there when the values they cover arise brings benifit 
 that parallels the good in that law change.
 The larger social environment  gets better by growing stronger and that 
 can happen because somethat that had
 been bringing weakness (disease or bad consequences from subtile numbery 
 misadventures) no longer operates.

 There is another way denormals have been shown to be matter -- the way 
 above ought to help you feel at ease
 with deciding not to move your work from Float64 to Float32 for the 
 purpose of avoiding values that hover around
 smaller magnitudes realizable with Float64s.  That sounds like a headache, 
 and you would not have changed
 the theory in a way that makes things work  (or at all).  Recasting the 
 approch to solving ot transforming at hand
 to work with integer values would move the work away from any cost and 
 benefit that accompany denormals.
 Other that that, thank your favorite floating point microarchitect for 
 giving you greater throughput with denormals
 than everyone had a few design cycles ago.

 I would like their presence without measureable cost .. just not enough to 
 dislike their availability.

 On Monday, July 13, 2015 at 8:02:13 AM UTC-4, Yichao Yu wrote:

  As for doing it in julia, I found @simonbyrne's mxcsr.jl[1]. However, 
  I couldn't get it working without #11604[2]. Inline assembly in 
  llvmcall is working on LLVM 3.6 though[3], in case it's useful for 
  others. 
  

 And for future references I find #789, which is not documented 
 anywhere AFAICT (will probably file a doc issue...) 
 It also supports runtime detection of cpu feature so it should be much 
 more portable. 

 [1] https://github.com/JuliaLang/julia/pull/789 

  
  [1] https://gist.github.com/simonbyrne/9c1e4704be46b66b1485 
  [2] https://github.com/JuliaLang/julia/pull/11604 
  [3] 
 https://github.com/yuyichao/explore/blob/a47cef8c84ad3f43b18e0fd797dca9debccdd250/julia/array_prop/array_prop.jl#L3
  
  



Re: [julia-users] Re: Strange performance problem for array scaling

2015-07-13 Thread Jeffrey Sarnoff
Staying with Float64, see if the runtime comes way down when you prescale 
the data using prescale(x) = x * 2.0^64


Guessing your values to be less than 10^15, and assuming the worst-case 
smallest magnitude, the base-10 exponent of your largest data value is below 
70; scaling by a constant is a good strategy when the largest of the data 
values is not large.
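A small sketch of that suggestion (the input data here is made up; the scale
factor is the one above):

```julia
prescale(x::Float64) = x * 2.0^64

data   = rand(10^5) * 1e-300   # hypothetical values near the bottom of the normal range
scaled = map(prescale, data)   # shifted roughly 19 decimal orders of magnitude upward
# ... run the propagation on `scaled`, then undo the scaling by dividing by 2.0^64
```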

On Monday, July 13, 2015 at 12:04:32 PM UTC-4, Yichao Yu wrote:

 On Mon, Jul 13, 2015 at 11:39 AM, Jeffrey Sarnoff 
 jeffrey...@gmail.com javascript: wrote: 

 Thanks for sharing your view about denormal values. I hope what I said 
 doesn't seem that I want to get rid of them completely (and if it did 
 sound like that, I didn't meant it...). I didn't read the more detail 
 analysis of their impact but I would believe you that they are 
 important in general. 

 For my specific application, I'm doing time propagation on a 
 wavefunction (that can decay). For my purpose, there are many other 
 sources of uncertainty and I'm mainly interested in how the majority 
 of the wavefunction behave. Therefore I don't really care about the 
 actually value of something smaller than 10^-10 but I do want it to 
 run fast. Since this is a linear problem, I can also scale the values 
 by a constant factor to make underflow less of a problem. 

  I have not looked at the specifics of what is going on ... 
  Dismissing denormals is particularly dicey when your functional data 
 flow is 
  generating many denormalized values. 
  
  Do you what it is causing many values of very small magnitude to occur 
 as 
  you run this? 
  
  Is the data holding them explicitly?  If so, and you have access to 
  preprocess the data, and you are sure that software 
  cannot accumulate or reciprocate or exp etc them, clamp those values to 
 zero 
  and then use the data. 
  
  Does the code operate as a denormalized value generator? If so, a small 
  alteration to the order of operations may help. 
  
  
  
  On Monday, July 13, 2015 at 9:45:59 AM UTC-4, Jeffrey Sarnoff wrote: 
  
  Cleve Moler's discussion is not quite as contextually invariant as 
 are 
  William Kahan's and James Demmel's. 
  In fact the numerical analysis community has made an overwhelmingly 
  strong case that, roughly speaking, 
  one is substantively better situated where denormalized floating point 
  values will be used whenever they may 
  arise than being free of those extra cycles at the mercy of an absent 
  smoothness shoving those values to zero. 
  And this holds widely for floating point centered applications or 
  libraries. 
  
  If the world were remade with each sunrise by fixed bitwidth floating 
  point computations, supporting denormals 
  is to have made house-calls with few numerical vaccines to everyone who 
  will be relying on those computations 
  to inform expectations about non-trivial work with fixed bitwdith 
 floating 
  point types.  It does not wipe out all forms 
  of numerical untowardness, and some will find the vaccinces more 
  prophylatic than others; still, the analogy holds. 
  
  We vaccinate many babies against measles even though there are some who 
  would never have become exposed 
  to that disease .. and for those who forgot why, not long ago the news 
 was 
  about a Disney vaction disease nexus 
  and how far it spread -- then California changed its law to make it 
 more 
  difficult to opt-out of childhood vaccination. 
  Having denormals there when the values they cover arise brings benifit 
  that parallels the good in that law change. 
  The larger social environment  gets better by growing stronger and that 
  can happen because somethat that had 
  been bringing weakness (disease or bad consequences from subtile 
 numbery 
  misadventures) no longer operates. 
  
  There is another way denormals have been shown to be matter -- the way 
  above ought to help you feel at ease 
  with deciding not to move your work from Float64 to Float32 for the 
  purpose of avoiding values that hover around 
  smaller magnitudes realizable with Float64s.  That sounds like a 
 headache, 
  and you would not have changed 
  the theory in a way that makes things work  (or at all).  Recasting the 
  approch to solving ot transforming at hand 
  to work with integer values would move the work away from any cost and 
  benefit that accompany denormals. 
  Other that that, thank your favorite floating point microarchitect for 
  giving you greater throughput with denormals 
  than everyone had a few design cycles ago. 
  
  I would like their presence without measureable cost .. just not enough 
 to 
  dislike their availability. 
  
  On Monday, July 13, 2015 at 8:02:13 AM UTC-4, Yichao Yu wrote: 
  
   As for doing it in julia, I found @simonbyrne's mxcsr.jl[1]. 
 However, 
   I couldn't get it working without #11604[2]. Inline assembly in 
   llvmcall is working on LLVM 3.6 though[3], in case it's useful for 
   others. 
   
  
  And for future references I find #789, which is 

[julia-users] Performance of multidimensional arrays

2015-07-13 Thread Matthieu
I get bad performance when filling high-dimensional arrays.  My version 
of Julia is 0.3.10. This is the simplest example I could come up with:

function f(A::Array{Float64, 7})
    for i1 in 1:10
        for i2 in 1:10
            for i3 in 1:10
                for i4 in 1:10
                    for i5 in 1:10
                        for i6 in 1:10
                            for i7 in 1:10
                                A[i7, i6, i5, i4, i3, i2, i1] = 1.0
                            end
                        end
                    end
                end
            end
        end
    end
end
A = Array(Float64, (10, 10, 10, 10, 10, 10, 10));
@time f(A)

# elapsed time: 3.43857013 seconds (1280585500 bytes allocated, 39.46% gc 
time)


A function that creates an intermediate matrix is much faster:

function g(A::Array{Float64, 7})
    out = Array(Float64, (10, 10))
    for i1 in 1:10
        for i2 in 1:10
            for i3 in 1:10
                for i4 in 1:10
                    for i5 in 1:10
                        for i6 in 1:10
                            for i7 in 1:10
                                out[i7, i6] = 1.0
                            end
                        end
                        A[:, :, i5, i4, i3, i2, i1] = out
                    end
                end
            end
        end
    end
end
A = Array(Float64, (10, 10, 10, 10, 10, 10, 10));
@time g(A)

# elapsed time: 0.130418247 seconds (32668788 bytes allocated)

What is going on?


Re: [julia-users] Re: MongoDB and Julia

2015-07-13 Thread Kevin Liu
... from Mongo's solutions architect, slide 
28/29: http://www.slideshare.net/NorbertoLeite/how-mongodb-drv?from_action=save

On Monday, July 13, 2015 at 6:02:49 PM UTC-3, Kevin Liu wrote:

 This seems like a path I could take 
 http://docs.mongodb.org/meta-driver/latest/tutorial/ which I just found 
 out about. 

 On Monday, July 13, 2015 at 5:56:03 PM UTC-3, Jacob Quinn wrote:

 No worries. I realize it's a bit of a square peg-round hole there.

 On Mon, Jul 13, 2015 at 2:07 PM, Kevin Liu kevinl...@gmail.com wrote:

 Hey Jacob, thanks for the suggestion. ODBC just doesn't sound like the 
 optimal way to go for being too generic. I am studying its implications and 
 alternatives, but probably won't follow with ODBC. I appreciate the help. 

 On Monday, July 13, 2015 at 4:02:25 PM UTC-3, Jacob Quinn wrote:

 You may also try Pkg.add(ODBC) if you can find a working ODBC driver 
 for mongo. I feel like I've heard of people going this route.

 -Jacob

 On Mon, Jul 13, 2015 at 9:23 AM, Kevin Liu kevinl...@gmail.com wrote:

 Hey Stefan, thanks for replying. I have not opened an issue on 
 Github's pzion/Mongo.jl. I will, and I will attempt to debug it. Thank 
 you. 
 Kevin

 On Monday, July 13, 2015 at 9:02:30 AM UTC-3, Stefan Karpinski wrote:

 Have you tried opening issues on the relevant packages? Most people 
 here (myself included) won't know much about mongoDB or these packages.


 On Jul 13, 2015, at 12:27 AM, Kevin Liu kevinl...@gmail.com wrote:

 Any help would be greatly appreciated. I am even debating over the 
 idea of contributing to the development of this package because I 
 believe 
 so much in the language and need to use MongoDB. 

 On Sunday, July 12, 2015 at 4:17:44 AM UTC-3, Kevin Liu wrote:

 Hi, 

 I have Julia 0.3, Mongodb-osx-x86_64-3.0.4, and Mongo-c-driver-1.1.9 
 installed, but can't get Julia to access the Mongo Client through this 
 'untestable' package https://github.com/pzion/Mongo.jl, according 
 to  http://pkg.julialang.org/. 

 I have tried Lytol/Mongo.jl and the command require(Mongo.jl) 
 can't open file Mongo.jl, or the auto-generated deps.jl. 

 Is anyone having similar problems trying to make Julia work with 
 Mongo? 

 Thank you

 Kevin





[julia-users] let and named args

2015-07-13 Thread Michael Francis
I was somewhat surprised to find that named args do not have an implicit let, 
and I don't seem to be able to embed the let syntax:

bar() = "Hello"
wow( ; bar = bar() ) = string( bar, " world")
wow()
: ERROR: bar not defined

Within a function I can do the following:
function foo()
  let bar = bar()
    return string( bar, " world" )
  end
end
: foo (generic function with 1 method)

In the args I can not do so (perhaps there is a working syntax?):
wow( ; let bar = bar() end) = string( bar, " world")

: ERROR: syntax: invalid keyword argument syntax (let (block) (= bar (call 
bar))) (expected assignment)

Thanks



Re: [julia-users] let and named args

2015-07-13 Thread Stefan Karpinski
This does seem like a good idea. Positional arguments work this way already.
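In the meantime, a small workaround sketch (the rename is a suggestion, not
something from the thread): giving the keyword a different name from the
function it defaults to avoids the "bar not defined" error:

```julia
bar() = "Hello"

# The default expression is evaluated at call time and sees the global bar().
wow(; greeting = bar()) = string(greeting, " world")

wow()                       # -> "Hello world"
wow(greeting = "Goodbye")   # -> "Goodbye world"
```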

On Mon, Jul 13, 2015 at 4:36 PM, Michael Francis mdcfran...@gmail.com
wrote:

 I was somewhat surprised to find that name args do not have an implicit
 let and I don't seem to be able to embed the let syntax

 bar() = Hello
 wow( ; bar = bar() ) = string( bar,  world)
 wow()
 : ERROR: bar not defined

 Within a function I can do the following
 function foo()
   let bar = bar()
 return string( bar,  world )
   end
 end
 : foo (generic function with 1 method)

 In the args I can not do so ( perhaps there is a working syntax ? )
 wow( ; let bar = bar() end) = string( bar,  world)

 : ERROR: syntax: invalid keyword argument syntax (let (block) (= bar
 (call bar))) (expected assignment)

 Thanks




Re: [julia-users] Re: MongoDB and Julia

2015-07-13 Thread Kevin Liu
This seems like a path I could 
take http://docs.mongodb.org/meta-driver/latest/tutorial/ which I just 
found out about. 

On Monday, July 13, 2015 at 5:56:03 PM UTC-3, Jacob Quinn wrote:

 No worries. I realize it's a bit of a square peg-round hole there.

 On Mon, Jul 13, 2015 at 2:07 PM, Kevin Liu kevinl...@gmail.com 
 javascript: wrote:

 Hey Jacob, thanks for the suggestion. ODBC just doesn't sound like the 
 optimal way to go for being too generic. I am studying its implications and 
 alternatives, but probably won't follow with ODBC. I appreciate the help. 

 On Monday, July 13, 2015 at 4:02:25 PM UTC-3, Jacob Quinn wrote:

 You may also try Pkg.add("ODBC") if you can find a working ODBC driver 
 for mongo. I feel like I've heard of people going this route.

 -Jacob

 On Mon, Jul 13, 2015 at 9:23 AM, Kevin Liu kevinl...@gmail.com wrote:

 Hey Stefan, thanks for replying. I have not opened an issue on Github's 
 pzion/Mongo.jl. I will, and I will attempt to debug it. Thank you. Kevin

 On Monday, July 13, 2015 at 9:02:30 AM UTC-3, Stefan Karpinski wrote:

 Have you tried opening issues on the relevant packages? Most people 
 here (myself included) won't know much about mongoDB or these packages.


 On Jul 13, 2015, at 12:27 AM, Kevin Liu kevinl...@gmail.com wrote:

 Any help would be greatly appreciated. I am even debating over the 
 idea of contributing to the development of this package because I believe 
 so much in the language and need to use MongoDB. 

 On Sunday, July 12, 2015 at 4:17:44 AM UTC-3, Kevin Liu wrote:

 Hi, 

 I have Julia 0.3, Mongodb-osx-x86_64-3.0.4, and Mongo-c-driver-1.1.9 
 installed, but can't get Julia to access the Mongo Client through this 
 'untestable' package https://github.com/pzion/Mongo.jl, according to 
  http://pkg.julialang.org/. 

 I have tried Lytol/Mongo.jl and the command require("Mongo.jl") 
 can't open file "Mongo.jl", or the auto-generated deps.jl. 

 Is anyone having similar problems trying to make Julia work with 
 Mongo? 

 Thank you

 Kevin





Re: [julia-users] Performance of multidimensional arrays

2015-07-13 Thread Tim Holy
https://github.com/JuliaLang/julia/issues/9622

--Tim

On Monday, July 13, 2015 01:15:28 PM Matthieu wrote:
 I get bad performance when filling arrays with high dimension.  My version
 of Julia is 0.3.10. This is the simplest example I could come up with:
 
 function f(A::Array{Float64, 7})
 for i1 in 1:10
 for i2 in 1:10
 for i3 in 1:10
 for i4 in 1:10
 for i5 in 1:10
 for i6 in 1:10
 for i7 in 1:10
 A[i7, i6, i5, i4, i3, i2, i1] = 1.0
 end
 end
 end
 end
 end
 end
 end
 end
 A = Array(Float64, (10, 10, 10, 10, 10, 10, 10));
 @time f(A)
 
 # elapsed time: 3.43857013 seconds (1280585500 bytes allocated, 39.46% gc
 time)
 
 
 A function that creates an intermediary matrix is much faster
 
 function g(A::Array{Float64, 7})
 out = Array(Float64, (10, 10))
 for i1 in 1:10
 for i2 in 1:10
 for i3 in 1:10
 for i4 in 1:10
 for i5 in 1:10
 for i6 in 1:10
 for i7 in 1:10
 out[i7, i6] = 1.0
 end
 end
 A[:, :, i5, i4, i3, i2, i1] = out
 end
 end
 end
 end
 end
 end
 A = Array(Float64, (10, 10, 10, 10, 10, 10, 10));
 @time g(A)
 
 # elapsed time: 0.130418247 seconds (32668788 bytes allocated)
 
 What is going on?
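
[An aside, not part of the exchange: on 0.3 the allocating multi-index
setindex! path that issue #9622 tracks can be sidestepped by writing through a
single linear index, e.g.]

function h(A::Array{Float64, 7})
    for i in 1:length(A)
        @inbounds A[i] = 1.0
    end
end
A = Array(Float64, (10, 10, 10, 10, 10, 10, 10));
@time h(A)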



Re: [julia-users] Re: MongoDB and Julia

2015-07-13 Thread Jacob Quinn
No worries. I realize it's a bit of a square peg-round hole there.

On Mon, Jul 13, 2015 at 2:07 PM, Kevin Liu kevinliu2...@gmail.com wrote:

 Hey Jacob, thanks for the suggestion. ODBC just doesn't sound like the
 optimal way to go for being too generic. I am studying its implications and
 alternatives, but probably won't follow with ODBC. I appreciate the help.

 On Monday, July 13, 2015 at 4:02:25 PM UTC-3, Jacob Quinn wrote:

 You may also try Pkg.add("ODBC") if you can find a working ODBC driver
 for mongo. I feel like I've heard of people going this route.

 -Jacob

 On Mon, Jul 13, 2015 at 9:23 AM, Kevin Liu kevinl...@gmail.com wrote:

 Hey Stefan, thanks for replying. I have not opened an issue on Github's
 pzion/Mongo.jl. I will, and I will attempt to debug it. Thank you. Kevin

 On Monday, July 13, 2015 at 9:02:30 AM UTC-3, Stefan Karpinski wrote:

 Have you tried opening issues on the relevant packages? Most people
 here (myself included) won't know much about mongoDB or these packages.


 On Jul 13, 2015, at 12:27 AM, Kevin Liu kevinl...@gmail.com wrote:

 Any help would be greatly appreciated. I am even debating over the idea
 of contributing to the development of this package because I believe so
 much in the language and need to use MongoDB.

 On Sunday, July 12, 2015 at 4:17:44 AM UTC-3, Kevin Liu wrote:

 Hi,

 I have Julia 0.3, Mongodb-osx-x86_64-3.0.4, and Mongo-c-driver-1.1.9
 installed, but can't get Julia to access the Mongo Client through this
 'untestable' package https://github.com/pzion/Mongo.jl, according to
 http://pkg.julialang.org/.

 I have tried Lytol/Mongo.jl and the command require("Mongo.jl") can't
 open file "Mongo.jl", or the auto-generated deps.jl.

 Is anyone having similar problems trying to make Julia work with
 Mongo?

 Thank you

 Kevin





Re: [julia-users] Performance of multidimensional arrays

2015-07-13 Thread Matthieu
Thanks a lot! Not surprised you're already on it.

On Monday, July 13, 2015 at 4:51:37 PM UTC-4, Tim Holy wrote:

 https://github.com/JuliaLang/julia/issues/9622 

 --Tim 

 On Monday, July 13, 2015 01:15:28 PM Matthieu wrote: 
  I get bad performance when filling arrays with high dimension.  My 
 version 
  of Julia is 0.3.10. This is the simplest example I could come up with: 
  
  function f(A::Array{Float64, 7}) 
  for i1 in 1:10 
  for i2 in 1:10 
  for i3 in 1:10 
  for i4 in 1:10 
  for i5 in 1:10 
  for i6 in 1:10 
  for i7 in 1:10 
  A[i7, i6, i5, i4, i3, i2, i1] = 1.0 
  end 
  end 
  end 
  end 
  end 
  end 
  end 
  end 
  A = Array(Float64, (10, 10, 10, 10, 10, 10, 10)); 
  @time f(A) 
  
  # elapsed time: 3.43857013 seconds (1280585500 bytes allocated, 39.46% 
 gc 
  time) 
  
  
  A function that creates an intermediary matrix is much faster 
  
  function g(A::Array{Float64, 7}) 
  out = Array(Float64, (10, 10)) 
  for i1 in 1:10 
  for i2 in 1:10 
  for i3 in 1:10 
  for i4 in 1:10 
  for i5 in 1:10 
  for i6 in 1:10 
  for i7 in 1:10 
  out[i7, i6] = 1.0 
  end 
  end 
  A[:, :, i5, i4, i3, i2, i1] = out 
  end 
  end 
  end 
  end 
  end 
  end 
  A = Array(Float64, (10, 10, 10, 10, 10, 10, 10)); 
  @time g(A) 
  
  # elapsed time: 0.130418247 seconds (32668788 bytes allocated) 
  
  What is going on? 



[julia-users] Re: MongoDB and Julia

2015-07-13 Thread Jeff Waller


On Monday, July 13, 2015 at 3:27:49 AM UTC-4, Kevin Liu wrote:

 Any help would be greatly appreciated. I am even debating over the idea of 
 contributing to the development of this package because I believe so much 
 in the language and need to use MongoDB. 


I think this is why it's untestable.  

https://travis-ci.org/pzion/Mongo.jl/jobs/54034564

Lytol/Mongo.jl looks abandoned.  It has a bunch of issues created over the 
past 2 years and the last update was in 2013. The pzion repo is a fork that 
was updated 4 months ago; maybe it's abandoned too and you'll have to fork. 
But it's at least worth contacting him.




Re: [julia-users] Re: Embedding Julia with C++

2015-07-13 Thread Jeff Waller


On Monday, July 13, 2015 at 10:54:57 AM UTC-4, Kostas Tavlaridis-Gyparakis 
wrote:

 Any ideas on how to fix the compile error: "julia.h: No such file or 
 directory  #include <julia.h>  compilation terminated."?


Yea of course.  This is a result of the -I flag having the wrong value.  

There was a period when this command (julia_config.jl) was not working 
as a result of the new use of sys.so instead of sys.ji, but that has since 
been fixed, which is why I'm asking what version it was.  It should be 
working in the newest version, but I am verifying that now.

The cause of "libjulia.so not found" is that the link step is missing 
-Wl,-rpath, which julia_config gives you; that's why I keep coming back to it.


Re: [julia-users] Re: MongoDB and Julia

2015-07-13 Thread Kevin Liu
Hey Stefan, thanks for replying. I have not opened an issue on Github's 
pzion/Mongo.jl. I will, and I will attempt to debug it. Thank you. Kevin

On Monday, July 13, 2015 at 9:02:30 AM UTC-3, Stefan Karpinski wrote:

 Have you tried opening issues on the relevant packages? Most people here 
 (myself included) won't know much about mongoDB or these packages.


 On Jul 13, 2015, at 12:27 AM, Kevin Liu kevinl...@gmail.com wrote:

 Any help would be greatly appreciated. I am even debating over the idea of 
 contributing to the development of this package because I believe so much 
 in the language and need to use MongoDB. 

 On Sunday, July 12, 2015 at 4:17:44 AM UTC-3, Kevin Liu wrote:

 Hi, 

 I have Julia 0.3, Mongodb-osx-x86_64-3.0.4, and Mongo-c-driver-1.1.9 
 installed, but can't get Julia to access the Mongo Client through this 
 'untestable' package https://github.com/pzion/Mongo.jl, according to  
 http://pkg.julialang.org/. 

 I have tried Lytol/Mongo.jl and the command require("Mongo.jl") can't 
 open file "Mongo.jl", or the auto-generated deps.jl. 

 Is anyone having similar problems trying to make Julia work with Mongo? 

 Thank you

 Kevin



Re: [julia-users] SilverFrost Fortran

2015-07-13 Thread Tony Kelman
Where did you compile that from? Maybe you were trying to use the cygwin 
version of gfortran instead of the mingw cross-compiler version 
(x86_64-w64-mingw32-gfortran)? That test file and ccall invocation give an 
EXCEPTION_ACCESS_VIOLATION for me; I think you need to pass parameters 
by reference.

ccall((:__nuts_MOD_foo, "nuts.so"), Int32, (Ref{Int32},), 3)

works for me on Julia 0.4-dev, or

ccall((:__nuts_MOD_foo, "nuts.so"), Int32, (Ptr{Int32},), 3)

on Julia 0.3.
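
[A small usage sketch, assuming gfortran's default module name mangling and
the shared library in the current directory: wrapping the call hides the
by-reference convention from callers. Julia 0.4 form; the integer argument is
converted to a Ref{Int32} automatically.]

fortran_foo(i::Integer) = ccall((:__nuts_MOD_foo, "nuts.so"), Int32, (Ref{Int32},), i)
fortran_foo(3)   # should return 6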


On Monday, July 13, 2015 at 3:39:05 PM UTC-7, LarryD wrote:

 I'm afraid I'm not getting anywhere. I decided to forget the SilverFrost 
 compiler and go with gfortran, attempting to mimic the examples I've found 
 online.  My test Fortran code is

 !fileName = nuts.f95
 module nuts
 integer none
 contains
 function foo(i)
   integer :: i, foo
   foo = i + 3
 end function foo
 end module nuts

 I compiled with

 gfortran nuts.f95 -o nuts.so -shared -fPIC

 and got the warning message 

 f951.exe: warning: -fPIC ignored for target (all code is position 
 independent).

 Since a nuts.so file was generated I tried the ccall:  (my working 
 directory is c:\users\larry\juliastuff)

 ccall((:__nuts_MOD_foo, "C:\\Users\\Larry\\JuliaStuff\\nuts.so"), Int32, 
 (Int32,), 3)

 and got the error message

 error compiling anonymous: could not load module 
 C:\Users\Larry\JuliaStuff\nuts.so: The specified module could not be found.

 Clearly I'm doing some thing(s) wrong. Why am I getting the Fortran 
 warning that nobody else gets, and why can't ccall find the module?

 Thanks in advance for your time and patience with a newbie.

 Larry

 On Saturday, July 11, 2015 at 12:31:44 PM UTC-5, Tony Kelman wrote:

 I've never heard of that compiler, which surprises me a little. It looks 
 like it's primarily for 32 bit Windows, so you'll need to use a 32 bit 
 version of Julia to call into shared libraries built using that compiler. 
 If you have access to the Fortran source you could also try rebuilding with 
 the more common open-source MinGW-w64 version of gfortran, for either 32 or 
 64 bit Windows. If you only have access to compiled binaries, are they 
 shared libraries (dlls) or static libraries? If they're dll's, you can try 
 looking at them using Dependency Walker to see what the exported symbol 
 names are, then call them according to the interfacing with C and Fortran 
 documentation. If you only have static libraries, you could try calling the 
 linker to build a shared library out of them.


 On Saturday, July 11, 2015 at 6:36:30 AM UTC-7, Stefan Karpinski wrote:

 In general, the only issues with calling Fortran involve calling 
 convention incompatibility with C. There's a fairly old issue about 
 implementing fcall https://github.com/JuliaLang/julia/issues/2167 (cf 
 ccall), which natively emits calls using the Fortran calling convention. 
 Have you tried calling code compiled with this compiler and had problems?

 On Saturday, July 11, 2015, LarryD larryd...@gmail.com wrote:

 I'm just starting to learn Julia, so I apologize for dumb questions.  
 Does anybody have experience calling stuff written in SilverFrost Fortran 
 from Julia? Thanks.

 LarryD



Re: [julia-users] SilverFrost Fortran

2015-07-13 Thread LarryD
I'm afraid I'm not getting anywhere. I decided to forget the SilverFrost 
compiler and go with gfortran, attempting to mimic the examples I've found 
online.  My test Fortran code is

!fileName = nuts.f95
module nuts
integer none
contains
function foo(i)
  integer :: i, foo
  foo = i + 3
end function foo
end module nuts

I compiled with

gfortran nuts.f95 -o nuts.so -shared -fPIC

and got the warning message 

f951.exe: warning: -fPIC ignored for target (all code is position 
independent).

Since a nuts.so file was generated I tried the ccall:  (my working 
directory is c:\users\larry\juliastuff)

ccall((:__nuts_MOD_foo, "C:\\Users\\Larry\\JuliaStuff\\nuts.so"), Int32, 
(Int32,), 3)

and got the error message

error compiling anonymous: could not load module 
C:\Users\Larry\JuliaStuff\nuts.so: The specified module could not be found.

Clearly I'm doing some thing(s) wrong. Why am I getting the Fortran warning 
that nobody else gets, and why can't ccall find the module?

Thanks in advance for your time and patience with a newbie.

Larry
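
[A diagnostic sketch, not from the thread: checking the path and calling
dlopen directly separates "wrong path" from "missing dependent DLL" (for
example libgfortran), either of which Windows reports as "The specified module
could not be found".]

path = "C:\\Users\\Larry\\JuliaStuff\\nuts.so"
isfile(path)   # true if the file itself is where we think it is
dlopen(path)   # surfaces the loader's own error, independent of ccall (Libdl.dlopen on 0.4)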

On Saturday, July 11, 2015 at 12:31:44 PM UTC-5, Tony Kelman wrote:

 I've never heard of that compiler, which surprises me a little. It looks 
 like it's primarily for 32 bit Windows, so you'll need to use a 32 bit 
 version of Julia to call into shared libraries built using that compiler. 
 If you have access to the Fortran source you could also try rebuilding with 
 the more common open-source MinGW-w64 version of gfortran, for either 32 or 
 64 bit Windows. If you only have access to compiled binaries, are they 
 shared libraries (dlls) or static libraries? If they're dll's, you can try 
 looking at them using Dependency Walker to see what the exported symbol 
 names are, then call them according to the interfacing with C and Fortran 
 documentation. If you only have static libraries, you could try calling the 
 linker to build a shared library out of them.


 On Saturday, July 11, 2015 at 6:36:30 AM UTC-7, Stefan Karpinski wrote:

 In general, the only issues with calling Fortran involve calling 
 convention incompatibility with C. There's a fairly old issue about 
 implementing fcall https://github.com/JuliaLang/julia/issues/2167 (cf 
 ccall), which natively emits calls using the Fortran calling convention. 
 Have you tried calling code compiled with this compiler and had problems?

 On Saturday, July 11, 2015, LarryD larryd...@gmail.com wrote:

 I'm just starting to learn Julia, so I apologize for dumb questions.  
 Does anybody have experience calling stuff written in SilverFrost Fortran 
 from Julia? Thanks.

 LarryD



[julia-users] Re: MongoDB and Julia

2015-07-13 Thread Kevin Liu
Thanks Jeff, I will look into it and see if that's the case. I will review 
it carefully because I want the driver to run smoothly. 

In my last post, there was a typo, so I'm just pasting the 
source http://www.slideshare.net/NorbertoLeite/how-mongodb-drv for slide 
28/29. 

On Monday, July 13, 2015 at 6:19:08 PM UTC-3, Jeff Waller wrote:



 On Monday, July 13, 2015 at 3:27:49 AM UTC-4, Kevin Liu wrote:

 Any help would be greatly appreciated. I am even debating over the idea 
 of contributing to the development of this package because I believe so 
 much in the language and need to use MongoDB. 


 I think this is why it's untestable.  

 https://travis-ci.org/pzion/Mongo.jl/jobs/54034564

  Lytol/Mongo.jl looks abandoned.  It has a bunch of issues created over 
  the past 2 years and the last update was in 2013. The pzion repo is a fork 
  that was updated 4 months ago; maybe it's abandoned too and you'll have to 
  fork.  But it's at least worth contacting him.




Re: [julia-users] Re: Help to eliminate Calendar.jl dependence

2015-07-13 Thread Avik Sengupta
now() - now(Dates.UTC) does not actually return the correct current local 
offset, since the two now() invocations happen a few milliseconds apart and 
thus do not denote the exact same time. Hence their difference is not the 
exact local offset. So unfortunately, as a hack, it does not quite seem to 
work. 

For example, with GMT+1, on a mac:

julia> now() - now(Dates.UTC)
3599430 milliseconds

julia> Dates.Second(div(Dates.value(now() - now(Dates.UTC)), 1000))
3599 seconds
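
[A possible refinement, not proposed in the thread: real UTC offsets are whole
multiples of 15 minutes, so rounding the noisy difference recovers the
intended offset despite the milliseconds lost between the two now() calls.]

raw_ms = Dates.value(now() - now(Dates.UTC))
quarters = int(round(raw_ms / 900000))       # 900000 ms = 15 minutes (0.3 syntax; round(Int, ...) on 0.4)
local_offset = Dates.Minute(15 * quarters)   # e.g. 60 minutes for GMT+1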

On Friday, 10 July 2015 17:49:45 UTC+1, Jacob Quinn wrote:


 On Fri, Jul 10, 2015 at 8:11 AM, Tom Breloff t...@breloff.com wrote:

 as


 Tom,

 Yes, the method I proposed won't work retroactively since the method for 
 getting the current local offset from GMT is `now() - now(Dates.UTC)`. If 
 you were to run that every second crossing over the daylight savings 
 moment, you'd see that it correctly adjusts for daylight savings, but it's 
 only going to give you the *current* offset from GMT. Something more 
 elaborate will require tapping into the tzinfo database (which TimeZones.jl 
 will do).

 -Jacob



Re: [julia-users] Re: Too many packages?

2015-07-13 Thread milktrader
I echo Tom Breloff's sentiment that Pkg.add() should simply be expanded to 
allow various repos of repos in a kwarg argument. This is an open issue here 
https://github.com/JuliaLang/julia/issues/11914.

The default repo of repos would be METADATA, or maybe its possible new name 
of CuratedPackages, and third parties would be welcome to add their own 
repo of repos, which I'm sure could be coded into the method (e.g., JewelBox 
would map to https://github.com/JuliaJulia/JewelBox.jl), or alternatively 
the url can be provided as a string.  

On Monday, July 13, 2015 at 2:57:14 PM UTC-4, Jacob Quinn wrote:

 Note there's also an open issue for requiring a higher overall standard 
 for officially registered packages in the JuliaLang/METADATA.jl package 
 repository. It's a big issue with a lot of work required to get to the 
 proposal, but it would lead to (hopefully) instilling more confidence in 
 users knowing that anything they add through `Pkg.add()` would meet some 
 acceptable level of quality and robustness.

 -Jacob

 On Mon, Jul 13, 2015 at 11:11 AM, Christoph Ortner christop...@gmail.com wrote:

 I seem to be in the minority "too many packages" camp. I would prefer 
 stable updates of the julia version, which means that key functionality should 
 be included in core, e.g. BLAS, sparse solvers, eig, eigs, basic plotting, 
 and so on and so forth. But at some point there was an idea of having core 
 and Stdlib, which I think is equally acceptable.
 Christoph




Re: [julia-users] Re: Strange performance problem for array scaling

2015-07-13 Thread Jeffrey Sarnoff
Denormals were made part of the IEEE Floating Point standard after some 
very careful numerical analysis showed that accommodating them would 
substantively improve the quality of floating point results and this would 
lift the quality of all floating point work. Surprising as it may be, 
nonetheless you (and if not you today, then you tomorrow, and one of your 
neighbors today) really do care about those unusual, and often rarely 
observed, values.

fyi
William Kahan on the introduction of denormals to the standard 
https://www.cs.berkeley.edu/~wkahan/ieee754status/754story.html
and an early, important paper on this
Effects of Underflow on Solving Linear Systems - J.Demmel 1981 
http://www.eecs.berkeley.edu/Pubs/TechRpts/1983/CSD-83-128.pdf
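
[For concreteness, not from the thread: on typical x86 hardware with default
floating point settings, arithmetic on subnormal values can run several times
slower than the same arithmetic on normal values, which is the slowdown
discussed below. The function name decay! is purely illustrative.]

function decay!(a, c, n)
    for k in 1:n
        for i in 1:length(a)
            @inbounds a[i] *= c
        end
    end
end
a = fill(1.0e-308, 10^4)        # starts inside the subnormal range
b = fill(1.0, 10^4)             # stays comfortably normal
@time decay!(a, 0.999, 10^4)    # most multiplies touch subnormals
@time decay!(b, 0.999, 10^4)    # same arithmetic on normal values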
  

On Monday, July 13, 2015 at 12:35:24 AM UTC-4, Yichao Yu wrote:

 On Sun, Jul 12, 2015 at 10:30 PM, Yichao Yu yyc...@gmail.com wrote: 
  Further update: 
  
  I made a c++ version[1] and see a similar effect (depending on 
  optimization levels) so it's not a julia issue (not that I think it 
  really was to begin with...). 

 After investigating the c++ version more, I find that the difference 
 between the fast_math and the non-fast_math version is that the 
 compiler emit a function called `set_fast_math` (see below). 

 From what I can tell, the function sets bit 6 and bit 15 on the MXCSR 
 register (for SSE) and according to this page[1] these are DAZ and FZ 
 bits (both related to underflow). It also describe denormals as take 
 considerably longer to process. Since the operation I have keeps 
 decreasing the value, I guess it makes sense that there's a value 
 dependent performance (and it kind of make sense that fft also 
 suffers from these values) 

 So now the question is: 

 1. How important are underflow and denormal values? Note that I'm not 
 catching underflow explicitly anyway and I don't really care about 
 values that are really small compare to 1. 

 2. Is there a way to set up the SSE registers as done by the c 
 compilers? @fastmath does not seems to be doing this. 

 05b0 set_fast_math: 
  5b0:0f ae 5c 24 fc   stmxcsr -0x4(%rsp) 
  5b5:81 4c 24 fc 40 80 00 orl$0x8040,-0x4(%rsp) 
  5bc:00 
  5bd:0f ae 54 24 fc   ldmxcsr -0x4(%rsp) 
  5c2:c3   retq 
  5c3:66 2e 0f 1f 84 00 00 nopw   %cs:0x0(%rax,%rax,1) 
  5ca:00 00 00 
  5cd:0f 1f 00 nopl   (%rax) 

 [1] http://softpixel.com/~cwright/programming/simd/sse.php 

  
  The slow down is presented in the c++ version for all optimization 
  levels except Ofast and ffast-math. The julia version is faster than 
  the default performance for both gcc and clang but is slower in the 
  fast case for higher optmization levels. For O2 and higher, the c++ 

 The slowness of the julia version seems to be due to multi dimentional 
 arrays. Using 1d array yields similar performance with C. 

  version shows a ~100x slow down for the slow case. 
  
  @fast_math in julia doesn't seem to have an effect for this although 
  it does for clang and gcc... 
  
  [1] 
 https://github.com/yuyichao/explore/blob/5a644cd46dc6f8056cee69f508f9e995b5839a01/julia/array_prop/propagate.cpp
  
  
  On Sun, Jul 12, 2015 at 9:23 PM, Yichao Yu yyc...@gmail.com wrote: 
  Update: 
  
  I've just got an even simpler version without any complex numbers and 
  only has Float64. The two loops are as small as the following LLVM-IR 
  now and there's only simple arithmetics in the loop body. 
  
  ```llvm 
  L9.preheader: ; preds = %L12, 
  %L9.preheader.preheader 
%#s3.0 = phi i64 [ %60, %L12 ], [ 1, %L9.preheader.preheader ] 
br label %L9 
  
  L9:   ; preds = %L9, 
 %L9.preheader 
%#s4.0 = phi i64 [ %44, %L9 ], [ 1, %L9.preheader ] 
%44 = add i64 %#s4.0, 1 
%45 = add i64 %#s4.0, -1 
%46 = mul i64 %45, %10 
%47 = getelementptr double* %7, i64 %46 
%48 = load double* %47, align 8 
%49 = add i64 %46, 1 
%50 = getelementptr double* %7, i64 %49 
%51 = load double* %50, align 8 
%52 = fmul double %51, %3 
%53 = fmul double %38, %48 
%54 = fmul double %33, %52 
%55 = fadd double %53, %54 
store double %55, double* %50, align 8 
%56 = fmul double %38, %52 
%57 = fmul double %33, %48 
%58 = fsub double %56, %57 
store double %58, double* %47, align 8 
%59 = icmp eq i64 %#s4.0, %12 
br i1 %59, label %L12, label %L9 
  
  L12:  ; preds = %L9 
%60 = add i64 %#s3.0, 1 
%61 = icmp eq i64 %#s3.0, %42 
br i1 %61, label %L14.loopexit, label %L9.preheader 
  ``` 
  
  On Sun, Jul 12, 2015 at 9:01 PM, Yichao Yu yyc...@gmail.com wrote: 
  On Sun, Jul 12, 2015 at 8:31 PM, Kevin Owens kevin@gmail.com wrote: 
  I can't really help you debug the IR code, but I can at least say I'm 
 

Re: [julia-users] Re: Strange performance problem for array scaling

2015-07-13 Thread Jeffrey Sarnoff
and this: Cleve Moler tries to see it your way 
Moler on floating point denormals 
http://blogs.mathworks.com/cleve/2014/07/21/floating-point-denormals-insignificant-but-controversial-2/

On Monday, July 13, 2015 at 2:11:22 AM UTC-4, Jeffrey Sarnoff wrote:

 Denormals were made part of the IEEE Floating Point standard after some 
 very careful numerical analysis showed that accommodating them would 
 substantively improve the quality of floating point results and this would 
 lift the quality of all floating point work. Surprising as it may be, 
 nonetheless you (and if not you today, then you tomorrow, and one of your 
 neighbors today) really do care about those unusual, and often rarely 
 observed, values.

 fyi
 William Kahan on the introduction of denormals to the standard 
 https://www.cs.berkeley.edu/~wkahan/ieee754status/754story.html
 and an early, important paper on this
 Effects of Underflow on Solving Linear Systems - J.Demmel 1981 
 http://www.eecs.berkeley.edu/Pubs/TechRpts/1983/CSD-83-128.pdf
   

 On Monday, July 13, 2015 at 12:35:24 AM UTC-4, Yichao Yu wrote:

 On Sun, Jul 12, 2015 at 10:30 PM, Yichao Yu yyc...@gmail.com wrote: 
  Further update: 
  
  I made a c++ version[1] and see a similar effect (depending on 
  optimization levels) so it's not a julia issue (not that I think it 
  really was to begin with...). 

 After investigating the c++ version more, I find that the difference 
 between the fast_math and the non-fast_math version is that the 
 compiler emit a function called `set_fast_math` (see below). 

 From what I can tell, the function sets bit 6 and bit 15 on the MXCSR 
 register (for SSE) and according to this page[1] these are DAZ and FZ 
 bits (both related to underflow). It also describe denormals as take 
 considerably longer to process. Since the operation I have keeps 
 decreasing the value, I guess it makes sense that there's a value 
 dependent performance (and it kind of make sense that fft also 
 suffers from these values) 

 So now the question is: 

 1. How important are underflow and denormal values? Note that I'm not 
 catching underflow explicitly anyway and I don't really care about 
 values that are really small compare to 1. 

 2. Is there a way to set up the SSE registers as done by the c 
 compilers? @fastmath does not seems to be doing this. 

 05b0 set_fast_math: 
  5b0:0f ae 5c 24 fc   stmxcsr -0x4(%rsp) 
  5b5:81 4c 24 fc 40 80 00 orl$0x8040,-0x4(%rsp) 
  5bc:00 
  5bd:0f ae 54 24 fc   ldmxcsr -0x4(%rsp) 
  5c2:c3   retq 
  5c3:66 2e 0f 1f 84 00 00 nopw   %cs:0x0(%rax,%rax,1) 
  5ca:00 00 00 
  5cd:0f 1f 00 nopl   (%rax) 

 [1] http://softpixel.com/~cwright/programming/simd/sse.php 

  
  The slow down is presented in the c++ version for all optimization 
  levels except Ofast and ffast-math. The julia version is faster than 
  the default performance for both gcc and clang but is slower in the 
  fast case for higher optmization levels. For O2 and higher, the c++ 

 The slowness of the julia version seems to be due to multi dimentional 
 arrays. Using 1d array yields similar performance with C. 

  version shows a ~100x slow down for the slow case. 
  
  @fast_math in julia doesn't seem to have an effect for this although 
  it does for clang and gcc... 
  
  [1] 
 https://github.com/yuyichao/explore/blob/5a644cd46dc6f8056cee69f508f9e995b5839a01/julia/array_prop/propagate.cpp
  
  
  On Sun, Jul 12, 2015 at 9:23 PM, Yichao Yu yyc...@gmail.com wrote: 
  Update: 
  
  I've just got an even simpler version without any complex numbers and 
  only has Float64. The two loops are as small as the following LLVM-IR 
  now and there's only simple arithmetics in the loop body. 
  
  ```llvm 
  L9.preheader: ; preds = %L12, 
  %L9.preheader.preheader 
%#s3.0 = phi i64 [ %60, %L12 ], [ 1, %L9.preheader.preheader ] 
br label %L9 
  
  L9:   ; preds = %L9, 
 %L9.preheader 
%#s4.0 = phi i64 [ %44, %L9 ], [ 1, %L9.preheader ] 
%44 = add i64 %#s4.0, 1 
%45 = add i64 %#s4.0, -1 
%46 = mul i64 %45, %10 
%47 = getelementptr double* %7, i64 %46 
%48 = load double* %47, align 8 
%49 = add i64 %46, 1 
%50 = getelementptr double* %7, i64 %49 
%51 = load double* %50, align 8 
%52 = fmul double %51, %3 
%53 = fmul double %38, %48 
%54 = fmul double %33, %52 
%55 = fadd double %53, %54 
store double %55, double* %50, align 8 
%56 = fmul double %38, %52 
%57 = fmul double %33, %48 
%58 = fsub double %56, %57 
store double %58, double* %47, align 8 
%59 = icmp eq i64 %#s4.0, %12 
br i1 %59, label %L12, label %L9 
  
  L12:  ; preds = %L9 
%60 = add i64 %#s3.0, 1 
%61 = icmp eq i64 %#s3.0, %42 
br i1 %61, label %L14.loopexit, label %L9.preheader 
  ``` 
  
  

[julia-users] Re: MongoDB and Julia

2015-07-13 Thread Kevin Liu
Any help would be greatly appreciated. I am even debating over the idea of 
contributing to the development of this package because I believe so much 
in the language and need to use MongoDB. 

On Sunday, July 12, 2015 at 4:17:44 AM UTC-3, Kevin Liu wrote:

 Hi, 

 I have Julia 0.3, Mongodb-osx-x86_64-3.0.4, and Mongo-c-driver-1.1.9 
 installed, but can't get Julia to access the Mongo Client through this 
 'untestable' package https://github.com/pzion/Mongo.jl, according to  
 http://pkg.julialang.org/. 

 I have tried Lytol/Mongo.jl and the command require("Mongo.jl") can't 
 open file "Mongo.jl", or the auto-generated deps.jl. 

 Is anyone having similar problems trying to make Julia work with Mongo? 

 Thank you

 Kevin



[julia-users] Re: Embedding Julia with C++

2015-07-13 Thread Jeff Waller
It's not clear to me which version you are using.  Depending on the 
version, it is referred to in the URL you linked...  

I'll just cut to the chase: use 0.4 and julia_config.jl as described in the 
doc, create a Makefile (just cut-and-paste the example), and augment it with 
your source.  All but one of your errors is a result of the wrong 
compile/link flags.

The last error is that main() is either not being compiled or not being 
linked; that's just straight-up C programming and has nothing to do with 
Julia.

As far as eclipse goes, I'm confident it's possible; I can't imagine 
eclipse not supporting compilation using Makefiles. But even if it doesn't, 
you can still automate things; just get something working first and you can 
embellish later.

TL;DR

0.4,  julia_config, cut-paste Makefile, add your source, done