Re: [julia-users] Intel Xeon Phi support?

2015-03-12 Thread Stefan Karpinski
0.5 should be released around end of 2015, but there will be support on
master before that.

On Tue, Mar 10, 2015 at 2:00 PM, Jeff Waller truth...@gmail.com wrote:



 On Tuesday, March 10, 2015 at 1:39:42 PM UTC-4, Stefan Karpinski wrote:

 I'm not sure what that would mean – CPUs don't ship with software. Julia
 will, however, support Knight's Landing, although it probably won't do so
 until version 0.5.

 On Tue, Mar 10, 2015 at 1:36 PM, Karel Zapfe kza...@gmail.com wrote:

 Hello:

 Is it true then, that Knight's Landing will have Julia out-of-the-box? I
 was checking Intel's page, but found nothing about it. At my
 laboratory we had some extra money, and were considering getting one,
 but the point is that none of us is really good at using Fortran+MPI or
 C+MPI, so with Julia most of us non-programmers-but-researchers could have
 hope of really using it.


 Knight's Landing is not supposed to be available until Q2, which, strictly
 speaking, I guess is just a couple of weeks away, but I would have expected
 to see some big announcement.  Maybe it won't really be readily available
 with (non-reference) motherboards until summer?  Do you expect dev 0.5 by,
 say, July (kinda like last year)?



Re: [julia-users] Specifying return type of a function

2015-03-12 Thread Shivkumar Chandrasekaran
Just documentation and readability of the functions themselves. For now I 
will just stick the return type in a comment (and hope I don't forget to 
change it if needed).

On Tuesday, March 10, 2015 at 3:20:42 PM UTC-7, Milan Bouchet-Valat wrote:

 On Tuesday, March 10, 2015 at 15:12 -0700, Shivkumar Chandrasekaran wrote: 
  Thanks! I guess I will put the return type in the calling code 
  instead. Nuisance though. 
 But you shouldn't need to. Julia is able to find out what the return 
 type is as long as you write type-stable code. Can you give more details 
 about what you're trying to achieve? 


 Regards 

  On Tuesday, March 10, 2015 at 2:39:37 PM UTC-7, Mauro wrote: 
  Sadly not.  Have a look at 
  https://github.com/JuliaLang/julia/issues/1090 
  and 
  https://github.com/JuliaLang/julia/pull/10269 
  
  The complication in Julia is that, with its multimethods, it is not so 
  clear what the return type of a generic function actually means. 
  
  On Tue, 2015-03-10 at 21:24, Shivkumar Chandrasekaran 
  00s...@gmail.com wrote: 
   I am new to Julia, so forgive the elementary question, but I 
  could not seem 
   to find the answer in the docs or by googling the news 
  group. 
   
   Is it possible to specify the return type of a function in 
  Julia? 
   
   Thanks. 
   
   --shiv-- 
  



[julia-users] Re: Error array could not be broadcast to a common size

2015-03-12 Thread David P. Sanders


On Tuesday, March 10, 2015 at 18:14:45 (UTC-6), Rafael Guariento wrote:

 Hi, I am trying to run the following code but I get an error when I try to 
 run the model (evaluating the result variable, i.e. calling ode23).

 The error is: "array could not be broadcast to a common size"

 Does anyone have any idea why? Thanks in advance


Hi,

You have the order of the arguments to `ode23` wrong. Try the following 
version.
It has some extra cleanup; note, in particular, the added `.` after the 
first initial value, to make an array of the correct type (`Float64`).

Try to avoid using the anonymous function syntax, and just pass the SIR 
function straight to ode23.
There is a way to pass the extra arguments (your `p`) to the SIR function 
straight from ode23, but I don't have time to check
how right now.

Best,
David.


using ODE

function SIR(t,x,p)
S, I, R = x 
beta, gamma = p
N = S + I
dS = -beta*S*I/N
dI = beta*S*I/N - gamma*I
dR = gamma*I

[dS,dI,dR]
end

# Initialise model
t = linspace(0,500,101);
inits=[.,1,0];
p=[0.1,0.05];

# Run model
result=ode23((t,x) -> SIR(t,x,p), inits, t);
 



 # Load libraries
 using ODE
 using DataFrames
 using Gadfly

 # Define the model
 # Has to return a column vector
 function SIR(t,x,p)
 S=x[1]
 I=x[2]
 R=x[3]
 beta=p[1]
 gamma=p[2]
 N=S+I
 dS=-beta*S*I/N
 dI=beta*S*I/N-gamma*I
 dR=gamma*I
 return([dS;dI;dR])
 end

 # Initialise model
 t = linspace(0,500,101);
 inits=[,1,0];
 p=[0.1,0.05];


 # Run model
 result=ode23((t,x) -> SIR(t,x,p),t,inits);






[julia-users] how to paste png into ipython julia notebook?

2015-03-12 Thread Edward Chen
from IPython.display import Image
Image(filename='image.png')

doesn't seem to work

Thanks!
-Ed


[julia-users] Why don't copied arrays change in function calls?

2015-03-12 Thread Peter Drummond
How do you pass two arrays to a function so that the function can copy one 
to the other, and return the values in place? Of course, a=b won't work, 
but neither does a=copy(b). See the following example:

julia> function copytest!(a,b)
           b=copy(a)
           println("copytest: a,b ",a,b)
           nothing
       end
copytest! (generic function with 1 method)

julia> a=ones(2,2)
2x2 Array{Float64,2}:
 1.0  1.0
 1.0  1.0

julia> b=zeros(2,2)
2x2 Array{Float64,2}:
 0.0  0.0
 0.0  0.0

julia> copytest!(a,b)
copytest: a,b [1.0 1.0
 1.0 1.0][1.0 1.0
 1.0 1.0]

julia> println("copytest: a,b ",a,b)
copytest: a,b [1.0 1.0
 1.0 1.0][0.0 0.0
 0.0 0.0]

The first println shows that b WAS changed inside the copytest function. 
The second println shows that it WASN'T changed on return. I've also tried 
with do loops, but this causes a segmentation fault.
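A minimal sketch of the usual fix (assuming the goal is to overwrite b's contents with a's): mutate the array object the caller passed in, instead of rebinding the local name. In current Julia the in-place copy is spelled `copyto!`; in the 0.3-era Julia of this thread it was `copy!`.

```julia
# Rebinding `b = copy(a)` only changes what the local name `b` refers to.
# Mutating the array that `b` already refers to is visible to the caller.
function copytest2!(a, b)
    copyto!(b, a)   # element-wise in-place copy (spelled `copy!` in Julia 0.3)
    nothing
end

a = ones(2, 2)
b = zeros(2, 2)
copytest2!(a, b)
b   # now holds the same values as a
```

Writing `b[:] = a` inside the function achieves the same in-place assignment.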


[julia-users] How can I convert a set into an array?

2015-03-12 Thread Ali Rezaee
In Python I would normally do something like this:

a = set([1,2,1,3])
a = list(a)

What is the equivalent way to do this in Julia?


Thanks a lot in advance for your help


[julia-users] Serving static files in a Julia web app

2015-03-12 Thread jock . lawrie
Hi all,

I am building a bare bones web app 
https://bitbucket.org/jocklawrie/skeleton-webapp.jl for displaying data 
with interactive charts.

The app currently serves static files from root/static/xxx/filename, where 
xxx is, for example, css or js. It does this via:
route(serve_static_files, app, GET, "/static/level1/filename")

However, I can't get it to serve files from deeper in the hierarchy, for 
example root/static/xxx/yyy/filename. I've tried several variations of the 
above code but clearly I am missing something. Any ideas?

Cheers,
Jock


Re: [julia-users] How can I convert a set into an array?

2015-03-12 Thread Kevin Squire
Should be obvious, but if your set doesn't contain integers, you would use

a = Set(['a','c','d','c'])
b = collect(a)

In either case, in Julia, it's usually better not to change the type of a
from Set (or IntSet) to Array.

Another option is to do

a = [1,2,1,3]
a = unique(a)

which actually uses a Set in the unique function to find the unique elements,
but then returns an array.
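Both routes, side by side (a minimal sketch; note that a Set's iteration order is unspecified, so sort the collected array if a deterministic order matters):

```julia
s = Set([1, 2, 1, 3])
v = sort!(collect(s))      # collect -> Array, then sort for a stable order
# v == [1, 2, 3]

u = unique([1, 2, 1, 3])   # stays an Array throughout, keeps first-seen order
# u == [1, 2, 3]
```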

Cheers,
   Kevin

On Wed, Mar 11, 2015 at 9:12 AM, René Donner li...@donner.at wrote:

 Hi,

 try:

   a = IntSet([1,2,1,3])
   collect(a)

 that gives you a 3-element Array{Int64,1}.  This also works with ranges,
 like collect(1:3).

 Cheers,

 Rene



 On 11.03.2015 at 17:03, Ali Rezaee arv.ka...@gmail.com wrote:

  In Python I would normally do something like this:
 
  a = set([1,2,1,3])
  a = list(a)
 
  What is the equivalent way to do this in Julia?
 
 
  Thanks a lot in advance for your help




Re: [julia-users] Saving timing results from @time

2015-03-12 Thread Stefan Karpinski
I generally find that using a comprehension around @elapsed is pretty terse
and clear – it also makes it easy to compose with reducers like min, max,
median, and quantile, which is convenient for analysis.
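For example (with `algo` as a hypothetical stand-in for the function being benchmarked):

```julia
algo(n) = sum(sqrt(i) for i in 1:n)             # stand-in for the real algorithm

times = [@elapsed algo(10_000) for _ in 1:500]  # Vector{Float64} of seconds

# Easy to feed into reducers for analysis:
minimum(times), maximum(times)
```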

On Wed, Mar 11, 2015 at 12:51 PM, Patrick Kofod Mogensen 
patrick.mogen...@gmail.com wrote:

 It is indeed, thank you. I was also told that @timeit might do something
 along the lines of what I am doing.



 On Wednesday, March 11, 2015 at 5:07:24 PM UTC+1, Andreas Noack wrote:

 @elapsed is what you are looking for

 2015-03-11 7:43 GMT-04:00 Patrick Kofod Mogensen patrick@gmail.com:

 I am testing the run times of two different algorithms, solving the same
 problem. I know there is the @time macro, but I cannot seem to wrap my head
 around how I should save the printed times. Any clever way of doing this? I
 thought I would be able to

 [@time algo(input) for i = 1:500],

 but this saves the return value from algo.


 Best,
 Patrick





Re: [julia-users] Intel Xeon Phi support?

2015-03-12 Thread Karel Zapfe
Hello:

Is it true then, that Knight's Landing will have Julia out-of-the-box? I 
was checking Intel's page, but found nothing about it. At my 
laboratory we had some extra money, and were considering getting one, 
but the point is that none of us is really good at using Fortran+MPI or 
C+MPI, so with Julia most of us non-programmers-but-researchers could have 
hope of really using it. 

On Saturday, November 8, 2014 at 18:40:20 (UTC-6), John Drummond wrote:

 http://www.colfax-intl.com/nd/xeonphi/31s1p-promo.aspx is a link from a 
 distributor. Presumably Intel is trying to encourage wider use, 
 especially with Knight's Landing coming up, and they made a lot of them. In 
 lots of 10 they're selling at 125 USD each.

 On Saturday, November 8, 2014 7:20:17 AM UTC, Jeff Waller wrote:



 On Thursday, November 6, 2014 1:14:51 PM UTC-5, Viral Shah wrote:

 We had ordered a couple, but they are really difficult to get working. 
 There is a fair bit of compiler work that is required to get it to work - 
 so it is safe to assume that this is not coming anytime soon. However, the 
 Knight's Landing should work out of the box with Julia whenever it comes 
 and we will most likely have robust multi-threading support by then to 
 leverage it.


 Aww!
  


 Out of curiosity, what would you like to run on the Xeon Phi? It may be 
 a good multi-threading benchmark for us in general.


 Something that requires 1TFlop, or maybe 1000 things that take 1 GFlop?

 Hmm, how about realtime photogrammetry?
  


 -viral

 On Thursday, November 6, 2014 9:35:57 PM UTC+5:30, John Drummond wrote:

 Did you have any success?
 There's an offer of the cards for 200usd at the moment


 That's like 1/10th the price?




[julia-users] Re: Sparse matrix with diagonal index

2015-03-12 Thread Steven G. Johnson


On Tuesday, March 10, 2015 at 2:40:24 PM UTC-4, Amit Jamadagni wrote:

 Thank you very much for the response.
 But the behavior of the same call in scipy is different, i.e., it omits the 
 elements. Is this not the expected behavior? 


Why would you expect the function to silently ignore some of your inputs? 


[julia-users] Re: Why is Forward Reference possible?

2015-03-12 Thread Matt Bauman
This is very intentional and is addressed in the manual 
(http://docs.julialang.org/en/release-0.3/manual/variables-and-scoping/?highlight=forward).
 The key thing to realize is that functions are just global identifiers, too. 
If forward references weren't possible, Julia would require C-style forward 
declarations and header files. 
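A small illustration of the point (names made up for the example): a function body can mention a global that doesn't exist yet, because the lookup only happens when the function is actually called.

```julia
f() = g() + 1   # parses and compiles fine even though `g` is undefined here

g() = 41        # defined afterwards, before `f` is ever called

f()             # 42 -- the lookup of `g` happens at call time
```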

Re: [julia-users] Re: webpy equivalent

2015-03-12 Thread Paul Analyst

Thanks for the info, it looks nice,
but I am using Win7, usually installing via Pkg.add. How do I install 
skeleton-webapp on my Julia under Windows?


julia> Pkg.available()

551-element Array{ASCIIString,1}:
 AffineTransforms
...
Sims
SIUnits
SliceSampler
Smile
SmoothingKernels
SMTPClient
Snappy
Sobol
...

No skeleton-webapp
Paul

On 2015-03-11 at 21:28, jock.law...@gmail.com wrote:

Hi Jonathan,

Uncanny timing. Here 
https://bitbucket.org/jocklawrie/skeleton-webapp.jl is an example of 
Julia working with the Mustache.jl package which I posted just a 
couple of hours before your post. It works fine but is not nearly as 
mature as webpy. Hope you find it helpful.


Cheers,
Jock


On Friday, November 1, 2013 at 3:00:25 AM UTC+11, Jonathan Malmaud wrote:

Anyone aware of a package that implements a light-weight HTTP
server framework for Julia? Something like webpy plus a
templating engine like jinja2?





Re: [julia-users] for and function don't work in the same way in terms of scope

2015-03-12 Thread Mauro
I think this is the soft vs hard scope issue.  See:
https://github.com/JuliaLang/julia/issues/9955

That issue could use some fleshing out though...

On Tue, 2015-03-10 at 20:03, Wendell Zheng zhengwend...@gmail.com wrote:
 *Input 1:*
 y = 0
 function foo()
 y = 10
 end
 foo()
 y

 *Output 1:*
 0

 *Input 2:*
 y = 0
 for i = 1:1
 y = 10 
 end
 y

 *Output 2:*
 10

 In the first example, y introduces a local variable. 
 In the second example, y is still a global variable.

 This is not consistent with what the official documentation says.

 I tried these examples in JuliaBox.



[julia-users] Specifying return type of a function

2015-03-12 Thread Shivkumar Chandrasekaran
I am new to Julia, so forgive the elementary question, but I could not seem 
to find the answer in the docs or by googling the news group.

Is it possible to specify the return type of a function in Julia?

Thanks.

--shiv--


Re: [julia-users] How to interpolate a variable as a macro argument?

2015-03-12 Thread Isaiah Norton
There are tools mentioned in the Metaprogramming section of the manual
which will help to better understand what is going on. In particular,
macroexpand.

http://docs.julialang.org/en/latest/manual/metaprogramming/#basics

Are there better ways to do this, is it OK to use eval() in this context?


The general advice is "don't use eval in a macro". Among other issues,
doing so will usually lead to slow code. A macro should not (generally)
have side-effects like printing, but rather should return an expression
that does what you want when itself evaluated.
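Applied to the macro in this thread, that advice looks something like the following sketch: return the loop as an expression, interpolating (and escaping) the argument so that a runtime variable works too.

```julia
macro testmacro(N)
    quote
        for i = 1:$(esc(N))   # `esc` lets `N` refer to a variable in the caller
            println("Hello!")
        end
    end
end

n = 2
@testmacro n   # the loop runs when the returned expression is evaluated
```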

On Wed, Mar 11, 2015 at 8:37 AM, Kaj Wiik kaj.w...@gmail.com wrote:

 I have a problem in using variables as argument for macros. Consider a
 simple macro:

 macro testmacro(N)
     for i = 1:N
         println("Hello!")
     end
 end

 @testmacro 2

 Hello!
 Hello!


 So, all is good. But if I use a variable as an argument,

 n = 2
 @testmacro n


 I get an (understandable) error message ERROR: `colon` has no method
 matching colon(::Int64, ::Symbol).

 Is this the correct place to use eval() in macros, like

 macro testmacro(N)
     for i = 1:eval(N)
         println("Hello!")
     end
 end

 This seems to work as expected. I tried multitude of combinations of
 dollar signs, esc, quotes and brackets, none of them worked :-), got
 ERROR: error compiling anonymous: syntax: prefix $ in non-quoted
 expression...

 Are there better ways to do this, is it OK to use eval() in this context?

 Thanks,
 Kaj




Re: [julia-users] for and function don't work in the same way in terms of scope

2015-03-12 Thread Mauro

On Wed, 2015-03-11 at 10:24, Wendell Zheng zhengwend...@gmail.com wrote:
 I did more experiments.

 *Input 1:*
 y = 0
 begin 
 y = 10 
 end
 y
 *Output 1:*
 10

 *Input 2:*
 y = 0
 begin 
 local y = 10 
 end
 y
 *Output 2:*
 0

 It's the same for *if* blocks. 

begin-end and if-end blocks do not introduce a new scope:
http://docs.julialang.org/en/latest/manual/variables-and-scoping/

So above is the same as not having any blocks.  But the behaviour of
local at the REPL is strange:

julia> local y = 10
10

julia> y
ERROR: y not defined

So, what is y local to then?  Shouldn't the first line either throw an
error or be equivalent to `y = 10`?

 May I conclude that:
 1) *Function* introduces *hard scope*, where assignment introduces new 
 local variables;
 2) Other blocks (including *if*, *begin-end*) introduce soft scope, where 
 assignment either refers to an outer variable 
 or introduces a variable which can be conveyed into an outer scope;
 3) The keyword *global* used in a hard scope turns the hard scope soft;
 4) The keyword *local* used in a soft scope turns the soft scope hard.

Rule 2 is not right:

julia> for i=1:10
           j = 10
       end

julia> j
ERROR: j not defined

julia> j = 1
1

julia> for i=1:10
           j = 10
       end

julia> j
10

So, I think, the rules are these:

In a soft scope:
s1) normal assignment `x = 5`
  1) if a binding exists in the global scope, assign to that
  2) if no binding exists, make a new *local* binding
s2) local assignment `local x=5`
  1) make a new local binding
s3) global assignment `global x=5`
  1) make a new global binding

In a hard scope
h1) normal assignment `x = 5`
  1) make a new *local* binding
h2) local assignment `local x=5`
  1) make a new local binding (i.e. equivalent to h1.1)
h3) global assignment `global x=5`
  1) make a new global binding

If no assignment happens, just reading, then hard and soft are equivalent.

The confusing bit about soft scope is that its rules depend on
the outer scopes.  I guess the reason for these more complicated rules
is typical usage of loops: being able to modify outside bindings, while
newly introduced bindings go out of scope once the loop terminates.
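The loop part of this can be checked without REPL globals by nesting the loop in a function (a minimal sketch):

```julia
function demo()
    j = 1
    for i = 1:10
        j = 10     # soft scope: assigns to the enclosing function-local `j`
    end
    j
end

demo()   # 10
```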

Another caveat is that these rules seem to break down in nested
functions.  Here is an example from
https://github.com/JuliaLang/julia/issues/423#issuecomment-4100869

function namespace()
x = 0
function f()
x = 10
end
f()
println(x)
end
namespace() # prints 10, which suggests that the inner function has a soft 
scope!!!

 By the way, I find the description of Python's scoping in Wikipedia 
 http://en.wikipedia.org/wiki/Scope_%28computer_science%29#Lexical_scoping_vs._dynamic_scoping
 quite clear. Could someone write the same thing about Julia in Wikipedia?

Well, this is about lexical vs dynamic scope, which is (I think) a
different issue to soft and hard scope.

I'll move some of this discussion to 
https://github.com/JuliaLang/julia/issues/9955

 On Tuesday, March 10, 2015 at 10:40:53 PM UTC+1, Mauro wrote:

 I think this is the soft vs hard scope issue.  See: 
 https://github.com/JuliaLang/julia/issues/9955 

 That issue could use some fleshing out though... 

 On Tue, 2015-03-10 at 20:03, Wendell Zheng zhengw...@gmail.com 
 wrote: 
  *Input 1:* 
  y = 0 
  function foo() 
  y = 10 
  end 
  foo() 
  y 
  
  *Output 1:* 
  0 
  
  *Input 2:* 
  y = 0 
  for i = 1:1 
  y = 10 
  end 
  y 
  
  *Output 2:* 
  10 
  
  In the first example, y introduces a local variable. 
  In the second example, y is still a global variable. 
  
  This is not consistent to what the official document said. 
  
  I tried these examples in JuliaBox. 





Re: [julia-users] How can I convert a set to an array?

2015-03-12 Thread Jameson Nash
a = collect(a)
On Wed, Mar 11, 2015 at 12:07 PM Jacob Quinn quinn.jac...@gmail.com wrote:

 a = IntSet([1,2,3])
 a = [a...]

 On Wed, Mar 11, 2015 at 9:35 AM, Ali Rezaee arv.ka...@gmail.com wrote:

 In Python I would do

 a = set([1,2])
 a = list(a)

 How can I do that in Julia?

 Thanks a lot in advance for your help





[julia-users] sum array

2015-03-12 Thread pip7kids
Hi
I was playing with Comprehension syntax and then trying to sum the output 
and failed!
My example ... the next two lines work ok for me (using 0.3.5 on Windows).
const x = rand(8)
[ x[i-4:i-1] for i = 6] .. this gives me a 4 element array.

I now want to sum the output - this is what I tried ...
sum([ x[i-4:i-1] for i = 6]) ... what am I doing wrong?

Regards



[julia-users] Saving timings from @time

2015-03-12 Thread Patrick Kofod Mogensen
This might be sort of a duplicate post, but I think the other post wasn't 
actually posted, so I'll try again.

If I time a function 500 times and want to save all the times, how would I 
go about this?

[@time algo(input) for i = 1:500]

This catches all of algo's return values instead of the output from @time.

Best,
Patrick


[julia-users] How can I convert a set to an array?

2015-03-12 Thread Ali Rezaee
In Python I would do

a = set([1,2])
a = list(a)

How can I do that in Julia?

Thanks a lot in advance for your help


[julia-users] Re: How does Measure work at y-axis in Gadfly.jl?

2015-03-12 Thread nanaya tachibana
Thank you very much. 
Can anything affect the orientation of the absolute measurement of Measure? 
It goes from left-to-right and top-to-bottom, no matter how I set the units 
in the context.

On Thursday, March 12, 2015 at 3:14:28 PM UTC+8, Daniel Jones wrote:


 It can actually actually work both ways: cx and cy give context units, 
 which by default are between 0 and 1, and go from left-to-right and 
 top-to-bottom, but can be redefined to be anything.

 So, this draws a line from the top-left to the bottom-right.
 compose(context(), line([(0cx,0cy), (1cx,1cy)]), stroke("black"))

 But if I change the units in the context, it draws a line from the 
 bottom-left to top-right.
 compose(context(units=UnitBox(0,1,1,-1)), line([(0cx,0cy), (1cx,1cy)]), 
 stroke("black"))

 UnitBox defines a new coordinate system in which the top-left corner is 
 (0, 1), and the context is 1 unit wide and 1 unit tall, but the height is 
 given as -1, which somewhat unintuitively flips the orientation of the 
 units.

 If you look at the bottom of coord.jl in Gadfly, you'll see how the 
 coordinate system for the plot gets set up:

 context(units=UnitBox(
 coord.xflip ? xmax : xmin,
 coord.yflip ? ymin : ymax,
 coord.xflip ? -width : width,
 coord.yflip ? height : -height,
 leftpad=xpadding,
 rightpad=xpadding,
 toppad=ypadding,
 bottompad=ypadding),






 On Wednesday, March 11, 2015 at 10:33:35 PM UTC-7, nanaya tachibana wrote:

 I wanted to contribute to Gadfly.jl, so I started by looking into 
 bar.jl. 
 I found that the value cy of Measure in Gadfly.jl increases from bottom 
 to top, but it increases from top to bottom in Compose.jl.
 What makes Measure work that way? 

 I really appreciate any help you can provide.



Re: [julia-users] Memory allocation in Closed loop Control Simulation

2015-03-12 Thread Tim Holy
--track-allocation doesn't report the _net_ memory allocated, it reports the 
_gross_ memory allocation. In other words, allocate/free adds to the tally, 
even if all memory is eventually freed.

If you're still concerned about memory allocation and its likely impact on 
performance: there are some things you can do. From glancing at your code very 
briefly, a couple of comments:
- My crystal ball tells me you will soon come to adore the push! function :-)
- If you wish (and it's your choice), you can reduce allocations by doing more 
operations with scalars. For example, in computeReferenceCurrents, instead of 
computing tpu and iref arrays outside the loop, consider performing the 
equivalent operations on scalar values inside the loop.
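As a generic illustration of that second point (the names below are made up, since the original attachment isn't reproduced here): a reduction written with scalars inside the loop avoids materializing temporary arrays.

```julia
# Array style: builds a temporary vector before reducing.
sum_with_arrays(ts) = sum(sin.(ts))

# Scalar style: same result, no temporary array allocated per call.
function sum_with_scalars(ts)
    s = 0.0
    for t in ts
        s += sin(t)
    end
    s
end

ts = 0:0.01:1.0
sum_with_arrays(ts) ≈ sum_with_scalars(ts)   # true
```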

Best,
--Tim


On Wednesday, March 11, 2015 07:41:19 AM Bartolomeo Stellato wrote:
 Hi all,
 
 I recently started using Julia for my *closed loop MPC simulations*. I found
 it very interesting that I was able to do almost everything I was
 doing in MATLAB with Julia. Unfortunately, when I started working on more
 complex simulations I noticed a *memory allocation problem*.
 
 I am using OSX Yosemite and Julia 0.3.6. I attached an MWE that can be
 executed with include("simulation.jl")
 
 The code executes a single simulation of the closed loop system with an *MPC
 controller* solving an optimization problem at each time step via the *Gurobi
 interface*. At the end of the simulation I am interested in only *two
 performance indices* (float numbers).
 The simulation, however, takes more than 600mb of memory and, even though most
 of the declared variables are local to different functions, I can't get rid of
 them afterwards with the garbage collector: gc()
 
 I analyzed the memory allocation with julia --track-allocation=user and I
 included the generated .mem files. Probably my code is not optimized, but I
 can't understand *why all that memory doesn't get deallocated after the
 simulation*.
 
 Is there anyone who could give me any explanation or suggestion to solve
 that problem? I need to perform several of these simulations and it is
 impossible for me to allocate for each one more than 600mb.
 
 
 Thank you!
 
 Bartolomeo



[julia-users] Re: how to paste png into ipython julia notebook?

2015-03-12 Thread Patrick O'Leary
You can do this with Images.jl. Example: 
http://htmlpreview.github.io/?https://github.com/timholy/Images.jl/blob/master/ImagesDemo.html

On Wednesday, March 11, 2015 at 11:05:07 AM UTC-5, Edward Chen wrote:

 from IPython.display import Image
 Image(filename='image.png')

 doesn't seem to work

 Thanks!
 -Ed



[julia-users] Re: RFC: JuliaWeb Roadmap + Call for Contributors

2015-03-12 Thread Avik Sengupta
I think this is very useful. The web stack is a bit lacking in 
documentation, so this is great. Maybe flesh out the documentation a bit, 
explaining the usage of the morsel/meddle/mustache APIs in this code. And 
possibly host the documentation separately on bitbucket-pages. 

I think it would be good to link to this from the JuliaWebStack docs. 

On Thursday, 12 March 2015 01:05:33 UTC, jock@gmail.com wrote:

 Hi all,

 Here https://bitbucket.org/jocklawrie/skeleton-webapp.jl is a bare 
 bones application that fetches some data, runs a model and produces some 
 pretty charts.
 I'll flesh this out over the next few months, including documentation 
 aimed at data scientists (I'm a statistician not a web programmer).
 Would this help with your request for docs and examples?
 Happy to discuss.

 Cheers,
 Jock


 On Friday, February 13, 2015 at 9:57:56 AM UTC+11, Iain Dunning wrote:

 Hi all,

 TL;DR: 
 - New JuliaWeb roadmap: https://github.com/JuliaWeb/Roadmap/issues
 - Please consider volunteering on core JuliaWeb infrastructure (e.g. 
 Requests, GnuTLS), esp. by adding tests/docs/examples.

 ---

 *JuliaWeb* (https://github.com/JuliaWeb) is a collection of 
 internet-related packages, including HTTP servers/parsers/utilities, IP 
 address tools, and more.

 Many of these packages were either created by rockstar ninja guru 
 developer Keno, or by students at Hacker School. Some of these packages, 
 like Requests.jl/HttpParser.jl/GnuTLS.jl/... are almost surely installed on 
 your system, but some (e.g. GnuTLS.jl) haven't really been touched much 
 since they were created and aren't actively maintained. For such core 
 packages, it isn't fair to put all the burden on one developer.

 On a personal level, I've been trying to help out where I can by merging 
 PRs, but this web stuff isn't really my strength, and I'm not really able 
 to effectively triage the issues that have built up on some of these 
 packages. So here's what we're (Seth Bromberger has been part of this 
 too) doing:

 - We've made a *roadmap repo for JuliaWeb* to discuss some of these 
 issues and co-ordinate limited resources: 
 https://github.com/JuliaWeb/Roadmap/issues . We'd like to hear your 
 perspectives!

 - *We want you!* You don't have to be a Julia master - you can even 
 start just by reading the code of one of these packages, and then adding 
 some tests or documentation. Maybe you'll even get comfortable to add 
 features! Right now, the focus is definitely on maintenance and making 
 sure what's there works (on Julia 0.3 and 0.4!). Your pull requests are very 
 welcome!



Re: [julia-users] sum array

2015-03-12 Thread Mauro
 const x = rand(8)
 [ x[i-4:i-1] for i = 6] .. this gives me a 4 element array.

This seems a bit odd; what are you trying to achieve here?  Anyway, it
produces an Array{Array{Float64,1},1}, i.e. an array of arrays containing
one array.

 I now want to sum the ouput - this is what I tried ...
 sum([ x[i-4:i-1] for i = 6]) ... what am I doing wrong?

This sums all first elements, second elements, etc.  As there is only one
array in the array, it doesn't do all that much.
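If the intent was a scalar total of those four elements, sum the slice directly, or make the comprehension yield scalars (a sketch):

```julia
x = rand(8)

s1 = sum(x[2:5])                    # x[i-4:i-1] with i = 6 is x[2:5]

i = 6
s2 = sum([x[j] for j = i-4:i-1])    # comprehension of scalars, then sum

s1 ≈ s2                             # true
```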


[julia-users] Automatic doc tools for Julia

2015-03-12 Thread Ján Adamčák
Hi guys,

Can I ask you about best practices for auto-doc tools for 
parsing Julia code? I tried using Doxygen and Sphinx, but I don't think these 
are good solutions at this time (version 0.3.6). And/or is there some tool 
for generating UML diagrams from Julia code?

Thanks.

P.S.:
My idea with this thread is to generate something like a big manual of 
knowledge on how to use auto-doc tools in Julia.


Re: [julia-users] sum array

2015-03-12 Thread pip7kids
Hi
I was simply dabbling whilst learning - nothing specific.
Can I still sum?
Regards

On Thursday, 12 March 2015 09:50:19 UTC, Mauro wrote:

  const x = rand(8) 
  [ x[i-4:i-1] for i = 6] .. this gives me a 4 element array. 

  This seems a bit odd; what are you trying to achieve here?  Anyway, it 
  produces an Array{Array{Float64,1},1}, i.e. an array of arrays containing 
  one array. 

   I now want to sum the output - this is what I tried ... 
   sum([ x[i-4:i-1] for i = 6]) ... what am I doing wrong? 

  This sums all first elements, second elements, etc.  As there is only one 
  array in the array, it doesn't do all that much. 



[julia-users] Re: Help: Too many open files -- How do I figure out which files are open?

2015-03-12 Thread René Donner
More a hint than a direct answer: are you using the do syntax for 
opening the files?

open(somefile, "w") do file
 write(file, ...);
 read(file, ...);
end

Regardless of how you exit that block, regularly or via exceptions, the 
file will be closed, so at least there are no files accidentally left open.

To really list the open files of your process this should help: 
http://www.cyberciti.biz/faq/howto-linux-get-list-of-open-files/
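From inside Julia, on Linux the count of open descriptors can be read from /proc (an assumption: this is Linux-only; on OS X, `lsof -p <pid>` from the article above does the job):

```julia
# Each entry in /proc/<pid>/fd is one file descriptor the process holds open.
nopen() = length(readdir("/proc/$(getpid())/fd"))

# Sprinkle this through the program to find where the count climbs:
# println("open files: ", nopen())
```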


On Thursday, March 12, 2015 at 12:00:22 UTC+1, Daniel Carrera wrote:

 Hello,

 My program is dying with the error message:

 ERROR: opening file ...file name...: Too many open files
  in open at ./iostream.jl:117
  in open at ./iostream.jl:125
  ...

 I have reviewed my program and as far as I can tell, everywhere that I 
 open a file I close it immediately. I need to query more information to 
 debug this. Is there a way that I can get information about the files that 
 are currently open? Or at least the *number* of open files? With that I 
 could at least sprinkle my program with "print open files" statements and 
 trace the bug.


 Cheers,
 Daniel.






Re: [julia-users] Memory allocation in Closed loop Control Simulation

2015-03-12 Thread Bartolomeo Stellato
I installed valgrind 3.11 SVN with homebrew and tried to run the code but I 
am not familiar with the generated output.
I added a valgrind-julia.supp and used the command parameters explained here 
https://github.com/JuliaLang/julia/blob/master/doc/devdocs/valgrind.rst

I executed:

valgrind --smc-check=all-non-file --suppressions=valgrind-julia.supp 
/Applications/Julia-dev.app/Contents/Resources/julia/bin/julia 
simulation.jl a.out > log.txt 2>&1
I attach the generated log.txt file.

Bartolomeo

On Thursday, March 12, 2015 at 03:10:10 UTC, Tim Holy wrote:

 On Wednesday, March 11, 2015 05:33:10 PM Bartolomeo Stellato wrote: 
  I also tried with Julia 0.4.0-dev+3752 and I encounter the same problem. 

 Hm. If you're sure there's a leak, this should be investigated. Any chance 
 you 
 can try valgrind? 

 --Tim 

  
  On Wednesday, March 11, 2015 at 22:51:18 UTC, Tony Kelman wrote: 
   The majority of the memory allocation is almost definitely coming from 
 the 
   problem setup here. You're using a dense block-triangular formulation 
 of 
   MPC, eliminating states and only solving for inputs with inequality 
   constraints. Since you're converting your problem data to sparse 
   initially, 
   you're doing a lot of extra allocation, integer arithmetic, and 
 consuming 
   more memory to represent a large dense matrix in sparse format. 
   Reformulate 
   your problem to include both states and inputs as unknowns, and 
 enforce 
   the 
   dynamics as equality constraints. This will result in a block-banded 
   problem structure and maintain sparsity much better. The matrices 
 within 
   the blocks are not sparse here since you're doing an exact 
 discretization 
   with expm, but a banded problem will scale much better to longer 
 horizons 
   than a triangular one. 
   
   You also should be able to reuse the problem data, with the exception 
 of 
   bounds or maybe vector coefficients, between different MPC iterations. 
   
   
   
   On Wednesday, March 11, 2015 at 12:14:03 PM UTC-7, Bartolomeo Stellato 
   
   wrote: 
    Thank you for the quick replies and for the suggestions! 
    
    I checked which lines give more allocation with --track-allocation=user 
    and the amount of memory I posted is from the OSX Activity monitor. 
    Even if it is not all necessarily used, if it grows too much the 
    operating system is forced to kill Julia. 
    
    I slightly edited the code in order to simulate the closed loop 6 times 
    (for different parameters of N and lambdau). I attach the files. The 
    allocated memory with the OSX Activity monitor is 2gb now. 
    If I run the code twice with a clear_malloc_data() in between to save 
    --track-allocation=user information I get something around 3.77gb! 
    
    Are there maybe problems with my code for which the allocated memory 
    increases? I can't understand why, by simply running the same function 
    6 times, the memory increases so much. Unfortunately I need to do it 
    hundreds of times; in this way it is impossible. 
    
    Do you think that using the push! function together with reducing the 
    vector computations could significantly reduce this big amount of 
    allocated memory? 
    
    Bartolomeo 
   
    On Wednesday, March 11, 2015 at 17:07:23 UTC, Tim Holy wrote: 
    --track-allocation doesn't report the _net_ memory allocated, it reports 
    the _gross_ memory allocation. In other words, allocate/free adds to the 
    tally, even if all memory is eventually freed. 
    
    If you're still concerned about memory allocation and its likely impact 
    on performance: there are some things you can do. From glancing at your 
    code very briefly, a couple of comments: 
    - My crystal ball tells me you will soon come to adore the push! 
    function :-) 
    - If you wish (and it's your choice), you can reduce allocations by 
    doing more operations with scalars. For example, in 
    computeReferenceCurrents, instead of computing tpu and iref arrays 
    outside the loop, consider performing the equivalent operations on 
    scalar values inside the loop. 
    
    Best, 
    --Tim 
   
   On Wednesday, March 11, 2015 07:41:19 AM Bartolomeo Stellato wrote: 
 Hi all, 
 
 I recently started using Julia for my Closed loop MPC simulations. I 
 found very interesting the fact that I was able to do almost everything 
 I was doing in MATLAB with Julia. Unfortunately, when I started working 
 on more complex simulations I noticed a memory allocation problem. 
 
 I am using OSX Yosemite and Julia 0.3.6. I attached a MWE that can be 
 executed with include("simulation.jl") 
 
 The code executes a single simulation of the closed loop system with a 
 MPC controller solving an optimization problem at each time step via the 
 Gurobi interface. At the end of the simulation I am interested in only 

Re: [julia-users] sum array

2015-03-12 Thread pip7kids
Hi
Mauro - thanks for that as that makes it clear what's happening under the 
bonnet. So, what if you then wanted to sum...
1.4827   
 1.48069 
 0.884897 
 1.22739 
 is that possible or am I being a bit dumb here.
Regards


On Thursday, 12 March 2015 10:59:34 UTC, Mauro wrote:

  Can I still sum? 


 Maybe it's clearer like this: 
 julia> [ x[i-4:i-1] for i = [6,7,8]] 
 3-element Array{Array{Float64,1},1}: 
  [0.392471,0.775959,0.314272,0.390463] 
  [0.775959,0.314272,0.390463,0.180162] 
  [0.314272,0.390463,0.180162,0.656762] 

 julia> sum(ans) 
 4-element Array{Float64,1}: 
  1.4827   
  1.48069 
  0.884897 
  1.22739 

 So 1.4827 = ans[1][1]+ans[2][1]+ans[3][1] 

  On Thursday, 12 March 2015 09:50:19 UTC, Mauro wrote: 
  
   const x = rand(8) 
   [ x[i-4:i-1] for i = 6] .. this gives me a 4 element array. 
  
  This seems a bit odd, what are you trying to achieve here?  Anyway it 
  produces a Array{Array{Float64,1},1}, i.e. an array of arrays 
 containing 
  one array. 
  
   I now want to sum the output - this is what I tried ... 
   sum([ x[i-4:i-1] for i = 6]) ... what am I doing wrong? 
  
  This sums all first elements, second elements, etc.  As there is only one 
  array in the array, it doesn't do all that much. 
  



Re: [julia-users] How can I convert a set to an array?

2015-03-12 Thread Ali Rezaee
Thanks a lot for your help :)

On Wednesday, March 11, 2015 at 5:07:45 PM UTC+1, Jacob Quinn wrote:

 a = IntSet([1,2,3])
 a = [a...]

 On Wed, Mar 11, 2015 at 9:35 AM, Ali Rezaee arv@gmail.com wrote:

 In Python I would do

 a = set([1,2])
 a = list(a)

 How can I do that in Julia?

 Thanks a lot in advance for your help




[julia-users] Re: Help: Too many open files -- How do I figure out which files are open?

2015-03-12 Thread René Donner


 With that I was able to debug the problem in a snap. It turns out that one 
 of my functions had readlines(open(...)) instead of 
 open(readlines,...). The critical difference is that the former leaves a 
 file pointer dangling.


Ah, ok, that's exactly the difference: using open(myfunc, filename) is the 
same as 

open(filename) do io 
 myfunc(io)
end

Like this you never have to worry about closing files, even on exceptions.

When using open without a function it looks like this:

 io = open(filename)
 myfunc(io)
 # this is crucial:
 close(io)

In this version, when an exception occurs in myfunc the file will never be 
closed.
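
For reference, a sketch of my own (not from the thread): the do-block form of 
open is equivalent to wrapping the body in try/finally, which you can also 
write out by hand. The temp-file setup is only there to make the example 
self-contained:

```julia
# try/finally guarantees close() runs even if the body throws,
# which is what open(f, filename) does for you internally.
path, tmpio = mktemp()
write(tmpio, "hello\n")
close(tmpio)

io = open(path)
try
    print(readline(io))
finally
    close(io)   # runs on both normal exit and exceptions
end
```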

Last note: the do syntax is not special for open but can be used with 
your own functions as well:

function functhatneedsfunc(f, a)
 println("let's just apply f")
 f(a)
end

# let's call it:
functhatneedsfunc("a") do x
 show(x)
 println("printed x!")
end


 


Re: [julia-users] Different ways to create a vector of strings in Julia

2015-03-12 Thread Ismael VC
@Tamas Array{T}(0) works in 0.4-dev, but I think that's why there is also 
a Vector{T} typealias for Array{T,1}, so it could be Vector{T}(0) which is 
slightly better IMHO.

julia> Vector{Int}(0)
0-element Array{Int64,1}

julia> Array{Int,1}(0)
0-element Array{Int64,1}

julia> Array{Int}(0)
0-element Array{Int64,1}



On Thursday, March 12, 2015 at 8:10:14 (UTC-6), Tamas Papp wrote:


 On Thu, Mar 12 2015, Ivar Nesje wrote: 

  
  So, what you would want to do is `Array{String,1}()`. 
  That ought to construct a array of strings with dimension 1 but 
 doesn't. 
  
  
  But in 0.4 you can use Array{String,1}(0) to create a 1d array with 0 
  elements. Note that you have to provide the size of the array, and 0 is 
 not 
  default (, but maybe it should be?) 

 Array{T}(dims...) 

 is equivalent to 

 Array{T,length(dims)}(dims...) 

 while the latter has a redundant parameter. So instead of providing a 
 default for dims..., wouldn't 

 Array{T}(0) 

 be an idiomatic solution for creating an empty vector of eltype T? Has 
 one less character than 

 Array{T,1}(0) 

 :D 

 Best, 

 Tamas 
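
One more option, as an aside not raised above: the typed empty-array literal 
T[], which works on both 0.3 and 0.4. A sketch:

```julia
v = Int[]        # 0-element Array{Int64,1}, i.e. an empty Vector{Int}
push!(v, 42)
s = String[]     # empty vector of strings
println(length(v), " ", length(s))
```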



Re: [julia-users] Swapping two columns (or rows) of an array efficiently

2015-03-12 Thread Steven G. Johnson
As a general rule, with Julia one needs to unlearn the instinct (from 
Matlab or Python) that "efficiency == clever use of library functions", 
which turns all optimization questions into "is there a built-in function 
for X" (and if the answer is no you are out of luck).   Loops are fast, 
and you can easily beat general-purpose library functions with your own 
special-purpose code.


Re: [julia-users] Different ways to create a vector of strings in Julia

2015-03-12 Thread Ivar Nesje


 So, what you would want to do is `Array{String,1}()`. 
 That ought to construct a array of strings with dimension 1 but doesn't. 


But in 0.4 you can use Array{String,1}(0) to create a 1d array with 0 
elements. Note that you have to provide the size of the array, and 0 is not 
the default (but maybe it should be?)


Re: [julia-users] sum array

2015-03-12 Thread pip7kids
Hi
Thanks Mauro for the advice - all makes sense now.
Regards

On Thursday, 12 March 2015 11:28:22 UTC, Mauro wrote:

  Hi 
  Mauro - thanks for that as that makes it clear whats happening under the 
  bonnet. So, what if you then wanted to sum... 
  1.4827   
   1.48069 
   0.884897 
   1.22739 
   is that possible or am I being a bit dumb here. 

 Just add another sum(ans) after the below two statements; that then sums the 
  4-element Array{Float64,1}: 
   1.4827   
   1.48069 
   0.884897 
   1.22739 

  On Thursday, 12 March 2015 10:59:34 UTC, Mauro wrote: 
  
   Can I still sum? 
  
  
  Maybe it's clearer like this: 
   julia> [ x[i-4:i-1] for i = [6,7,8]] 
  3-element Array{Array{Float64,1},1}: 
   [0.392471,0.775959,0.314272,0.390463] 
   [0.775959,0.314272,0.390463,0.180162] 
   [0.314272,0.390463,0.180162,0.656762] 
  
   julia> sum(ans) 
  4-element Array{Float64,1}: 
   1.4827   
   1.48069 
   0.884897 
   1.22739 
  
  So 1.4827 = ans[1][1]+ans[2][1]+ans[3][1] 
  
   On Thursday, 12 March 2015 09:50:19 UTC, Mauro wrote: 
   
const x = rand(8) 
[ x[i-4:i-1] for i = 6] .. this gives me a 4 element array. 
   
   This seems a bit odd, what are you trying to achieve here?  Anyway 
 it 
   produces a Array{Array{Float64,1},1}, i.e. an array of arrays 
  containing 
   one array. 
   
I now want to sum the ouput - this is what I tried ... 
sum([ x[i-4:i-1] for i = 6]) ... what am I doing wrong? 
   
   This sums all first elements, second elements, etc.  As there is 
 only 
  on 
   array in the array, it doesn't do all that much. 
   
  
  



[julia-users] Re: Something equivalent to Python's xrange()?

2015-03-12 Thread Ivar Nesje
Python learned that lesson in moving from python 2 to python 3, so Julia 
creates lazy ranges by default. With the focus Julia has on performance, 
this would probably be an obvious choice anyway.

For the idiom you present, we actually don't even create a range (because 
of inlining), but generate equivalent machine code to the similar C code.

for (int i = 1; i <= 10; i++){
    println(i)
}

Regards

On Thursday, March 12, 2015 at 14.19.31 UTC+1, Ali Rezaee wrote:

 Hi,

 I am trying to iterate over a range of numbers. I know I can do this:

 for i in 1:10
 println(i)
 end

 but, if I am not wrong, it creates a list from 1 to 10 and iterates over 
 it.
 Is there a more memory efficient method so that it does not create and 
 store the list? something that returns an iterator object similar to 
 Python's xrange().

 Many thanks



Re: [julia-users] Automatic doc tools for Julia

2015-03-12 Thread Ismael VC
tshort, could you provide us an example please?

El jueves, 12 de marzo de 2015, 4:59:14 (UTC-6), tshort escribió:

 The Lexicon package works well for me along with Mkdocs.
  On Mar 12, 2015 6:03 AM, Ján Adamčák jada...@gmail.com wrote:

 Hi guys,

  Can I ask you for something like best practice with auto doc tools for 
  parsing Julia code? I tried using Doxygen and Sphinx, but I think these are 
  not good solutions at this time (version 0.3.6). And/or is there some tool 
  to generate UML diagrams from Julia code?

 Thanks.

 P.S.:
  My idea with this thread is to generate something like a big manual of 
  knowledge on how to use auto doc tools in Julia.



[julia-users] Something equivalent to Python's xrange()?

2015-03-12 Thread Ali Rezaee
Hi,

I am trying to iterate over a range of numbers. I know I can do this:

for i in 1:10
println(i)
end

but, if I am not wrong, it creates a list from 1 to 10 and iterates over it.
Is there a more memory efficient method so that it does not create and 
store the list? something that returns an iterator object similar to 
Python's xrange().

Many thanks


[julia-users] Re: Something equivalent to Python's xrange()?

2015-03-12 Thread Ali Rezaee
That's cool. Thank you

On Thursday, March 12, 2015 at 2:19:31 PM UTC+1, Ali Rezaee wrote:

 Hi,

 I am trying to iterate over a range of numbers. I know I can do this:

 for i in 1:10
 println(i)
 end

 but, if I am not wrong, it creates a list from 1 to 10 and iterates over 
 it.
 Is there a more memory efficient method so that it does not create and 
 store the list? something that returns an iterator object similar to 
 Python's xrange().

 Many thanks



Re: [julia-users] Different ways to create a vector of strings in Julia

2015-03-12 Thread Tamas Papp

On Thu, Mar 12 2015, Ivar Nesje wrote:


 So, what you would want to do is `Array{String,1}()`.
 That ought to construct a array of strings with dimension 1 but doesn't.


 But in 0.4 you can use Array{String,1}(0) to create a 1d array with 0
 elements. Note that you have to provide the size of the array, and 0 is not
 default (, but maybe it should be?)

Array{T}(dims...)

is equivalent to

Array{T,length(dims)}(dims...)

while the latter has a redundant parameter. So instead of providing a
default for dims..., wouldn't

Array{T}(0)

be an idiomatic solution for creating an empty vector of eltype T? Has
one less character than

Array{T,1}(0)

:D

Best,

Tamas


Re: [julia-users] Different ways to create a vector of strings in Julia

2015-03-12 Thread Charles Novaes de Santana
Thank you all for the explanation! We don't stop learning in this list!

Best,

Charles

On Thu, Mar 12, 2015 at 3:10 PM, Tamas Papp tkp...@gmail.com wrote:


 On Thu, Mar 12 2015, Ivar Nesje wrote:

 
  So, what you would want to do is `Array{String,1}()`.
  That ought to construct a array of strings with dimension 1 but doesn't.
 
 
  But in 0.4 you can use Array{String,1}(0) to create a 1d array with 0
  elements. Note that you have to provide the size of the array, and 0 is
 not
  default (, but maybe it should be?)

 Array{T}(dims...)

 is equivalent to

 Array{T,length(dims)}(dims...)

 while the latter has a redundant parameter. So instead of providing a
 default for dims..., wouldn't

 Array{T}(0)

 be an idiomatic solution for creating an empty vector of eltype T? Has
 one less character than

 Array{T,1}(0)

 :D

 Best,

 Tamas




-- 
Um axé! :)

--
Charles Novaes de Santana, PhD
http://www.imedea.uib-csic.es/~charles


Re: [julia-users] Swapping two columns (or rows) of an array efficiently

2015-03-12 Thread Steven G. Johnson


On Thursday, March 12, 2015 at 10:08:47 AM UTC-4, Ján Dolinský wrote:

 Hi,

 Is this an efficient way to swap two columns of a matrix ?
 e.g. 1st column with the 5th

 X = rand(10,5)
 X[:,1], X[:,5] = X[:,5], X[:,1]


It is not optimal, because it allocates temporary arrays.  Instead, you can 
just write:

function swapcols(X, i, j)
 for k = 1:size(X,1)
   X[k,i],X[k,j] = X[k,j],X[k,i]
 end
 return X
end

which allocates no temporary arrays and should be quite efficient.   You 
could gain a bit more efficiency by moving all of the bounds checks outside 
of the loop:

function swapcols(X, i, j)
 m, n = size(X)
 if (1 = i = n)  (1 = j = n)
 for k = 1:m
   @inbounds X[k,i],X[k,j] = X[k,j],X[k,i]
 end
 return X
 else
 throw(BoundsError())
 end
end

but these kinds of micro-optimizations are rarely worthwhile.
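
A quick usage sketch of the loop approach (renamed swapcols! here with a 
trailing ! per Julia convention, since it mutates its argument):

```julia
function swapcols!(X, i, j)
    # swap columns i and j element by element; no temporary arrays
    for k = 1:size(X, 1)
        X[k, i], X[k, j] = X[k, j], X[k, i]
    end
    return X
end

X = [1 2 3;
     4 5 6]
swapcols!(X, 1, 3)
# X is now [3 2 1; 6 5 4]
```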


Re: [julia-users] Something equivalent to Python's xrange()?

2015-03-12 Thread Mauro
1:10 creates a UnitRange which stores just two numbers, start and
stop:

julia> typeof(1:10)
UnitRange{Int64} (constructor with 1 method)

julia> names(UnitRange)
2-element Array{Symbol,1}:
 :start
 :stop 

To actually create the list you'd use collect(1:10)

On Thu, 2015-03-12 at 14:19, Ali Rezaee arv.ka...@gmail.com wrote:
 Hi,

 I am trying to iterate over a range of numbers. I know I can do this:

 for i in 1:10
 println(i)
 end

 but, if I am not wrong, it creates a list from 1 to 10 and iterates over it.
 Is there a more memory efficient method so that it does not create and 
 store the list? something that returns an iterator object similar to 
 Python's xrange().

 Many thanks
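
To make the memory point concrete, a small sketch: a UnitRange stores only its 
endpoints, so its size is constant no matter how long it is:

```julia
r = 1:1000000000          # a UnitRange, not a billion-element array
println(sizeof(r))        # two Ints: 16 bytes on a 64-bit machine
println(length(r))        # computed from the endpoints, nothing allocated
println(sum(1:10))        # iterating works as usual
```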



Re: [julia-users] Plotting table of numbers in Gadfly?

2015-03-12 Thread Jiahao Chen
Daniel is the authoritative source, but for such situations I use
layers and manual color schemes like this:

using Color, Gadfly

xgrid=0:10:100
data=rand(10,10)
nrows = size(data, 1)
cm = distinguishable_colors(nrows, lchoices=0:50) #lchoices between 50
and 100 are too bright for my taste for plotting lines
plot(
   [layer(x=xgrid, y=data[i, :], Geom.line,
Theme(default_color=cm[i])) for i=1:nrows]...,
   Guide.manual_color_key("row id", ["row $i" for i=1:nrows], cm),
  )

Thanks,

Jiahao Chen
Staff Research Scientist
MIT Computer Science and Artificial Intelligence Laboratory


On Thu, Mar 12, 2015 at 11:54 PM, Sheehan Olver dlfivefi...@gmail.com wrote:

 I have a table of numbers that I want to line plot in Gadfly: i.e., each
 column corresponds to values of a function.  Is this possible without
 creating a DataFrame?




[julia-users] Plotting table of numbers in Gadfly?

2015-03-12 Thread Sheehan Olver

I have a table of numbers that I want to line plot in Gadfly: i.e., each 
column corresponds to values of a function.  Is this possible without 
creating a DataFrame?




[julia-users] Re: Best practices for migrating 0.3 code to 0.4? (specifically constructors)

2015-03-12 Thread Avik Sengupta
I think this is simply due to your passing a UTF8String, while your 
function is defined only for ASCIIString. Since there is no method defined 
for UTF8String, Julia falls back to the default constructor that calls 
convert. 

julia> type A
 a::ASCIIString
 b::Int
   end

julia> function A(fn::ASCIIString)
   A(fn, length(fn)
   end
ERROR: syntax: missing comma or ) in argument list

julia> function A(fn::ASCIIString)
   A(fn, length(fn))
   end
A

julia> A("X")
A("X",1)

julia> A("∞")
ERROR: MethodError: `convert` has no method matching convert(::Type{A}, 
::UTF8String)
This may have arisen from a call to the constructor A(...),
since type constructors fall back to convert methods.
Closest candidates are:
  convert{T}(::Type{T}, ::T)

 in call at no file


#Now try this: 

julia> type A
 a::AbstractString
 b::Int
   end

julia> function A(fn::AbstractString)
   A(fn, length(fn))
   end
A

julia> A("X")
A("X",1)

julia> A("∞")
A("∞",1)

Note that using AbstractString is only one way to solve this, which may or 
may not be appropriate for your use case. The code above is simply to 
demonstrate the issue at hand. 



On Thursday, 12 March 2015 23:43:58 UTC, Phil Tomson wrote:

 I thought I'd give 0.4 a spin to try out the new garbage collector.

 On my current codebase developed with 0.3 I ran into several warnings 
 (float32() should now be Float32() - that sort of thing)

 And then this error:






 ERROR: LoadError: LoadError: LoadError: LoadError: LoadError: 
 MethodError: `convert` has no method matching convert(::Type{Img.ImgHSV}, 
 ::UTF8String)
 This may have arisen from a call to the constructor Img.ImgHSV(...),
 since type constructors fall back to convert methods.
 Closest candidates are:  convert{T}(::Type{T}, ::T)
 After poking around New Language Features and the list here a bit it seems 
 that there are changes to how overloaded constructors work.

 In my case I've got:

 type ImgHSV
   name::ASCIIString
   data::Array{HSV{Float32},2}  
   #data::Array{IntHSV,2}  
   height::Int64
   wid::Int64
   h_mean::Float32
   s_mean::Float32
   v_mean::Float32
   h_std::Float32
   s_std::Float32
   v_std::Float32
 end

 # Given a filename of an image file, construct an ImgHSV
 function ImgHSV(fn::ASCIIString)
   name,ext = splitext(basename(fn))
   source_img_hsv = Images.data(convert(Image{HSV{Float64}},imread(fn)))
   #scale all the values up from (0-1) to (0-255)
   source_img_scaled = map(x -> HSV( ((x.h/360)*255),(x.s*255),(x.v*255)),
   source_img_hsv)
   img_ht  = size(source_img_hsv,2)
   img_wid = size(source_img_hsv,1)
   h_mean = (mean(map(x -> x.h,source_img_hsv)/360)*255)
   s_mean = (mean(map(x -> x.s,source_img_hsv))*255)
   v_mean = (mean(map(x -> x.v,source_img_hsv))*255)
   h_std  = (std(map(x -> x.h,source_img_hsv)/360)*255)
   s_std  = (std(map(x -> x.s,source_img_hsv))*255)
   v_std  = (std(map(x -> x.v,source_img_hsv))*255)
   ImgHSV(
 name,
 float32(source_img_scaled),
 img_ht,
 img_wid,
 h_mean,
 s_mean,
 v_mean,
 h_std,
 s_std,
 v_std
   )
 end

 Should I rename this function to something like buildImgHSV so it's not 
 actually a constructor and convert doesn't enter the picture?

 Phil



Re: [julia-users] Re: Getting people to switch to Julia - tales of no(?) success

2015-03-12 Thread Jiahao Chen
 the biggest drawback was the risk that the language might die in 5 years.

With virtualization technology, one can always take a snapshot of a
working installation, and then the problem is simply reduced to
finding a virtualization environment that is sufficiently backwards
compatible that it can run such an ancient format of the image you
saved in 2015.

Thanks,

Jiahao Chen
Staff Research Scientist
MIT Computer Science and Artificial Intelligence Laboratory


[julia-users] Some simple use cases for multi-threading

2015-03-12 Thread Viral Shah
I am looking to put together a set of use cases for our multi-threading 
capabilities - mainly to push the work forward, as well as to serve as a 
showcase. I am thinking of starting with stuff in the microbenchmarks and the 
shootout implementations that are already in test/perf.

I am looking for other ideas that would be of interest. If there is real 
interest, we can collect all of these in a repo in JuliaParallel.

-viral





[julia-users] Parallel for-loops

2015-03-12 Thread Pieter Barendrecht
I'm wondering how to save data/results in a parallel for-loop. Let's assume 
there is a single Int64 array, initialised using zeros() before starting 
the for-loop. In the for-loop (typically ~100,000 iterations, that's the 
reason I'm interested in parallel processing) the entries of this Int64 
array should be increased (based on the results of an algorithm that's 
invoked in the for-loop).

Everything works fine when using just a single proc, but I'm not sure how 
to modify the code such that, when using e.g. addprocs(4), the data/results 
stored in the Int64 array can be processed once the for-loop ends. The 
algorithm (a separate function) is available to all procs (using the 
require() function). Just using the Int64 array in the for-loop (using 
@parallel for k=1:10) does not work as each proc receives its own copy, 
so after the for-loop it contains just zeros (as illustrated in a set of 
slides on the Julia language). I guess it involves @spawn and fetch() 
and/or pmap(). Any suggestions or examples would be much appreciated :).
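
One common pattern, sketched here in the 0.3-era syntax of this thread (in 
later Julia versions the macro became @distributed in the Distributed stdlib): 
have each iteration return its contribution and let @parallel reduce with (+), 
instead of mutating a shared array. The bin-update line is a hypothetical 
stand-in for the real algorithm:

```julia
# Each iteration returns a small Int64 array; @parallel combines them
# with (+) across the workers, so no shared mutable state is needed.
counts = @parallel (+) for k = 1:100000
    v = zeros(Int, 10)
    v[(k % 10) + 1] = 1   # stand-in for the algorithm's update
    v                     # the value handed to the (+) reduction
end
# sum(counts) equals the iteration count once the loop finishes
```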


[julia-users] Re: Block Matrices: problem enforcing parametric type constraints

2015-03-12 Thread Greg Plowman
I don't really understand how this works, but this might point someone in 
the right direction.
It seems Julia can't fully infer types, in particular the element type S.
So we get further if we give a hint:

type BlockMatrix{S,TA<:AbstractMatrix{S},TB<:AbstractMatrix{S},TC<:
AbstractMatrix{S},TD<:AbstractMatrix{S}} <: AbstractMatrix{S}
A::TA
B::TB
C::TC
D::TD
end

typealias Block{S} BlockMatrix{S,AbstractMatrix{S},AbstractMatrix{S},
AbstractMatrix{S},AbstractMatrix{S}}

# not really sure what size() should be, but need to define for output
Base.size(x::BlockMatrix) = (size(x.A,1) + size(x.C,1), size(x.A,2) + size(x
.B,2))

N = Block{Float64}(A,A,A,B)


julia> N.A
4x4 Array{Float64,2}:
 0.805914  0.473687  0.721984  0.464178
 0.306 0.728015  0.148804  0.776728
 0.439048  0.566558  0.72709   0.524761
 0.255731  0.16528   0.331941  0.167353
julia> N.B
4x4 Array{Float64,2}:
 0.805914  0.473687  0.721984  0.464178
 0.306 0.728015  0.148804  0.776728
 0.439048  0.566558  0.72709   0.524761
 0.255731  0.16528   0.331941  0.167353
julia> N.C
4x4 Array{Float64,2}:
 0.805914  0.473687  0.721984  0.464178
 0.306 0.728015  0.148804  0.776728
 0.439048  0.566558  0.72709   0.524761
 0.255731  0.16528   0.331941  0.167353
julia> N.D
4x4 Diagonal{Float64}:
 1.0  0.0  0.0  0.0
 0.0  2.0  0.0  0.0
 0.0  0.0  3.0  0.0
 0.0  0.0  0.0  4.0





On Friday, March 13, 2015 at 8:49:41 AM UTC+11, Gabriel Mitchell wrote:

 @g Sorry, I guess I didn't state my intent that clearly. While your 
 example does enforce the Matrix/eltype constraint that is only part of what 
 I am after. Having a type parameter for each block is a main thing that I 
 am interested in. The reason is that I can write methods that dispatch on 
 those types. An example of such a method with an explicit 4-ary structure 
 would be

 #generic fallback
 det(A::Matrix,B::Matrix,C::Matrix,D::Matrix) = det([A B; C D])

 #specialized method; should actually check that A is invertible, but you get 
 the idea
 det(A::Diagonal,B::Diagonal,C::Diagonal,D::Diagonal) = 
 det(A)*det(D-C*inv(A)*B)

 #etc...

 In my applications there are at least a dozen situations where certain 
 block structure allow for significantly more efficient implementations than 
 the generic fallback. One would like to make the calls to these methods 
 (det,inv,trace, and so on) with the normal 1-ary argument, that matrix M 
 itself. This would be possible if the type information of the blocks could 
 be read out of the type of M. I hope this clears up my motivation for the 
 above question.
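
As a numeric sanity check (my own sketch, not from the thread), the specialized 
formula agrees with the generic fallback whenever A is invertible:

```julia
# Verify det([A B; C D]) == det(A) * det(D - C*inv(A)*B) on a small example.
# On Julia >= 0.7 det/inv need `using LinearAlgebra`; on 0.3/0.4 they are in Base.
using LinearAlgebra
A = [2.0 0.0; 0.0 3.0]
B = [1.0 0.0; 0.0 1.0]
C = [0.0 1.0; 1.0 0.0]
D = [4.0 0.0; 0.0 5.0]
lhs = det([A B; C D])                    # generic fallback
rhs = det(A) * det(D - C * inv(A) * B)   # Schur-complement identity
@assert isapprox(lhs, rhs)
```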



 On Thursday, March 12, 2015 at 8:49:05 PM UTC+1, g wrote:

 BlockMatrix only needs one type parameter to fully specify the type, so 
 you should probably only use one type parameter. Like so:

 type BlockMatrix{S} <: AbstractMatrix{S}
   A::AbstractMatrix{S}
   B::AbstractMatrix{S}
   C::AbstractMatrix{S}
   D::AbstractMatrix{S}
 end


 I'm sure someone else can explain in more detail why yours didn't work. 



Re: [julia-users] Automatic doc tools for Julia

2015-03-12 Thread Ismael VC
Thank you very much Tom!

On Thu, Mar 12, 2015 at 9:26 AM, Tom Short tshort.rli...@gmail.com wrote:

 Here is an example of documentation for a package I maintain:

 https://tshort.github.io/Sims.jl/

 Here are examples of docstrings:

 https://github.com/tshort/Sims.jl/blob/master/src/sim.jl#L1-L96

 Here is the config file for Mkdocs:

 https://github.com/tshort/Sims.jl/blob/master/mkdocs.yml

 Here is a Julia script that uses the Lexicon package to build the API
 documentation from the docstrings:

 https://github.com/tshort/Sims.jl/blob/master/docs/build.jl

 Here are other packages that use Mkdocs:


 https://www.google.com/search?q=mkdocs.yml+jl+site:github.com&ie=utf-8&oe=utf-8


 On Thu, Mar 12, 2015 at 10:28 AM, Ismael VC ismael.vc1...@gmail.com
 wrote:

 tshort, could you provide us an example please?

 El jueves, 12 de marzo de 2015, 4:59:14 (UTC-6), tshort escribió:

 The Lexicon package works well for me along with Mkdocs.
 On Mar 12, 2015 6:03 AM, Ján Adamčák jada...@gmail.com wrote:

 Hi guys,

 Can I ask you for something like best practice with auto doc tools for
 parsing Julia code? I try use Doxygen and Sphinx, but I think this is not
 good solutions in this timeversion(0.3.6). And/Or some tool for generate
 UML diagrams from julia code?

 Thanks.

 P.S.:
 My idea with this thread is generate something like big manual of
 knowlege how to use auto doc tools in Julia.





Re: [julia-users] Re: Getting people to switch to Julia - tales of no(?) success

2015-03-12 Thread Tamas Papp
IMO there is no way to ensure that a new language like Julia will live
(= have a viable, active community which keeps improving the language
and the libraries) over a 5-year timeframe. I really hope it will, but
there is no way to be sure.

That said, since it is open source, the client will always have a
version which can run the program you wrote a while ago, possibly with a
bit of tweaking.

Talking about a 5 year interval, my primary concern as a client would
not be Julia dying, but the opposite: I would be concerned that the
language is moving too fast, and keeping the code up to date will
require continuous work (even if one sticks to the stable releases).

Best,

Tamas


On Thu, Mar 12 2015, Ken B wrote:

 Back on topic, I just convinced a client to use Julia with my current
 project. It will be an online image processing tool. The other choices were
 Matlab and Python with C#.

 The fast speed and short development time were the deciding factors here,
 but the biggest drawback was the risk that the language might die in 5
 years. Any material that I could use if that argument comes up again?

 Ken

 On Sunday, 8 March 2015 11:41:12 UTC+1, Joachim Dahl wrote:

 The package is very similar to Gloptipoly or SparsePOP, and it can be
 found here:
 https://github.com/joachimdahl/Polyopt.jl

 It was a design decision to keep the API close to the formulation of the
 Lasserre hierarchy, so that there is a close correspondence between the
 problem you specify and the actual semidefinite problem you solve.  Yalmip
 and SOSTOOL have much more flexible modeling capabilities, but it becomes
 less transparent what the resulting SDP is.

 There is no documentation yet, but the tests show how to use it. There are
  some SOS examples, but actually the toolbox started as a tool for forming the
  Lasserre hierarchy while exploiting chordal sparsity structure.  I don't
  think many things will change, except for perhaps different ways to exploit
  sparsity in SOS certificates;  if you want to solve polynomial problems
 using the Lasserre hierarchy it's probably useful already now, but not as
 an alternative to Yalmip or SOSTOOL.

 The plan is to have it finished by summer and present it at a software
 session at ISMP.

  On Sun, Mar 8, 2015 at 10:45 AM, Davide Lasagna lasagn...@gmail.com wrote:

 Joachim, would you share this toolbox for polynomial optimisation? Is it
 on GitHub?
  I guess you wrote something equivalent to yalmip or sostools. Did you
 compare performances?
 Davide





[julia-users] Re: Help: Too many open files -- How do I figure out which files are open?

2015-03-12 Thread ggggg
The other suggestions are good practice. If you are on linux the following 
commands will help you figure out which file or files you have open, and 
therefore where in your code to look:

pidof julia
lsof -p <pid>  # <pid> is from previous command


On Thursday, March 12, 2015 at 6:46:10 AM UTC-6, René Donner wrote:


 With that I was able to debug the problem in a snap. It turns out that one 
 of my functions had readlines(open(...)) instead of 
 open(readlines,...). The critical difference is that the former leaves a 
 file pointer dangling.


 Ah, ok, that's exactly the difference: using open(myfunc, filename) is the 
 same as 

 open(filename) do io 
  myfunc(io)
 end

 Like this you never have to worry about closing files, even on exceptions.

 When using open without a function it looks like this:

  io = open(filename)
  myfunc(io)
  # this is crucial:
  close(io)

 In this version, when an exception occurs in myfunc the file will never be 
 closed.

 Last note: the do syntax is not special for open but can be used with 
 your own functions as well:

 function functhatneedsfunc(f, a)
  println("let's just apply f")
  f(a)
 end

 # let's call it:
 functhatneedsfunc("a") do x
  show(x)
  println("printed x!")
 end


  



Re: [julia-users] Swapping two columns (or rows) of an array efficiently

2015-03-12 Thread Ján Dolinský
Hi Steven,

This is very cool. Indeed de-vectorizing will do the job.

Thanks,
Jan

On Thursday, March 12, 2015 at 15:49:50 UTC+1, Steven G. Johnson wrote:

 As a general rule, with Julia one needs to unlearn the instinct (from 
 Matlab or Python) that "efficiency == clever use of library functions", 
 which turns all optimization questions into "is there a built-in function 
 for X" (and if the answer is no you are out of luck).   Loops are fast, 
 and you can easily beat general-purpose library functions with your own 
 special-purpose code.



Re: [julia-users] Memory allocation in Closed loop Control Simulation

2015-03-12 Thread Tim Holy
Hmm, looks borked. I may be able to try sometime, but it could be a few days 
until I get to it. You posted all the code earlier in this thread?

Jim Garrison is probably the current expert on combining Julia & valgrind.

--Tim

On Thursday, March 12, 2015 03:01:59 AM Bartolomeo Stellato wrote:
 I installed valgrind 3.11 SVN with homebrew and tried to run the code but I
 am not familiar with the generated output.
 I added a valgrind-julia.supp and used the command parameters explained here
 https://github.com/JuliaLang/julia/blob/master/doc/devdocs/valgrind.rst
 
 I executed:
 
 valgrind --smc-check=all-non-file --suppressions=valgrind-julia.supp
 /Applications/Julia-dev.app/Contents/Resources/julia/bin/julia
 simulation.jl a.out > log.txt 2>&1
 I attach the generated log.txt file.
 
 Bartolomeo
 
  On Thursday, 12 March 2015 at 03:10:10 UTC, Tim Holy wrote:
  On Wednesday, March 11, 2015 05:33:10 PM Bartolomeo Stellato wrote:
   I also tried with Julia 0.4.0-dev+3752 and I encounter the same problem.
  
  Hm. If you're sure there's a leak, this should be investigated. Any chance
  you
  can try valgrind?
  
  --Tim
  
   On Wednesday, 11 March 2015 at 22:51:18 UTC, Tony Kelman wrote:
 The majority of the memory allocation is almost definitely coming from the
 problem setup here. You're using a dense block-triangular formulation of
 MPC, eliminating states and only solving for inputs with inequality
 constraints. Since you're converting your problem data to sparse initially,
 you're doing a lot of extra allocation, integer arithmetic, and consuming
 more memory to represent a large dense matrix in sparse format.
 Reformulate your problem to include both states and inputs as unknowns,
 and enforce the dynamics as equality constraints. This will result in a
 block-banded problem structure and maintain sparsity much better. The
 matrices within the blocks are not sparse here since you're doing an exact
 discretization with expm, but a banded problem will scale much better to
 longer horizons than a triangular one.

 You also should be able to reuse the problem data, with the exception of
 bounds or maybe vector coefficients, between different MPC iterations.



 On Wednesday, March 11, 2015 at 12:14:03 PM UTC-7, Bartolomeo Stellato
 wrote:

 Thank you for the quick replies and for the suggestions!

 I checked which lines give more allocation with --track-allocation=user,
 and the amount of memory I posted is from the OSX Activity Monitor. Even
 if it is not all necessarily used, if it grows too much the operating
 system is forced to kill Julia.

 I slightly edited the code in order to *simulate the closed loop 6 times*
 (for different parameters of N and lambdau). I attach the files. The
 *allocated memory* with the OSX Activity Monitor is *2gb now.*
 If I run the code twice with a clear_malloc_data() in between to save
 --track-allocation=user information I get something around 3.77gb!

 Are there maybe problems with my code for which the allocated memory
 increases? I can't understand why by simply running the same function 6
 times the memory increases so much. Unfortunately I need to do it hundreds
 of times; in this way it is impossible.

 Do you think that using the push! function together with reducing the
 vector computations could significantly reduce this big amount of
 allocated memory?


 Bartolomeo

 On Wednesday, 11 March 2015 at 17:07:23 UTC, Tim Holy wrote:

 --track-allocation doesn't report the _net_ memory allocated, it reports
 the _gross_ memory allocation. In other words, allocate/free adds to the
 tally, even if all memory is eventually freed.

 If you're still concerned about memory allocation and its likely impact
 on performance: there are some things you can do. From glancing at your
 code very briefly, a couple of comments:
 - My crystal ball tells me you will soon come to adore the push!
 function :-)
 - If you wish (and it's your choice), you can reduce allocations by doing
 more operations with scalars. For example, in computeReferenceCurrents,
 instead of computing tpu and iref arrays outside the loop, consider
 performing the equivalent operations on scalar values inside the loop.

 Best,
 --Tim

 On Wednesday, March 11, 2015 07:41:19 AM Bartolomeo Stellato wrote:
  Hi all,

  I recently started using Julia for my *closed loop MPC simulations*. I
  found very interesting the fact that I was able to do almost everything
  I was doing in MATLAB with Julia. Unfortunately, when I started working
  on more complex simulations I notice a 

Re: [julia-users] Swapping two columns (or rows) of an array efficiently

2015-03-12 Thread Tim Holy
This is something that many people (understandably) have a hard time 
appreciating, so I think this post should be framed and put up on the julia 
wall.

We go to considerable lengths to try to make code work efficiently in the 
general case (check out subarray.jl and subarray2.jl in master some time...), 
but sometimes there's no competing with a hand-rolled version for a particular 
case. Folks should not be shy to implement such tricks in their own code.

--Tim

On Thursday, March 12, 2015 07:49:49 AM Steven G. Johnson wrote:
 As a general rule, with Julia one needs to unlearn the instinct (from
 Matlab or Python) that efficiency == "clever use of library functions",
 which turns all optimization questions into "is there a built-in function
 for X" (and if the answer is no you are out of luck).  Loops are fast,
 and you can easily beat general-purpose library functions with your own
 special-purpose code.
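As a concrete illustration of the kind of hand-rolled special-purpose loop being discussed, a column swap can be written directly (the name swapcols! is an assumption here, mirroring the function from the earlier thread):

```julia
# Swaps columns i and j of X in place (mutates X, hence the ! convention).
function swapcols!(X::Matrix, i::Integer, j::Integer)
    for k = 1:size(X, 1)
        @inbounds X[k, i], X[k, j] = X[k, j], X[k, i]
    end
    return X
end
```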



[julia-users] question about module, using/import

2015-03-12 Thread antony schutz

Hello

I'm trying to generalize an algorithm for alpha users. 
The algorithm can draw plots, but I don't want this to be mandatory, so in 
the module I don't import the library (for example, I don't call "using 
PyPlot").
I want the plot drawing to be an option, and it has to be done by the user.

Unfortunately, when I call "using PyPlot" and I am not working in the 
folder containing the module, the package is not recognized by the 
algorithm. 

module mymodule 
 using needed_library
 export needed_function
 include("needed_files")
end

julia> using mymodule
julia> using PyPlot 

*INFO: Loading help data...*


*ERROR: figure not defined*


I tried to define 2 modules with different names, but I can't load the second 
module (mymodulePyPlot) because the module is inside folder mymodule and 
not in folder mymodulePyPlot.


Does somebody know a solution to this problem? 


Thanks in advance
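One possible workaround (a hedged sketch, not a tested solution; the module and function names are made up) is to defer loading the plotting package until the user actually asks for a plot, so that the core algorithm carries no plotting dependency:

```julia
module MyModule

export compute, plotresult

compute(x) = x .^ 2   # core algorithm, no plotting dependency

# PyPlot is only loaded when the user calls this; users who never
# plot never need PyPlot installed or imported.
function plotresult(x)
    @eval using PyPlot
    PyPlot.plot(x, compute(x))
end

end
```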


Re: [julia-users] Re: Getting people to switch to Julia - tales of no(?) success

2015-03-12 Thread Ken B
Back on topic, I just convinced a client to use Julia with my current 
project. It will be an online image processing tool. The other choices were 
Matlab and Python with C#.

The fast speed and short development time were the deciding factors here, 
but the biggest drawback was the risk that the language might die in 5 
years. Any material that I could use if that argument comes up again?

Ken

On Sunday, 8 March 2015 11:41:12 UTC+1, Joachim Dahl wrote:

 The package is very similar to Gloptipoly or SparsePOP, and it can be 
 found here:
 https://github.com/joachimdahl/Polyopt.jl

 It was a design decision to keep the API close to the formulation of the 
 Lasserre hierarchy, so that there is a close correspondence between the 
 problem you specify and the actual semidefinite problem you solve.  Yalmip 
 and SOSTOOL have much more flexible modeling capabilities, but it becomes 
 less transparent what the resulting SDP is.

 There is no documentation yet, but the tests show how to use it. There are 
 some SOS examples, but actually the toolbox started as a tool for forming the 
 Lasserre hierarchy while exploiting chordal sparsity structure.  I don't 
 think many things will change, except for perhaps different ways to exploit 
 sparsity in SOS certificates;  if you want to solve polynomial problems 
 using the Lasserre hierarchy it's probably useful already now, but not as 
 an alternative to Yalmip or SOSTOOL.

 The plan is to have it finished by summer and present it at a software 
 session at ISMP.

 On Sun, Mar 8, 2015 at 10:45 AM, Davide Lasagna lasagn...@gmail.com wrote:

 Joachim, would you share this toolbox for polynomial optimisation? Is it 
 on GitHub?
 I guess you wrote something equivalent to yalmip or sostools. Did you 
 compare performances?
 Davide




[julia-users] Re: How does Measure work at y-axis in Gadfly.jl?

2015-03-12 Thread Daniel Jones

No, absolute measures are always left-to-right, top-to-bottom.


On Thursday, March 12, 2015 at 2:13:57 AM UTC-7, nanaya tachibana wrote:

 Thank you very much. 
 Can anything affect the orientation of the absolute measurement of 
 Measure? It goes from left-to-right and top-to-bottom, no matter how I set 
 the units in the context.

 On Thursday, March 12, 2015 at 3:14:28 PM UTC+8, Daniel Jones wrote:


 It can actually work both ways: cx and cy give context units, 
 which by default are between 0 and 1, and go from left-to-right and 
 top-to-bottom, but can be redefined to be anything.

 So, this draws a line from the top-left to the bottom-right.
 compose(context(), line([(0cx,0cy), (1cx,1cy)]), stroke("black"))

 But if I change the units in the context, it draws a line from the 
 bottom-left to the top-right.
 compose(context(units=UnitBox(0,1,1,-1)), line([(0cx,0cy), (1cx,1cy)]), 
 stroke("black"))

 UnitBox defines a new coordinate system in which the top-left corner is 
 (0, 1), and the context is 1 unit wide and 1 unit tall, but the height is 
 given as -1, which somewhat unintuitively flips the orientation of the 
 units.

 If you look at the bottom of coord.jl in Gadfly, you'll see how the 
 coordinate system for the plot gets set up:

 context(units=UnitBox(
 coord.xflip ? xmax : xmin,
 coord.yflip ? ymin : ymax,
 coord.xflip ? -width : width,
 coord.yflip ? height : -height,
 leftpad=xpadding,
 rightpad=xpadding,
 toppad=ypadding,
 bottompad=ypadding),






 On Wednesday, March 11, 2015 at 10:33:35 PM UTC-7, nanaya tachibana wrote:

 I wanted to contribute to Gadfly.jl and I started from looking into 
 bar.jl. 
 I found that the value cy of Measure in Gadfly.jl increases from bottom 
 to top, but it increases from top to bottom in Compose.jl.
 What makes Measure work in that way? 

 I really appreciate any help you can provide.



[julia-users] dict comprehension syntax in 0.4

2015-03-12 Thread Jim Garrison

It is well known that the dict syntax in 0.4 has changed

julia> [1=>2,3=>4]
WARNING: deprecated syntax "[a=>b, ...]".
Use "Dict(a=>b, ...)" instead.

However, I was surprised to notice that a similar syntax still works for 
dict comprehensions, without warning:


julia> [i => 2i for i = 1:3]
Dict{Int64,Int64} with 3 entries:
  2 => 4
  3 => 6
  1 => 2

Is this intentional?
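For comparison, two spellings that avoid the deprecated bracket syntax on 0.4 (a sketch; the comprehension-based forms changed again in later Julia versions):

```julia
# Explicit pair constructor:
d1 = Dict(1 => 2, 3 => 4)

# Constructor over a comprehension of (key, value) tuples:
d2 = Dict([(i, 2i) for i = 1:3])
```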


Re: [julia-users] How to introduce scope inside macro

2015-03-12 Thread Tim Holy
Now I understand. Yes, the problem is you're running the operations in global 
scope. If you really want to go this way, then you'll probably have to make 
your macro create a function and then call it.

--Tim

On Thursday, March 12, 2015 09:41:18 AM Johan Sigfrids wrote:
 But the compile time shouldn't be very big, should it? For some bigger data
 set and more complex computation the compile time should add pretty
 insignificantly to the running time. The case I'm running into is something
 like this:
 
 function test(a, b, c)
 @map(sqrt(a^2 + b^2) + c, a, b)
 end
 a = randn(10_000_000)
 b = randn(10_000_000)
 c = randn()
 @time test1(a, b ,c)
 @time @map(sqrt(a^2 + b^2) + c, a, b)
 
 elapsed time: 0.096866291 seconds (80004616 bytes allocated)
 
 elapsed time: 4.713548649 seconds (1839991368 bytes allocated, 20.06% gc
 time)
 
 I wonder if I could do something to get closer to the function case
 performance in the global case.
 On Thursday, March 12, 2015 at 5:07:57 AM UTC+2, Tim Holy wrote:
  If your _usage_ of @map is in the global scope, then it has to compile the
  expressions each time you use it. That may or may not be a problem for
  you.
  
  As an alternative that only requires compilation on the first call, try
  FastAnonymous or NumericFuns.
  
  --Tim
  
  On Wednesday, March 11, 2015 02:25:10 PM Johan Sigfrids wrote:
   I've been playing around with creating a @map macro:
   
   indexify(s::Symbol, i, syms) = s in syms ? Expr(:ref, s, i) : s
   indexify(e::Expr, i, syms) = Expr(e.head, e.args[1], [indexify(a, i, syms)
   for a in e.args[2:end]]...)
   indexify(a::Any, i, syms) = a
   macro map(expr, args...)
       quote
           @assert all([map(length, ($(args...),))...] .== length($(args[1])))
           out = Array(typeof($(indexify(expr, 1, args))), size($(args[1])))
           for i in 1:length(out)
               @inbounds out[i] = $(indexify(expr, :i, args))
           end
           out
       end
   end
   
   When used inside a function it is nice and fast (around 40x faster than
   map), but when used in global scope it is twice as slow as map. I assume
   this is because global variables prevent optimization. Now I wonder if
   there is some way to introduce a scope to make the variables in expr
   local?



Re: [julia-users] Swapping two columns (or rows) of an array efficiently

2015-03-12 Thread Ethan Anderes
Just a quick addendum to what Steven wrote: be sure to note that the function 
swapcols mutates the argument X. 

Re: [julia-users] Re: Memory allocation questions

2015-03-12 Thread Phil Tomson


On Thursday, March 12, 2015 at 2:14:34 AM UTC-7, Mauro wrote:

 Julia is not yet very good at producing fast vectorized code which 
 does not allocate temporaries.  The temporaries are what get you here. 

 However, running your example, I get a slightly different *.mem file 
 (which makes more sense to me): 

 - function forward_propagate(nl::NeuralLayer,x::Vector{Float32}) 
 0   nl.hx = x 
 248832000   wx = nl.w * nl.hx 
 348364800   nl.pa = nl.b+wx 
 1094864752   nl.pr = tanh(nl.pa).*nl.scale 
 - end 

 (what version of julia are you running? me: 0.3.6).  So every time 
 forward_propagate is called some temporaries are allocated.  So in 
 performance critical code you have to write loops instead: 

 function forward_propagate(nl::NeuralLayer,x::Vector{Float32}) 
 nl.hx = x # note: nl.hx now points to the same chunk of memory 
 for i=1:size(nl.w,1) 
 nl.pa[i] = 0.; 
 for j=1:size(nl.w,2) 
 nl.pa[i] += nl.w[i,j]*nl.hx[j] 
 end 
 nl.pa[i] += nl.b[i] 
 nl.pr[i] = tanh(nl.pa[i])*nl.scale[i] 
 end 
 end 

 This does not allocate any memory and runs your test case at about 2x 
 the speed. 


Just tried that; I'm seeing a much bigger improvement: it went from 8 seconds 
to 0.5 seconds per image evaluation. Nice improvement!


 Also a note on the code in your first email.  Instead of: 

   for y in 1:img.height 
 @simd for x in 1:img.wid 
   if 1 < x < img.wid 
 @inbounds left   = img.data[x-1,y] 
 @inbounds center = img.data[x,y] 
 @inbounds right  = img.data[x+1,y] 

 you should be able to write: 

   @inbounds for y in 1:img.height 
 @simd for x in 1:img.wid 
   if 1 < x < img.wid 
 left   = img.data[x-1,y] 
 center = img.data[x,y] 
 @inbounds right  = img.data[x+1,y] 

 Just curious, why did you get rid of the @inbounds on the assignments to 
left and center, but not right?
 

 Also, did you check that the @simd works?  I'm no expert on that but my 
 understanding is that most of the time it doesn't work with if-else.  If 
 that is the case, maybe special-case the first and last iteration and 
 run the loop like: @simd for x in 2:img.wid-1 . 


I just did that and I don't see a huge difference there. I'm not sure @simd 
is doing much there, in fact I took it out and nothing changed. Probably 
have to look at the LLVM IR output to see what's happening there.

 In fact that would save 
 you a comparison in each iteration irrespective of @simd. 


Yes, that's a good point.  I think I'll just pre-load those two columns 
(the 1st and last columns of the matrix)


 On Thu, 2015-03-12 at 02:17, Phil Tomson philt...@gmail.com 
 wrote: 
  I transformed it into a single-file testcase: 
  
  # 
  type NeuralLayer 
  w::Matrix{Float32}   # weights 
  cm::Matrix{Float32}  # connection matrix 
  b::Vector{Float32}   # biases 
  scale::Vector{Float32}  # 
  a_func::Symbol # activation function 
  hx::Vector{Float32}  # input values 
  pa::Vector{Float32}  # pre activation values 
  pr::Vector{Float32}  # predictions (activation values) 
  frozen::Bool 
  end 
  
  function forward_propagate(nl::NeuralLayer,x::Vector{Float32}) 
nl.hx = x 
wx = nl.w * nl.hx 
nl.pa = nl.b+wx 
nl.pr = tanh(nl.pa).*nl.scale 
  end 
  
  out_dim = 10 
  in_dim = 10 
  b = sqrt(6) / sqrt(in_dim + out_dim) 
  
  nl = NeuralLayer( 
 float32(2.0b * rand(Float32,out_dim,in_dim) - b), #setup rand 
 weights 
 ones(Float32,out_dim,in_dim), #connection matrix 
   float32(map(x->x*(randbool()?-1:1),rand(out_dim)*rand(1:4))), 
  #biases 
 rand(Float32,out_dim),  # scale 
 :tanh, 
 rand(Float32,in_dim), 
 rand(Float32,out_dim), 
 rand(Float32,out_dim), 
 false 
  ) 
  
  x = ones(Float32,in_dim) 
  forward_propagate(nl,x) 
  clear_malloc_data() 
  for i in 1:(1920*1080) 
forward_propagate(nl,x) 
  end 
  println("nl.pr is: $(nl.pr)") 
  
 # 

  
  Now the interesting part of the  .mem file looks like this: 
  
 - function forward_propagate(nl::NeuralLayer,x::Vector{Float32}) 
  0   nl.hx = x 
  0   wx = nl.w * nl.hx 
348368752   nl.pa = nl.b+wx 
  0   nl.pr = tanh(nl.pa).*nl.scale 
  - end 
  
  I split up the matrix multiply and the addition of bias vector into two 
  separate lines and it looks like it's the vector addition that's 
 allocating 
  all of the memory (which seems surprising, but maybe I'm missing 
 something). 
  
  Phil 



Re: [julia-users] How to introduce scope inside macro

2015-03-12 Thread Johan Sigfrids
But the compile time shouldn't be very big, should it? For some bigger data 
set and more complex computation the compile time should add pretty 
insignificantly to the running time. The case I'm running into is something 
like this:

function test(a, b, c)
@map(sqrt(a^2 + b^2) + c, a, b)
end
a = randn(10_000_000)
b = randn(10_000_000)
c = randn()
@time test1(a, b ,c)
@time @map(sqrt(a^2 + b^2) + c, a, b)

elapsed time: 0.096866291 seconds (80004616 bytes allocated)

elapsed time: 4.713548649 seconds (1839991368 bytes allocated, 20.06% gc time)

I wonder if I could do something to get closer to the function case 
performance in the global case.


On Thursday, March 12, 2015 at 5:07:57 AM UTC+2, Tim Holy wrote:

 If your _usage_ of @map is in the global scope, then it has to compile the 
 expressions each time you use it. That may or may not be a problem for 
 you. 

 As an alternative that only requires compilation on the first call, try 
 FastAnonymous or NumericFuns. 

 --Tim 

 On Wednesday, March 11, 2015 02:25:10 PM Johan Sigfrids wrote: 
  I've been playing around with creating a @map macro: 
  
  indexify(s::Symbol, i, syms) = s in syms ? Expr(:ref, s, i) : s 
  indexify(e::Expr, i, syms) = Expr(e.head, e.args[1], [indexify(a, i, syms)
  for a in e.args[2:end]]...) 
  indexify(a::Any, i, syms) = a 
  macro map(expr, args...) 
      quote 
          @assert all([map(length, ($(args...),))...] .== length($(args[1])))
          out = Array(typeof($(indexify(expr, 1, args))), size($(args[1])))
          for i in 1:length(out) 
              @inbounds out[i] = $(indexify(expr, :i, args)) 
          end 
          out 
      end 
  end 
  
  When used inside a function it is nice and fast (around 40x faster than 
  map), but when used in global scope it is twice as slow as map. I assume 
  this is because global variables prevent optimization. Now I wonder if 
  there is some way to introduce a scope to make the variables in expr 
  local? 



Re: [julia-users] Re: Getting people to switch to Julia - tales of no(?) success

2015-03-12 Thread Isaiah Norton

 The fast speed and short development time were the deciding factors here,
 but the biggest drawback was the risk that the language might die in 5
 years. Any material that I could use if that argument comes up again?


The argument I would make is that Julia already has a fairly substantial
core of user-developers (overlapped, by design) who are passionate and
invested in the language. For example, based on Jiahao's World of Julia
notebook --

http://nbviewer.ipython.org/github/jiahao/ijulia-notebooks/blob/master/2014-06-30-world-of-julia.ipynb

-- we can see that across the Julia ecosystem there are ~30 users who have
more than 500 commits, and ~100 who have more than 100 commits. (this is
also under-counted because there are projects with >100 commits that are
not registered in METADATA for various reasons).

As a very rough point of comparison, consider the number of contributors
who built up the core of scientific Python. There were less than 50 total
contributors over the first ~8 years of NumPy and SciPy development:

http://arokem.github.io/2014/09/05/python-is-still/

The comparison is rough (for a litany of reasons), but the point is that a
solid core of several dozens of users can go a very long way in a project
like this. Julia has (by design) the advantage that knowledge of C is not
required to make substantive contributions to performance-sensitive code,
so the potential contributor base is also much broader.

In this sense the heartbeat for the foreseeable future will hopefully be
driven by factors more similar to early Ruby or Python/NumPy than to
closed-source languages that languished under fickle or dying paymasters
(e.g. Dylan with Apple and Harlequin, or Fortress with Sun). The Ruby and
Python platforms were largely built by passionate users and/or
procrastinating grad students (NumPy specifically). <soapbox>As such, Julia
is arguably a safer bet than a mostly-corporate-backed language unless the
backer is Microsoft/Apple/Google ;)</soapbox>

On Thu, Mar 12, 2015 at 11:16 AM, Ken B ken.bastiaen...@gmail.com wrote:

 Back on topic, I just convinced a client to use Julia with my current
 project. It will be an online image processing tool. The other choices were
 Matlab and Python with C#.

 The fast speed and short development time were the deciding factors here,
 but the biggest drawback was the risk that the language might die in 5
 years. Any material that I could use if that argument comes up again?

 Ken

 On Sunday, 8 March 2015 11:41:12 UTC+1, Joachim Dahl wrote:

 The package is very similar to Gloptipoly or SparsePOP, and it can be
 found here:
 https://github.com/joachimdahl/Polyopt.jl

 It was a design decision to keep the API close to the formulation of the
 Lasserre hierarchy, so that there is a close correspondence between the
 problem you specify and the actual semidefinite problem you solve.  Yalmip
 and SOSTOOL have much more flexible modeling capabilities, but it becomes
 less transparent what the resulting SDP is.

 There is no documentation yet, but the tests show how to use it. There
 are some SOS examples, but actually the toolbox started as a tool for forming
 the Lasserre hierarchy while exploiting chordal sparsity structure.  I
 don't think many things will change, except for perhaps different ways to
 exploit sparsity in SOS certificates;  if you want to solve polynomial
 problems using the Lasserre hierarchy it's probably useful already now, but
 not as an alternative to Yalmip or SOSTOOL.

 The plan is to have it finished by summer and present it at a software
 session at ISMP.

 On Sun, Mar 8, 2015 at 10:45 AM, Davide Lasagna lasagn...@gmail.com
 wrote:

 Joachim, would you share this toolbox for polynomial optimisation? Is it
 on GitHub?
 I guess you wrote something equivalent to yalmip or sostools. Did you
 compare performances?
 Davide





Re: [julia-users] Re: Memory allocation questions

2015-03-12 Thread Phil Tomson


On Thursday, March 12, 2015 at 2:14:34 AM UTC-7, Mauro wrote:

 Julia is not yet very good at producing fast vectorized code which 
 does not allocate temporaries.  The temporaries are what get you here. 

 However, running your example, I get a slightly different *.mem file 
 (which makes more sense to me): 

 - function forward_propagate(nl::NeuralLayer,x::Vector{Float32}) 
 0   nl.hx = x 
 248832000   wx = nl.w * nl.hx 
 348364800   nl.pa = nl.b+wx 
 1094864752   nl.pr = tanh(nl.pa).*nl.scale 
 - end 


I would have guessed it should look more like that; why would the 
multiplication not result in temporaries (in my case)? That was a bit 
mysterious. 


 (what version of julia are you running? me: 0.3.6).  


0.3.4 in my case.

 

 So every time 
 forward_propagate is called some temporaries are allocated.  So in 
 performance critical code you have to write loops instead: 


Will this always be the case or is this a current limitation of the Julia 
compiler? It seems like the more idiomatic, compact code should be handled 
more efficiently. Having to break this out into nested for-loops definitely 
hurts both readability as well as productivity.


 function forward_propagate(nl::NeuralLayer,x::Vector{Float32}) 
 nl.hx = x # note: nl.hx now points to the same chunk of memory 
 for i=1:size(nl.w,1) 
 nl.pa[i] = 0.; 
 for j=1:size(nl.w,2) 
 nl.pa[i] += nl.w[i,j]*nl.hx[j] 
 end 
 nl.pa[i] += nl.b[i] 
 nl.pr[i] = tanh(nl.pa[i])*nl.scale[i] 
 end 
 end 

  

 This does not allocate any memory and runs your test case at about 2x 
 the speed. 

 Also a note on the code in your first email.  Instead of: 

   for y in 1:img.height 
 @simd for x in 1:img.wid 
   if 1 < x < img.wid 
 @inbounds left   = img.data[x-1,y] 
 @inbounds center = img.data[x,y] 
 @inbounds right  = img.data[x+1,y] 

 you should be able to write: 

   @inbounds for y in 1:img.height 
 @simd for x in 1:img.wid 
   if 1 < x < img.wid 
 left   = img.data[x-1,y] 
 center = img.data[x,y] 
 @inbounds right  = img.data[x+1,y] 

 Also, did you check that the @simd works?  I'm no expert on that but my 
 understanding is that most of the time it doesn't work with if-else.  If 
 that is the case, maybe special-case the first and last iteration and 
 run the loop like: @simd for x in 2:img.wid-1 .  In fact that would save 
 you a comparison in each iteration irrespective of @simd. 

 On Thu, 2015-03-12 at 02:17, Phil Tomson philt...@gmail.com 
 wrote: 
  I transformed it into a single-file testcase: 
  
  # 
  type NeuralLayer 
  w::Matrix{Float32}   # weights 
  cm::Matrix{Float32}  # connection matrix 
  b::Vector{Float32}   # biases 
  scale::Vector{Float32}  # 
  a_func::Symbol # activation function 
  hx::Vector{Float32}  # input values 
  pa::Vector{Float32}  # pre activation values 
  pr::Vector{Float32}  # predictions (activation values) 
  frozen::Bool 
  end 
  
  function forward_propagate(nl::NeuralLayer,x::Vector{Float32}) 
nl.hx = x 
wx = nl.w * nl.hx 
nl.pa = nl.b+wx 
nl.pr = tanh(nl.pa).*nl.scale 
  end 
  
  out_dim = 10 
  in_dim = 10 
  b = sqrt(6) / sqrt(in_dim + out_dim) 
  
  nl = NeuralLayer( 
 float32(2.0b * rand(Float32,out_dim,in_dim) - b), #setup rand 
 weights 
 ones(Float32,out_dim,in_dim), #connection matrix 
   float32(map(x->x*(randbool()?-1:1),rand(out_dim)*rand(1:4))), 
  #biases 
 rand(Float32,out_dim),  # scale 
 :tanh, 
 rand(Float32,in_dim), 
 rand(Float32,out_dim), 
 rand(Float32,out_dim), 
 false 
  ) 
  
  x = ones(Float32,in_dim) 
  forward_propagate(nl,x) 
  clear_malloc_data() 
  for i in 1:(1920*1080) 
forward_propagate(nl,x) 
  end 
  println("nl.pr is: $(nl.pr)") 
  
 # 

  
  Now the interesting part of the  .mem file looks like this: 
  
 - function forward_propagate(nl::NeuralLayer,x::Vector{Float32}) 
  0   nl.hx = x 
  0   wx = nl.w * nl.hx 
348368752   nl.pa = nl.b+wx 
  0   nl.pr = tanh(nl.pa).*nl.scale 
  - end 
  
  I split up the matrix multiply and the addition of bias vector into two 
  separate lines and it looks like it's the vector addition that's 
 allocating 
  all of the memory (which seems surprising, but maybe I'm missing 
 something). 
  
  Phil 



[julia-users] Re: 1 - 0.8

2015-03-12 Thread Patrick O'Leary
Julia does not try to hide the complexities of floating-point 
representations, so this is expected. There's a brief section in the manual 
[1] which lists some references on this topic--I personally recommend 
reading "What Every Computer Scientist Should Know About Floating-Point 
Arithmetic" 
<http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.102.244&rep=rep1&type=pdf>,
but the other references are good, too.

[1] 
http://julia.readthedocs.org/en/latest/manual/integers-and-floating-point-numbers/#background-and-references

On Thursday, March 12, 2015 at 3:40:45 PM UTC-5, Hanrong Chen wrote:

 julia> 1-0.8
 0.19999999999999996

 Is this a bug?



[julia-users] 1 - 0.8

2015-03-12 Thread Hanrong Chen
julia> 1-0.8
0.19999999999999996

Is this a bug?


Re: [julia-users] 1 - 0.8

2015-03-12 Thread René Donner
Just the limitations of floating point numbers: 
https://en.wikipedia.org/wiki/Floating_point


On 12.03.2015, at 20:23, Hanrong Chen hc...@cornell.edu wrote:

 julia> 1-0.8
 0.19999999999999996
 
 Is this a bug?



Re: [julia-users] Automatic doc tools for Julia

2015-03-12 Thread Tom Short
Here is an example of documentation for a package I maintain:

https://tshort.github.io/Sims.jl/

Here are examples of docstrings:

https://github.com/tshort/Sims.jl/blob/master/src/sim.jl#L1-L96

Here is the config file for Mkdocs:

https://github.com/tshort/Sims.jl/blob/master/mkdocs.yml

Here is a Julia script that uses the Lexicon package to build the API
documentation from the docstrings:

https://github.com/tshort/Sims.jl/blob/master/docs/build.jl

Here are other packages that use Mkdocs:

https://www.google.com/search?q=mkdocs.yml+jl+site:github.com&ie=utf-8&oe=utf-8


On Thu, Mar 12, 2015 at 10:28 AM, Ismael VC ismael.vc1...@gmail.com wrote:

 tshort, could you provide us an example please?

 On Thursday, 12 March 2015 at 4:59:14 (UTC-6), tshort wrote:

 The Lexicon package works well for me along with Mkdocs.
 On Mar 12, 2015 6:03 AM, Ján Adamčák jada...@gmail.com wrote:

 Hi guys,

 Can I ask you for something like best practice with auto doc tools for
 parsing Julia code? I tried Doxygen and Sphinx, but I think these are not
 good solutions at this time (version 0.3.6). And/or is there some tool to
 generate UML diagrams from Julia code?

 Thanks.

 P.S.:
 My idea with this thread is to generate something like a big manual of
 knowledge on how to use auto doc tools in Julia.




[julia-users] Question: Variable Names lenght / speed

2015-03-12 Thread Julia User
I'm new to Julia, read "Allowed Variable Names" 
http://docs.julialang.org/en/release-0.3/manual/variables/#allowed-variable-names
 
and have three general questions.


   - Is there no limit to the length a variable name can have?
   - Does the length of a variable name in some way affect the speed of a 
   program?
   - Does the length of a variable name in some way affect the memory usage 
   of a program?

Thanks a lot.


[julia-users] Re: Block Matrices: problem enforcing parametric type constraints

2015-03-12 Thread ggggg
BlockMatrix only needs one type parameter to fully specify the type, so you 
should probably only use one type parameter. Like so:

type BlockMatrix{S} <: AbstractMatrix{S}
    A::AbstractMatrix{S}
    B::AbstractMatrix{S}
    C::AbstractMatrix{S}
    D::AbstractMatrix{S}
end


I'm sure someone else can explain in more detail why yours didn't work. 
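One detail worth adding: for the subtype to behave like a matrix (printing, indexing, generic algorithms), AbstractMatrix subtypes also need size and getindex methods. A minimal sketch for the [A B; C D] block layout, assuming the blocks are conformable:

```julia
# Assumes size(A,1) == size(B,1), size(A,2) == size(C,2), etc.
Base.size(M::BlockMatrix) = (size(M.A,1) + size(M.C,1),
                             size(M.A,2) + size(M.B,2))

function Base.getindex(M::BlockMatrix, i::Integer, j::Integer)
    m, n = size(M.A)
    if i <= m
        return j <= n ? M.A[i,j] : M.B[i,j-n]
    else
        return j <= n ? M.C[i-m,j] : M.D[i-m,j-n]
    end
end
```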


Re: [julia-users] Swapping two columns (or rows) of an array efficiently

2015-03-12 Thread Tim Holy
That's a very satisfying result :-).

--Tim

On Thursday, March 12, 2015 07:19:01 PM Milan Bouchet-Valat wrote:
 On Thursday, 12 March 2015 at 11:01 -0500, Tim Holy wrote:
  This is something that many people (understandably) have a hard time
  appreciating, so I think this post should be framed and put up on the
  julia
  wall.
  
  We go to considerable lengths to try to make code work efficiently in the
  general case (check out subarray.jl and subarray2.jl in master some
  time...), but sometimes there's no competing with a hand-rolled version
  for a particular case. Folks should not be shy to implement such tricks
  in their own code.
 Though with the new array views in 0.4, the vectorized version should be
 more efficient than in 0.3. I've tried it, and indeed it looks like
 unrolling is not really needed, though it's still faster and uses less
 RAM:
 
 X = rand(100_000, 5)
 
 function f1(X, i, j)
 for _ in 1:1000
 X[:, i], X[:, j] = X[:, j], X[:, i]
 end
 end
 
 function f2(X, i, j)
 for _ in 1:1000
 a = sub(X, :, i)
 b = sub(X, :, j)
 a[:], b[:] = b, a
 end
 end
 
 function f3(X, i, j)
 for _ in 1:1000
 @inbounds for k in 1:size(X, 1)
 X[k, i], X[k, j] = X[k, j], X[k, i]
 end
 end
 end
 
 
 julia> f1(X, 1, 5); f2(X, 1, 5); f3(X, 1, 5);
 
 julia> @time f1(X, 1, 5)
 elapsed time: 1.027090951 seconds (1526 MB allocated, 3.63% gc time in
 69 pauses with 0 full sweep)
 
 julia> @time f2(X, 1, 5)
 elapsed time: 0.172375013 seconds (390 kB allocated)
 
 julia> @time f3(X, 1, 5)
 elapsed time: 0.155069259 seconds (80 bytes allocated)
 
 
 Regards
 
  --Tim
  
  On Thursday, March 12, 2015 07:49:49 AM Steven G. Johnson wrote:
   As a general rule, with Julia one needs to unlearn the instinct (from
   Matlab or Python) that efficiency == clever use of library functions,
   which turns all optimization questions into "is there a built-in
   function
   for X" (and if the answer is "no" you are out of luck).   Loops are
   fast,
   and you can easily beat general-purpose library functions with your own
   special-purpose code.



Re: [julia-users] Re: Memory allocation questions

2015-03-12 Thread Tim Holy
On Thursday, March 12, 2015 10:31:21 AM Phil Tomson wrote:
 Will this always be the case or is this a current limitation of the Julia 
 compiler? It seems like the more idiomatic, compact code should be handled 
 more efficiently. Having to break this out into nested for-loops definitely 
 hurts both readability as well as productivity.

A big step has been taken in 0.4 with a better garbage collector. Still the 
same amount of memory allocated, but much more efficient at cleanup.

--Tim


[julia-users] Question: Variable Names length / speed

2015-03-12 Thread Ivar Nesje
There seems to be a limit of 524,288 bytes. (See 
https://github.com/JuliaLang/julia/pull/8241)

Naturally we use a little bit more memory for long variable names when parsing, 
but I highly doubt that it is measurable.

At runtime, the length of a variable name does not affect speed.
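A quick way to see this (a sketch; the function names are made up):

```julia
# Two definitions differing only in identifier length. Names exist only
# in the source and the parsed AST; the compiled code is identical, so
# identifier length cannot affect runtime speed.
short(x) = x + 1
a_ridiculously_long_but_perfectly_legal_identifier(x) = x + 1

short(3) == a_ridiculously_long_but_perfectly_legal_identifier(3)  # true
# Comparing code_native output for the two shows the same instructions.
```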

Re: [julia-users] Swapping two columns (or rows) of an array efficiently

2015-03-12 Thread Milan Bouchet-Valat
Le jeudi 12 mars 2015 à 11:01 -0500, Tim Holy a écrit :
 This is something that many people (understandably) have a hard time 
 appreciating, so I think this post should be framed and put up on the julia 
 wall.
 
 We go to considerable lengths to try to make code work efficiently in the 
 general case (check out subarray.jl and subarray2.jl in master some time...), 
 but sometimes there's no competing with a hand-rolled version for a 
 particular 
 case. Folks should not be shy to implement such tricks in their own code.
Though with the new array views in 0.4, the vectorized version should be
more efficient than in 0.3. I've tried it, and indeed it looks like
unrolling is not really needed, though it's still faster and uses less
RAM:

X = rand(100_000, 5)

function f1(X, i, j)
for _ in 1:1000
X[:, i], X[:, j] = X[:, j], X[:, i]
end
end

function f2(X, i, j)
for _ in 1:1000
a = sub(X, :, i)
b = sub(X, :, j)
a[:], b[:] = b, a
end
end

function f3(X, i, j)
for _ in 1:1000
@inbounds for k in 1:size(X, 1)
X[k, i], X[k, j] = X[k, j], X[k, i]
end
end
end


julia> f1(X, 1, 5); f2(X, 1, 5); f3(X, 1, 5);

julia> @time f1(X, 1, 5)
elapsed time: 1.027090951 seconds (1526 MB allocated, 3.63% gc time in
69 pauses with 0 full sweep)

julia> @time f2(X, 1, 5)
elapsed time: 0.172375013 seconds (390 kB allocated)

julia> @time f3(X, 1, 5)
elapsed time: 0.155069259 seconds (80 bytes allocated)


Regards

 --Tim
 
 On Thursday, March 12, 2015 07:49:49 AM Steven G. Johnson wrote:
  As a general rule, with Julia one needs to unlearn the instinct (from
  Matlab or Python) that efficiency == clever use of library functions,
  which turns all optimization questions into "is there a built-in function
  for X" (and if the answer is "no" you are out of luck).   Loops are fast,
  and you can easily beat general-purpose library functions with your own
  special-purpose code.



[julia-users] Block Matrices: problem enforcing parametric type constraints

2015-03-12 Thread Gabriel Mitchell
I have an application where I want to have a type to represent block 
matrices and then call various linear algebra function which are 
specialized to different situations. This can be neatly accomplished by 
writing (for 2x2 blocks)

type BlockMatrix{TA,TB,TC,TD}
A::TA
B::TB
C::TC
D::TD
end

So that you can do things like

A,B,C = rand(4,4),Diagonal([1:1.:4]),Diagonal([2:1.:5])
Δ = rand(2,2)
D = BlockMatrix(Diagonal([1.,2.]),Δ,Diagonal([1.,2.]),Δ)
M= BlockMatrix(A,B,C,D)
typeof(M)
BlockMatrix{Array{Float64,2},Diagonal{Float64},Diagonal{Float64},
BlockMatrix{Diagonal{Float64},Array{Float64,2},Diagonal{Float64},Array{
Float64,2}}}

at which point it is indeed possible to do fine grained dispatch depending 
on block types. Now, I don't want my block matrix type to accept just 
anything though. Even before doing dimension checking I would at least like 
to ensure that all the arguments are of Matrix type. It would also make 
sense that the element type should be enforced. Since the blocks can be 
composed of blocks themselves it would also seem prudent to make sure that 
BlockMatrix has the same abstract type as its super. So I was able to write

type BlockMatrix{S,TA<:AbstractMatrix{S},TB<:AbstractMatrix{S},TC<:
AbstractMatrix{S},TD<:AbstractMatrix{S}} <: AbstractMatrix{S}
A::TA
B::TB
C::TC
D::TD
end

however, I then get

M= BlockMatrix(A,B,B,B)

`BlockMatrix{S,TA<:AbstractArray{S,2},TB<:AbstractArray{S,2},TC<:AbstractArray{S,2},TD<:AbstractArray{S,2}}`
 has no method matching 
BlockMatrix{S,TA<:AbstractArray{S,2},TB<:AbstractArray{S,2},TC<:AbstractArray{S,2},TD<:AbstractArray{S,2}}(::Array{Float64,2},
 ::Diagonal{Float64}, ::Diagonal{Float64}, ::Diagonal{Float64})


Indeed, calling methods(BlockMatrix) reveals that no constructors are 
available. So basically, I am not really sure what is going on, having 
apparently reached the limit of my current parametric type no jutsu. Can 
anyone give any pointer (yes I saw the manual, but still not 
understanding)? I am on 0.3.5.



Re: [julia-users] Re: Memory allocation questions

2015-03-12 Thread Mauro
 you should be able to write: 

   @inbounds for y in 1:img.height 
 @simd for x in 1:img.wid 
   if 1 < x < img.wid 
 left   = img.data[x-1,y] 
 center = img.data[x,y] 
 @inbounds right  = img.data[x+1,y] 

 Just curious, why did you get rid of the @inbounds on the assignments to 
 left and center, but not right?

My mistake, should be `right  = img.data[x+1,y]` without the @inbounds

 Also, did you check that the @simd works?  I'm no expert on that but my 
 understanding is that most of the time it doesn't work with if-else.  If 
 that is the case, maybe special-case the first and last iteration and 
 run the loop like: @simd for x in 2:img.wid-1 . 


 I just did that and I don't see a huge difference there. I'm not sure @simd 
 is doing much there, in fact I took it out and nothing changed. Probably 
 have to look at the LLVM IR output to see what's happening there.

You have to look at the machine code, see
https://software.intel.com/en-us/articles/vectorization-in-julia

  In fact that would save 
 you a comparison in each iteration irrespective of @simd. 


 Yes, that's a good point.  I think I'll just pre-load those two columns 
 (the 1st and last columns of the matrix)





Re: [julia-users] 1 - 0.8

2015-03-12 Thread Jameson Nash
Note that Julia tries to print numbers at full precision by default, except
places like Array formatting where horizontal screen space is at a premium.
Erik's code does print more digits, but it does not provide any more
accuracy for representing the number (this is contrary to most other
programming languages, perhaps excepting Chrome's implementation of
JavaScript)
On Thu, Mar 12, 2015 at 5:47 PM Erik Schnetter schnet...@gmail.com wrote:

 Keeping this a bit less abstract: You can output the numbers 0.2 and 0.8
 with a bit more precision using @sprintf. For Float64, I prefer to output
 values with 17 digits, since this corresponds approximately to the values'
 internal precision.

 julia> @sprintf("%.17f", 0.2)
 0.20000000000000001

 julia> @sprintf("%.17f", 0.8)
 0.80000000000000004

 julia> @sprintf("%.17f", 1-0.8)
 0.19999999999999996

 (I'd really like to use "%.17g" instead, but Julia doesn't support %g
 yet.)

 -erik

 On Mar 12, 2015, at 16:44 , Patrick O'Leary patrick.ole...@gmail.com
 wrote:
 
  Julia does not try to hide the complexities of floating-point
 representations, so this is expected. There's a brief section in the manual
 [1] which lists some references on this topic--I personally recommend
 reading What Every Computer Scientist Should Know About Floating-Point
 Arithmetic, but the other references are good, too.
 
  [1] http://julia.readthedocs.org/en/latest/manual/integers-and-
 floating-point-numbers/#background-and-references
 
  On Thursday, March 12, 2015 at 3:40:45 PM UTC-5, Hanrong Chen wrote:
  julia> 1-0.8
  0.19999999999999996
 
  Is this a bug?

 --
 Erik Schnetter schnet...@gmail.com
 http://www.perimeterinstitute.ca/personal/eschnetter/

 My email is as private as my paper mail. I therefore support encrypting
 and signing email messages. Get my PGP key from https://sks-keyservers.net
 .




Re: [julia-users] 1 - 0.8

2015-03-12 Thread Erik Schnetter
Keeping this a bit less abstract: You can output the numbers 0.2 and 0.8 with a 
bit more precision using @sprintf. For Float64, I prefer to output values with 
17 digits, since this corresponds approximately to the values' internal 
precision.

julia> @sprintf("%.17f", 0.2)
0.20000000000000001

julia> @sprintf("%.17f", 0.8)
0.80000000000000004

julia> @sprintf("%.17f", 1-0.8)
0.19999999999999996

(I'd really like to use "%.17g" instead, but Julia doesn't support %g yet.)

-erik

On Mar 12, 2015, at 16:44 , Patrick O'Leary patrick.ole...@gmail.com wrote:
 
 Julia does not try to hide the complexities of floating-point 
 representations, so this is expected. There's a brief section in the manual 
 [1] which lists some references on this topic--I personally recommend reading 
 What Every Computer Scientist Should Know About Floating-Point Arithmetic, 
 but the other references are good, too.
 
 [1] 
 http://julia.readthedocs.org/en/latest/manual/integers-and-floating-point-numbers/#background-and-references
 
 On Thursday, March 12, 2015 at 3:40:45 PM UTC-5, Hanrong Chen wrote:
 julia> 1-0.8
 0.19999999999999996
 
 Is this a bug?

--
Erik Schnetter schnet...@gmail.com
http://www.perimeterinstitute.ca/personal/eschnetter/

My email is as private as my paper mail. I therefore support encrypting
and signing email messages. Get my PGP key from https://sks-keyservers.net.





[julia-users] Re: Block Matrices: problem enforcing parametric type constraints

2015-03-12 Thread Gabriel Mitchell
@g Sorry, I guess I didn't state my intent that clearly. While your 
example does enforce the Matrix/eltype constraint that is only part of what 
I am after. Having a type parameter for each block is a main thing that I 
am interested in. The reason is that I can write methods that dispatch on 
those types. An example of such a method with an explicit 4-ary structure 
would be

#generic fallback
det(A::Matrix,B::Matrix,C::Matrix,D::Matrix) = det([A B; C D])

#specialized method; should actually check for A invertible, but you get 
the idea
det(A::Diagonal,B::Diagonal,C::Diagonal,D::Diagonal) = 
det(A)*det(D-C*inv(A)*B)

#etc...

In my applications there are at least a dozen situations where certain 
block structure allow for significantly more efficient implementations than 
the generic fallback. One would like to make the calls to these methods 
(det,inv,trace, and so on) with the normal 1-ary argument, that matrix M 
itself. This would be possible if the type information of the blocks could 
be read out of the type of M. I hope this clears up my motivation for the 
above question.



On Thursday, March 12, 2015 at 8:49:05 PM UTC+1, g wrote:

 BlockMatrix only needs one type parameter to fully specify the type, so 
 you should probably only use one type parameter. Like so:

 type BlockMatrix{S} <: AbstractMatrix{S}
   A::AbstractMatrix{S}
   B::AbstractMatrix{S}
   C::AbstractMatrix{S}
   D::AbstractMatrix{S}
 end


 I'm sure someone else can explain in more detail why yours didn't work. 



[julia-users] Re: TSNE error related to blas

2015-03-12 Thread René Donner
I can reproduce this on both 0.3.6 and a 10-day-old master with the 
following code:

using TSne, MNIST
data, labels = traindata()
Y = tsne(data, 2, 50, 1000, 20.0)


Filed an issue here: https://github.com/JuliaLang/julia/issues/10487


my versioninfo():

Julia Version 0.4.0-dev+3639
Commit 7f7e9ae* (2015-03-01 22:49 UTC)
Platform Info:
System: Darwin (x86_64-apple-darwin13.4.0)
CPU: Intel(R) Core(TM) i7-4870HQ CPU @ 2.50GHz
WORD_SIZE: 64
BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell)
LAPACK: libopenblas
LIBM: libopenlibm
LLVM: libLLVM-3.3


Error message:

signal (10): Bus error: 10
dgemm_oncopy_HASWELL at 
/Users/rene/local/devjulia/usr/lib/libopenblas.dylib (unknown line)
inner_thread at /Users/rene/local/devjulia/usr/lib/libopenblas.dylib 
(unknown line)
blas_thread_server at /Users/rene/local/devjulia/usr/lib/libopenblas.dylib 
(unknown line)
_pthread_body at /usr/lib/system/libsystem_pthread.dylib (unknown line)
_pthread_struct_init at /usr/lib/system/libsystem_pthread.dylib (unknown 
line)
/



Re: [julia-users] Re: Memory allocation questions

2015-03-12 Thread René Donner
For inplace matrix multipliation you can also try the in-place BLAS 
operations: 

http://docs.julialang.org/en/release-0.3/stdlib/math/?highlight=at_mul_b#Base.A_mul_B!


Am Donnerstag, 12. März 2015 10:14:34 UTC+1 schrieb Mauro:

 Julia is not yet very good with producing fast vectorized code which 
 does not allocate temporaries.  The temporaries is what gets you here. 

 However, running your example, I get a slightly different a different 
 *.mem file (which makes more sense to me): 

 - function forward_propagate(nl::NeuralLayer,x::Vector{Float32}) 
 0   nl.hx = x 
 248832000   wx = nl.w * nl.hx 
 348364800   nl.pa = nl.b+wx 
 1094864752   nl.pr = tanh(nl.pa).*nl.scale 
 - end 

 (what version of julia are you running, me 0.3.6).  So everytime 
 forward_propagate is called some temporaries are allocated.  So in 
 performance critical code you have write loops instead: 

 function forward_propagate(nl::NeuralLayer,x::Vector{Float32}) 
 nl.hx = x # note: nl.hx now points to the same chunk of memory 
 for i=1:size(nl.w,1) 
 nl.pa[i] = 0.; 
 for j=1:size(nl.w,2) 
 nl.pa[i] += nl.w[i,j]*nl.hx[j] 
 end 
 nl.pa[i] += nl.b[i] 
 nl.pr[i] = tanh(nl.pa[i])*nl.scale[i] 
 end 
 end 

 This does not allocate any memory and runs your test case at about 2x 
 the speed. 

 Also a note on the code in your first email.  Instead of: 

   for y in 1:img.height 
 @simd for x in 1:img.wid 
   if 1 < x < img.wid 
 @inbounds left   = img.data[x-1,y] 
 @inbounds center = img.data[x,y] 
 @inbounds right  = img.data[x+1,y] 

 you should be able to write: 

   @inbounds for y in 1:img.height 
 @simd for x in 1:img.wid 
   if 1 < x < img.wid 
 left   = img.data[x-1,y] 
 center = img.data[x,y] 
 @inbounds right  = img.data[x+1,y] 

 Also, did you check that the @simd works?  I'm no expert on that but my 
 understanding is that most of the time it doesn't work with if-else.  If 
 that is the case, maybe special-case the first and last iteration and 
 run the loop like: @simd for x in 2:img.wid-1 .  In fact that would save 
 you a comparison in each iteration irrespective of @simd. 

 On Thu, 2015-03-12 at 02:17, Phil Tomson philt...@gmail.com 
 wrote: 
  I transformed it into a single-file testcase: 
  
  # 
  type NeuralLayer 
  w::Matrix{Float32}   # weights 
  cm::Matrix{Float32}  # connection matrix 
  b::Vector{Float32}   # biases 
  scale::Vector{Float32}  # 
  a_func::Symbol # activation function 
  hx::Vector{Float32}  # input values 
  pa::Vector{Float32}  # pre activation values 
  pr::Vector{Float32}  # predictions (activation values) 
  frozen::Bool 
  end 
  
  function forward_propagate(nl::NeuralLayer,x::Vector{Float32}) 
nl.hx = x 
wx = nl.w * nl.hx 
nl.pa = nl.b+wx 
nl.pr = tanh(nl.pa).*nl.scale 
  end 
  
  out_dim = 10 
  in_dim = 10 
  b = sqrt(6) / sqrt(in_dim + out_dim) 
  
  nl = NeuralLayer( 
 float32(2.0b * rand(Float32,out_dim,in_dim) - b), #setup rand 
 weights 
 ones(Float32,out_dim,in_dim), #connection matrix 
   float32(map(x->x*(randbool()?-1:1),rand(out_dim)*rand(1:4))), 
  #biases 
 rand(Float32,out_dim),  # scale 
 :tanh, 
 rand(Float32,in_dim), 
 rand(Float32,out_dim), 
 rand(Float32,out_dim), 
 false 
  ) 
  
  x = ones(Float32,in_dim) 
  forward_propagate(nl,x) 
  clear_malloc_data() 
  for i in 1:(1920*1080) 
forward_propagate(nl,x) 
  end 
  println(nl.pr is: $(nl.pr)) 
  
 # 

  
  Now the interesting part of the  .mem file looks like this: 
  
 - function forward_propagate(nl::NeuralLayer,x::Vector{Float32}) 
  0   nl.hx = x 
  0   wx = nl.w * nl.hx 
348368752   nl.pa = nl.b+wx 
  0   nl.pr = tanh(nl.pa).*nl.scale 
  - end 
  
  I split up the matrix multiply and the addition of bias vector into two 
  separate lines and it looks like it's the vector addition that's 
 allocating 
  all of the memory (which seems surprising, but maybe I'm missing 
 something). 
  
  Phil 



[julia-users] Help: Too many open files -- How do I figure out which files are open?

2015-03-12 Thread Daniel Carrera
Hello,

My program is dying with the error message:

ERROR: opening file ...file name...: Too many open files
 in open at ./iostream.jl:117
 in open at ./iostream.jl:125
 ...

I have reviewed my program and as far as I can tell, everywhere that I open 
a file I close it immediately. I need to query more information to debug 
this. Is there a way that I can get information about the files that are 
currently open? Or at least the *number* of open files? With that I could 
at least sprinkle my program with "print open files" statements and trace 
the bug.


Cheers,
Daniel.






[julia-users] Re: Help: Too many open files -- How do I figure out which files are open?

2015-03-12 Thread Daniel Carrera


On Thursday, 12 March 2015 12:07:36 UTC+1, René Donner wrote:

 More a hint than a direct answer: are you using the do syntax for 
 opening the files?

 open("somefile", "w") do file
  write(file, ...);
  read(file, );
 end


No, I'm using close(file) or open(readlines, ...) when I just need to 
read the lines. I'll replace the close(file) with the do-form.


 To really list the open files of your process this should help: 
 http://www.cyberciti.biz/faq/howto-linux-get-list-of-open-files/


Thanks! 

With that I was able to debug the problem in a snap. It turns out that one 
of my functions had readlines(open(...)) instead of 
open(readlines,...). The critical difference is that the former leaves a 
file pointer dangling.
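A minimal sketch of an open-file counter to sprinkle through code while hunting this kind of leak (assumes a Linux-style /proc filesystem, so it is not portable to OS X or Windows; the helper name is made up):

```julia
# Count this process's currently open file descriptors by listing
# /proc/self/fd (Linux only; each entry is one open descriptor).
nopenfds() = length(readdir("/proc/self/fd"))

println("open file descriptors: ", nopenfds())
```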

Cheers,
Daniel.
 


Re: [julia-users] Re: webpy equivalent

2015-03-12 Thread Keith Campbell
You can clone the repo from the '...' icon on the left, or download it 
using cloud icon on the left.

On Thursday, March 12, 2015 at 4:13:34 AM UTC-4, paul analyst wrote:

  Thx for the info, looks nice,
 but I am using Win7, usually using Pkg.add. How to install skeleton-webapp 
 on my Julia under Windows?

 julia> Pkg.available()

 551-element Array{ASCIIString,1}:
  AffineTransforms
 ...
 Sims
 SIUnits
 SliceSampler
 Smile
 SmoothingKernels
 SMTPClient
 Snappy
 Sobol
 ...

 No skeleton-webapp 
 Paul

 On 2015-03-11 at 21:28, jock@gmail.com wrote:
  
 Hi Jonathan,

 Uncanny timing. Here https://bitbucket.org/jocklawrie/skeleton-webapp.jl 
 is an example of Julia working with the Mustache.jl package which I posted 
 just a couple of hours before your post. It works fine but is not nearly as 
 mature as webpy. Hope you find it helpful.

 Cheers,
 Jock


 On Friday, November 1, 2013 at 3:00:25 AM UTC+11, Jonathan Malmaud wrote: 

 Anywhere aware of a package that implements a light-weight HTTP server 
 framework for Julia? Something like webpy + and a templating engine like 
 jinja2?

  
  

[julia-users] Re: webpy equivalent

2015-03-12 Thread jock . lawrie
Yep, clone it.

Also, note the updated README - it gives a crash course in web app 
development (as I understand it, which may be flawed) aimed at data 
scientists who have never built a web app before. Hope it helps.


On Friday, November 1, 2013 at 3:00:25 AM UTC+11, Jonathan Malmaud wrote:

 Anywhere aware of a package that implements a light-weight HTTP server 
 framework for Julia? Something like webpy + and a templating engine like 
 jinja2?



[julia-users] Re: RFC: JuliaWeb Roadmap + Call for Contributors

2015-03-12 Thread jock . lawrie
Hi Avik,

I've updated the README to explain web app development as I understand it 
(which may be flawed). It is aimed at data scientists who have never built 
a web app before. It falls short of explicitly explaining the stack APIs 
but should enable readers to follow what's happening anyway, as well as 
adapt the example to their own needs. Another shortcoming is that I haven't 
explained how to modify the html templates.

Thoughts?


On Friday, February 13, 2015 at 9:57:56 AM UTC+11, Iain Dunning wrote:

 Hi all,

 TL;DR: 
 - New JuliaWeb roadmap: https://github.com/JuliaWeb/Roadmap/issues
 - Please consider volunteering on core JuliaWeb infrastructure (e.g. 
 Requests, GnuTLS), esp. by adding tests/docs/examples.

 ---

 *JuliaWeb* (https://github.com/JuliaWeb) is a collection of 
 internet-related packages, including HTTP servers/parsers/utilities, IP 
 address tools, and more.

 Many of these packages were either created by rockstar ninja guru 
 developer Keno, or by students at Hacker School. Some of these packages, 
 like Requests.jl/HttpParser.jl/GnuTLS.jl/... are almost surely installed on 
 your system, but some (e.g. GnuTLS.jl) haven't really been touched much 
 since they were created and aren't actively maintained. For such core 
 packages, it isn't fair to put all the burden on one developer.

 On a personal level, I've been trying to help out where I can by merging 
 PRs, but this web stuff isn't really my strength, and I'm not really able 
 to effectively triage the issues that have built up on some of these 
 packages. So heres what we're (Seth Bromberger has been part of this too) 
 doing:

 - We've made a *roadmap repo for JuliaWeb* to discuss some of these 
 issues and co-ordinate limited resources: 
 https://github.com/JuliaWeb/Roadmap/issues . We'd like to hear your 
 perspectives!

 - *We want you!* You don't have to be a Julia master - you can even start 
 just by reading the code of one of these packages, and then adding some 
 tests or documentation. Maybe you'll even get comfortable to add features! 
 Right now, the focus is definitely on maintenance and making sure what's 
 there works (on Julia 0.3 and 0.4!). Your Pull Requests are very welcome!



Re: [julia-users] Different ways to create a vector of strings in Julia

2015-03-12 Thread Tamas Papp
Hi Charles,

Array{String} is a type, not an array:

julia> isa(Array{String}, Type)
true

julia> typeof(Array{String})
DataType
DataType

See http://docs.julialang.org/en/release-0.3/manual/types/ .

The function Array(eltype, dimensions...) can be used to instantiate an
object of this type, which is the convention in Julia, so IMO you should
just keep using that.

Best,

Tamas


On Thu, Mar 12 2015, Charles Novaes de Santana wrote:

 Dear all,

 I was trying to create a vector of strings in Julia and I didn't understand
 why the following ways give me different results.

 Everything is fine If I try the following two approaches:
 names = String[];
 push!(names,word1);

 names = Array(String,0);
 push!(names,word1);

 However, I supposed this other way should work too, but it didn't:
 names = Array{String};
 push!(names,word2);

 It gives me the following error: ERROR: `push!` has no method matching
 push!(::Type{Array{String,N}}, ::ASCIIString)

 Why is String[] and Array(String,0) different from Array{String}?

 Thanks for any help!

 Best,

 Charles

 --
 Um axé! :)


Re: [julia-users] Different ways to create a vector of strings in Julia

2015-03-12 Thread Mauro
 However, I supposed this other way should work too, but it didn't:
 names = Array{String};
 push!(names,word2);

 It gives me the following error: ERROR: `push!` has no method matching
 push!(::Type{Array{String,N}}, ::ASCIIString)

 Why is String[] and Array(String,0) different from Array{String}?

`Array{String}` is a type (abstract at that), but you want an instance
of that type.  So, what you would want to do is `Array{String,1}()`.
That ought to construct an array of strings with dimension 1, but doesn't.
At least that is what most constructors do, e.g. `Dict{Int,String}()`.
However, there are no array constructors which look like this; you
construct them with the syntax you used in the first two examples:

 names = String[];
 push!(names,word1);

 names = Array(String,0);
 push!(names,word1);

Have a look at the Constructors section of the manual.
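To summarize the working forms on 0.3 in one place (a sketch; the note about `Vector{String}()` landing in 0.4 is an assumption about the upcoming release):

```julia
# Equivalent ways to get an empty 0-element Array{String,1} on 0.3:
names1 = String[]          # typed empty-array literal
names2 = Array(String, 0)  # explicit zero-length constructor call
push!(names1, "word1")     # both accept push! as usual
# On 0.4, Vector{String}() is expected to work as well.
```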


Re: [julia-users] sum array

2015-03-12 Thread Mauro
 Hi
 Mauro - thanks for that, as it makes it clear what's happening under the 
 bonnet. So, what if you then wanted to sum...
 1.4827   
  1.48069 
  0.884897 
  1.22739 
  is that possible or am I being a bit dumb here.

Just add another sum(ans) after the two statements below; that then sums the 
 4-element Array{Float64,1}: 
  1.4827   
  1.48069 
  0.884897 
  1.22739 

 On Thursday, 12 March 2015 10:59:34 UTC, Mauro wrote:

  Can I still sum? 


 Maybe it's clearer like this: 
 julia [ x[i-4:i-1] for i = [6,7,8]] 
 3-element Array{Array{Float64,1},1}: 
  [0.392471,0.775959,0.314272,0.390463] 
  [0.775959,0.314272,0.390463,0.180162] 
  [0.314272,0.390463,0.180162,0.656762] 

 julia> sum(ans) 
 4-element Array{Float64,1}: 
  1.4827   
  1.48069 
  0.884897 
  1.22739 

 So 1.4827 = ans[1][1]+ans[2][1]+ans[3][1] 

  On Thursday, 12 March 2015 09:50:19 UTC, Mauro wrote: 
  
   const x = rand(8) 
   [ x[i-4:i-1] for i = 6] .. this gives me a 4 element array. 
  
  This seems a bit odd, what are you trying to achieve here?  Anyway it 
  produces an Array{Array{Float64,1},1}, i.e. an array of arrays 
 containing 
  one array. 
  
   I now want to sum the output - this is what I tried ... 
   sum([ x[i-4:i-1] for i = 6]) ... what am I doing wrong? 
  
  This sums all first elements, second elements, etc.  As there is only 
 one 
  array in the array, it doesn't do all that much. 
  





[julia-users] Re: How can I convert a set into an array?

2015-03-12 Thread Ali Rezaee
Thanks Rene and Kevin.
The unique function is what I needed.

On Wednesday, March 11, 2015 at 5:05:08 PM UTC+1, Ali Rezaee wrote:

 In Python I would normally to something like this:

 a = set([1,2,1,3])
 a = list(a)

 What is the equivalent way to do this in Julia?


 Thanks a lot in advance for your help



Re: [julia-users] Automatic doc tools for Julia

2015-03-12 Thread Tom Short
The Lexicon package works well for me along with Mkdocs.
On Mar 12, 2015 6:03 AM, Ján Adamčák jadam...@gmail.com wrote:

 Hi guys,

 Can I ask you for something like best practice with auto doc tools for
 parsing Julia code? I tried Doxygen and Sphinx, but I think these are not
 good solutions at this time (version 0.3.6). And/or is there some tool to generate
 UML diagrams from Julia code?

 Thanks.

 P.S.:
 My idea with this thread is to generate something like a big manual of knowledge
 on how to use auto doc tools in Julia.



Re: [julia-users] sum array

2015-03-12 Thread Mauro
 Can I still sum?


Maybe it's clearer like this:
julia> [ x[i-4:i-1] for i = [6,7,8]]
3-element Array{Array{Float64,1},1}:
 [0.392471,0.775959,0.314272,0.390463]
 [0.775959,0.314272,0.390463,0.180162]
 [0.314272,0.390463,0.180162,0.656762]

julia> sum(ans)
4-element Array{Float64,1}:
 1.4827  
 1.48069 
 0.884897
 1.22739 

So 1.4827 = ans[1][1]+ans[2][1]+ans[3][1]

 On Thursday, 12 March 2015 09:50:19 UTC, Mauro wrote:

  const x = rand(8) 
  [ x[i-4:i-1] for i = 6] .. this gives me a 4 element array. 

 This seems a bit odd, what are you trying to achieve here?  Anyway it 
 produces an Array{Array{Float64,1},1}, i.e. an array of arrays containing 
 one array. 

  I now want to sum the output - this is what I tried ... 
  sum([ x[i-4:i-1] for i = 6]) ... what am I doing wrong? 

 This sums all first elements, second elements, etc.  As there is only one 
 array in the array, it doesn't do all that much. 




[julia-users] Different ways to create a vector of strings in Julia

2015-03-12 Thread Charles Novaes de Santana
Dear all,

I was trying to create a vector of strings in Julia and I didn't understand
why the following ways give me different results.

Everything is fine If I try the following two approaches:
names = String[];
push!(names,word1);

names = Array(String,0);
push!(names,word1);

However, I supposed this other way should work too, but it didn't:
names = Array{String};
push!(names,word2);

It gives me the following error: ERROR: `push!` has no method matching
push!(::Type{Array{String,N}}, ::ASCIIString)

Why is String[] and Array(String,0) different from Array{String}?

Thanks for any help!

Best,

Charles

-- 
Um axé! :)

--
Charles Novaes de Santana, PhD
http://www.imedea.uib-csic.es/~charles


Re: [julia-users] Automatic doc tools for Julia

2015-03-12 Thread Julia User
@tshort

Thanks a lot for your examples. I just started with Mkdocs and it looks 
quite nice - but some additional examples like yours are very helpful 
indeed.



On Thursday, March 12, 2015 at 12:26:08 PM UTC-3, tshort wrote:

 Here is an example of documentation for a package I maintain:

 https://tshort.github.io/Sims.jl/

 Here are examples of docstrings:

 https://github.com/tshort/Sims.jl/blob/master/src/sim.jl#L1-L96

 Here is the config file for Mkdocs:

 https://github.com/tshort/Sims.jl/blob/master/mkdocs.yml

 Here is a Julia script that uses the Lexicon package to build the API 
 documentation from the docstrings:

 https://github.com/tshort/Sims.jl/blob/master/docs/build.jl

 Here are other packages that use Mkdocs:


 https://www.google.com/search?q=mkdocs.yml+jl+site:github.comie=utf-8oe=utf-8


 On Thu, Mar 12, 2015 at 10:28 AM, Ismael VC ismael...@gmail.com 
 wrote:

 tshort, could you provide us an example please?

 El jueves, 12 de marzo de 2015, 4:59:14 (UTC-6), tshort escribió:

 The Lexicon package works well for me along with Mkdocs.
 On Mar 12, 2015 6:03 AM, Ján Adamčák jada...@gmail.com wrote:

 Hi guys,

 Can I ask you for something like best practice with auto doc tools for 
 parsing Julia code? I tried Doxygen and Sphinx, but I think these are not 
 good solutions at this time (version 0.3.6). And/or is there some tool to generate 
 UML diagrams from Julia code?

 Thanks.

 P.S.:
 My idea with this thread is to generate something like a big manual of 
 knowledge on how to use auto doc tools in Julia.




[julia-users] Best practices for migrating 0.3 code to 0.4? (specifically constructors)

2015-03-12 Thread Phil Tomson
I thought I'd give 0.4 a spin to try out the new garbage collector.

On my current codebase developed with 0.3 I ran into several warnings 
(*float32() 
should now be Float32()* - that sort of thing)

And then this error:






ERROR: LoadError: LoadError: LoadError: LoadError: LoadError: MethodError: 
`convert` has no method matching convert(::Type{Img.ImgHSV}, ::UTF8String)
This may have arisen from a call to the constructor Img.ImgHSV(...),
since type constructors fall back to convert methods.
Closest candidates are:
  convert{T}(::Type{T}, ::T)
After poking around New Language Features and the list here a bit it seems 
that there are changes to how overloaded constructors work.

In my case I've got:

type ImgHSV
  name::ASCIIString
  data::Array{HSV{Float32},2}  
  #data::Array{IntHSV,2}  
  height::Int64
  wid::Int64
  h_mean::Float32
  s_mean::Float32
  v_mean::Float32
  h_std::Float32
  s_std::Float32
  v_std::Float32
end

# Given a filename of an image file, construct an ImgHSV
function ImgHSV(fn::ASCIIString)
  name,ext = splitext(basename(fn))
  source_img_hsv = Images.data(convert(Image{HSV{Float64}},imread(fn)))
  #scale all the values up from (0-1) to (0-255)
  source_img_scaled = map(x-> HSV( ((x.h/360)*255),(x.s*255),(x.v*255)),
  source_img_hsv)
  img_ht  = size(source_img_hsv,2)
  img_wid = size(source_img_hsv,1)
  h_mean = (mean(map(x-> x.h,source_img_hsv)/360)*255)
  s_mean = (mean(map(x-> x.s,source_img_hsv))*255)
  v_mean = (mean(map(x-> x.v,source_img_hsv))*255)
  h_std  = (std(map(x-> x.h,source_img_hsv)/360)*255)
  s_std  = (std(map(x-> x.s,source_img_hsv))*255)
  v_std  = (std(map(x-> x.v,source_img_hsv))*255)
  ImgHSV(
name,
float32(source_img_scaled),
img_ht,
img_wid,
h_mean,
s_mean,
v_mean,
h_std,
s_std,
v_std
  )
end

Should I rename this function to something like buildImgHSV so it's not 
actually a constructor and *convert* doesn't enter the picture?

Phil



[julia-users] Indenting by 2 spaces in ESS[Julia]

2015-03-12 Thread Shivkumar Chandrasekaran
I have tried all possible combinations of suggestions from the ESS manual 
to get the ESS[Julia] mode to convert indentation to 2 spaces rather than 4 
with no luck. Has anybody else succeeded, and if so could you please post 
your magic sauce? Thanks. --shiv--

