[julia-users] Carriage return without new line

2014-04-23 Thread RecentConvert
I would like to periodically output a progress percentage but don't want a 
huge list of numbers cluttering the rest of the outputs. How can this be 
accomplished?

print("hello")
print("\rworld")


This doesn't work because \r adds a new line.
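For reference, the effect being attempted looks something like the sketch below. It relies on the terminal moving the cursor back to the start of the line on \r, which, as the replies show, did not work in the Windows REPL at the time:

```julia
# Overwrite the same line instead of printing a new one each time.
for pct in 0:10:100
    print("\rProgress: ", pct, "%")
    sleep(0.1)   # stand-in for real work
end
println()        # finish with a newline so later output starts cleanly
```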


Re: [julia-users] Carriage return without new line

2014-04-23 Thread Andreas Noack Jensen
Not on my machine

julia> print("hello");print("\rworld!")
world!

Which operating system and which version of Julia are you running?






--
Kind regards

Andreas Noack Jensen


Re: [julia-users] Carriage return without new line

2014-04-23 Thread RecentConvert
Julia Version 0.2.1 

Commit e44b593* (2014-02-11 06:30 UTC)

Platform Info:

  System: Windows (x86_64-w64-mingw32)

Windows 7



Re: [julia-users] Carriage return without new line

2014-04-23 Thread Tobias Knopp
I can confirm this behavior under the 0.3 prerelease on Windows. The following 
works, however:

julia> print("hello");print("\b\b\b\b\bworld!")
world!

I think this should be filed as an issue at 
https://github.com/JuliaLang/julia 



Re: [julia-users] Carriage return without new line

2014-04-23 Thread RecentConvert
print("hello");print("\b\b\b\b\bworld!")


This works in the basic terminal but *not* in Julia Studio.


Re: [julia-users] Carriage return without new line

2014-04-23 Thread Tobias Knopp
Julia Studio is an entirely different issue as they might have implemented 
their own REPL (terminal).
The issue should be reported here: 
https://github.com/forio/julia-studio/issues.



Re: [julia-users] Carriage return without new line

2014-04-23 Thread Tim Holy
Pkg.add("ProgressMeter")
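For the archives, a minimal sketch of how the package is typically used, assuming its Progress/next! API:

```julia
using ProgressMeter

n = 100
p = Progress(n)      # progress display for n iterations
for i in 1:n
    sleep(0.01)      # stand-in for real work
    next!(p)         # updates a single line in place
end
```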

--Tim



[julia-users] julia nightlies current version

2014-04-23 Thread Földes László
Do the Julia nightlies still update every day, as they did around 
January/February? I haven't received updates for a long time, and I want to 
investigate whether a failed Xubuntu upgrade killed the sources, or whether 
the package really isn't updating.

Thanks

> versioninfo()
Julia Version 0.3.0-prerelease
Platform Info:
  System: Linux (i686-linux-gnu)
  CPU: Intel(R) Core(TM) i5-3320M CPU @ 2.60GHz
  WORD_SIZE: 32
  BLAS: libblas.so.3
  LAPACK: liblapack.so.3
  LIBM: libopenlibm



Re: [julia-users] Carriage return without new line

2014-04-23 Thread Tobias Knopp
Awesome! Did not know that.



Re: [julia-users] julia nightlies current version

2014-04-23 Thread cnbiz850
I think you are running Xubuntu Raring, which reached end of life, so they 
stopped building nightlies for that OS.  I am on Trusty and am getting the 
nightlies for Saucy.






Re: [julia-users] All packages for numerical math

2014-04-23 Thread Tomas Lycken
The trapezoidal rule (http://en.wikipedia.org/wiki/Trapezoidal_rule) would 
probably be almost trivial to implement.

function trapz{T<:Real}(x::Vector{T}, y::Vector{T})
    if length(y) != length(x)
        error("Vectors must be of same length")
    end
    sum( (x[2:end] .- x[1:end-1]) .* (y[2:end] .+ y[1:end-1]) ) / 2
end

x = [0:0.01:pi]
y = sin(x)

trapz(x,y) # 1.820650436642

This, of course, only currently works on vectors of real numbers, but it's 
easy to extend it if you want.

And there might be more accurate methods as well, of course (see 
e.g. http://en.wikipedia.org/wiki/Simpson%27s_rule) but this one's often 
"good enough".
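For comparison, a composite Simpson's rule along the same lines might look like this (a sketch, assuming uniformly spaced samples and an odd number of points):

```julia
# Composite Simpson's rule for uniformly spaced samples.
function simps{T<:Real}(x::Vector{T}, y::Vector{T})
    n = length(y)
    n == length(x) || error("Vectors must be of same length")
    isodd(n)       || error("Simpson's rule needs an odd number of points")
    h = (x[end] - x[1]) / (n - 1)   # uniform grid spacing
    h/3 * (y[1] + y[end] + 4*sum(y[2:2:n-1]) + 2*sum(y[3:2:n-2]))
end
```

On smooth data like the sin example above, this should typically give a smaller error than the trapezoidal rule.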

// T


[julia-users] Inconsistency with single-line use of the "do" keyword?

2014-04-23 Thread Klaus-Dieter Bauer
I noticed that the "do" keyword behaves weirdly when trying to use it on a 
single line, e.g. in the REPL:

When the "do" keyword is used to create a function with one or more 
arguments everything works fine. 
julia> map([1,2,3]) do x 1+x end
3-element Array{Int64,1}:
 2
 3
 4

Without arguments, multi-line usage works fine ...
julia> cd("c:/tmp") do

   println(1)
   end
1

... but single line usage fails ...
julia> cd("c:/tmp") do println(1) end
ERROR: syntax: malformed function arguments (call println 1)

... and a ; cannot be used to fix it:
julia> cd("c:/tmp") do; println(1) end
ERROR: syntax: unexpected ;

Is this an actual inconsistency in the parsing or am I getting something 
wrong about how julia handles single-line code? So far it is the first 
example where I found a line-break to be both significant and not 
replaceable by a semi-colon. 

I am using the julia-0.2.1-win64 binary release. 
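Incidentally, there is a one-line workaround: `do` is sugar for passing an anonymous function as the first argument, and that form parses fine on a single line:

```julia
# Equivalent calls without the do-block syntax:
cd(() -> println(1), "c:/tmp")
map(x -> 1 + x, [1, 2, 3])
```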


Re: [julia-users] Inconsistency with single-line use of the "do" keyword?

2014-04-23 Thread Stefan Karpinski
Oops: https://github.com/JuliaLang/julia/issues/new – that's what I get for
entering URLs by hand.




Re: [julia-users] Inconsistency with single-line use of the "do" keyword?

2014-04-23 Thread Stefan Karpinski
Would you mind opening an issue about it?
https://github.com/JuliaLang/issues/new




[julia-users] output sharing memory with input

2014-04-23 Thread Ethan Anderes
OK, so I've gotten no hits on this question. Let me try to make it more concrete:

Is there a command that can tell me whether the variables `a` and `b` in the 
following commands refer to the same memory:

a = rand(2,2)
b = vec(a)

The command is(a,b) returns false. The documentation for vec gives no 
indication that `a` and `b` are coupled.

The reason I ask is that as array views get implemented, I'm worried that 
functions like diagm, hcat and transpose will return values which are subtly 
coupled with the arguments used to call them. To avoid this subtle coupling 
between variables in my code, do I need to append `copy` each time I call 
these? At the very least it would be nice to have something like whos() that 
could show me which variables are coupled in my current namespace.

Thanks,
Ethan


[julia-users] Re: output sharing memory with input

2014-04-23 Thread Steven G. Johnson


pointer(a) == pointer(b) returns true.
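To see the sharing directly (behavior as of 0.2/0.3, where vec returns a reshaped wrapper around the same buffer):

```julia
a = rand(2, 2)
b = vec(a)
pointer(a) == pointer(b)   # true: same underlying buffer
b[1] = -1.0
a[1, 1]                    # now -1.0 as well
```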


[julia-users] Re: output sharing memory with input

2014-04-23 Thread Ethan Anderes
Thanks Steven. That helps. So I can infer that no two variables can share 
overlapping memory without their pointers being the same?


Re: [julia-users] output sharing memory with input

2014-04-23 Thread Jameson Nash
No, that is not necessarily true. For example, a subarray could point
somewhere inside the array, and most objects don't have a pointer method.

The easiest assumption (which is probably also typically correct) is that
the output of any function shares part of the memory of its inputs.
Therefore, after passing an object into a function, it is generally best to
make a copy if you want to mutate the object. The ! convention perhaps
actually indicates the opposite: after the function call you retain full
freedom to mutate the input arguments.
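A small illustration of the subarray case (a sketch using sub from 0.2/0.3):

```julia
a = rand(4)
s = sub(a, 2:3)   # a view into the middle of a
s[1] = 0.0
a[2]              # now 0.0, even though s does not begin at a's first element
```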




Re: [julia-users] output sharing memory with input

2014-04-23 Thread Tim Holy
Still, if you're using Arrays, then under typical conditions there is 
absolutely no overlap between a and b unless pointer(a)==pointer(b). You can 
violate that yourself if you want to (using pointer_to_array), but in such 
circumstances you presumably know what you are doing.

--Tim



Re: [julia-users] output sharing memory with input

2014-04-23 Thread Ethan Anderes
OK, I'm trying to reconcile Jameson's suggestion to generally assume an 
overlap with Tim's to expect no overlap between arrays if pointer(a) != 
pointer(b). Doesn't this imply that pointer(input) == pointer(output) for a 
typical array function? With subarrays I fully expect shared memory, just from 
the type of the output. I guess I had hoped that vec would have returned an 
array-view or subarray type to indicate what is happening.

Side note: I've been preaching Julia to everyone I meet. I usually start the 
sermon with "when you start using Julia it will initially behave/feel very much 
like R or Matlab,  but then when you need the full power of a fast modern 
language it is there for you". Can I really say the first part of the sermon if 
I need to tell them to generally expect outputs sharing memory with inputs 
(which, to my knowledge, is very different from how matlab and R behave)? 

I hope I'm not sounding critical (cuz I love the language). Mainly, I want to 
really understand how things work so I can better sell it to students and 
colleagues.

Cheers,
Ethan


Re: [julia-users] output sharing memory with input

2014-04-23 Thread Tim Holy
Jameson said if you're using subarrays, you can't count on it; I said if 
they're both arrays, then you can rely on it (unless you've done something 
sneaky). So depending on the types, either could be true.

In my personal opinion, the risk of memory sharing is quite a lot lower, in 
practice, than Jameson's comment might have suggested. Unless I'm missing 
something, in common usage there are three main situations where this is an 
issue:
- reshaping (reshape, vec, etc)
- reinterpret
- allocation, e.g., mistakenly initializing a Vector{Vector{T}} with the same 
underlying array for each element.

The first two don't currently return views, presumably because views don't 
currently have the performance we want them to have. But I agree it might 
become less confusing when we can return a view. Of course, once we switch to 
returning  views from indexing operations (rather than copies, like we 
currently do), there will be new cases that might cause confusion.
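Hypothetical sketches of the three situations, for concreteness:

```julia
a = rand(4)

# 1. Reshaping: same buffer, different shape.
r = reshape(a, 2, 2)
r[1, 1] = 0.0                # a[1] is now 0.0 too

# 2. Reinterpret: the same bytes viewed as another bits type.
i = reinterpret(Uint64, a)   # i and a alias the same memory

# 3. Accidental aliasing at allocation time:
bad = fill(zeros(3), 2)      # both elements are the *same* vector
bad[1][1] = 1.0              # so bad[2][1] is now 1.0 as well
```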

--Tim



Re: [julia-users] output sharing memory with input

2014-04-23 Thread Tobias Knopp
While Julia shares a lot with Matlab in terms of syntax, there are several 
differences, so I don't see being different as a drawback.
For me it is more important that Julia scales well to large projects, and 
using array views can help reduce memory consumption in several situations. 
But you are of course right that it can be a little confusing for beginners.

By the way, for a NumPy user Julia's memory-sharing concept is not 
surprising.

>

Re: [julia-users] Re: How to use GLPK.exact ?

2014-04-23 Thread Stéphane Laurent
Right, it works. Thank you. 
If I don't call GLPKMathProgInterface, does JuMP use an internal solver? 


Le mardi 22 avril 2014 23:25:07 UTC+2, Carlo Baldassi a écrit :
>
> Note that you can still use GLPK.exact with JuMP, you just need to change 
> the m=Model() line to this:
>
> using GLPKMathProgInterface
> m = Model(solver=GLPKSolverLP(method=:Exact))
>
> while all the rest stays the same.
>
> As an aside, it's really kind of annoying that GLPK.exact uses (basically) 
> Rational{BigInt} internally, but the interface does not allow access to 
> this. Seems a waste.
>
>
> On Tuesday, April 22, 2014 8:28:01 PM UTC+2, Stéphane Laurent wrote:
>>
>> Miles, I have successfully installed JuMP and GLPKMathProgInterface on 
>> Windows 32-bit. 
>>
>> Your code works very well, this is really awesome !! However the result 
>> is not as precise as the one given by *GLPK.exact*.
>>
>> using JuMP 
>>
>>  mu = [1/7, 2/7, 4/7]
>>  nu = [1/4, 1/4, 1/2]
>>  n = length(mu)
>>  
>>  m = Model()
>>  @defVar(m, p[1:n,1:n] >= 0)
>>  @setObjective(m, Min, sum{p[i,j], i in 1:n, j in 1:n; i != j})
>>  
>>  for k in 1:n
>>  @addConstraint(m, sum(p[k,:]) == mu[k])
>>  @addConstraint(m, sum(p[:,k]) == nu[k])
>>  end
>>  solve(m)
>>
>>
>> julia> println("Optimal objective value is:", getObjectiveValue(m))
>> Optimal objective value is:0.10714285714285715
>>
>> julia> 3/28
>> 0.10714285714285714
>>
>>
>>
>>
>>
>>
>> Le jeudi 10 avril 2014 01:28:41 UTC+2, Miles Lubin a écrit :
>>>
>>> When we have a simplex solver (either in Julia or external) that 
>>> supports rational inputs, we could consider making this work with JuMP, but 
>>> for now JuMP stores all data as floating-point as well. 
>>>
>>> Stephane, nice work. LP definitely needs more exposure in the 
>>> probability community. Please please write your LPs algebraically, there's 
>>> really no excuse not to do this in Julia when your original model is in 
>>> this form.
>>>
>>> Compare this:
>>>
>>> using JuMP
>>> m = Model()
>>> @defVar(m, p[1:n,1:n] >= 0)
>>> @setObjective(m, Max, sum{p[i,j], i in 1:n; i != j})
>>>
>>> for k in 1:n
>>> @addConstraint(m, sum(p[k,:]) == μ[k])
>>> @addConstraint(m, sum(p[:,k]) == ν[k])
>>> end
>>> solve(m)
>>> println("Optimal objective value is:", getObjectiveValue(m))
>>>
>>>
>>> with the matrix gymnastics that you had to do to use the low-level GLPK 
>>> interface. Writing down a linear programming problem shouldn't be that 
>>> hard! (Note: I haven't tested that JuMP code).
>>>
>>> Miles
>>>
>>>
>>>
>>> On Wednesday, April 9, 2014 11:18:26 PM UTC+1, Carlo Baldassi wrote:
>>>>
>>>>> About GLPK.exact, is it not possible to get the rational number 3/28
>>>>> instead of a decimal approximation?
>>>>
>>>> No, unfortunately. Also, for that to happen/make sense, you'd also need
>>>> to be able to pass all the *inputs* as exact rational values, i.e. as
>>>> "1//7" instead of "1/7". This would be possible if we had a native
>>>> generic Julia linear programming solver, but it's not possible with
>>>> GLPK, which can only use exact arithmetic internally.

[julia-users] Error: type non-boolean (BitArray(1)) used in boolean context

2014-04-23 Thread Isaac

Hi All,

 I am a new Julia user and ran into a problem when translating Matlab code 
to Julia. I always get the error: type non-boolean (BitArray(1)) used in 
boolean context. Can anyone help me check the code and solve this problem? 
The code is attached.
I am using the julia-0.3.0-win64 binary release. 
Any suggestions and comments would be highly appreciated.

Cheers,
Isaac




(Attachments: sample.jl, main (2).jl)


Re: [julia-users] output sharing memory with input

2014-04-23 Thread Ethan Anderes
Thanks everyone... it's super helpful to read your comments. 

@Tim: ok, that makes sense and is clear. I think I was worried the language 
would have a jumble of commands (not just in those categories you list) which 
subtly fused variables in memory. Your comment helps me reason about it.


@Tobias: Yep, I agree the scalability of the language is key. I was just hoping 
that we could do this while also keeping the ability of non-programmers to 
start reasoning about Julia code immediately. I definitely don't want to 
suggesting Julia look like Matlab or R ... I just want variable assignment and 
functions (at the high-level prototype stage) to behave in a way that a 
scientist/mathematician/statistician would expect. BTW: when I first tried to 
break from Matlab and use Numpy I spent a full two days on a small project that 
ended up being completely wrong because of the shared memory issue that I 
didn't realize at first. I was scarred, hence my sensitivity to the issue:)

Again, thanks a ton and keep up the good work.


[julia-users] Re: Error: type non-boolean (BitArray(1)) used in boolean context

2014-04-23 Thread Matt Bauman
In general, Julia is much more picky about what types of things can be used 
in if statements and && or || conditions than Matlab is.  They must be 
Bools in Julia, whereas Matlab tries to convert things to a scalar logical 
value.

Crazily enough, one of the things that Matlab converts to a scalar logical 
value (albeit not in && or || expressions) is a logical mask.  In Julia, 
you must explicitly convert the mask (a BitArray) to a Bool by calling 
`all` (Matlab's behavior) or `any`.  Check main, line 49.  See the 
documentation for more 
details: 
http://docs.julialang.org/en/latest/manual/control-flow/?highlight=bool#man-conditional-evaluation

Also note that braces don't denote scope, but rather create Any-typed 
arrays.  You don't want to be using them within your if blocks.
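Concretely, the conversion looks like this:

```julia
mask = [1, 2, 3] .> 1   # a BitArray, not a Bool
# if mask ... end       # ERROR: type non-boolean used in boolean context
all(mask)               # false (Matlab's implicit behavior, made explicit)
any(mask)               # true (often what was actually intended)
```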



Re: [julia-users] Error: type non-boolean (BitArray(1)) used in boolean context

2014-04-23 Thread Stefan Karpinski
In Matlab, arrays of booleans can be used in conditionals and are, I
believe, considered true if all the values in them are true and false
otherwise. In Julia only actual boolean values (true or false) can be used
in conditionals. You're using a 1-d boolean array somewhere in a
conditional. Not sure where – I took a quick look, but nothing popped out
at me.




Re: [julia-users] output sharing memory with input

2014-04-23 Thread Tobias Knopp
Hi Ethan,

I think it would be great if you could come up with a typical example where 
memory sharing is an issue and leads to hard-to-find bugs. I ask because I 
have not seen many questions about this on the mailing list or the bug 
tracker.

In practice the convention to use the suffix "!" for mutating functions 
works rather well.

Note that reference semantics are fundamental to Julia and not limited to 
arrays (unless one uses an immutable type).
The following might also be surprising to you:

type MyType
  x::Int
end

a = MyType(1)
b = a
a.x = 2 
print(b.x) # this prints 2


>

Re: [julia-users] Re: How to use GLPK.exact ?

2014-04-23 Thread Miles Lubin

If a solver isn't specified, JuMP (actually MathProgBase) will search for 
an available solver and pick one by default. JuMP does not have an internal 
solver.

By the way, future discussions on linear programming etc. should take place 
on the julia-opt mailing 
list: https://groups.google.com/forum/#!forum/julia-opt.

Thanks,
Miles


Re: [julia-users] All packages for numerical math

2014-04-23 Thread Cameron McBride
Or you can use the non-vectorized version and save the overhead of the
temporary arrays being created by the addition and multiplication steps.

function trapz{T<:Real}(x::Vector{T}, y::Vector{T})
    len = length(y)
    if len != length(x)
        error("Vectors must be of same length")
    end
    r = 0.0
    for i in 2:len
        r += (x[i] - x[i-1]) * (y[i] + y[i-1])
    end
    r / 2.0
end

BTW, another possibility is to use a spline interpolation on the original
data and integrate the spline evaluation with quadgk(). (Ideally,
integration could be incorporated into a spline interface.) This could be
useful depending on the functional shape between the grid points.

Cameron




Re: [julia-users] All packages for numerical math

2014-04-23 Thread Tomas Lycken


On Wednesday, April 23, 2014 11:10:15 PM UTC+2, Cameron McBride wrote:
>
> Or you can use the non-vectorized version and save the overhead of the 
> temporary arrays being created by the addition and multiplication steps.
>

There's really no way I can hide that I learnt scientific computing in 
Matlab, is there? :P
 

>
> On Wed, Apr 23, 2014 at 7:52 AM, Tomas Lycken 
> 
> > wrote:
>
>> The trapezoidal rule (http://en.wikipedia.org/wiki/Trapezoidal_rule) 
>> would probably be almost trivial to implement.
>>
>> function trapz{T<:Real}(x::Vector{T}, y::Vector{T})
>>if (length(y) != length(x))
>>error("Vectors must be of same length")
>>end
>>sum( (x[2:end] .- x[1:end-1]).*(y[2:end].+y[1:end-1]) ) / 2
>> end
>> 
>> x = [0:0.01:pi]
>> y = sin(x)
>>
>> trapz(x,y) # 1.820650436642
>>
>> This, of course, only currently works on vectors of real numbers, but 
>> it's easy to extend it if you want.
>>
>> And there might be more accurate methods as well, of course (see e.g. 
>> http://en.wikipedia.org/wiki/Simpson%27s_rule) but this one's often 
>> "good enough".
>>
>> // T
>>
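For comparison, the quoted Julia `trapz` is the standard composite trapezoidal rule, which the thread started from via `numpy.trapz`. A minimal NumPy sketch of the same formula (the grid and function here are just the `sin` example from the thread), checked against the analytic integral:

```python
import numpy as np

# grid like Julia's [0:0.01:pi]; note arange stops short of pi
x = np.arange(0.0, np.pi, 0.01)
y = np.sin(x)

# composite trapezoidal rule: same formula as the quoted Julia trapz
approx = np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1])) / 2

# exact integral of sin over [0, x[-1]] is 1 - cos(x[-1])
exact = 1.0 - np.cos(x[-1])
```

With step h = 0.01 the trapezoidal error is O(h^2), so `approx` agrees with `exact` to a few decimal places.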

[julia-users] Surprising range behavior

2014-04-23 Thread Peter Simon
The first three results below are what I expected.  The fourth result 
surprised me:

julia> (0:pi:pi)[end] 
3.141592653589793 
  
julia> (0:pi/2:pi)[end]   
3.141592653589793 
  
julia> (0:pi/3:pi)[end]   
3.141592653589793 
  
julia> (0:pi/100:pi)[end] 
3.1101767270538954 

Is this behavior correct? 

Version info:
julia> versioninfo() 
Julia Version 0.3.0-prerelease+2703  
Commit 942ae42* (2014-04-22 18:57 UTC)   
Platform Info:   
  System: Windows (x86_64-w64-mingw32)   
  CPU: Intel(R) Core(TM) i7 CPU 860  @ 2.80GHz   
  WORD_SIZE: 64  
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY)   
  LAPACK: libopenblas
  LIBM: libopenlibm  


--Peter



Re: [julia-users] Surprising range behavior

2014-04-23 Thread Stefan Karpinski
The issue is that float(pi) < 100*(pi/100). The fact that pi is not
rational – or rather, that float64(pi) cannot be expressed as the division
of two 24-bit integers as a 64-bit float – prevents rational lifting of the
range from kicking in. I worried about this kind of issue when I was
working on FloatRanges, but I'm not sure what you can really do, aside from
hacks where you just decide that things are "close enough" based on some ad
hoc notion of close enough (Matlab uses 3 ulps). For example, you can't
notice that pi/(pi/100) is an integer – because it isn't:

julia> pi/(pi/100)
99.99


One approach is to try to find a real value x such that float64(x/100) ==
float64(pi)/100 and float64(x) == float64(pi). If any such value exists, it
makes sense to do a lifted FloatRange instead of the default naive stepping
seen here. In this case there obviously exists such a real number – π
itself is one such value. However, that doesn't quite solve the problem
since many such values exist and they don't necessarily all produce the
same range values – which one should be used? In this case, π is a good
guess, but only because we know that's a special and important number.
Adding in ad hoc special values isn't really satisfying or acceptable. It
would be nice to give the right behavior in cases where there is only one
possible range that could have been intended (despite there being many
values of x), but I haven't figured out how to determine if that is the case
or not. The current code handles the relatively straightforward case where
the start, step and stop values are all rational.
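Stefan's point can be reproduced with ordinary IEEE doubles; a minimal Python sketch of the same arithmetic (assuming only standard float64 rounding, not Julia's actual FloatRange code):

```python
import math

step = math.pi / 100  # float64(pi) / 100, rounded to nearest double

# the crux: taking 100 of these rounded steps overshoots float64(pi)...
overshoots = math.pi < 100 * step

# ...so a range that refuses to pass the stop value takes only 99 steps
# (pi/step is 99.999..., strictly less than 100), and its last element
# falls one full step short of pi
n_steps = math.floor(math.pi / step)
last = n_steps * step  # about 3.11, well short of 3.14159...
```

This reproduces the surprising `(0:pi/100:pi)[end]` value: the last point is 99 steps in, not at pi.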




[julia-users] Is Julia ready to use for Computer Vision and Machine Learning problems or still proceeding to improve its native beings?

2014-04-23 Thread Eren Gölge
I am aware of Julia's computational benefits compared to other programming 
languages, but its most important shortcoming is the libraries, especially 
for computer vision problems. Is there any group already working to 
implement some basic image manipulation functions like those in Matlab, or 
do I need to reinvent the wheel if I want to use it in my research?


Re: [julia-users] Surprising range behavior

2014-04-23 Thread Peter Simon
Thanks for the explanation--it makes sense now.  This question arose for me 
because of the example presented 
in https://groups.google.com/d/msg/julia-users/CNYaDUYog8w/QH9L_Q9Su9YJ :

x = [0:0.01:pi]

used as the set of x-coordinates for sampling a function to be integrated 
(ideally over the interval (0,pi)).  But the range defined in x has a last 
entry of 3.14, which will contribute to the inaccuracy of the integral 
being sought in that example.  As an exercise, I was trying to implement 
the interpolation solution described later in that thread by Cameron 
McBride:  "BTW, another possibility is to use a spline interpolation on the 
original data and integrate the spline evaluation  with quadgk()".  It 
seems that one cannot use e.g. linspace(0,pi,200) for the x values, because 
CoordInterpGrid will not accept an array as its first argument, so you have 
to use a range object.  But the range object has a built-in error for the 
last point because of the present issue.  Any suggestions?

Thanks,

--Peter


Re: [julia-users] Pkg.add("Winston") does not work on Julia v0.2.1 or v0.3.0-prerelease

2014-04-23 Thread Cameron McBride
try pyplot or Gaston?

I had a number of issues with older versions of OS X (I used 10.6 until
recently).  None of them were Winston.jl per se, but the dependencies. I've
had no problems on 10.9, and all has been working well on v0.3 for at least
the past month.

Cameron


On Mon, Apr 21, 2014 at 4:40 PM, Andrew McKinlay wrote:

> Following the instructions on the Julia download page:
>
>
>>1. Pkg.add("Winston")
>>2. using Winston
>>3. plot( cumsum(randn(1000)) ) # (plot a random walk)
>>
>>
> I cannot get Winston working.
>
> The result of running `Pkg.add("Winston")`, `using Winston`, and
> `versioninfo()` in Julia v0.3.0-prerelease (I had the same issue with
> v0.2.1) on Mac OS X 10.7.5:
>
> julia> Pkg.add("Winston")
>> INFO: Initializing package repository /Users/andrew/.julia/v0.3
>> INFO: Cloning METADATA from git://github.com/JuliaLang/METADATA.jl
>> INFO: Cloning cache of BinDeps from git://
>> github.com/JuliaLang/BinDeps.jl.git
>> INFO: Cloning cache of Cairo from git://github.com/JuliaLang/Cairo.jl.git
>> INFO: Cloning cache of Color from git://github.com/JuliaLang/Color.jl.git
>> INFO: Cloning cache of Homebrew from git://
>> github.com/JuliaLang/Homebrew.jl.git
>> INFO: Cloning cache of IniFile from git://
>> github.com/JuliaLang/IniFile.jl.git
>> INFO: Cloning cache of Tk from git://github.com/JuliaLang/Tk.jl.git
>> INFO: Cloning cache of URIParser from git://
>> github.com/loladiro/URIParser.jl.git
>> INFO: Cloning cache of Winston from git://github.com/nolta/Winston.jl.git
>> INFO: Installing BinDeps v0.2.12
>> INFO: Installing Cairo v0.2.12
>> INFO: Installing Color v0.2.9
>> INFO: Installing Homebrew v0.0.6
>> INFO: Installing IniFile v0.2.2
>> INFO: Installing Tk v0.2.11
>> INFO: Installing URIParser v0.0.1
>> INFO: Installing Winston v0.10.2
>> INFO: Building Homebrew
>> INFO: Cloning brew from https://github.com/staticfloat/homebrew.git
>> Cloning into '/Users/andrew/.julia/v0.3/Homebrew/deps/usr'...
>> remote: Counting objects: 3019, done.
>> remote: Compressing objects: 100% (2894/2894), done.
>> remote: Total 3019 (delta 44), reused 909 (delta 10)
>> Receiving objects: 100% (3019/3019), 1.56 MiB | 2.36 MiB/s, done.
>> Resolving deltas: 100% (44/44), done.
>>   % Total% Received % Xferd  Average Speed   TimeTime Time
>>  Current
>>  Dload  Upload   Total   SpentLeft
>>  Speed
>> 100  258k  100  258k0 0   394k  0 --:--:-- --:--:-- --:--:--
>> 1104k
>> Cloning into
>> '/Users/andrew/.julia/v0.3/Homebrew/deps/usr/Library/Taps/staticfloat-juliadeps'...
>> remote: Reusing existing pack: 437, done.
>> remote: Total 437 (delta 0), reused 0 (delta 0)
>> Receiving objects: 100% (437/437), 92.35 KiB, done.
>> Resolving deltas: 100% (255/255), done.
>> Tapped 30 formula
>> HEAD is now at c588ffb Remove git rebasing code that slipped through
>> HEAD is now at 53d9d57 Bump coinmp bottle
>> INFO: Building Cairo
>> ==> Downloading
>> http://archive.org/download/julialang/bottles/gettext-0.18.3.2.s
>> Already downloaded:
>> /Users/andrew/Library/Caches/Homebrew.jl/gettext-0.18.3.2.snow_leopard_or_later.bottle.tar.gz
>> ==> Pouring gettext-0.18.3.2.snow_leopard_or_later.bottle.tar.gz
>> 🍺  /Users/andrew/.julia/v0.3/Homebrew/deps/usr/Cellar/gettext/0.18.3.2:
>> 375 files, 12M
>> ==> Installing glib dependency: libffi
>> ==> Downloading
>> http://archive.org/download/julialang/bottles/libffi-3.0.13.snow
>> Already downloaded:
>> /Users/andrew/Library/Caches/Homebrew.jl/libffi-3.0.13.snow_leopard_or_later.bottle.1.tar.gz
>> ==> Pouring libffi-3.0.13.snow_leopard_or_later.bottle.1.tar.gz
>> 🍺  /Users/andrew/.julia/v0.3/Homebrew/deps/usr/Cellar/libffi/3.0.13: 13
>> files, 388K
>> ==> Installing glib
>> ==> Downloading
>> http://archive.org/download/julialang/bottles/glib-2.38.2.snow_l
>> Already downloaded:
>> /Users/andrew/Library/Caches/Homebrew.jl/glib-2.38.2.snow_leopard_or_later.bottle.tar.gz
>> ==> Pouring glib-2.38.2.snow_leopard_or_later.bottle.tar.gz
>> 🍺  /Users/andrew/.julia/v0.3/Homebrew/deps/usr/Cellar/glib/2.38.2: 413
>> files, 17M
>> ==> Installing dependencies for cairo: staticfloat/juliadeps/libpng, stat
>> ==> Installing cairo dependency: libpng
>> ==> Downloading
>> https://downloads.sf.net/project/machomebrew/Bottles/libpng-1.5.
>> Already downloaded:
>> /Users/andrew/Library/Caches/Homebrew.jl/libpng-1.5.17.lion.bottle.1.tar.gz
>> ==> Pouring libpng-1.5.17.lion.bottle.1.tar.gz
>> 🍺  /Users/andrew/.julia/v0.3/Homebrew/deps/usr/Cellar/libpng/1.5.17: 15
>> files, 1.0M
>> ==> Installing cairo dependency: freetype
>> ==> Downloading
>> https://downloads.sf.net/project/machomebrew/Bottles/freetype-2.
>> Already downloaded:
>> /Users/andrew/Library/Caches/Homebrew.jl/freetype-2.5.2.lion.bottle.tar.gz
>> ==> Pouring freetype-2.5.2.lion.bottle.tar.gz
>> 🍺  /Users/andrew/.julia/v0.3/Homebrew/deps/usr/Cellar/freetype/2.5.2: 59
>> files, 2.7M
>> ==> Installing cairo dependency: pixman
>> ==> Downloa

[julia-users] Re: Is Julia ready to use for Computer Vision and Machine Learning problems or still proceeding to improve its native beings?

2014-04-23 Thread Freddy Chua
I believe it is easy to write a wrapper to call existing C/C++/Python 
code. In fact, that is what most Julia packages do.



Re: [julia-users] Surprising range behavior

2014-04-23 Thread Simon Kornblith
pi*(0:0.01:1) or similar should work.
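Simon's rescaling works because the inner range `0:0.01:1` has exactly representable rational endpoints, so its last element is exactly 1.0 and the final grid point is exactly float64(pi). The same idea in Python (a sketch of the arithmetic, not Julia's actual range code):

```python
import math

# build the grid as pi * (k/100): k/100 hits exactly 1.0 at k = 100,
# so the last grid point is exactly float64(pi), with no step accumulation
grid = [math.pi * (k / 100) for k in range(101)]
```

Unlike naive stepping by pi/100, the endpoint here is exact.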


Re: [julia-users] Surprising range behavior

2014-04-23 Thread Steven G. Johnson


On Wednesday, April 23, 2014 10:17:23 PM UTC-4, Simon Kornblith wrote:
>
> pi*(0:0.01:1) or similar should work.
>

Actually, that may not work because of 
https://github.com/JuliaLang/julia/issues/6364

However, this should be fixable (and in fact I just submitted a PR for it). 


Re: [julia-users] Surprising range behavior

2014-04-23 Thread Simon Kornblith
I believe that bug only applies to multiplication/division of integer 
ranges, but it will certainly be good to have it fixed.

On Wednesday, April 23, 2014 10:50:57 PM UTC-4, Steven G. Johnson wrote:
>
>
>
> On Wednesday, April 23, 2014 10:17:23 PM UTC-4, Simon Kornblith wrote:
>>
>> pi*(0:0.01:1) or similar should work.
>>
>
> Actually, that may not work because of 
> https://github.com/JuliaLang/julia/issues/6364
>
> However, this should be fixable (and in fact I just submitted a PR for 
> it). 
>


Re: [julia-users] Surprising range behavior

2014-04-23 Thread Steven G. Johnson


On Wednesday, April 23, 2014 10:50:57 PM UTC-4, Steven G. Johnson wrote:

> On Wednesday, April 23, 2014 10:17:23 PM UTC-4, Simon Kornblith wrote:
>>
>> pi*(0:0.01:1) or similar should work.
>>
>
> Actually, that may not work because of 
> https://github.com/JuliaLang/julia/issues/6364
>

...which of course you know about because you submitted that bug report, 
sorry.   (People should just use their Github usernames everywhere, rather 
than polluting the namespace with all of these "real name" aliases.)


Re: [julia-users] Surprising range behavior

2014-04-23 Thread Steven G. Johnson
(Simon, you may also be amused to learn that Google thinks that your post 
is written in Latin and offers to translate it.   Mirabile dictu!)


[julia-users] Re: Surprising range behavior

2014-04-23 Thread Freddy Chua
I think it's correct, because the next value in the range would exceed pi. 
If you try 0:pi/101:pi, you would get 3.14... again.


Re: [julia-users] output sharing memory with input

2014-04-23 Thread Ethan Anderes
Hi Tobias:

My 'hard to find bug' statement should really have read 'hard to find bug from 
a newbie perspective'. These bugs would be obvious to someone familiar with 
basic programming. My numpy example is too big to give here, but below is a 
short Metropolis-Hastings Markov chain example:

theta_curr = 0.0
for k=1:10_000
theta_prop = theta_curr 
theta_prop += randperturbation()
likelihoodratio = computelr(theta_prop, theta_curr)
if rand() < minimum([1,likelihoodratio])
theta_curr = theta_prop
else
theta_curr = theta_curr
end
end

The variable theta_prop is only supposed to be set to theta_curr if the first 
conditional is true. I guess in Python += can act in place (it does for numpy 
arrays), so the += line would actually change theta_curr, which would be a bug. 
In my numpy excursion, once I realized this (after many more lines of code) my 
stomach turned; I suddenly decided I didn't know what the heck I was doing and 
ended up running back to safe Matlab. I had a similar feeling with vec(a): after 
learning the output was the same as the input, I suddenly had to review all my 
code that used vec(a) to make sure I wasn't spuriously fusing variables. The 
thing about Julia that I like is that I could start out reasoning about code 
immediately (if I avoided functions with ! at the end) and then slowly venture 
into the more modern techniques... but the point is I was scientifically 
productive immediately without knowing much about programming.
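The NumPy aliasing worry is easy to reproduce directly; a minimal sketch (variable names are illustrative) showing that += mutates ndarrays in place while plain Python floats are simply rebound:

```python
import numpy as np

# ndarray: plain assignment aliases, and += then mutates in place,
# so the "proposal" silently changes the current state too
theta_curr = np.zeros(3)
theta_prop = theta_curr       # no copy: both names point at one array
theta_prop += 1.0             # in-place: theta_curr is modified as well
aliased = float(theta_curr[0])  # 1.0, not 0.0

# plain float: += rebinds the name, leaving the original untouched
a = 0.0
b = a
b += 1.0  # a is still 0.0
```

So the scalar Metropolis-Hastings loop above is actually safe in Python; the bug only bites once the state becomes an array.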

I definitely agree that the suffix "!" works well. In fact I like it so much 
that I was hoping there would be something similar to indicate when the output 
shares memory with the input. I was imagining reshape(A,dims), reshape!(A,dims) 
and reshape*(A,dims), where reshape(A,dims) returned a copy, reshape!(A,dims) 
mutated A, and reshape*(A,dims) left the arguments alone but returned a 
variable sharing memory with A. Anyway, that suggestion now seems silly after 
hearing Tim say that there are really only corner cases, easy to understand, 
where I need to worry about it.

Again, thanks all for the discussion!
Ethan


Re: [julia-users] output sharing memory with input

2014-04-23 Thread Jameson Nash
Coming from other languages, I find MATLAB's flagrant copy behavior to
be surprising, and very slow. However, it is an interesting choice.

The ! convention specifically declares that a function mutates an
input. Since this violates the usual assumption of many functions
that the inputs will not be changed, the convention helps to keep the
user aware of the effect of these functions. However, it should be
noted that a.b = x and a[b] = x are also mutating operations in this
same class (but without the visible !).

Julia does not allow += to operate in place, for the reason you described.

To write fast code, I think programmers have reached an informal
consensus that a function should not modify its inputs, leading to
the optimization that the function can also expect that its inputs
will remain unmodified. This helps to make functions more easily
composable (since a function only changes data it just created). If
the user wants to modify the inputs later, the user is expected to
make a copy.

Some languages make everything immutable to enforce this convention.
This is cool from a pure CS standpoint. However, Julia does not do
this since it forces a very different way of writing many common
algorithms.

Note, that I will tend to explicitly call out functions that make
copies in their documentation (copy, unsafe_ref, bytestring come to
mind). My default assumption is that the inputs are shared by the
output of the function, and should no longer be mutated.



Re: [julia-users] output sharing memory with input

2014-04-23 Thread Ethan Anderes
Jameson:

Yes, the Matlab choice is slow and doesn't scale, but it's very easy to reason 
about. It was instructive for me to try to think of a real-life bug that 
realized my worries. I came to realize that most of the code where I was 
using vec() was embedded in a chain of function calls like

b = M * exp(vec(a))

so I see your point about fast, easily composable functions when output and 
input share memory.

I recently had a discussion with a colleague who commented that R was 
essentially developed by statisticians, which is why it doesn't scale well (not 
sure if that is actually true, but that's beside my point). On the other hand, 
things like Python are written by CS folks, which hinders access for 
mathy-science folks who just want to prototype an idea every now and again 
without investing the time to upgrade their programming skills. I think Julia 
can have the best of both: easy to learn + modern CS.




Re: [julia-users] Surprising range behavior

2014-04-23 Thread Peter Simon
Thanks, Simon, that construct works nicely to solve the problem I posed.  

I have to say, though, that I find Matlab's colon range behavior more 
intuitive and generally useful, even if it isn't as "exact" as Julia's.

--Peter

On Wednesday, April 23, 2014 7:17:23 PM UTC-7, Simon Kornblith wrote:
>
> pi*(0:0.01:1) or similar should work.