[julia-users] Is something interesting happening with Julia pulse; 0.6 breaking changes?

2016-11-25 Thread Páll Haraldsson


http://pkg.julialang.org/pulse.html

Not only did red dip a lot; blue (0.4) (and black (0.5)) did too for a while..


[Red may not be the best color choice there; I understand it to mean 0.6, not 0.2..]


Anyway, I was just looking since I saw tests for both versions, currently, 
failing for:

https://github.com/JuliaComputing/ArrayFire.jl


There weren't that many breaking changes in 0.5; I've advocated for Julia, and 
that is a good thing. There seem to be fewer and fewer, as would be normal; are 
there any big(ger) ones planned for 0.6 or 1.0?

-- 
Palli.


Re: [julia-users] Multiple dispatch algorithm.

2016-11-08 Thread Páll Haraldsson
On Friday, November 4, 2016 at 8:05:30 PM UTC, Matt Bauman wrote:
>
> I posted an answer to this a year ago on Stack Overflow: 
> http://stackoverflow.com/a/32148113/176071
>

Thanks.

I see "so it's just a linear search to check if the type of the argument 
tuple is a subtype of the signature. So that's just O(n), right? The 
trouble is that checking subtypes with full generality (including Unions 
and TypeVars, etc) is hard. Very hard, in fact. Worse than NP-complete [..] 
that is, even if P=NP, this problem would still take non-polynomial time! 
It might even be PSPACE or worse." That sounds bad..

I'm not too worried about PSPACE, only "worse" and/or non-polynomial time. 
But I assume that is also only a problem with functions with many 
arguments, and if you hit it you know.. You'll never be surprised at 
runtime.

[This kind of reminds me: SQL query tuning is exponential in general, but not a 
problem in practice; you just deal with it by simplifying your query or 
giving hints, and PostgreSQL at least has a genetic algorithm as a fallback: 
https://www.postgresql.org/docs/9.1/static/geqo-pg-intro.html ]


And thankfully you add "But, really, your question was about the *runtime* 
complexity of dispatch. In that case, the answer is quite often *"what 
dispatch?"* — because it has been *entirely eliminated*!"

as I expected.
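
[A quick way to see that claim for oneself (my own sketch, not from the Stack 
Overflow answer):

f(x::Int) = x + 1
f(x::Float64) = 2x

g(a) = f(a) + f(a)

@code_warntype g(1)   # f is resolved statically; everything infers as Int64
@code_llvm g(1)       # the emitted IR is straight-line integer arithmetic, no dispatch lookup
]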



If Julia would need:
https://en.wikipedia.org/wiki/EXPSPACE

I would get a little worried, not for:

https://en.wikipedia.org/wiki/PSPACE

"PSPACE is also equal to PCTC, problems solvable by classical computers 
using closed timelike curves 
,[6] 
 as well as to BQPCTC, 
problems solvable by quantum computers 
 using closed timelike 
curves.[7] "

https://en.wikipedia.org/wiki/Closed_timelike_curve
"who discovered a solution to the equations of general relativity 
 (GR) allowing CTCs known 
as the Gödel metric ; and 
since then other GR solutions containing CTCs have been found, such as the 
Tipler 
cylinder  and traversable 
wormholes . 
[..] 
Some physicists speculate that the CTCs which appear in certain GR 
solutions might be ruled out by a future theory of quantum gravity 
 which would replace GR, an 
idea which Stephen Hawking  
has labeled the chronology protection conjecture 
. Others 
note that if every closed timelike curve in a given space-time passes 
through an event horizon , a 
property which can be called chronological censorship, then that space-time 
with event horizons excised would still be causally well behaved and an 
observer might not be able to detect the causal violation.[2] 





> The internal implementation of the method caches has since changed, but 
> the concepts should still apply.  If I remember right, Stefan's remarks 
> were about the addition of triangular subtyping, which will plug into the 
> dispatch system seamlessly since it's "just" an extension to the type 
> system.
>
> On Friday, November 4, 2016 at 10:44:28 AM UTC-5, Mauro wrote:
>>
>> Have a read of: 
>> https://github.com/JeffBezanson/phdthesis/blob/master/main.pdf 
>>
>> (Note that multiple dispatch is not a 1.0 thing, it was there from the 
>> beginning.) 
>>
>> On Fri, 2016-11-04 at 16:22, Ford O.  wrote: 
>> > Hi, 
>> > 
>> > I have watched the Julia 1.0 video where Stefan briefly mentions new 
>> > multiple dispatch algorithm. I don't have much insight into this so I 
>> would 
>> > like to ask: 
>> > 
>> > What is the cost of multiple dispatch? ( What is the complexity of 
>> naive 
>> > implementation? What is the complexity of current implementation in 
>> julia? ) 
>> > 
>> > Could you briefly highlight the difficulties of creating an efficient 
>> > implementation? 
>> > 
>> > Thank you 
>>
>

[julia-users] Julia 0.5.0 together with Codec.jl (Base64) slower than on 0.4.5

2016-11-08 Thread Páll Haraldsson


I was running [not my code..]:

https://github.com/kostya/benchmarks/blob/master/base64/test.jl

[and looking into why Base64-benchmark was slower than in Ruby.. and then 
even slower under 0.5]


and lines 12, 13 and 21 (i.e. add 2 to what the profile says) seem 
predictably slow.


A. Why is it slower than Ruby in the first place? Codecs.jl must not be as 
optimized; there's no good reason for it; at least it's not Julia's fault.

B. Why is it slower under 0.5? I changed ASCIIString->String (the usual 
recommendation, but not here?):

I see it now..

Lines 12-13:
  str2 = ASCIIString(encode(Base64, str)) 
  s += length(str2)


I was then thinking: would it be unfair to other languages (e.g. C) to get 
the byte length directly instead of scanning? Then I realized that's exactly 
what happens in 0.4, because ASCIIString can do that. 0.5 no longer can 
(unless you use LegacyStrings.jl), it seems.


I see other languages do it:

https://github.com/kostya/benchmarks/blob/master/base64/test.cr [Crystal 
language]

str2 = Base64.strict_encode(str)
s += str2.bytesize

[not sure how this is defined with them, or how it should be; does it return an ASCIIString?]


This seemed the obvious change:
  str2 = String(encode(Base64, str))
  s += length(str2)


This solved the speed (at least B.) problem:
  str2 = encode(Base64, str)
  s += length(str2)

[This is a slight semantic difference if you were to print out str2? That 
never happens..]
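
[For what it's worth, a sketch of why dropping the String wrapper helps, as I 
read it (not verified against the package internals): on 0.5, length(::String) 
scans the UTF-8 data to count characters, while sizeof is O(1), and base64 
output is pure ASCII so the two agree:

using Codecs                         # the package under discussion

str  = repeat("a", 1000)
str2 = String(encode(Base64, str))
length(str2) == sizeof(str2)         # true: base64 output is ASCII-only
# sizeof is O(1); length(::String) is the scan that shows up at strings/string.jl:49 in the profile below
]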


In line with the sample code:

using Codecs

data = "Hello World!"
encoded = encode(Base64, encode(Zlib, data))
println(bytestring(encoded))


[that is however broken, gives an error]

In general, the data that encode gives is a Vector{UInt8}, as it should be, 
e.g. for Zlib; is that for sure also intended for Base64? Should it instead 
return UTF-8 strings that happen to be ASCII? This may be by design. What is 
appropriate on decode?



Are these lines in the code correct for sure; do they work for all string 
types?:

function encode{T <: Codec}(codec::Type{T}, s::AbstractString)
    encode(codec, convert(Vector{UInt8}, s))
end


function decode{T <: Codec}(codec::Type{T}, s::AbstractString)
    decode(codec, convert(Vector{UInt8}, s))
end



julia> @profile x = @timed main(100)
encode: 133600, 4.1400511264801025
decode: 10, 2.7664570808410645
(nothing,7.185494864,2343364208,0.027672998,Base.GC_Diff(2343364208,201,0,417,0,0,27672998,84,0))

julia> Profile.print()
6987 ./event.jl:68; (::Base.REPL.##3#4{Base.REPL.REPLBackend})()
 6987 ./REPL.jl:95; macro expansion
  6987 ./REPL.jl:64; eval_user_input(::Any, ::Base.REPL.REPLBackend)
   6987 ./boot.jl:234; eval(::Module, ::Any)
6987 ./:?; anonymous
 6987 ./profile.jl:16; macro expansion;
  6987 ./util.jl:278; macro expansion;
    88   ./REPL[17]:3; main(::Int64)
     1  ./strings/types.jl:172; repeat(::String, ::Int64)
     84 ./strings/types.jl:173; repeat(::String, ::Int64)
      31 ./array.jl:0; copy!(::Array{UInt8,1}, ::Int64, ::Array{UInt8,1}, ::Int64, ::Int64)
      3  ./array.jl:60; copy!(::Array{UInt8,1}, ::Int64, ::Array{UInt8,1}, ::Int64, ::Int64)
      5  ./array.jl:62; copy!(::Array{UInt8,1}, ::Int64, ::Array{UInt8,1}, ::Int64, ::Int64)
      33 ./array.jl:65; copy!(::Array{UInt8,1}, ::Int64, ::Array{UInt8,1}, ::Int64, ::Int64)
       7  ./array.jl:0; unsafe_copy!(::Array{UInt8,1}, ::Int64, ::Array{UInt8,1}, ::Int64, ::Int64)
       19 ./array.jl:51; unsafe_copy!(::Array{UInt8,1}, ::Int64, ::Array{UInt8,1}, ::Int64, ::Int64)
        1  ./abstractarray.jl:737; pointer
        18 ./array.jl:44; unsafe_copy!
       1  ./array.jl:56; unsafe_copy!(::Array{UInt8,1}, ::Int64, ::Array{UInt8,1}, ::Int64, ::Int64)
    2206 ./REPL[17]:10; main(::Int64)
     16  /home/qwerty/.julia/v0.5/Codecs/src/Codecs.jl:60; encode(::Type{Codecs.Base64}, ::Array{UInt8,1})
     264 /home/qwerty/.julia/v0.5/Codecs/src/Codecs.jl:63; encode(::Type{Codecs.Base64}, ::Array{UInt8,1})
     416 /home/qwerty/.julia/v0.5/Codecs/src/Codecs.jl:64; encode(::Type{Codecs.Base64}, ::Array{UInt8,1})
     270 /home/qwerty/.julia/v0.5/Codecs/src/Codecs.jl:65; encode(::Type{Codecs.Base64}, ::Array{UInt8,1})
     313 /home/qwerty/.julia/v0.5/Codecs/src/Codecs.jl:66; encode(::Type{Codecs.Base64}, ::Array{UInt8,1})
     608 /home/qwerty/.julia/v0.5/Codecs/src/Codecs.jl:67; encode(::Type{Codecs.Base64}, ::Array{UInt8,1})
     319 /home/qwerty/.julia/v0.5/Codecs/src/Codecs.jl:68; encode(::Type{Codecs.Base64}, ::Array{UInt8,1})
    1929 ./REPL[17]:11; main(::Int64)
     4    ./strings/string.jl:48; length(::String)
     1925 ./strings/string.jl:49; length(::String)
    2764 ./REPL[17]:19; main(::Int64)
     1    ./strings/string.jl:48; length(::String)
     1438 ./strings/string.jl:49; length(::String)
     9    /home/qwerty/.julia/v0.5/Codecs/src/Codecs.jl:106; decode(::Type{Codecs.Base64}, ::Array{UInt8,1})
     121  /home/qwerty/.julia/v0.5/Codecs/src/Codecs.jl:109; decode(:

[julia-users] Re: ANN: new blog post on array indexing, iteration, and multidimensional algorithms

2016-11-07 Thread Páll Haraldsson
On Monday, February 1, 2016 at 6:55:06 PM UTC, Tim Holy wrote:
>
> This new blog post 
>
> http://julialang.org/blog/2016/02/iteration/


For others to know, it's

http://julialang.org/blog/2016/02/iteration

without the /

Should servers ignore extra / ? Do some?



[julia-users] Re: Recursive data structures with Julia

2016-11-04 Thread Páll Haraldsson
On Thursday, October 27, 2016 at 2:03:27 PM UTC, Ángel de Vicente wrote:
>
> Hi, 
>
> I've been trying to implement some code to build Binary Search Trees.


Is this a genuine need or an exercise? I quickly looked at 
DataStructures.jl:

"The containers internally use a 2-3 tree, which is a kind of balanced tree 
and is described in many elementary data structure textbooks."

Maybe you can use their code or learn from it (assuming it best solves your 
Nullable problem; that I didn't look into), or maybe you can help them if 
your data structure is really needed.
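
[For example, a sketch (untested against your exact use case) using their 
sorted containers, which are backed by that 2-3 tree:

using DataStructures

sd = SortedDict(Dict("b" => 2, "a" => 1))
sd["c"] = 3
collect(keys(sd))    # ["a", "b", "c"]: iteration is in key order, like an in-order BST walk
]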


> -- 
> Ángel de Vicente 
> http://www.iac.es/galeria/angelv/   
>


[julia-users] Re: Call Julia from Pyspark

2016-11-04 Thread Páll Haraldsson
I'm not sure this is helpful but could you use?:

https://github.com/dfdx/Spark.jl

css.csail.mit.edu/6.824/2014/projects/dennisw.pdf

I'm not familiar with PySpark or the above, so I'm not sure what the 
problem with scalability is or if this helps..


On Thursday, November 3, 2016 at 1:45:31 AM UTC, Harish Kumar wrote:
>
> I have a RDD with 10K columns and 70 million rows,  70 MM rows will be 
> grouped into 2000-3000 groups based on a key attribute. I followed below 
> steps 
>
> 1. Julia and Pyspark linked using pyjulia package
> 2. 70 MM rd is groupByKey
> def juliaCall(x):
>   <>
>j = julia.Julia()
>jcode = """ """
>calc= j.eval(jcode )
>   result = calc(inputdata)
>
>   RDD.groupBy(key).map(lambda x: juliaCall(x))
>
> It works fine foe Key (or group) with 50K records, but my each group got 
> 100K to 3M records. in such cases Shuffle will be more and it will fail. 
> Can anyoone guide me to over code this issue
> I have cluster of 10 nodes, each node is of 116GB and 16cores. Standalone 
> mode and i allocated only 10 cores per node. 
>
> Any help?
>


Re: [julia-users] Re: canonical binary representation for Array

2016-10-21 Thread Páll Haraldsson


On Wednesday, October 19, 2016 at 7:03:49 AM UTC, Michele Zaffalon wrote:
>
> On Thursday, October 13, 2016 at 3:38:36 PM UTC+2, Steven G. Johnson wrote:
>>
>> write on a numeric array will output the raw bytes, i.e. Column-major 
>> data in the native byte order. 
>>
>>
>> Would it be a reasonable assumption that reshaping will not change the 
> ordering in future Julia implementations?
>

[PyCall already allows changing to row-major, but there's a penalty then.]
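
[A quick check of the reshape question (my own sketch, 0.5-era API): reshape 
only changes the dimensions, not the underlying column-major storage, so the 
raw bytes written come out identical:

a = [1 2 3; 4 5 6]                  # 2×3 Int64 matrix
b = reshape(a, 3, 2)                # same data, new shape

io1 = IOBuffer(); write(io1, a)
io2 = IOBuffer(); write(io2, b)
takebuf_array(io1) == takebuf_array(io2)   # true: identical byte streams

Of course that only shows the current implementation, not a guarantee for 
future ones.]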


From memory, I recall some talk of row-major being choosable (in theory); I 
can now only find:

https://groups.google.com/forum/#!topic/julia-dev/Q-LFNapBdb0
"I like the way of Numpy of choose at creation time if order in memory has 
to be "fortran-style" (column-major) or "c-style" (row-major). If you now 
that operations are going to be intensive over columns, you can choice 
create a fortran-style matrix (and this avoid the time of create a c-style 
and do and transposition of the matrix)."



[julia-users] Re: I am getting a stack over flow error when using the optimize function from Optim package

2016-10-21 Thread Páll Haraldsson
On Friday, October 21, 2016 at 1:50:13 PM UTC, SajeelBongale wrote:
>
>
> *This is my cost function:*
> "This function calculates the cost and gradient at a given theta"
> function costFunction( initial_theta,X, y)
>

I find it likely this should go to julia-opt.

See: http://julialang.org/community/

I can't see anything obviously wrong with the function, like it calling 
itself recursively (I guess Optim may do that to it..).



[julia-users] Re: ImageView very slow

2016-10-16 Thread Páll Haraldsson
On Sunday, October 16, 2016 at 4:45:00 PM UTC, Paul B. wrote: 

> into some issues with deprecated functionality
>

I think they can make it slower, though not sure if by this much. 

I see: "fix julia 0.5 deprecations   19 days ago"

Maybe if you update the package this will be ok?



Re: [julia-users] Some questions on array comprehensions; e.g. disabling bounds checking possible? And correct way for non-1-based

2016-10-16 Thread Páll Haraldsson


On Sunday, October 16, 2016 at 4:54:45 PM UTC, Yichao Yu wrote:
>
>
>
> On Sun, Oct 16, 2016 at 12:01 PM, Páll Haraldsson  > wrote:
>
>>
>> I was prototyping:
>>
>> julia> a=[1,2,3,1,2]
>>
>> julia> b=[a[i]
>> 4-element Array{Bool,1}:
>>   true
>>   true
>>  false
>>   true
>>
>>
>> In the beginning when trying stuff out I used:
>>
>> for i in a[1:end-1]
>>
>> or
>>
>> for i in a[2:end]
>>
>> and it got me thinking, end-1 works for any kind of array, but 1 as the 
>> start (or 2) is not correct in general. For e.g. general (e.g. zero-based 
>> arrays now allowed), what do you do? [If I need all: for i in a just works]
>>
>>
> Not decided yet. Ref https://github.com/JuliaLang/julia/pull/15750
>

Thanks for answering. And from this I see: 
https://github.com/JuliaArrays/EndpointRanges.jl

[I'm not clear on the (proposed?) "last" vs. "end"; maybe it's just for a 
demo, as it would otherwise conflict. If "first" (or "ibegin") and "last" (or 
"iend") end up being the decision, then hopefully soon, as a breaking 
change.. This could stay in a package forever, but that's not good, as a 
replacement for "1" is needed..]


To get one thing clear: "end" continues to work for the last element, but 
since "1" should now be "considered harmful".. "end" alone may not be very 
useful ("end:end" is still possible, and "end-1:end" etc., but not much 
else).

I know about eachindex (or ways for whole arrays/collections), but do not 
need each.. What works now to get the second (or first)?

Is something like

view(a, 2:length(a)) # for 1D..

a workaround?
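
Or maybe something along these lines (just a sketch; I'm not sure it's the 
intended spelling):

inds = eachindex(a)                   # works for any array, 1-based or not
a[first(inds)]                        # generalizes a[1]
view(a, first(inds)+1:last(inds))     # generalizes a[2:end], for 1D a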


> https://github.com/JuliaLang/julia/pull/15558 
> You can always use an assignment in a composite expression.

Not sure what you mean, but it may not be important, as @inbounds isn't a 
big deal [for me]; I think I'll always need a for loop anyway (no big 
problem).



[julia-users] Re: eachline() work with pmap() is slow

2016-10-16 Thread Páll Haraldsson
On Sunday, October 16, 2016 at 5:57:49 PM UTC, Páll Haraldsson wrote:
>
>
>
> On Saturday, October 15, 2016 at 2:54:53 AM UTC, love...@gmail.com wrote:
>>
>> @Páll, do you mean that pmap will first do a ``collect`` operation,
>>
>
> yes and no..
>

Should have said here, just no.

I meant, yes, collect seems to be used, just not first.


[julia-users] Re: eachline() work with pmap() is slow

2016-10-16 Thread Páll Haraldsson


On Saturday, October 15, 2016 at 2:54:53 AM UTC, love...@gmail.com wrote:
>
> I have change the code to parallel on files rather than lines. codes are 
> available here 
>  if 
> anyone have interests.
> However, the speed is not satisfactory still (total processing speed 
> approx. 10M/s, ideally it should be 100M/s, the network speed). 
> CPU not full, IO not full, and I cannot find the bottleneck...
>
> @Jeremy, thanks for the reply. The bottleneck is IO. You need days just to 
> stream all files at full speed. Thus waiting to load the whole file will 
> waste a lot of time. Ideally it will be that when I streamed the data one 
> pass, the processing is also done without extra time.
> @Páll, do you mean that pmap will first do a ``collect`` operation,
>

yes and no..
 

> then processing? So even you give pmap an iterator, it will not benefit 
> from it? That will be sad.
>

I was thinking about what needs to happen. I'm still learning Julia, and if 
I understand collect (and what you mean): are you asking whether Julia first 
has to build a collection (DenseArray) of everything before starting 
processing?


I find it very cool that you can prepend @edit to see what Julia does, so I 
took a look (and I think this all means that it can start applying your 
function as you go):

    if batch_size == 1
        [..]
        return collect(AsyncGenerator(f, c; ntasks=()->nworkers(p)))
    else
        batches = batchsplit(c, min_batch_count = length(p) * 3,
                                max_batch_size = batch_size)

        results = collect(flatten(AsyncGenerator(f, batches;
                                                 ntasks=()->nworkers(p))))


Yes, collect is used, but while the doc says:

Transform collection c by applying f


help?> Base.AsyncGenerator
  AsyncGenerator(f, c...; ntasks=0) -> iterator

  Apply f to each element of c using at most ntasks asynchronous tasks. If 
ntasks is unspecified, uses max(100, nworkers()) tasks. [..]




[julia-users] Some questions on array comprehensions; e.g. disabling bounds checking possible? And correct way for non-1-based

2016-10-16 Thread Páll Haraldsson

I was prototyping:

julia> a=[1,2,3,1,2]

julia> b=[a[i]
julia> b=[@inbounds a[i]
julia> b=[(@inbounds (a[i]

Re: [julia-users] linspace question; bug?

2016-10-14 Thread Páll Haraldsson
2016-10-14 22:58 GMT+00:00 Stefan Karpinski :

> I'll answer your question with a riddle: How can you have a length-one
> collection whose first and last value are different?
>

I know that; yes, it's a degenerate case, and as I said "may not matter too
much"

[Nobody makes a one-element linspace intentionally, but would it be bad to
allow it? I'm not sure it would usefully happen anywhere, but if it does,
then your code errors out; that may be intentional.]

I wouldn't have posted if not for seeing "Real"; that is the real question
(bug?).


Re: [julia-users] Most effective way to build a large string?

2016-10-14 Thread Páll Haraldsson


On Friday, October 14, 2016 at 10:44:47 PM UTC, Páll Haraldsson wrote:
>
> On Friday, October 14, 2016 at 5:17:45 PM UTC, Diego Javier Zea wrote:
>>
>> Hi!
>> I have a function that uses `IOBuffer` for this creating one `String` 
>> like the example. 
>> Is it needed or recommended `close` the IOBuffer after `takebuf_string`?
>>
>
> I find it unlikely.
>
>  help?> takebuf_string
> search: takebuf_string
>
>   takebuf_string(b::IOBuffer)
>
>   Obtain the contents of an IOBuffer as a string, without copying. 
> Afterwards, the IOBuffer is reset to its initial state.
>
> reset means they take action, and could have closed if needed; IOBuffer is 
> an in-memory thing, even if freeing memory was the issue, then garbage 
> collection should take care of that.
>

Note, an IOBuffer lives in RAM; it is not like a file in non-volatile 
storage.
 

>
>
> Since this thread was necromanced:
>
> @Karpinski: "The takebuf_string function really needs a new name."
>
> I do not see clearly that that has happened, shouldn't 
>
> help?> takebuf_string
>
> show then?
>
> What would be a good name? Changing and/or documenting the above could be 
> an "up-for-grabs" issue.
>

@Steven: "Further, in this case, the "takebuf_string" function (or 
takebuf_array) isn't just conversion, it is mutation because it empties the 
buffer.  So, arguably it should follow the Julia convention and append a ! 
to the name."



> New function would just call the old function..
>
>

Re: [julia-users] Most effective way to build a large string?

2016-10-14 Thread Páll Haraldsson
On Friday, October 14, 2016 at 5:17:45 PM UTC, Diego Javier Zea wrote:
>
> Hi!
> I have a function that uses `IOBuffer` for this creating one `String` like 
> the example. 
> Is it needed or recommended `close` the IOBuffer after `takebuf_string`?
>

I find it unlikely.

 help?> takebuf_string
search: takebuf_string

  takebuf_string(b::IOBuffer)

  Obtain the contents of an IOBuffer as a string, without copying. 
Afterwards, the IOBuffer is reset to its initial state.

"Reset" means they take action, and could have closed it if needed; an 
IOBuffer is an in-memory thing, and even if freeing memory were the issue, 
garbage collection should take care of that.


Since this thread was necromanced:

@Karpinski: "The takebuf_string function really needs a new name."

I do not see clearly that that has happened, shouldn't 

help?> takebuf_string

show then?

What would be a good name? Changing and/or documenting the above could be 
an "up-for-grabs" issue.

New function would just call the old function..



[julia-users] linspace question; bug?

2016-10-14 Thread Páll Haraldsson

I mistook the third param for a step, and got a confusing error:


julia> linspace(1, 2, 1)
ERROR: linspace(1.0, 2.0, 1.0): endpoints differ
 in linspace(::Float64, ::Float64, ::Float64) at ./range.jl:212
 in linspace(::Float64, ::Float64, ::Int64) at ./range.jl:251
 in linspace(::Int64, ::Int64, ::Int64) at ./range.jl:253

julia> linspace(1.0, 2.0, 1)
ERROR: linspace(1.0, 2.0, 1.0): endpoints differ
 in linspace(::Float64, ::Float64, ::Float64) at ./range.jl:212
 in linspace(::Float64, ::Float64, ::Int64) at ./range.jl:251


It may not matter too much to get this to work (or give a helpful error); I 
went to debug and found (should num/len be an Integer? see inline comments):


immutable LinSpace{T<:AbstractFloat} <: Range{T}
    start::T
    stop::T
    len::T       # len::Integer, only countable..
    divisor::T
end

function linspace{T<:AbstractFloat}(start::T, stop::T, len::T)

# long function omitted

function linspace{T<:AbstractFloat}(start::T, stop::T, len::Real)
    # change to len::Integer? Is this for type stability reasons, or to handle Rationals somehow?
    T_len = convert(T, len)
    T_len == len || throw(InexactError())
    linspace(start, stop, T_len)
end
linspace(start::Real, stop::Real, len::Real=50) =   # change to len::Integer=50
    linspace(promote(AbstractFloat(start), AbstractFloat(stop))..., len)



[julia-users] Re: eachline() work with pmap() is slow

2016-10-14 Thread Páll Haraldsson
On Friday, October 14, 2016 at 3:45:36 AM UTC, love...@gmail.com wrote:
>
> I want to process each line of a large text file (100G) in parallel using 
> the following code
>
> pmap(process_fun, eachline(the_file))
>
> however, it seems that pmap is slow. following is a dummy experiment:
>
>  

> the goal is to process those files (300+) as fast as possible. and maybe 
> there are better ways to call pmap?
>

I'm not sure there's much gain in processing *each* file in parallel on top 
of parallelizing over these many files (at least if they are of similar size, 
and no single one is much bigger).

help?> pmap
[..]
  By default, pmap distributes the computation over all specified workers.
[..]

I'm not sure how this works; since lines in a file are not all the same 
length, I THINK you need to read the file serially (there are probably 
workarounds, but pmap wouldn't be responsible for that).

The computations would however be distributed. If they take a long time 
(compared to the I/O, i.e. the read; else distributed=false might be a win?) 
and are independent (I guess pmap requires that), then pmap could be a win, 
but see above. Note also that parameters such as batch_size=1 seem to me to 
be tuning parameters.
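
[A sketch of the "parallelize over files, stream each one serially" shape I 
have in mind; the directory name and process_line are placeholders, not from 
your code:

addprocs(4)

@everywhere process_line(line) = length(line)   # stand-in for the real per-line work

files = readdir("data")
results = pmap(files) do f
    total = 0
    open(joinpath("data", f)) do io
        for line in eachline(io)    # each worker streams its own file serially
            total += process_line(line)
        end
    end
    total
end
]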


That's some big file.. I'm kind of interested in big [1D] arrays (see other 
thread); it seems to me this is streaming work, and while the file is bigger, 
[each process] doesn't need more than 2 GB (a limit I'm interested in).


 


[julia-users] Re: What is really "big data" for Julia (or otherwise), 1D or multi-dimensional?

2016-10-14 Thread Páll Haraldsson
On Thursday, October 13, 2016 at 7:49:51 PM UTC, cdm wrote:
>
> from CloudArray.jl:
>
> "If you are dealing with big data, i.e., your RAM memory is not enough to 
> store your data, you can create a CloudArray from a file."
>
>
> https://github.com/gsd-ufal/CloudArray.jl#creating-a-cloudarray-from-a-file
>

Good to know, and it seems cool.. (like CatViews.jl). Indexes could need to 
be bigger than 32-bit this way.. even for 2D.

But has anyone worked with more than 70 terabyte arrays, that would 
otherwise have been a limitation?

Anyone know biggest (or just big over 2 GB) one-dimensional array people 
are working with?


Re: [julia-users] ANN: RawArray.jl

2016-10-14 Thread Páll Haraldsson
On Thursday, October 13, 2016 at 6:00:30 PM UTC, Tim Holy wrote:
>
> If you just want a raw dump of memory, you can get that, and if it's big 
> it uses `Mmap.mmap` when it reads the data back in. So you can read 
> terabyte-sized arrays.
>

[Not clear on mmap.. is it just a possibility, or kind of a requirement when 
arrays are this big?]

Good to know, I have another thread on array sizes (and 2 GB limit).

You mean you could read terabyte-sized arrays, not that it's common or that 
you know of it being done for 1D arrays?

[Would that be arrays of big structs? Fewer than 2G, e.g. 32-bit index 
would do?]

I'm not at all worried for 2D (or more dimensions).



[julia-users] return Bool [and casting to Bool]

2016-10-14 Thread Páll Haraldsson

It's great to see explicit return types in the language as of 0.5.

About return [or its implicit nature; I only read about half, as it's very 
long..]:

https://groups.google.com/forum/#!topic/julia-users/4RVR8qQDrUg


Disallowing implicit return would be a breaking change.


Is there some room for adding special handling of [return] true or [return] 
false to the language?

I'm kind of worried that if anyone changes code, you get a Union.

Since Bool[eans] are so fundamental to computing, it seems you should be 
able to return them with no possibility of error.


In general for Bool, and especially for return, it seems not good that 0.0 
(and 0, etc.) gets cast to it (while at least e.g. casting Strings fails).

[In the unlikely situation that this is really wanted.. the new return type 
syntax opens up the possibility of an explicit Union return type.]
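
[A sketch of what worries me (0.5 syntax): the declared return type converts 
the value, so an accidental integer return silently becomes a Bool instead of 
an error or a Union:

function isodd_ish(x)::Bool
    return x % 2            # 0 or 1; converted by the declared ::Bool
end

isodd_ish(3)                # true
isodd_ish(4)                # false
isodd_ish(3.5)              # InexactError: 1.5 does not convert to Bool
]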


[Trivia: the Go language has a kind of union of the error code and the 
intended return type, instead of C's error model or exception handling. I'm 
deeply skeptical of this, but a Union would almost allow that.. or a 
GoTypeError type..]



Re: [julia-users] ANN: RawArray.jl

2016-10-13 Thread Páll Haraldsson

[Explaining more, and correcting typo..]

On Sunday, October 9, 2016 at 12:04:35 AM UTC, Páll Haraldsson wrote:

> FLIF is not a replacement for all uses (multidimensional, would be 
> interesting to know if could to be extended to..), but seem to be the best 
> option for non-lossy image compression:
>

Of course, a three-dimensional array could trivially be stored as 
concatenated FLIF files/bytestreams of 2D slices.

But I really meant: could there be a way to compress the whole array/"image", 
similar to how wavelets (and the FFT) can be generalized to more dimensions?


[julia-users] Re: canonical binary representation for Array

2016-10-13 Thread Páll Haraldsson

RawArray.jl (or alternatives..) may be what you need; at least the 
discussion there is helpful/informative (and looking at the code may tell you 
what you need to know):

https://groups.google.com/forum/#!searchin/julia-users/rawarray%7Csort:relevance/julia-users/ulkiPhGcv-0/TqyX8g9LBwAJ

On Thursday, October 13, 2016 at 6:24:27 AM UTC, Michele Zaffalon wrote:
>
> I need to write a 4 dimensional a array to file and use
>
> write(f, a).
>

Do you need that for sure?
 

> What is the canonical binary representation of a 
> ? 
> It looks like the the line above is equivalent to
>
 

> Is the canonical binary representation going to be machine and OS 
> independent
>

Why shouldn't it be?
 

> (except for the endianness)? What about reshape?
> I am porting code from MATLAB and the specs for the file format are 
> defined by the MATLAB code implementation.
>

I'm not sure I'm answering your question, but I'm pretty sure DenseArrays are 
packed in memory as tightly as possible (not sure if your values would be 
stored as three bytes, or padded to four):

julia> write(STDOUT, [1 2])
16

julia> write(STDOUT, [0x1 0x2])
2

I'm pretty sure multidimensional changes nothing; you get Julia order (no 
gaps), i.e. column-major order, wasn't it?

Complex numbers are a problem for some reason I didn't look into (see 
thread); RawArray handles them, but HDF5 (strangely) doesn't.

Pointers in arrays would be dangerous..
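
[A round-trip sketch of what I mean (my own, 0.5-era API): the raw bytes can 
be read back into a preallocated array of the same element type and shape, on 
any machine with the same endianness:

a = rand(Float64, 3, 4, 2, 2)
open("a.bin", "w") do io
    write(io, a)
end

b = Array{Float64}(3, 4, 2, 2)     # uninitialized, same shape
open("a.bin") do io
    read!(io, b)
end
a == b                             # true
]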


In the docs, you point to:

You can write multiple values with the same write call, i.e. the following 
are equivalent:

write(stream, x, y...)
write(stream, x) + write(stream, y...)


Is this for sure true? Not:

write(stream, x); write(stream, y...)



I like looking up directly what Julia does, e.g.:
@edit prod([]) # I do first: ENV["EDITOR"] = "vi"

in your case seems to be:
prod(f::Callable, a) = mapreduce(f, *, a)

* Why do I find SparseArrays (only plural), which I think you do not care 
about, but DenseArray in the singular, as expected?

help?> Sparse


[julia-users] Re: How do I move an array of strings from C to Julia?

2016-10-13 Thread Páll Haraldsson

I find it likely that this supports my point (there could be other reasons.. 
but the Java interop is built on JNI, and ultimately on ccall).

http://juliainterop.github.io/JavaCall.jl/
"Multidimensional arrays, either as arguments or return values are not 
supported. Since Java uses Array-of-Arrays, unlike Julia's true 
multidimensional arrays, supporting them is non-trivial, though certainly 
feasible."

Your case is an array (one allocation) of strings (many allocations), just 
like how Java allocates "2D" arrays (it only really supports 1D). That is 
unlike 2D (or 3D etc.) arrays, which are allocated in Julia and C/C++ (and 
some other languages: Fortran, Haskell(?)) as one allocation, i.e. true 
multidimensional arrays. Julia does them better than C, however, for other 
reasons.

On Thursday, October 13, 2016 at 12:39:21 PM UTC, Páll Haraldsson wrote:
>
> On Wednesday, October 12, 2016 at 4:19:17 PM UTC, Michael Eastwood wrote:
>>
>> So what is the best way to get an array of strings from C back into 
>> Julia? I want Julia to take ownership of the memory. Do I need to query the 
>> length of each string ahead of time?
>>
>
> I'm kind of guessing here, but I know Julia can take over A memory 
> allocation by C, e.g. if you get back a pointer to a [C] string.
>
> I doubt you can take over an array of pointers [to heap allocations] (or 
> just A pointer to a pointer). Julia would have to delete that array (ok), 
> but [first] go through it and delete what it points to (and recursively if 
> that applies).
>
> It would be cool, if Julia can do that, I just very much doubt it. I guess 
> you'll have to let C deallocate, and use a callback, or provide all the 
> strings.
>
> About the length of the strings however, Julia has a Cstring type that 
> assumes 0 (NUL) termination, and you do not have to worry about that (Julia 
> takes care of "length" issues then if needed). [You could also take over 
> other types of string]; free doesn't really care about the length of the 
> strings, allocations can be somewhat bigger, and it takes care of the 
> padding too.
>
> Hope this helps and I'm not telling you that Julia can do less than it 
> actually does (or you think it does)..
>
>

[julia-users] Re: How do I move an array of strings from C to Julia?

2016-10-13 Thread Páll Haraldsson
On Wednesday, October 12, 2016 at 4:19:17 PM UTC, Michael Eastwood wrote:
>
> So what is the best way to get an array of strings from C back into Julia? 
> I want Julia to take ownership of the memory. Do I need to query the length 
> of each string ahead of time?
>

I'm kind of guessing here, but I know Julia can take over A memory 
allocation by C, e.g. if you get back a pointer to a [C] string.

I doubt you can take over an array of pointers [to heap allocations] (or 
just A pointer to a pointer). Julia would have to delete that array (ok), 
but [first] go through it and delete what it points to (and recursively if 
that applies).

It would be cool, if Julia can do that, I just very much doubt it. I guess 
you'll have to let C deallocate, and use a callback, or provide all the 
strings.

About the length of the strings however, Julia has a Cstring type that 
assumes 0 (NUL) termination, and you do not have to worry about that (Julia 
takes care of "length" issues then if needed). [You could also take over 
other types of string]; free doesn't really care about the length of the 
strings, allocations can be somewhat bigger, and it takes care of the 
padding too.
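
[The pattern I'd try, purely as a sketch; the library and function names here 
are made up, not a real API:

# Hypothetical C API: get_strings returns a char** plus its length via an out
# parameter; free_strings releases everything the C side allocated.
n = Ref{Cint}(0)
p = ccall((:get_strings, "libfoo"), Ptr{Ptr{UInt8}}, (Ref{Cint},), n)

cstrs = unsafe_wrap(Array, p, n[])                  # a view of the C pointer array, no copy
strs  = [unsafe_string(cstrs[i]) for i in 1:n[]]    # copy each NUL-terminated string into Julia

ccall((:free_strings, "libfoo"), Void, (Ptr{Ptr{UInt8}}, Cint), p, n[])   # let C free its own memory
]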

Hope this helps and I'm not telling you that Julia can do less than it 
actually does (or you think it does)..



Re: [julia-users] Re: Julia and the Tower of Babel

2016-10-13 Thread Páll Haraldsson
On Sunday, October 9, 2016 at 9:59:12 AM UTC, Michael Borregaard wrote:
>
>
> So when I came to julia I was struck by how structured the package 
> ecosystem appears to be, yet, in spite of the micropackaging. [..] I think 
> there are a number of reasons for this difference, but I also believe that 
> a primary reason is the reliance on github for developing the package 
> ecosystem from the bottom up, and the use of organizations.
>

Could be; my feeling is that Julia allows for better

https://en.wikipedia.org/wiki/Separation_of_concerns [the term "was probably 
coined by Edsger W. Dijkstra in his 1974 paper 'On the role of scientific 
thought'"; a synonym for "modularity"?]

than other languages do. OO (and information hiding) has been credited as 
helping with it, but my feeling is that multiple dispatch is even better for 
it.


That is, leads to low:

https://en.wikipedia.org/wiki/Coupling_(computer_programming)
"Coupling is usually contrasted with cohesion . Low 
coupling  often correlates 
with high cohesion, and vice versa. Low coupling is often a sign of a 
well-structured computer system  
and a good design"


https://en.wikipedia.org/wiki/Cohesion_(computer_science)

Now, as an outsider looking in, e.g. on:

https://en.wikipedia.org/wiki/Automatic_differentiation

There seems to be lots of redundant packages with e.g.

https://github.com/denizyuret/AutoGrad.jl


Maybe it's just my limited math skills showing: are there subtle 
differences explaining or requiring all these packages?

Do you expect some/many packages to just die?

One solution to many similar packages is a:

https://en.wikipedia.org/wiki/Facade_pattern

e.g. Plots.jl and then backends (you may care less about(?)).


Not sure when you use all these similar (or complementary?) packages 
together.. if it applies.


In my other answer I misquoted (making clear here that the original user's 
comment is quoting the documentation):

Style Insensitive?
https://github.com/nim-lang/Nim/issues/521
>Nimrod is a style-insensitive language. This means that it is not 
case-sensitive and even underscores are ignored: type is a reserved word, 
and so is TYPE or T_Y_P_E. The idea behind this is that this allows 
programmers to use their own preferred spelling style and libraries written 
by different programmers cannot use incompatible conventions. [..]

Please *rethink* about that or at least give us an option to disable both: case 
insensitive and also underscore ignored

[another user]:

Also a consistent style for code bases is VASTLY overrated, in fact I 
almost never had the luxury of it and yet it was never a problem."


[julia-users] Re: Julia and the Tower of Babel

2016-10-13 Thread Páll Haraldsson
On Friday, October 7, 2016 at 3:35:46 PM UTC, Gabriel Gellner wrote: 

> `atol/rtol` versus
>

 

> `abstol/reltol` versus `abs_tol/rel_tol`
>

For the latter "versus" at least (and other examples), this would be solved 
by style-insensitivity, as in Nimrod (or Nim) language, the only one I've 
heard that does this; not sure of status of it, maybe they dropped it with 
the name-change).

I hesitated to propose this for Julia when I first discovered it; I am/was 
conflicted. I thought this would break code, as it's a breaking change, but 
it would in fact help(?)

This could in theory be done with a macro(?)



Style Insensitive?
https://github.com/nim-lang/Nim/issues/521
"Nimrod is a style-insensitive language. This means that it is not 
case-sensitive and even underscores are ignored: type is a reserved word, 
and so is TYPE or T_Y_P_E. The idea behind this is that this allows 
programmers to use their own preferred spelling style and libraries written 
by different programmers cannot use incompatible conventions.

Please *rethink* about that or at least give us an option to disable both: case 
insensitive and also underscore ignored

[another user]:

Also a consistent style for code bases is VASTLY overrated, in fact I 
almost never had the luxury of it and yet it was never a problem."


(Trivia on Nim[rod], D and upcoming(?) C++ below; I was just looking up the 
hard-to-find info above..)

http://nim-lang.org/docs/nep1.html
Naming Conventions 

* Type identifiers should be in PascalCase. All other identifiers should be 
in camelCase with the exception of constants which *may* use PascalCase but 
are not required to.

[..]

For constants coming from a C/C++ wrapper, ALL_UPPERCASE are allowed, but 
ugly. (Why shout CONSTANT? Constants do no harm, variables do!)



http://nim-lang.org/


* A fast *non-tracing* garbage collector that supports soft real-time 
systems (like games). 
* System programming features: Ability to manage your own memory and 
access the hardware directly. Pointers to garbage collected memory are 
distinguished from pointers to manually managed memory.
[..]
* Macros can modify the abstract syntax tree at compile time.
[..] 
* Macros cannot change Nim's syntax because there is no need for it. 
Nim's syntax is flexible enough. 
* Statements are grouped by indentation but can span multiple lines. 
Indentation must not contain tabulators so the compiler always sees the 
code the same way as you do.

https://en.wikipedia.org/wiki/Nim_(programming_language)

*"Nim* (formerly named *Nimrod*)
[..]
Language designInfluenced by[..]
Lisp : Macro system, embrace the AST, homoiconicity
[..]


UFCS, a feature supported by Nim" [and D]:


https://en.wikipedia.org/wiki/Uniform_Function_Call_Syntax


"It has been proposed (as of 2016) for addition to C++ by Bjarne Stroustrup 
[3] 

 
and Herb Sutter , to reduce the 
ambiguous decision between

[..]

// All the followings are correct and equivalent
int b = first(a);
int c = a.first();
int d = a.first;
"



[julia-users] Re: What is really "big data" for Julia (or otherwise), 1D or multi-dimensional?

2016-10-13 Thread Páll Haraldsson
On Thursday, October 13, 2016 at 1:17:32 AM UTC, cdm wrote:
>
>
> do you have traditional main memory RAM in mind here ... ?
>

Yes (how big arrays people are working with; but also if bigger files, how 
big), and no:
 

> with flash memory facilitating tremendous advances
> in (near) in-memory processing, the lines between
> traditional RAM and flash memory have become
> considerably blurred.
>

I know, and I guess the distinction will disappear in the future (but yes, 
I'm thinking of what you need to address as RAM, or what looks like RAM, 
including virtual memory), at least with:

https://en.wikipedia.org/wiki/Resistive_random-access_memory

[already available] etc.

I'm thinking about how big pointers need to be; e.g. 64-bit seems to be 
overkill.. or should I say indexes into arrays need not be that big.

Yes, there's also memory mapped I/O.

We had 64-bit file systems before 64-bit [x86] CPUs, so the bitness of the 
CPU doesn't (didn't; yes, it's better(?) for memory-mapped I/O..) have to 
align with big files (and we already have 128-bit ZFS, with pools up to 2^78 
bytes, but individual files are still limited to 64-bit).

>
> ~ cdm
>
>
> On Wednesday, October 12, 2016 at 3:23:58 PM UTC-7, Páll Haraldsson wrote:
>>
>>
>> I'm most concerned, about how much needs to fit in *RAM*, and curious 
>> what is considered big, in RAM (or not..).
>>
>>

[julia-users] What is really "big data" for Julia (or otherwise), 1D or multi-dimensional?

2016-10-12 Thread Páll Haraldsson

I'm thinking of a new algorithm for Julia..

I'm most concerned, about how much needs to fit in *RAM*, and curious what 
is considered big, in RAM (or not..).

A.
For 2D (or more), dense or sparse (including non-square), is at most 2 
billion for the highest dimension a big limit? Note that for a square dense 
array you can't get more than 8.4 million × 8.4 million (with 2015/2016-era 
x86 CPUs, as address buses are capped at 46 bits; a theoretical 4 billion × 4 
billion could fit if full 64-bit addressing were available) to fit in RAM at 
one byte per entry.. and in practice much lower, limited by actual RAM..
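
[A quick check of those numbers in the REPL:

addressable = 2^46                      # 70_368_744_177_664 bytes, about 70 TB
side = floor(Int, sqrt(addressable))    # 8_388_608, i.e. ~8.4 million
side^2 <= addressable                   # true; 4 billion × 4 billion would need full 64-bit addressing
]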


I see, however, a map-reduce way:

http://infolab.stanford.edu/~ullman/mmds/book.pdf
2.6.7 Case Study: Matrix Multiplication


Would that use much less RAM? At any point?


B.

I'm aware of billion-row tables, but you usually query them (or kind of 
"stream" them); how much would it be limiting to fit in RAM? Would 2 GB (or 
say 8 or 16 GB) be limiting?


https://books.google.is/books?id=BKEoDAAAQBAJ&pg=PA145&lpg=PA145&dq=big+one+dimensional+dataset&source=bl&ots=qkbpp3Ks_T&sig=ewWSbdVp8MUhQHjMqMWfnQh4Rfs&hl=en&sa=X&redir_esc=y#v=onepage&q=big%20one%20dimensional%20dataset&f=false


Three billion DNA base pairs seem to blow the 2 GB limit, but not if you 
need less than one byte per base. I also doubt all chromosomes would be kept 
in the same array.

Can't imagine 2 GB being limiting for UTF-8..



[julia-users] Julia Unicode (UTF-8) support (vs. Perl..); Also includes humourous, educational, list (part of, adviced to read all if you program [in Perl]..)

2016-10-12 Thread Páll Haraldsson



I'm aware of the UTF-8-only strings in Julia 0.5 and LegacyStrings.jl (and 
some of the proposed changes in 0.6, still I think only for basic UTF-8 
support, not full Unicode, e.g. collation).


[What/which language would have gold-standard Unicode (UTF-8) support, if 
not Perl; Rust (or Go)? Julia? Python? Other?]


I'm hoping there will never be a huge boilerplate header needed for good 
Unicode support, as in Perl (I was under the mistaken impression that Perl 
had good Unicode support; still might be the gold-standard for Unicode (and 
regex and string handling in general) support). At worst, if needed, then:


using ICU # any other needed? Maybe:


https://github.com/nolta/UnicodeExtras.jl



See list at the bottom (or full answer at stackoverflow), at least for 
education, on the can-of-worms that is full Unicode (UTF-8) support.



http://iaindunning.com/blog/julia-unicode.html

"The Julia  programming language has excellent 
support for Unicode."


For sure? If not, what is needed the most?
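
A few things are at least easy to check from the REPL (my own sketch; it says 
nothing about collation):

s  = "noël"                        # 'ë' as one code point, U+00EB
length(s), sizeof(s)               # (4, 5): characters vs. UTF-8 bytes
s2 = "noe\u0308l"                  # same text, 'e' plus combining diaeresis
s == s2                            # false: comparison is on code units, no normalization
normalize_string(s2, :NFC) == s    # true after canonical (NFC) normalization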



https://github.com/JuliaLang/julia/issues/774

"Titlecase info is provided by UTF8proc, but it would be nice to have a 
little wrapper routine like utf8proc_uppercase to make it easier to access."


E.g. titlecase (see below) was interesting to me: that there is a third case, 
and that numbers can be upper- and lowercase(?), or does he mean sub- or 
superscript? I knew some of what I quote below, but note the full list 
includes more of the non-obscure issues.



There are some other optional Unicode packages, at least what I'm aware of:


https://github.com/randy3k/UnicodeCompletion



http://stackoverflow.com/questions/6162484/why-does-modern-perl-avoid-utf-8-by-default


🌴 🐪🐫🐪🐫🐪 🌞 *𝕲𝖔  𝕿𝖍𝖔𝖚  𝖆𝖓𝖉  𝕯𝖔  𝕷𝖎𝖐𝖊𝖜𝖎𝖘𝖊*
 🌞 🐪🐫🐪 🐁 
--
𝓔𝓭𝓲𝓽 :  𝙎𝙞𝙢𝙥𝙡𝙚𝙨𝙩 *℞*:  𝟕 
𝘿𝙞𝙨𝙘𝙧𝙚𝙩𝙚  𝙍𝙚𝙘𝙤𝙢𝙢𝙚𝙣𝙙𝙖𝙩𝙞𝙤𝙣𝙨


[Skipped list, that is for Perl; *What would be similar for Julia 0.5?*]


🎅𝕹 𝖔   𝕸 𝖆 𝖌 𝖎 𝖈   𝕭 𝖚 𝖑 𝖑 𝖊 𝖙   🎅 

Saying that “Perl should [*somehow!*] enable Unicode by default” doesn’t 
even start to begin to think about getting around to saying enough to be 
even marginally useful in some sort of rare and isolated case. Unicode is 
much much more than just a larger character repertoire; it’s also how those 
characters all interact in many, many ways.


Even the simple-minded minimal measures that (some) people seem to think 
they want are guaranteed to miserably break millions of lines of code, code 
that has no chance to “upgrade” to your spiffy new *Brave New World* 
modernity.[..]


💡   𝕴𝖉𝖊𝖆𝖘   𝖋𝖔𝖗  𝖆   𝖀𝖓𝖎𝖈𝖔𝖉𝖊 ⸗ 𝕬𝖜𝖆𝖗𝖊   🐪   
𝕷𝖆𝖚𝖓𝖉𝖗𝖞 𝕷𝖎𝖘𝖙   💡 

At a minimum, here are some things that would appear to be required for 🐪 
to “enable Unicode by default”, as you put it:


[24-item list; again Perl-specific. Some/all(?) apply to Julia, at least 
translated]


11. String comparisons in 🐪 using eq, ne, lc, cmp, sort, &c&cc are always 
wrong. So instead of @a = sort @b, you need @a = 
Unicode::Collate->new->sort(@b). Might as well add that to your export 
PERL5OPTS=-MUnicode::Collate. You can cache the key for binary comparisons.


💩 𝔸 𝕤 𝕤 𝕦 𝕞 𝕖   𝔹 𝕣 𝕠 𝕜 𝕖 𝕟 𝕟 𝕖 𝕤 𝕤 💩 

And that’s not all. There are million broken assumptions that people make 
about Unicode. Until they understand these things, their 🐪 code will be 
broken.


[Applies to Julia and all other languages]


4. Code that assumes Perl uses UTF‑8 internally is wrong.

6. Code that assumes Perl code points are limited to 0x10_FFFF is wrong.

9. Code that assumes every lowercase code point has a distinct uppercase 
one, or vice versa, is broken. For example, "ª" is a lowercase letter with 
no uppercase; whereas both "ᵃ" and "ᴬ" are letters, but they are not 
lowercase letters; however, they are both lowercase code points without 
corresponding uppercase versions. Got that? They are *not* 
\p{Lowercase_Letter}, despite being both \p{Letter} and \p{Lowercase}.

10. Code that assumes changing the case doesn’t change the length of the 
string is broken.

11. Code that assumes there are only two cases is broken. There’s also 
titlecase.

12. Code that assumes only letters have case is broken. Beyond just 
letters, it turns out that numbers, symbols, and even marks have case. In 
fact, changing the case can even make something change its main general 
category, like a \p{Mark} turning into a \p{Letter}. It can also make it 
switch from one script to another.

14. Code that assumes Unicode gives a fig about POSIX locales is broken.

15. Code that assumes you can remove diacritics to get at base ASCII 
letters is evil, still, broken, brain-damaged, wrong, and justification for 
capital punishment.

26. Code that assumes that it cannot use "\x{}" is wrong.

28. Code that transcodes from UTF‐16 or UTF‐32 with leading BOMs into UTF‐8 
is broken if it puts a BOM at the start of the resulting UTF-8. This is so 
stupid the engineer should have their eyelids removed.

29. Code that assumes the CESU-8 is a valid UTF encoding is wrong. 
Likewise,

[julia-users] Re: New SPEC - open to Julia[applications]?

2016-10-10 Thread Páll Haraldsson
On Monday, October 10, 2016 at 2:09:01 PM UTC, Páll Haraldsson wrote:

> In case they update again.. and allow Julia later
>

This could happen; there's a precedent for one "dynamic" language (which 
seems to be no longer in later versions of SPEC):


https://www.spec.org/cpu2000/CINT2000/253.perlbmk/docs/253.perlbmk.html
"253.perlbmk is a cut-down version of Perl v5.005_03, the popular scripting 
language. SPEC's version of Perl has had most of OS-specific features 
removed. In addition to the core Perl interpreter, several third-party 
modules are used: MD5 v1.7, MHonArc v2.3.3, IO-stringy v1.205, MailTools 
v1.11, TimeDate v1.08"

I only knew of C and Fortran in that version of the benchmark, and C++ in 
later ones.

https://www.spec.org/cpu2006/CFP2006/

Fortran is still in..

Julia is the new Fortran to me..



[julia-users] Re: New SPEC - open to Julia[applications]?

2016-10-10 Thread Páll Haraldsson
On Saturday, October 8, 2016 at 7:31:47 PM UTC, Chris Rackauckas wrote:
>
> From your second link:
>
>
>>- Submissions for the first step in the search program will be 
>>accepted by SPEC beginning 11 November 2008 and ending 30 June 2010 
>> (11:59 
>>pm, Pacific Standard Time).
>>
>> Where's the time-machine when you need one. :-) I could use one now.. I 
still could/should delete these posts, just not from your memory..


In case they update again.. and allow Julia later, or if you want to support 
the systems they require support on:

https://www.spec.org/cpu2006/Docs/system-requirements.html
"You will need a computer system running UNIX, Microsoft Windows, or Mac OS 
X. Pre-compiled versions of the toolset are provided that are expected to 
work with: [List of some UNIX variants, unclear if support for all is 
needed or only any one, then macOS is supported.. Julia could theoretically 
support all]

[..] one unsupported toolset is provided as a courtesy. 
   
   - Alpha Tru64 Unix V5.1B or later"


older info:

https://www.spec.org/cpu2006/Docs/changes-in-v1.2.html
"II.B. Unsupported toolsets: BSD dropped; Alpha updated. 
   
   - 
   
   SPEC CPU2006 V1.1 provided unsupported tools built on BSD. These 
   toolsets are not present in SPEC CPU2006 V1.2.
   - 
   
   SPEC CPU2006 V1.1 provided tools built on Digital Unix V4.0F. For SPEC 
   CPU2006 V1.2, the tools have been rebuilt on Tru64 Unix V5.1B."
   
Seems backwards, with Alpha discontinued as of April 2007 (and "SPEC Ships 
V1.1  (06/03/2008)")


but:

https://en.wikipedia.org/wiki/Tru64_UNIX
"supported until December 2012"


BSD still lives on..



Re: [julia-users] ANN: RawArray.jl

2016-10-08 Thread Páll Haraldsson


On Monday, September 26, 2016 at 1:59:15 PM UTC, David Smith wrote:
>
> Hi, Isaiah. This is a valid question.
>
> 0. As a preface, I'd like to say I'm not trying to replace anything. I 
> wrote RawArray to solve a problem we have in magnetic resonance imaging 
> (quickly saving and loading large complex float arrays), and then I decided 
> to share it so if other people like it and find it useful, then cool beans.
>
> Now for the mild stumping...
>
> 1. I don't think NRRD is as substantially used as you might think. I've 
> worked in imaging science for years on the data processing/file format end, 
> and I've never seen anyone use it, and I've never even heard of it.  (Pity, 
> because it looks nice enough. :-\)
>
> 2. RawArray is simpler to handle and trivial to understand. I believe all 
> you need from an I/O library is I/O.* I don't want my file I/O library 
> performing transformations on my data. 
>
> I also don't need it to read image formats. Part of the reason behind 
> RawArray is to avoid standard image formats because they are not optimized 
> for large complex-float arrays. I just want to save multi-GB data arrays to 
> disk quickly and read them back quickly on a different machine, five years 
> later. 
>
> I have other implementations (https://github.com/davidssmith/ra), and all 
> are super short and platform agnostic.
>
> 3. RawArray is surely faster. All it does is read. It doesn't perform any 
> transformations or encoding, so it can't possibly be slower than NRRD.
>

Maybe not compared to NRRD, but it can be slower than lossless image 
compression.

I did read (short.. good):
https://github.com/davidssmith/ra/blob/master/doc/ra-sedona-abstract.pdf

https://en.wikipedia.org/wiki/Free_Lossless_Image_Format

FLIF is not a replacement for all uses (multidimensional; it would be 
interesting to know if it could be extended to that..), but it seems to be 
the best option for non-lossy image compression:

http://flif.info/index.html
"
53% smaller than lossless JPEG 2000 compression,
74% smaller than lossless JPEG XR compression.

Even if the best image format was picked out of PNG, JPEG 2000, WebP or BPG 
for a given image corpus, depending on the type of images (photograph, line 
art, 8 bit or higher bit depth, etc), then FLIF still beats that by 12% on 
a median corpus 
[..]
FLIF does away with knowing what image format performs the best at any 
given task.
[..]
Other lossless formats also support progressive decoding (e.g. PNG with 
Adam7 interlacing), but FLIF is better at it. Here is a simple 
demonstration video, which shows an image as it is slowly being downloaded:
[..]
No patents, Free

Unlike some other image formats (e.g. BPG and JPEG 2000), FLIF is 
completely royalty-free and it is not known to be encumbered by software 
patents. At least as far as we know. FLIF is uses arithmetic coding, just 
like FFV1 (which inspired FLIF), but as far as we know, all patents related 
to arithmetic coding are expired. Other than that, we do not think FLIF 
uses any techniques on which patents are claimed. However, we are not 
lawyers. There are a stunning number of software patents, some of which are 
very broad and vague; it is impossible to read them all, let alone 
guarantee that nobody will ever claim part of FLIF to be covered by some 
patent. All we know is that we did not knowingly use any technique which is 
(still) patented, and we did not patent FLIF ourselves either.

The reference implementation of FLIF is Free Software. It is released 
under the terms of the GNU Lesser General Public License (LGPL), version 3 
or any later version.
[..]
The reference FLIF decoder is also available as a shared library, 
released under the more permissive (non-copyleft) terms of the Apache 2.0 
license. Public domain example code is available to illustrate how to use 
the decoder library.

Moreover, the reference implementation is available free of charge 
(gratis) under these terms.
[..]
FLIF currently has the following features:

Lossless compression
Lossy compression (encoder preprocessing option, format itself is 
lossless so no generation loss)
Greyscale, RGB, RGBA (also palette and color-bucket modes)
Color depth: up to 16 bits per channel (high bit depth)"

-- 
Palli.

> There is a C library at (https://github.com/davidssmith/ra) if you think a 
> pure Julia implementation isn't fast enough. 
>
> Cheers,
> Dave
>
> [*] That said, I'm not completely ruling out having transformations 
> available in RawArray between the RAM and disk. For example, when I first 
> wrote it, I had included Blosc compression as an option, signaled by a flag 
> in the header. But in general most transformations are best made in RAM 
> after reading or on disk with already existing, battle-proven tools, such 
> as gzip, uunencode, tar, etc. 
>
>
> On Sunday, September 25, 2016 at 9:59:45 PM UTC-5, Isaiah wrote:
>>
>> Is there a reason to use this file format over NRRD [1]? To borrow a w

[julia-users] New SPEC - open to Julia[applications]?

2016-10-08 Thread Páll Haraldsson

https://www.spec.org/cpuv6/

It would be cool (and good publicity) if Julia made it into SPEC version 6. 
Anyway, it might be of interest to people here.

SPEC used C or Fortran last I looked; I see only references to "languages", 
"C/C++" and "portable":


https://www.spec.org/cpuv6/
"SPEC holds to the principle that better benchmarks can be developed from 
actual applications. With this in mind, SPEC is once again seeking to 
encourage those outside of SPEC to assist us in locating applications that 
could be used in the next CPU-intensive benchmark suite, currently under 
development within SPEC and currently designated as SPEC CPUv6.[..]

Portable or can be ported to multiple hardware architectures and operating 
systems with reasonable effort 


For C/C++ programs:
[..]
for the main routine, take one of these two forms


[..]
the programming(s) language used in the program/application and approximate 
lines of code, 

[..]
Step 4: Complete Code Testing and Benchmark Infrastructure ($1000 upon 
successful completion) 
[..]
SPEC always prefers to use code that conforms to the relevant language 
standards.

[..]
Step 6: Acceptance into the CPU Suite ($2500 if accepted)

If the program/application is recommended to and is accepted by the Open 
Systems Group, in its sole discretion, then the program/application is 
included in the suite and the Submitter will receive $2500 and a license 
for the suite when it is released."


[julia-users] Re: Linux distributions with Julia >= 0.5.0

2016-10-08 Thread Páll Haraldsson
On Saturday, October 8, 2016 at 10:23:00 AM UTC, Femto Trader wrote:
>
> Hello,
>
> my main development environment is under Mac OS X
> but I'm looking for a Linux distribution (that I will run under VirtualBox)
> that have Julia 0.5.0 support (out of the box)
>
> Even Debian Sid is 0.4.7 (October 8th, 2016)
> https://packages.debian.org/fr/sid/julia
>
> So what Linux distribution should I use to simply test my packages with 
> Julia >=0.5.0 ?
>

[I use Ubuntu 16.04, the latest officially supported one, and an LTS one 
(16.10 should be around the corner, but not LTS).]


I believe the Julia binary executables at the download page are recommended 
and the only ones they want officially supported.


That said, there are Ubuntu PPAs:

https://launchpad.net/~staticfloat/+archive/ubuntu/juliareleases

[I used this, it's really simple to install]

https://launchpad.net/ubuntu/+ppas?name_filter=Julia


It was great to see Ubuntu binaries maintained, and I see that julia 0.5 
was actually supported with a PPA (I had previously used a PPA with 0.4.x), 
so I installed that.

I *think* that there is really no downside to using the PPA vs. the official 
binaries. They might get updated; and say the maintainer doesn't keep up (it's 
not like they are getting paid), then you are no worse off than with the 
official download, where you would always have to manually download a new 
minor or major version anyway.


I actually have an update waiting..:

Changes for julia versions:
Installed version: 0.5.0-xenial1
Available version: 0.5.0-xenial5

Not sure what it entails.

The download page will have 0.5.1 before the PPA. In my case I'm not 
worried.. and I guess the PPA will follow soon.


[julia-users] Re: julz: a command line utility for creating scalable Julia applications

2016-10-06 Thread Páll Haraldsson
On Thursday, October 6, 2016 at 2:12:21 PM UTC, Dan Segal wrote:
>
> *Code*
>
>- github repo: https://github.com/djsegal/julz
>- (see readme.rst as well as included dummy Julia project)
>
> *Summary*
>
> Julz is a command line utility for creating ambitious Julia applications.
>

I see you changed "ambitious Julia applications" to "scalable Julia 
applications", do you need to add "web"?:


Maybe it's just me (thinking of ember-cli, and ember.js's "ambitious *web* 
applications"; I'm not really familiar with either..) but:

 In this way it channels Ruby on Rails’ mantra of “convention over 
> configuration”: tell people where files should go, but allow them to tweak 
> it if they so desire.
>

implied to me "web applications" (maybe nowadays there isn't any other 
kind..?).


It's getting difficult to choose a Julia web application framework with so 
many choices; is this competing with any of them, or can you use it with them, 
and maybe not just for the web?


I see the project is Python (and Ruby ("Gemfile")?! and maybe Julia?) code. 
Maybe that is why:

I see: "Change from 4 spaces to 2 spaces :P"

[which conflicts with Julia's CONTRIBUTING.md recommendation of 4 spaces, and, 
if I recall, Julia style guides elsewhere..]

-- 
Palli.



Re: [julia-users] REPL sometimes really slow in 0.5

2016-10-06 Thread Páll Haraldsson
On Thursday, October 6, 2016 at 1:52:50 PM UTC, Tom Breloff wrote:
>
> The first thing I'd check is whether you're using any swap (next time you 
> notice the slowness)... that will bring a system to its knees instantly.
>

Thanks, I know, that has happened, but this didn't seem to fit that pattern 
(can't rule it out as a possibility, though):

KiB Mem : 16331292 total,  3317048 free, 11967844 used,  1046400 buff/cache
KiB Swap: 32226300 total, 26103780 free,  6122520 used.  3758560 avail Mem 

[I realize now, before I had SSD (and swap on it), not on the recent work 
machine I got..]


> On Thu, Oct 6, 2016 at 9:29 AM, Páll Haraldsson wrote:
>
>> Has anyone noticed, for such simple stuff as:
>>
>> a=[1,2,3,1]
>>
>> [something like 10 seconds, but yes, instantly after closing Julia and 
>> opening again.]
>>
>> just now in Version 0.5.0-rc4+0
>>
>>
>> Yes, I do have something else running (a web browser, Firefox, always at 
>> 100% CPU..).
>>
>> I ignored this the first time I noticed, as I thought CPU load was the cause. 
>> Julia and Firefox have the same priority, the default 20 in Linux.
>>
>> I didn't notice this before in 0.4. This could in theory be newer Firefox 
>> (pre-"electrolysis", w/new "Web Content" process). Still doubt that, as 
>> Linux SHOULD multitask well and while the browser is often problematic, it 
>> usually doesn't slow down other non-GUI stuff at least..
>>
>> -- 
>> Palli.
>>
>>
>

Re: [julia-users] Julia vs Seed7, why languages succeed or fail

2016-10-06 Thread Páll Haraldsson
You both gave excellent responses; I'm just commenting on them in case 
people find this interesting and/or want to compare and contrast with Seed7 or 
other languages.

Can anyone look at Julia (and Seed7) at:
https://en.wikipedia.org/wiki/Comparison_of_programming_languages

confirm the info is correct, add (or correct) if something is missing (or 
propose it here or at Wikipedia's "Talk page", if you think 
conflict-of-interest applies; it shouldn't for clear-cut correct info).


In case people like to compare Seed7 to Julia (or Java):
http://seed7.sourceforge.net/faq.htm#java_comparison

"All parameters are call-by-value"

"Arrays are present in many programming languages, but they are usually 
hard-coded into the compiler / interpreter. Seed7 does not follow this 
direction. Instead it introduces abstract data types as common concept 
behind arrays, structs, hashes and other types."

[I knew Dylan was slow and failed despite multiple dispatch (and Common 
Lisp, the only other MD language I was familiar with, is, well, Lisp, with 
lists, not arrays, as the main data structure; not modern/fast/good enough for 
at least Julia's audience, and not just for that reason).]

On Thursday, October 6, 2016 at 6:57:26 AM UTC, Tamas Papp wrote:

> Also, this is hard to accept, but languages succeed and fail partly for 
> random or trivial reasons.


Very true..
 

> Timing also matters: a language may fail 
> because the existing technology is not yet ready for it (think of 
> implementing Julia before LLVM), or succeed because they are the first 
> language to scratch a particular itch,


What I was thinking of with Seed7: there are so many similarities (Julia 
seemed redundant with it, but I also see differences when looking more 
closely) that maybe it was ahead of its time; maybe the speed wasn't up to 
par and it needed LLVM(?), while there was nothing wrong with the language's 
syntax. Julia succeeds not just because of the syntax, but in large part 
because of the implementation (e.g. LLVM). I recalled having seen that Seed7 
was fast (it can compile to C..), but I may misremember/confuse it with another 
language (e.g. Nim[rod]). Benchmarks for Seed7 are really difficult to google 
for..

Julia seems to be filling a vacuum in the scientific community. It is 
> fast (Fortran/C), yet interactive/user-friendly (R/Matlab/Octave), and 
>

[Hopefully not just there..]

About "Who's the audience for Seed7? I googled Seed7 BLAS .." Maybe that's 
its flaw; at least Julia's FFI (and included libraries) is one of its 
killer features, while I'm not looking for BLAS etc. myself.


Maybe that was it: no clear goal of targeting, say, scientific computing (or 
good enough marketing), which really needed a new language?


One thing I forgot to check, REPL in Seed7:

It's unclear whether there is a REPL (the interpreter must be about the same 
thing for them?); there is "Seed7 allows the interpretation and compilation of 
programs with any license. There is no restriction on the license of your 
Seed7 programs." and I only saw the REPL term used here:

"Command line utilities. E.g.:
   
   - A calculator that works like a read-eval-print loop (REPL) 
   "



I guess a REPL/"dynamic" feel is now almost required for any new language to 
succeed (I see a REPL is coming to Java 9, not only BeanShell).

This wasn't thought possible in a fast language previously; Julia proved that 
wrong, and it is now one of the key "selling points" (solving the "two 
language" problem).


is available for free (which not only saves the cost of a license, but
>

Some languages fail because of cost, lack of Windows support (or MinGW only), 
or maybe (unfairly?) because of the GPL; none of which seemed to apply to 
Seed7 (LGPL):
 
"Windows is supported with several compilers:

MinGW GCC (the binary Windows release of Seed7 uses MinGW)
Cygwin GCC (the X11 graphics needs Cygwin/X)
MSVC cl.exe (cl.exe is the stand-alone compiler of MSVC)
BDS bcc32.exe (bcc32.exe is the stand-alone compiler of the BDS)"

"Seed7 allows the interpretation and compilation of programs with any 
license. There is no restriction on the license of your Seed7 programs."


http://blog.fourthbit.com/2014/03/01/the-best-programming-language-or-how-to-stop-worrying-and-love-the-code

-- 
Palli.



[julia-users] REPL sometimes really slow in 0.5

2016-10-06 Thread Páll Haraldsson
Has anyone noticed, for such simple stuff as:

a=[1,2,3,1]

[something like 10 seconds, but yes, instantly after closing Julia and 
opening again.]

just now in Version 0.5.0-rc4+0


Yes, I do have something else running (a web browser, Firefox, always at 
100% CPU..).

I ignored this the first time I noticed, as I thought CPU load was the cause. 
Julia and Firefox have the same priority, the default 20 in Linux.

I didn't notice this before in 0.4. This could in theory be newer Firefox 
(pre-"electrolysis", w/new "Web Content" process). Still doubt that, as 
Linux SHOULD multitask well and while the browser is often problematic, it 
usually doesn't slow down other non-GUI stuff at least..

-- 
Palli.



[julia-users] Julia vs Seed7, why languages succeed or fail

2016-10-05 Thread Páll Haraldsson

A.
I just [re?]discovered the Seed7 language, one of the few languages with 
multiple dispatch, and also extensible (though not through macros).

https://groups.google.com/forum/#!topic/comp.programming/_C08U8t4dRg
"Seed7 has more then 90 libraries now." [in 2013, after 7 years]

They hit the Top 100 (93rd at best?) on TIOBE, but are nowhere to be found now.

They seem very similar, except for the Pascal-like syntax; I guess there must 
be more to it..


B.
Chapel can be faster than Go, or competitive, but also much slower (it is a 
parallel language; not sure if that is not working/meant to always work..):

http://benchmarksgame.alioth.debian.org/u64q/chapel.html

fasta (source / secs / KB / gz / cpu / cpu load):

Chapel   20.59   28,868   1216   20.59   100% 0% 0% 1%
Go        1.97



Re: [julia-users] Understanding Nullable as immutable

2016-09-21 Thread Páll Haraldsson
On Wednesday, September 21, 2016 at 4:50:26 PM UTC, Fengyang Wang wrote:
>
> but type piracy is bad practice. This method should really be in Base.
>

As always, if something should be in Base or Julia, then I think a PR is 
welcome.

[Maybe I do not fully understand this (yes, I guess type piracy amounts to 
reaching into what would be private internals, i.e. what would violate Parnas' 
principles, in other languages).

I like how Julia avoids Hoare's self-admitted billion-dollar mistake. It 
seems to violate Parnas', but since no type can subtype a concrete type, maybe 
it does not, or at least that violation can always be avoided(?).]
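
To make the distinction concrete, a minimal sketch as I understand it (the 
commented-out line shows what type piracy would look like; it is not something 
to actually do):

module MyCode

import Base: length

immutable MyWrapper        # a type this module owns
    data::Vector{Int}
end

# Extending a Base function for your own type: normal and encouraged.
length(w::MyWrapper) = length(w.data)

# "Type piracy" would be adding (or overwriting) a method of a function you
# don't own for argument types you don't own, e.g.:
# Base.get(x::Nullable{Int}) = 0   # other code relying on get() could silently break

end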

-- 
Palli.



[julia-users] http://pkg.julialang.org/ needs updating.. I guess 0.5 is now recommended (and specific packages?)

2016-09-21 Thread Páll Haraldsson

At:

http://pkg.julialang.org/

http://pkg.julialang.org/pulse.html

I see:

"*v0.4.6 (current release)* — v0.5-pre (unstable)"

the former is no longer true, and I guess neither is the latter. Isn't 
"unstable" meant to refer to Julia itself? Or packages..? Both?

This is just a reminder to update..




Great to see: "Listing all 1127 registered packages". Then there are some 
more.. unknowns.. but of those registered:



I see at least 5 web frameworks now (this "gluttony" is getting to be a 
problem..), and 29 hits on "web"; yes, some of that is infrastructure code, 
e.g. HTTP2.


New ones I didn't know about:
http://github.com/codeneomatrix/Merly.jl

http://github.com/EricForgy/Pages.jl


and what I've yet to look into (though it's the newest one):

http://github.com/wookay/Bukdu.jl


Any recommendations? Great to know that it's not just math-related packages.

-- 
Palli.


[julia-users] Re: Problem with 0.4.7 mac os distribution

2016-09-21 Thread Páll Haraldsson
On Tuesday, September 20, 2016 at 7:16:41 PM UTC, Tony Kelman wrote:
>
> try removing the copy of 0.4.7 you downloaded and replacing it with a 
> fresh copy.


Maybe even try out 0.5 anyway, since it is out.

[Not sure what you mean by "distro appears to download OK"; "Julia + Juno 
IDE bundles (v0.3.12)" is the only "distro" I can think of, and it seems there 
wasn't a distro/bundle for (0.4.x or) 0.5. I got Juno separately from 0.5 
as instructed, and that worked (after a small workaround, which may have been 
just an issue on my machine and/or RC4 at the time).]

-- 
Palli.




[julia-users] Re: Does Julia 0.5 leak memory?

2016-09-21 Thread Páll Haraldsson
On Sunday, September 18, 2016 at 12:53:19 PM UTC, K leo wrote:

> Any thoughts on what might be the culprit?
>
>   | | |_| | | | (_| |  |  Version 0.5.0-rc4+0 (2016-09-09 01:43 UTC)
>  _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org/ release
>

You say 0.5 in the title (when 0.5 wasn't out yet). Your later posts are after 
0.5 is out. Maybe it doesn't matter, I don't know; maybe the final version 
is identical (except for the versioninfo function..), I just do not know that 
for a fact, so you might as well upgrade just in case.

-- 
Palli.





[julia-users] Re: Is FMA/Muladd Working Here?

2016-09-21 Thread Páll Haraldsson

On Wednesday, September 21, 2016 at 5:56:45 AM UTC, Chris Rackauckas wrote:

> Julia Version 0.5.0-rc4+0
>
 
I'm not saying it matters here, but is this version known to be identical to 
the released 0.5? Unless you know, in general, bugs should be reported on the 
latest version.

-- 
Palli.



[julia-users] Re: ANN: Julia 0.4.7 released

2016-09-20 Thread Páll Haraldsson
On Tuesday, September 20, 2016 at 6:30:46 AM UTC, Tony Kelman wrote:
>
> Hello all! The latest bugfix release of the Julia 0.4.x line has been 
> released. We've been a bit distracted with 0.5 release candidates so this 
> also took longer than our usual monthly target. This may be the last 
> release of the 0.4 line, unless new issues are brought to our attention 
> that need backporting to the release-0.4 branch. Binaries are available 
> from the usual place 
>

"For Julia 0.4, only bugfixes are being supported. Releases older than 0.4 
are now unmaintained."

while http://julialang.org/downloads/oldreleases.html

says (is this contradictory?):

"v0.3.12 (critical bugfixes only)"; should it say something similar to "v0.2.1 
(unmaintained)"? I made a PR to that effect some time ago. [I see 0.3 is 
now "frozen" for packages.]

Maybe it's a conscious choice to keep that language (no need to rule out 
critical [security] bugfixes?), but is it a promise we want to keep? I've 
noticed there haven't been any updates to 0.3.x in a while. For any 
release, can you think of any security updates?

[At least I will be moving to 0.5; maybe it's good for some to have 0.4 
supported for a while, but I doubt anyone is stuck on 0.3 (or older)..]

-- 
Palli.


(however the links for 0.4.7 will be moved to the old releases page very 
> soon), and as is typical with such things, please report all issues to 
> either the issue tracker , or 
> email the julia-users list. If you reply to this message on julia-users, 
> please do not cc julia-news which is intended to be low-volume.
>
> This is a bugfix release, see this commit log 
>  for the list 
> of bugs fixed between 0.4.6 and 0.4.7. Bugfix backports to the 0.4.x line 
> may continue if necessary. If you are a package author and want to rely on 
> functionality that did not work in earlier 0.4.x releases but does work in 
> 0.4.7 in your package, please be sure to change the minimum julia version 
> in your REQUIRE file to 0.4.7 accordingly. If you're not sure about this, 
> you can test your package specifically against older 0.4.x releases on 
> Travis and/or locally.
>
> This is a recommended upgrade for anyone using previous releases who 
> intends to keep using Julia 0.4 even after Julia 0.5.0 gets released, and 
> should act as a drop-in replacement for prior 0.4.x releases. If you find 
> any regressions relative to previous releases, please let us know.
>
> -Tony
>


[julia-users] Re: wrapping a julia package

2016-09-16 Thread Páll Haraldsson
On Friday, September 16, 2016 at 4:29:33 PM UTC, Páll Haraldsson wrote:
  

> `ARGB32` uses a `UInt32` representation of color,
>
> [I didn't see RGBA, only alpha in upper bits.. so unless I overlooked, may 
> not work, since bits not in the same order:]
>
>  

> " reinterpret(type, A)
>
> Change the type-interpretation of a block of memory. [..]"
>
>
> for the 32-bit pixel values?
>
> Maybe you can do just that without any wrapper, if that is all that is needed, 
> to get the new type-name (assuming you can, and there are no name clashes when 
> importing two packages that both define, say, Color..)?
>

I guess, when alpha is in the low bits, you could do something like:

reinterpret(UInt32, UInt32(rotate_left(pixel))) # given such a function were 
available..

but I guess it would be slow.. ROL (which I'm used to from assembly) couldn't 
be optimized away, in most or even any cases..


I'm not sure why there are just [arithmetic] shift left and right in Julia 
and the languages I'm familiar with.. I guess because you can build what you 
need from them:

reinterpret(UInt32, UInt32(pixel<<8))

and more.

Not sure that version would be easier for the optimizer..
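
For what it's worth, a minimal sketch of that rotate written out with the two 
shifts, assuming a raw UInt32 pixel laid out as 0xAARRGGBB:

# move alpha from the high byte to the low byte (a rotate-left by 8, by hand)
argb_to_rgba(p::UInt32) = (p << 8) | (p >> 24)

argb_to_rgba(0x80112233) == 0x11223380  # true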

Best would be if there were already also an RGBA32 type (or you could 
add it).

-- 
Palli.



[julia-users] Re: wrapping a julia package

2016-09-16 Thread Páll Haraldsson

On Friday, September 16, 2016 at 4:29:33 PM UTC, Páll Haraldsson wrote:
>
> This is an inserting question in general, first for the specific:
>

Obviously "interesting" question.. Didn't mean to rely on spell check..



[julia-users] Re: wrapping a julia package

2016-09-16 Thread Páll Haraldsson
This is an inserting question in general, first for the specific:

A.
I took a quick look at the code:

https://github.com/zyedidia/SFML.jl/commit/2e6ebee0a6dca85486563ea6f99decc595e831fa

type Color
r::Uint8
g::Uint8
b::Uint8
a::Uint8


Then Colors.jl seemed overkill and maybe not what you wanted; it seems 
ColorTypes.jl is it (split off from Colors.jl?); it by default uses 
floating-point, but I also see:

typealias U8 UFixed8

and:

https://github.com/JuliaGraphics/ColorTypes.jl/blob/be848d6777c71ef36a619be7218083183d0f0d7e/src/types.jl#L269

RGB24(r::UFixed8, g::UFixed8, b::UFixed8) = _RGB24(reinterpret(r), 
reinterpret(g), reinterpret(b))

[..]

`ARGB32` uses a `UInt32` representation of color,

[I didn't see RGBA, only alpha in upper bits.. so unless I overlooked, may 
not work, since bits not in the same order:]


So you're thinking if you can have a wrapper that does:

http://docs.julialang.org/en/release-0.4/stdlib/arrays/?highlight=reinterpret

" reinterpret(type, A)

Change the type-interpretation of a block of memory. [..]"


for the 32-bit pixel values?

Maybe you can do just that without any wrapper, if that is all that is needed, 
to get the new type-name (assuming you can, and there are no name clashes when 
importing two packages that both define, say, Color..)?
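
For the per-pixel direction, here is a hedged sketch of field-by-field 
conversion (SFMLColor is just a hypothetical stand-in mirroring the "type 
Color" shown above, and the reinterpret calls are how I understand the 
FixedPointNumbers/ColorTypes API):

using ColorTypes, FixedPointNumbers

immutable SFMLColor   # hypothetical stand-in, not the actual SFML.jl type
    r::UInt8
    g::UInt8
    b::UInt8
    a::UInt8
end

# Field-by-field, so the byte order is explicit and nothing depends on the
# two types sharing a memory layout.
to_rgba(c::SFMLColor) = RGBA{UFixed8}(reinterpret(UFixed8, c.r),
                                      reinterpret(UFixed8, c.g),
                                      reinterpret(UFixed8, c.b),
                                      reinterpret(UFixed8, c.a))

from_rgba(c::RGBA{UFixed8}) = SFMLColor(reinterpret(c.r), reinterpret(c.g),
                                        reinterpret(c.b), reinterpret(c.alpha))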


B. If you want a new wrapper, then in essence you have a new API, and I'm 
not sure that's advisable (or forking the library..) here (or in general); most 
would not know of the other API and/or understandably assume the official 
wrapper is the best.. If you could change the API in SFML.jl (in 
cooperation with the author), I guess that is best. But then you break code for 
others (unless that package keeps two APIs, at least for a while).

-- 
Palli.

On Friday, August 26, 2016 at 8:10:33 AM UTC, jw3126 wrote:
>
> I want to use a julia package (SFML.jl), which defines its own types for 
> lots of basic things like 2d vectors, rectangles colors etc. 
> It would be cool however if one could use its functionality with "standard 
> types", like colors from Colors.jl, StaticArrays.jl vectors, 
> GeometryTypes.jl rectangles etc.
> Now SFML.jl is an interface to the C++ library SFML with zero julia 
> dependencies and its types mirror that of SFML. So I guess changing SFML.jl 
> is not the way to go here. Instead I am thinking about wrapping SFML.jl 
> such that function inputs and outputs are automatically converted between 
> "standard types" and "SFML.jl types.
> I guess using meta-programming it should be possible to loop over all 
> methods defined in SFML.jl and compose them with the appropriate 
> conversions? Is this the way to go? Is there an example, where such a thing 
> was applied to another package?
>


[julia-users] Re: electron framework / javascript / LLVM / Julia numerics?

2016-09-14 Thread Páll Haraldsson
"would it be possible to somehow create numeric libraries /  code in Julia, 
and "export" (emscripten?)  asm.js "pure" javascript numerical code"

Yes, in theory, but someone would have to do/finish it (I recall some small 
demo). As Emscripten "is an LLVM-to-JavaScript compiler. It takes LLVM bitcode 
- which can be generated from C/C++". There's an issue on GitHub about it, 
which was stalled, last I checked.

However, some dependencies, e.g. BLAS, are in Fortran and assembly (that 
Emscripten doesn't support), so the unofficial Julia-lite branch (without 
those) should be doable.

JavaScript would also not support Threads (only experimental in 0.5 anyway) 
etc.

I guess you were really after BLAS, linear algebra etc., and then 
WebAssembly would be a better target than asm.js. Or you could reimplement 
what of BLAS etc. you need.. in pure Julia..

-- 
Palli.

On Tuesday, September 13, 2016 at 5:25:46 PM UTC, Perrin Meyer wrote:
>
> The github "electron" cross platform app framework looks pretty slick upon 
> first inspection (chrome / v8 / node.js /  javascript, llvm)  
>
> However, last time I checked, the javascript numerical libraries i've 
> looked at are alpha quality at best.  
>
> Since julia is also LLVM based, would it be possible to somehow create 
> numeric libraries /  code in Julia, and "export" (emscripten?)  asm.js 
> "pure" javascript numerical code that could be "linked" to code in the 
> electron framework, since that would be a possibly easy way to create cross 
> platform apps (linux, mac, windows, android) with high quality numerics?  I 
> would be more interested in correctness than raw speed, although I've been 
> impressed by V8 / asm.js benchmarks I've seen. 
>
> Thanks
>
> perrin
>
>

Re: [julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-13 Thread Páll Haraldsson


On Monday, September 12, 2016 at 7:01:05 PM UTC, Yichao Yu wrote:
>
> On Sep 12, 2016 2:52 PM, "Páll Haraldsson" wrote:
> >
> > On Monday, September 12, 2016 at 11:32:48 AM UTC, Neal Becker wrote:
> >>
> >> Anyone care to make suggestions on this code, how to make it faster, or 
> more 
> >> idiomatic Julia?
> >
> >  
> >
> > It may not matter, but this function:
> >
> > function coef_from_func(func, delta, size)
> >center = float(size-1)/2
> >return [func((i - center)*delta) for i in 0:size-1]
> > end
> >
> > returns Array{Any,1} while this could be better:
> >
> > function coef_from_func(func, delta, size)
> >center = float(size-1)/2
> >return Float64[func((i - center)*delta) for i in 0:size-1]
> > end
> >
> > returns Array{Float64,1} (if not, maybe helpful to know elsewhere).
> >
>
> Not applicable on 0.5
>

Good to know (and confirmed); meaning, I guess, that 0.4 is slower (but gives 
correct results) with the former. Not with the latter, but then you are less 
generic. It seems Compat.jl would not get you out of that dilemma..



[julia-users] Re: Idea: Julia Standard Libraries and Distributions

2016-09-13 Thread Páll Haraldsson
On Tuesday, September 13, 2016 at 8:39:15 AM UTC, Chris Rackauckas wrote:
>
>
> This could be a terrible idea, I don't know.
>
 
I don't think so.
 
On the download page there is already a choice of (two extra.. when 
JuliaBox is counted as such) "distributions", if you will:
"We provide several ways for you to run Julia:

   - In the terminal using the built-in Julia command line.
   - The Juno  integrated development environment 
   (IDE).
   - In the browser on JuliaBox.com" 

That is, Juno includes Julia (from memory), so you could have a 
combinatorial explosion.. core packages in a distribution x possible IDEs 
x possible debuggers..

I'm all for a MATLAB-like distribution that can compete, with Juno I guess 
(JuliaComputing promotes Eclipse IDE integration..). I hope people can 
agree on something, and avoid a combinatorial explosion.

For myself, I really like julia-lite to be available (it is unofficially).

-- 
Palli.


I think it's great that


[julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-12 Thread Páll Haraldsson
On Monday, September 12, 2016 at 11:32:48 AM UTC, Neal Becker wrote:

> Anyone care to make suggestions on this code, how to make it faster, or 
> more 
> idiomatic Julia?
>
 

It may not matter, but this function:

function coef_from_func(func, delta, size) 
   center = float(size-1)/2 
   return [func((i - center)*delta) for i in 0:size-1] 
end

returns Array{Any,1} while this could be better:

function coef_from_func(func, delta, size)
   center = float(size-1)/2 
   return Float64[func((i - center)*delta) for i in 0:size-1] 
end

returns Array{Float64,1} (if not, maybe helpful to know elsewhere).
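
A quick way to check what a comprehension actually gives you, in the REPL 
(abs is just a stand-in for any function argument):

coef_from_func(abs, 0.1, 4)                  # call it once
eltype(coef_from_func(abs, 0.1, 4))          # likely Any on 0.4 for the first version, Float64 for the second
@code_warntype coef_from_func(abs, 0.1, 4)   # shows what type inference worked out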


I'm not sure this is more idiomatic, this would be an exception to not 
having to specify types.. for speed (both works..)

center = float(size-1)/2

could however just as well be:

center = (size-1)/2 # / implies a float result, just as in Python 3 (but not 
2), and I like that choice.

-- 
Palli.


Re: [julia-users] How does garbage collection really work in Julia?

2016-09-11 Thread Páll Haraldsson
On Sunday, September 11, 2016 at 3:12:37 AM UTC, K leo wrote:
>
> Thanks for clearing.  I see that I used wrong word, "clear" instead of 
> "collect".  Then can I rephrase my questions below:
>
> On Sunday, September 11, 2016 at 9:31:35 AM UTC+8, Yichao Yu wrote:
>>
>>
>>
>> On Sat, Sep 10, 2016 at 8:00 PM, K leo  wrote:
>>
>>>
>>>
>>> On Sunday, September 11, 2016 at 7:35:41 AM UTC+8, Yichao Yu wrote:



 On Sat, Sep 10, 2016 at 6:46 PM, K leo  wrote:

> Thanks for the reply.  A couple questions: 
>

> 1) When I quit Julia and do a fresh start, is the tally set to zero?
>

 Memory in different processes are totally unrelated.

>>> So when I start a new Julia process, the tally starts at zero?
>>>
>>
>> The GC uses a number of counters to decide when to do a collection. They 
>> are maintained differently and "start at zero" is almost a meaningless. 
>> They are independent and almost start in the same state.
>>  
>>
>>>  
>>>
  

> 2) When GC does a pass, does it clear out everything so the tally is 
> set to zero, or does it do a partial clearance?
>

 Neither

>>> How can it be neither?  What can be a third choice with this question?
>>>
>>
>> Nothing is cleared. Dead objects are collected to be reused later without 
>> clearing.
>>  
>>
>  
> When GC does a pass, does it collect everything so the tally is set to 
> zero, or does it do a partial collection?
>  
>
>>  
>>>
  

> 3) I presume it is the latter case in question 2).  So does GC clear 
> out things on first-in-first-out bases or what?
>

 Not applicable.

>>>
> I presume it is the latter case in question 2).  So does GC collect things 
> on first-in-first-out bases or what?
>  
>
>>  

> 4) When the tally becomes big enough, does GC make sure to keep 
> objects that are referenced for future use in (Julia) code?
>

 See the link below
  

> 5) Do local objects (things allocated and only used within functions 
> for instance) get cleared out immediately when the functions terminate so 
> they don't take up quota in the tally?
>
>
 No

>>> Or perhaps those local objects have higher priorities of getting cleared 
>>> out?
>>>
>>
>> No.
>>
>
>  Do local objects (things allocated and only used within functions for 
> instance) get collected immediately when the functions terminate
>

Note, objects allocated (in the heap) get taken care of by GC, but the best 
thing would be stack allocation (as opposed to the heap). One reason C and 
C++ can be faster than many languages (with GC) is because of manual stack 
allocation. However, that is an outdated view:

https://en.wikipedia.org/wiki/Escape_analysis

"A compiler can use the results of escape analysis as a basis for 
optimizations:[1] 
 
   
   - *Converting heap allocations 
    to stack 
   allocations "*

[..]


The popularity of the Java programming language 
 has made 
escape analysis a target of interest. [..]  (see Escape analysis in Java 
).
 
Escape analysis is implemented in Java Standard Edition 6." [I believe not 
before, while it could have been improved in each version since.


I also believe Julia does something similar, or at least could in theory; 
actually I recall they've already done so.
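
A hedged illustration of the difference (not tied to any particular 
escape-analysis pass, just what you can observe with @time):

immutable Point                      # isbits immutable: no heap allocation needed
    x::Float64
    y::Float64
end

function sum_points(n)
    s = 0.0
    for i in 1:n
        p = Point(Float64(i), 2.0)   # never touches the GC
        s += p.x + p.y
    end
    s
end

function sum_arrays(n)
    s = 0.0
    for i in 1:n
        a = [Float64(i), 2.0]        # a fresh heap array each iteration
        s += a[1] + a[2]
    end
    s
end

sum_points(10^6); sum_arrays(10^6)   # run once to compile
@time sum_points(10^6)               # reports essentially no allocation
@time sum_arrays(10^6)               # reports on the order of 10^6 allocations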



[julia-users] Re: Running Julia in Ubuntu

2016-09-08 Thread Páll Haraldsson
On Thursday, September 8, 2016 at 1:53:17 PM UTC, Lutfullah Tomak wrote:
>
> There are some non-root google play apps(debian noroot 
> <https://github.com/pelya/debian-noroot> by pelya and GNURoot Debian 
> <https://github.com/corbinlc/GNURootDebian> by corbin) that basically 
> provide
> chroot into another linux system by using proot library.
>

I was aware of debian-noroot "Debian running on Android [..] This app is 
NOT full Debian OS - it is a compatibility layer, which allows you to run 
Debian applications."

Are you saying that is needed (it includes a lot)? I'm wondering what you 
could minimally get away with. I'm thinking of Android Julia applications, not 
Debian Julia applications. I know GUI is another matter (with complications..), 
and even "Hello world" (CLI style wouldn't work); putting that aside 
(non-"Linux"/X11 compatibility), what does Julia itself require?

Is only the
https://en.wikipedia.org/wiki/Chroot [or proot, which I'm not familiar with]

command required?

-- 
Palli.
 

> `Chroot`ing is not using a VM but using a boxed system under a regular 
> android(or any unix) system.
> AFAIK,it runs programs natively but some filesystem and socket 
> operations(or some privileged operations)
> go through proot(or used chroot library) library to simulate a root system.
>
> On Thursday, September 8, 2016 at 4:26:09 PM UTC+3, Páll Haraldsson wrote:
>>
>> On Tuesday, September 6, 2016 at 8:21:48 PM UTC, Lutfullah Tomak wrote:
>>>
>>> I use julia 0.4.6 package from Debian Stretch in arm cpu(with chroot 
>>> environment in my android phone). It works well and IJulia works too.
>>>
>>
>> You mean you use Julia straight on Android, not on Debian or Ubuntu (in a 
>> VM under Android)? Interesting.. I had heard of the latter in a VM.
>>
>> I think I know what chroot is, not quite a VM, lightweight, and you get 
>> the file system hierarchy you expect. Was that the point?
>>
>> [Android isn't strictly supported, only the Linux [kernel], but this 
>> indicates to me that nothing more in userspace is needed and only file 
>> system changes/chroot is needed (if that..).]
>>
>>
>>
>>> On Tuesday, September 6, 2016 at 11:12:24 PM UTC+3, Angshuman Goswami 
>>> wrote:
>>>>
>>>> Anyone has a working version of Julia running on ARM processors?
>>>>
>>>> On Thursday, September 1, 2016 at 10:15:05 PM UTC-4, Josh Langsfeld 
>>>> wrote:
>>>>>
>>>>> This link is only to an archive of the source code; you would still 
>>>>> have to build julia after downloading this.
>>>>>
>>>>> Ideally what you want is an ARM binary that's version 0.4 instead of a 
>>>>> nightly build but I don't see anywhere obvious where that can be 
>>>>> downloaded.
>>>>>
>>>>> RobotOS will start working on 0.5 and up eventually, but you may still 
>>>>> need to wait a few weeks.
>>>>>
>>>>> On Thursday, September 1, 2016 at 7:52:09 PM UTC-4, Angshuman Goswami 
>>>>> wrote:
>>>>>>
>>>>>> But there is no folder /bin/julia in the one I downloaded from 
>>>>>> https://github.com/JuliaLang/julia/releases/tag/v0.4.6
>>>>>>
>>>>>> What should be the simlink when I try to build with this ??
>>>>>>
>>>>>> On Thursday, September 1, 2016 at 6:52:41 PM UTC-4, Kaj Wiik wrote:
>>>>>>>
>>>>>>> Hi!
>>>>>>>
>>>>>>> You symlink a wrong file, first 
>>>>>>> sudo rm /usr/local/bin/julia.h
>>>>>>>
>>>>>>> The correct symlink line is
>>>>>>> sudo ln -s /opt/julia-0.4.6/bin/julia  /usr/local/bin
>>>>>>>
>>>>>>> On Friday, September 2, 2016 at 1:11:07 AM UTC+3, Angshuman Goswami 
>>>>>>> wrote:
>>>>>>>>
>>>>>>>> I have downloaded the Julia 0.4.6 from the repository: 
>>>>>>>> https://github.com/JuliaLang/julia/releases/tag/v0.4.6
>>>>>>>> I extracted the folder and copied to opt folder
>>>>>>>> sudo ln -s /opt/julia-0.4.6/src/julia.h  /usr/local/bin
>>>>>>>>
>>>>>>>> I made the folder executable using sudo chmod +x *
>>>>>>>>
>>>>>>>> But I am getting the error:
>>>>>>>> bash: julia: c

Re: [julia-users] Re: Is it better to call Julia from C++ or call C++ from Julia?

2016-09-08 Thread Páll Haraldsson


On Tuesday, September 6, 2016 at 8:13:57 PM UTC, Bart Janssens wrote:
>
>
>
> On Tue, Sep 6, 2016 at 5:25 PM Páll Haraldsson wrote:
>
>>
>> As far as I understand:
>>
>> https://github.com/barche/CxxWrap.jl
>>
>> should also be [as, not faster or slower] fast. Meaning runtime speed (not 
>> development speed).
>>
>
> CxxWrap.jl actually has a slightly higher overhead: many calls are 
> diverted to a C++ std::function, which has an inherent overhead. CxxWrap 
> includes a benchmark (in the package test suite) where all elements of a 
> 5000-element Float64 array are divided by 2, using the function in the 
> loop. Timings on Linux with julia0.5-rc3 are:
> - Pure C++ and pure Julia are the same at 0.06 s
> - ccall on a C function or a CxxWrap C++ function that can be called as a 
> C function: 0.09 s
> - CxxWrap function in the general case: 0.14 s
>
> Normally Cxx.jl should be faster since it can inline in this case.
>

Not sure I understand ("this case"), in another thread Tim Holy cautioned 
that C++ would not be inlined into Julia functions, as Julia functions can 
be.

[I understand you can debug Julia and C++ code, with Gallium, e.g. from one 
language to the next, that is across function boundaries.]
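
For reference, the "ccall on a C function" case in those timings is the plain 
C ABI path, i.e. roughly:

pid = ccall(:getpid, Cint, ())   # direct call into libc, no glue layer in between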



[julia-users] Re: Running Julia in Ubuntu

2016-09-08 Thread Páll Haraldsson
On Tuesday, September 6, 2016 at 8:21:48 PM UTC, Lutfullah Tomak wrote:
>
> I use julia 0.4.6 package from Debian Stretch in arm cpu(with chroot 
> environment in my android phone). It works well and IJulia works too.
>

You mean you use Julia straight on Android, not on Debian or Ubuntu (in a 
VM under Android)? Interesting.. I had heard of the latter in a VM.

I think I know what chroot is, not quite a VM, lightweight, and you get the 
file system hierarchy you expect. Was that the point?

[Android isn't strictly supported, only the Linux [kernel], but this 
indicates to me that nothing more in userspace is needed and only file 
system changes/chroot is needed (if that..).]



> On Tuesday, September 6, 2016 at 11:12:24 PM UTC+3, Angshuman Goswami 
> wrote:
>>
>> Anyone has a working version of Julia running on ARM processors?
>>
>> On Thursday, September 1, 2016 at 10:15:05 PM UTC-4, Josh Langsfeld wrote:
>>>
>>> This link is only to an archive of the source code; you would still have 
>>> to build julia after downloading this.
>>>
>>> Ideally what you want is an ARM binary that's version 0.4 instead of a 
>>> nightly build but I don't see anywhere obvious where that can be downloaded.
>>>
>>> RobotOS will start working on 0.5 and up eventually, but you may still 
>>> need to wait a few weeks.
>>>
>>> On Thursday, September 1, 2016 at 7:52:09 PM UTC-4, Angshuman Goswami 
>>> wrote:

 But there is no folder /bin/julia in the one I downloaded from 
 https://github.com/JuliaLang/julia/releases/tag/v0.4.6

 What should be the simlink when I try to build with this ??

 On Thursday, September 1, 2016 at 6:52:41 PM UTC-4, Kaj Wiik wrote:
>
> Hi!
>
> You symlink a wrong file, first 
> sudo rm /usr/local/bin/julia.h
>
> The correct symlink line is
> sudo ln -s /opt/julia-0.4.6/bin/julia  /usr/local/bin
>
> On Friday, September 2, 2016 at 1:11:07 AM UTC+3, Angshuman Goswami 
> wrote:
>>
>> I have downloaded the Julia 0.4.6 from the repository: 
>> https://github.com/JuliaLang/julia/releases/tag/v0.4.6
>> I extracted the folder and copied to opt folder
>> sudo ln -s /opt/julia-0.4.6/src/julia.h  /usr/local/bin
>>
>> I made the folder executable using sudo chmod +x *
>>
>> But I am getting the error:
>> bash: julia: command not found
>>
>>
>>
>>
>> On Thursday, September 1, 2016 at 5:38:10 PM UTC-4, Angshuman Goswami 
>> wrote:
>>>
>>> I want to use Julia 0.4.6. Can you guide me through the process as 
>>> if I am a novice
>>> On Thursday, September 1, 2016 at 2:24:43 AM UTC-4, Lutfullah Tomak 
>>> wrote:

 You've already built julia I guess. You need to install python 
 using ubuntu's package system. In command prompt
 sudo apt-get install `pkg-name`
 will install the package you want to install by asking you your 
 password.
 For python
 sudo apt-get install python
 will install python. Close prompt and open julia and try again 
 building PyCall.jl by Pkg.build().

 On Wednesday, August 31, 2016 at 11:48:32 PM UTC+3, Angshuman 
 Goswami wrote:
>
> I don't get how to do that. 
>
> Can you please tell me the steps. Its all too confusing and I am 
> very new to Ubuntu or Julia. Mostly used to work on Matlab. I have no 
> idea 
> how to install dependancies
>
> On Wednesday, August 31, 2016 at 3:26:40 AM UTC-4, Kaj Wiik wrote:
>>
>> Ah, sorry, I assumed you are using x86_64. Find the arm binary 
>> tarball and follow the instructions otherwise. See
>> https://github.com/JuliaLang/julia/blob/master/README.arm.md
>>
>>
>> On Wednesday, August 31, 2016 at 9:54:38 AM UTC+3, Lutfullah 
>> Tomak wrote:
>>>
>>> You are on an arm cpu so Conda cannot install python for you. 
>>> Also, you tried downloading x86 cpu linux binaries, instead try arm 
>>> nightlies.
>>> To get away with PyCall issues you have to manually install all 
>>> depencies. 
>>>
>>> On Wednesday, August 31, 2016 at 7:53:24 AM UTC+3, Angshuman 
>>> Goswami wrote:

 When i performed build again errors cropped up.

 Pkg.build("PyCall")
 WARNING: unable to determine host cpu name.
 INFO: Building PyCall
 INFO: No system-wide Python was found; got the following error:
 could not spawn `/usr/local/lib/python2.7 -c "import 
 distutils.sysconfig; 
 print(distutils.sysconfig.get_config_var('VERSION'))"`: permission 
 denied 
 (EACCES)
 using the Python distribution in the Conda package
 INFO: Downloading miniconda installer ...
 

[julia-users] Re: RandIP A random IP generator for Large scale network mapping.

2016-09-08 Thread Páll Haraldsson
FYI: There's also:

https://github.com/JuliaWeb/IPNets.jl

[And a thread on it, if I recall just an announcement.]

Didn't look into it much


[julia-users] Re: RandIP A random IP generator for Large scale network mapping.

2016-09-08 Thread Páll Haraldsson
On Tuesday, August 30, 2016 at 3:34:49 PM UTC, Páll Haraldsson wrote:
>
> On Tuesday, August 30, 2016 at 5:26:29 AM UTC, Jacob Yates wrote:
>>
>> I've been working on porting a script I wrote in python to julia and have 
>> been having some issues with the script freezing.
>>
>> So pretty much all this script does is generate a random IP address and 
>> checks to see if its valid(the Python version will give http error codes) 
>> then logs the results for further analysis.
>>
>> function gen_ip()
>> ip = Any[]
>> for i in rand(1:255, 4)
>> push!(ip, i)
>> end
>> global ipaddr = join(ip, ".")
>> end
>>
>  
> [..]
>
> println("Bactrace: ", backtrace())
>>
>
> Note, there is a type for IP addresses, done like: ip"127.0.0.1" (should 
> also work for IPv6) or:
>
> gen_ip() = IPv4(rand(0:256^4-1)) #not sure why you excluded 0 in 1:255 
> (might want to exclude some IPs but not as much as you did?), or used 
> global.
>

gen_ip() = IPv4(begin r = rand(1:254); r >= 10 ? r+1 : r end,
                rand(0:255), rand(0:255), rand(0:255))

is possibly what you want. 0.x.x.x and 10.x.x.x are private networks, and 
you seemed to want to exclude the former; if also the latter, then the new 
version does that. I forget: you were excluding 0 for the x-es, isn't that 
just plain wrong (except maybe for the last one)?



> http://docs.julialang.org/en/release-0.4/manual/networking-and-streams/
>
> Generally global is bad form, and I'm not sure, but it might have 
> something to do with @async not working, as I guess it's not "thread-safe" 
> or related..
>
> -- 
> Palli.
>
>  
>  
>


[julia-users] Re: julia-python module missing sys.ji julia-0.5.0-0.20160822.fc24.x86_64

2016-09-08 Thread Páll Haraldsson
On Thursday, September 8, 2016 at 12:20:56 PM UTC, Neal Becker wrote:
 

> Are these nightlies incomplete, or has this file changed?  Does 
> https://pypi.python.org/pypi/julia


I wasn't aware of this.. I only knew of pyjulia to call Julia from Python 
(not yet registered at PyPI, but it's been talked about).

Are you sure you need/want IPython support? I'm not sure; maybe pyjulia 
handles that also, or only that package does (does it use pyjulia?).

[Then there is PyCall.jl to call in the other direction; and even 
Polyglot.jl]



>
> need update for julia-0.5? 
>
>

[julia-users] Re: Is it better to call Julia from C++ or call C++ from Julia?

2016-09-06 Thread Páll Haraldsson

On Saturday, September 3, 2016 at 1:17:38 AM UTC, Steven G. Johnson wrote:

>
> On Friday, September 2, 2016 at 8:51:21 PM UTC-4, K leo wrote
> Much easier to call C++ from Julia, particularly with the Cxx.jl package 
> .   Performance-wise, it shouldn't 
> matter, but it is always easier to write glue code in a higher-level 
> language than in a lower-level language.
>

As far as I understand:

https://github.com/barche/CxxWrap.jl

should also be [as, not faster or slower] fast. Meaning runtime speed (not 
development speed).

As I need neither, I've only looked into them a bit; interactive C++ seems 
awesome in Cxx.jl, but is it fair to say the other package is [more] 
stable? Cxx.jl requires 0.5, which is just around the corner; I'm not sure 
of the state then.

-- 
Palli.



Re: [julia-users] Re: How to remove a column or row from a 2D contiguous array

2016-09-02 Thread Páll Haraldsson
2016-09-02 22:07 GMT+00:00 Steven G. Johnson :

>
>
> On Friday, September 2, 2016 at 4:03:09 PM UTC-4, Páll Haraldsson wrote:
>>
>> I can't see that Julia couldn't be as fast as MATLAB, they chose however
>> to not do this by default, as it is slow (for big arrays).
>>
>
> Deleting a row in Julia is no slower than in Matlab.  Both require you to
> make a copy of the matrix.  It is just that Matlab provided a
> built-in-syntax for this, whereas in Julia you would have to write an
> (easy) function.
>

Right, I was trying to say that, "couldn't" could be misunderstood..

Anyway, do you know about BlockArrays, and whether something even faster than
MATLAB is practical that way? I guess all BLAS functions are out then.. but not
all arrays are for linear algebra..

-- 
Palli.


[julia-users] Re: How to remove a column or row from a 2D contiguous array

2016-09-02 Thread Páll Haraldsson
On Tuesday, August 30, 2016 at 12:56:34 AM UTC, Miguel Goncalves wrote:
>
> There is a 2-year-old post 
>  on 
> julia-dev hinting at potential new syntax for removing columns from a 2D 
> array. What is the current status on that?
>
> What is currently the most efficient way to remove a column or row from a 
> 2D contiguous array?
>

For contiguous arrays ("Array", a subtype of "DenseArray"), it seems it 
will never be fast in the general case (the edge case of last/first might 
allow for a view/subarray).

I looked into what you point to, and the issue is closed, but the solutions 
here at least work:

http://stackoverflow.com/questions/17298586/how-to-delete-a-row-of-matrix-in-julia

I can't see that Julia couldn't be as fast as MATLAB, they chose however to 
not do this by default, as it is slow (for big arrays).


[I guess sparse arrays, would also be not as slow.. depending..]

Maybe some other type <: DenseArray would help, could at least in theory, 
be made (but I guess then other operations would have to be slower). I just 
noticed:

https://github.com/KristofferC/BlockArrays.jl

that I do not know enough about yet.. It might be for other purposes..

-- 
Palli.



[julia-users] Re: FYI: Second transpiler to Julia(?), from Ruby, and benchmarks

2016-09-01 Thread Páll Haraldsson
On Thursday, September 1, 2016 at 3:24:21 PM UTC, k...@swd.cc wrote:
>
> The creator of virtual_module and ruby2julia transpiler here, just dropped 
> in to see what's going on now. Thank you for your interest.
>
> > Is it including startup/compilation time? Did they not "run it twice"?
>
> Yes, it includes startup/compilation time.(I'm not sure if I understand 
> "runt it twice" meaning properly though)
>

The first time you run Julia code, it's slower as then you include compile 
time. See: http://docs.julialang.org/en/latest/manual/performance-tips/
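
That is, something like the following, where the second number is the one to 
compare (f is just an arbitrary stand-in):

f(n) = sum(rand(n, n) * rand(n, n))

@time f(1000)   # includes JIT-compiling f and the methods it calls
@time f(1000)   # steady-state timing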

> That is https://github.com/Ken-B/RoR_julia_eg
> that uses ZMQ.jl (better for IPC)?

ZMQ sounds promising in order to add more concurrency to virtual_module.

>> And in practice it will probably be slower than the source language 
because Julia is not as heavily optimized for interpreting those semantics.

>True. And my experiment is to gain performance improvements in exchange 
for giving up completeness of accuracy of Ruby syntax. The project goal is 
something like "gain BIG performance improvement with more than 90% Ruby 
Syntax coverage", though not sure yet if I can make this happen. Anyways 
thank you for your comment.

Julia interop (not transpiling), e.g. RoR_julia_eg, helps to get speed. 
Transpiling would help to migrate code and/or, if you are willing to modify the 
transpiled code, to get more speed than Ruby. As explained by Steven (who 
makes PyCall.jl, with tighter integration than has been done for Ruby or any 
other dynamic language), you shouldn't expect a speedup, but that depends on 
how slow the implementation of Ruby is.. It seems you are not gaining from 
Julia, the *language*, per se, only from the BLAS functions that are actually 
written in Fortran (they could have been fast in Julia too), so you're 
gaining from a library that could (in theory) be used directly from Ruby.

-- 
Palli.



[julia-users] Re: Saving and Loading data (when JLD is not suitable)

2016-09-01 Thread Páll Haraldsson
On Wednesday, August 31, 2016 at 4:47:37 AM UTC, Lyndon White wrote:
>
>
> There are 3 problems with using Base,serialize as a data storage format.
>
>1. *It is not stable* -- the format is evolving as the language 
>evolves, and it breaks the ability to load files
>2. *It is only usable from Julia* -- vs JLD which is, in the end fancy 
>HDF5, anything can read it after a little work
>3. *It is not safe from a security perspective*  -- Maliciously 
>crafted ".jsz" files can allow arbitrary code execution to occur during 
> the 
>deserialize step.
>
>
You just reminded me of:

http://prevayler.org/

that has been ported to other than the first language, Java, and I think 
would also be nice for Julia.. They solved a different problem than what 
you have (or the three above), and pointed to XQuery/XML, if I recall, for 
the query/[export out of "db"]serialization


http://www.onjava.com/pub/a/onjava/2005/06/08/prevayler.html

"A *prevalent* system makes use of serialization, and is again useful only 
when an in-memory data set is feasible. A serialized snapshot of a working 
system can be taken at regular intervals as a first-line storage mechanism. 
[..]
Prevayler 1.0 was awarded a JOLT Productivity  
award in 2004. The recent version, 2.0, has many improvements, including a 
simpler API."

https://github.com/jsampson/prevayler
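
For completeness, the JLD route mentioned in point 2 above looks roughly like 
this (as I understand the JLD.jl API):

using JLD                  # Pkg.add("JLD") first

x = rand(3, 3)
save("data.jld", "x", x)   # "fancy HDF5", readable outside Julia with a little work
y = load("data.jld", "x")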

-- 
Palli.



[julia-users] Re: Announcing TensorFlow.jl, an interface to Google's TensorFlow machine learning library

2016-09-01 Thread Páll Haraldsson
On Wednesday, August 31, 2016 at 10:31:58 PM UTC, Jonathan Malmaud wrote:
>
> Hello,
> I'm pleased to announce the release of TensorFlow.jl, enabling modern 
> GPU-accelerated deep learning for Julia. Simply run Pkg.add("TensorFlow") 
> to install and then read through the documentation at 
> https://malmaud.github.io/tfdocs/index.html to get started. Please file 
> any issues you encounter at https://github.com/malmaud/TensorFlow.jl. 
>

About: "To enable support for GPU usage (Linux only)"

[I'm not complaining.. more asking about the general issue] It seems 
official Tensorflow supports (for *GPU*) OS X (but not Windows).

https://www.tensorflow.org/versions/r0.10/get_started/os_setup.html

Do you not support the same because you simply do not have OS X, or is 
this about some OS support missing from other packages?

>will ensure Julia remain a first-class citizen in world of modern machine 
learning

That would be great! You mean it already is ("remain") with the other ANN 
packages (or yours, though not this first version)? I guess people/you want 
TensorFlow (at least it's known in the Python world); I know nothing about it 
or the (Julia-native or otherwise..) competition.

-- 
Palli.



[julia-users] Re: Announcing TensorFlow.jl, an interface to Google's TensorFlow machine learning library

2016-09-01 Thread Páll Haraldsson
On Thursday, September 1, 2016 at 7:45:05 AM UTC, Kyunghun Kim wrote:
>
> Wonderful jobs, Jonathan! 
> I'd better try this version rather than use TensorFlow in python. 
>
> Does it based on PyCall package?
>

Yes, you can always see that by looking in the REQUIRE file (except if it's 
used indirectly.. then you would have to look at packages recursively..).

I guess you can also see it at runtime, by what library is used (meaning 
libpython, not general Julia packages; maybe strace would then help..),

like [I know there's a command for it and this seems to be it]:

pldd  | grep libpython


[see also lsof command.]

-- 
Palli.



[julia-users] Re: FYI: Second transpiler to Julia(?), from Ruby, and benchmarks

2016-08-30 Thread Páll Haraldsson
On Monday, August 29, 2016 at 6:42:08 PM UTC, Chris Rackauckas wrote:
>
> That's a good showing for Julia for the larger matrices? However, for 
> smaller matrices it's a large constant time. Is it including 
> startup/compilation time? Did they not "run it twice"?
>
1. I guess they must include startup time (also for JRuby..), which is not 
always unfair.. 2. So no.. they could have for the benchmarks, while I'm not 
sure it would be ideal for the transpiler (didn't look into it much).
 

> On Monday, August 29, 2016 at 8:57:32 AM UTC-7, Páll Haraldsson wrote:
>>
>>
>> I have no relation to this..
>>
>> https://github.com/remore/julializer
>>
>

"But there are very many differences between Ruby and Julia which I have no 
idea how to fix as of now. For example:

   - Julia supports only dec, bin and hex format but in Ruby the decimal 
   system is much powerful.(#to_s(num) and #to_i(num) are supported)
   - Julia doesn't have Class concept(there is workaround though) but Ruby 
   does
   - Julia does not have a "null" value 
   
<http://docs.julialang.org/en/release-0.4/manual/faq/#how-does-null-or-nothingness-work-in-julia>
 
   but Ruby does 
  - For example in Ruby [1,2,3].slice!(4) will return null but in julia 
  there is no nil therefore this raises causes error.
  - Current workaround for this is, do not write this way, instead you 
  need to check boundary by yourself.
   - And many more gaps to be solved 
  - e.g.) In Julia typemax() and typemin() returns Inf but in Ruby 
  Float::MAX, Float::MIN have specific value"
   

A. I like that Julia doesn't have "null" (not strictly true, at least for the 
C API..), so as not to have Hoare's self-admitted billion-dollar mistake. Not 
sure, though, how best to help that project..


B. About Classes and

https://en.wikipedia.org/wiki/Composition_over_inheritance

that is, I guess, best, but maybe not too helpful for that project.. Should 
that be enough to compile to, or any other ideas?


C. I'm sure Julia already has as good decimal support as possible, with two 
different packages. I'm not sure what's in Ruby (so I can't comment on that 
code); I guess the maker of the project is only aware of what is in Base.


 

>
>>
>> [may not work to transpile Ruby on Rails to Julia - yet, there is an old 
>> package RoR that allows it to work with Julia though.]
>>
>
That is https://github.com/Ken-B/RoR_julia_eg

that uses ZMQ.jl (better for IPC)?



>>
>> Interesting benchmarks here ("virtual_module" is transpiled, but "Julia 
>> 0.4.6" is not; it's only there to compare, and [can be] a little slower than Python..):
>>
>> https://github.com/remore/virtual_module
>>
>>
>> Saw at:
>>
>>
>> https://www.reddit.com/r/Julia/comments/5049pq/a_fresh_approach_to_numerical_computing_with_ruby/
>>
>> [not really about to discuss here much. Fortran to Julia was first, 
>> though.]
>>
>> -- 
>> Palli.
>>
>>
>>

[julia-users] Re: RandIP A random IP generator for Large scale network mapping.

2016-08-30 Thread Páll Haraldsson
On Tuesday, August 30, 2016 at 5:26:29 AM UTC, Jacob Yates wrote:
>
> I've been working on porting a script I wrote in python to julia and have 
> been having some issues with the script freezing.
>
> So pretty much all this script does is generate a random IP address and 
> checks to see if its valid(the Python version will give http error codes) 
> then logs the results for further analysis.
>
> function gen_ip()
> ip = Any[]
> for i in rand(1:255, 4)
> push!(ip, i)
> end
> global ipaddr = join(ip, ".")
> end
>
 
[..]

println("Bactrace: ", backtrace())
>

Note, there is a type for IP addresses, done like: ip"127.0.0.1" (should 
also work for IPv6) or:

gen_ip() = IPv4(rand(0:256^4-1)) #not sure why you excluded 0 in 1:255 
(might want to exclude some IPs but not as much as you did?), or used 
global.

http://docs.julialang.org/en/release-0.4/manual/networking-and-streams/

Generally global is bad form, and I'm not sure, but it might have something 
to do with @async not working, as I guess it's not "thread-safe" or 
related..

-- 
Palli.

 
 


[julia-users] FYI: Second transpiler to Julia(?), from Ruby, and benchmarks

2016-08-29 Thread Páll Haraldsson

I have no relation to this..

https://github.com/remore/julializer

[may not work to transpile Ruby on Rails to Julia - yet, there is an old 
package RoR that allows it to work with Julia though.]


Interesting benchmarks here ("virtual_module" is transpiled, but "Julia 
0.4.6" is not; it's only there to compare, and [can be] a little slower than Python..):

https://github.com/remore/virtual_module


Saw at:

https://www.reddit.com/r/Julia/comments/5049pq/a_fresh_approach_to_numerical_computing_with_ruby/

[not really meaning to discuss this here much. Fortran to Julia was first, though.]

-- 
Palli.




[julia-users] Re: [ECCN] What packages are included in the basic distribution of Julia

2016-08-22 Thread Páll Haraldsson
On Thursday, March 17, 2016 at 8:27:33 PM UTC, Naiyuan Chiang wrote:
>
>
> Hi,
>
> I am working in United Technologies Research Center (UTRC), and currently 
> I am interested in using Julia for my researches.
> According to the IT support of UTRC, I wonder if the distribution of 
> Julia comes with certain packages, e.g. Crypto, by default. 
>

Since my discussion here, mbedTLS has become a dependency of Julia. It 
probably changes things..

I haven't looked closely at what is in mbedTLS, or whether just parts are 
compiled in; does anybody know it not to be a problem?

-- 
Palli.
 

> My understanding is no, but an official reply can help us to accurately 
> identify the ECCN regulations for Julia.
>
> My IT support's original words are "Julia appears to be open source, but 
> does have at least one package that uses SSL (Crypto, see 
> *http://pkg.julialang.org/* ). So without the 
> packages, the language would likely be 5D992, but is open source, so should 
> be not subject to the regulations. If the language comes with the Crypto 
> package by default, then the distribution would be considered 5D002 and we 
> would need to ask about license exception TSU to treat it otherwise."
>
> Thanks, 
>
> Nai-Yuan
>


Re: [julia-users] How to Manipulate each character in of a string using a for loop in Julia ?

2016-08-18 Thread Páll Haraldsson
On Wednesday, August 17, 2016 at 5:52:47 PM UTC, Rishabh Raghunath wrote:
>
> I am a beginner in Julia and I am more familiar with the C language..
>
 

> I am actually learning Julia for use in a programming contest and love the 
> experience so far..
> There are often questions that require you to manipulate the characters 
> comprising the string. Its easy to do in C as strings are nothing but an 
> array of Characters
>

A lot is not as easy as you might think :) I haven't looked up C recently, 
but at least C++14 now has 4 char literals (and 5 string literals), and C++17 
added the 5th "utf8" literal, which is bizarre as it only supports ASCII.. 
Things used to be simple with fixed-size ASCII and EBCDIC, with the latter 
still holding back C++ (or you might be optimistic and say it makes C++ more 
portable..). Unicode, and e.g. grapheme clusters, made strings of letters 
("Char"s, not just meaning bytes) a can of worms..

Having said that.. I was thinking up a new mutable string type.. (seems 
Julia developers are too with at least similar ideas).
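A minimal sketch of the usual workarounds today, since strings themselves 
are immutable (both approaches allocate a new string):

```julia
a = "abcd"

shifted = join([c + 1 for c in a])   # shift every Char: "bcde"

chars = collect(a)                   # Vector{Char} is mutable
chars[1] = 'z'
replaced = join(chars)               # "zbcd"
```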

-- 
Palli.

>
> On Wed, Aug 17, 2016 at 10:48 PM, g > 
> wrote:
>
>> Isn't it a red herring to say that strings lack setindex because they are 
>> immutable? I think strings don't have setindex! because it is considered to 
>> be a bad API choice, at least partially because you can't do O(1) indexing 
>> for many string encodings.
>>
>> I can easily define a setindex! method* that does what the OP is 
>> expecting, so how can this be related to strings being immutable? 
>>
>> *julia> **Base.setindex!(s::ASCIIString, v,i) = s.data[i]=v*
>>
>> *setindex! (generic function with 58 methods)*
>>
>> *julia> **a="abc"*
>>
>> *"abc"*
>>
>> *julia> **a[1]='7';a*
>>
>> *"7bc"*
>>
>> *julia> **a[1]+=1;a*
>>
>> *"8bc"*
>>
>> *This works in 0.4, and I'm lead to believe this is considered a bad idea.
>>
>> On Wednesday, August 17, 2016 at 12:40:36 AM UTC-6, Jacob Quinn wrote:
>>>
>>> Strings are immutable (similar to other languages). There are several 
>>> different ways to get what you want, but I tend to utilize IOBuffer a lot:
>>>
>>> a = "abcd"
>>> io = IOBuffer()
>>>
>>> for char in a
>>> write(io, char + 1)
>>> end
>>>
>>> println(takebuf_string(io))
>>>
>>> -Jacob
>>>
>>> On Wed, Aug 17, 2016 at 12:30 AM, Rishabh Raghunath >> > wrote:
>>>

 Hello fellow Julia Users!!

 How do you manipulate the individual characters comprising a string in 
 Julia using a for loop ?
 For example:
 ###

 a = "abcd"

   for i in length(a)
a[i]+=1
  end

 print(a)

 
  I am expecting to get my EXPECTED OUTPUT as" bcde  "

  BUT I get the following error:  
 ##

  ERROR: MethodError: `setindex!` has no method matching 
 setindex!(::ASCIIString, ::Char, ::Int64)
  [inlined code] from ./none:2
  in anonymous at ./no file:4294967295

 ##
 I also tried using:

 for i in eachindex(a) instead of the for loop in the above program .. 
 And I get the same error..

 Please tell me what i should do to get my desired output ..
 Please respond ASAP..
 Thanks..

>>>
>>>
>

Re: [julia-users] What is the equivalent of " return 0; " used in C in Julia during successful completion of program ?

2016-08-18 Thread Páll Haraldsson
On Monday, August 15, 2016 at 6:24:02 AM UTC, Tamas Papp wrote:
>
> You can use exit(0), but the exit code is implicitly 0 anyway when your 
> Julia script runs without errors. See 
> http://docs.julialang.org/en/release-0.4/stdlib/base/


Yes, I did check: the default exit code is 1 on an exception/error (I know 
of no errors other than subtypes of Exception).

julia -e "type MyCustomException <: Exception end; 
throw(MyCustomException)"; echo $?
ERROR: MyCustomException
1

julia -e "throw(UnicodeError)"; echo $?
ERROR: UnicodeError
1


When is there a need to call exit(n) with non-zero n, or n > 1? Would 
throwing an exception always be better than at least exit(1)?


julia> @edit throw(MyCustomException)
ERROR: ArgumentError: argument is not a generic function
 in methods at ./reflection.jl:140

I can't see that any exit code other than 1 is ever used on an exception 
[as throw is a built-in rather than a generic function, so I'm not sure 
where to check..]


Is there a need (or a possibility; it seems not, as an exception only 
allows for a message (in only one language..)) to throw exceptions with 
some other non-zero exit code? [I guess it's bad form to catch exceptions 
and then call exit.. or to call exit from an atexit handler..]


This is what I found:

exit(n) = ccall(:jl_exit, Void, (Int32,), n)
exit() = exit(0)
quit() = exit()


So if exit() is called at the end of your script, you get exit code 0 for 
sure.
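To partly answer my own question, a hedged sketch (names are mine, assuming 
a plain script) of mapping a caught exception to a custom exit status 
instead of the default 1:

```julia
function main()
    try
        error("something went wrong")
    catch e
        println(STDERR, "error: ", e)
        exit(2)          # the shell then sees $? == 2 instead of 1
    end
    exit(0)              # implicit anyway on a clean run
end

main()
```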


bash manual:

"An OR list has the form

  command1 || command2

   command2 is executed if and only if command1 returns a non-zero exit 
status

[..]

?  Expands to the exit status of the most recently executed foreground 
pipeline.

[..]

PIPESTATUS
  An array variable (see Arrays below) containing a list of 
exit status values from the processes in the most-recently-executed 
foreground pipeline (which may contain only a single  com‐
  mand).

[..]

If the name is neither a shell function nor a builtin, and contains no 
slashes, bash searches each element of the PATH for a directory containing 
an executable file by that name.  Bash uses a
   hash table to remember the full pathnames of executable files (see 
hash under SHELL BUILTIN COMMANDS below).  A full search of the directories 
in PATH is performed only if the command is  not
   found in the hash table.  If the search is unsuccessful, the shell 
searches for a defined shell function named command_not_found_handle.  If 
that function exists, it is invoked with the orig‐
   inal command and the original command's arguments as its arguments, 
and the function's exit status becomes the exit status of the shell.  If 
that function is not defined, the shell prints  an
   error message and returns an exit status of 127.

[..]

EXIT STATUS
   The exit status of an executed command is the value returned by the 
waitpid system call or equivalent function.  Exit statuses fall between 0 
and 255, though, as explained  below,  the  shell
   may  use values above 125 specially.  Exit statuses from shell 
builtins and compound commands are also limited to this range. Under 
certain circumstances, the shell will use special values to
   indicate specific failure modes.

   For the shell's purposes, a command which exits with a zero exit 
status has succeeded.  An exit status of zero indicates success.  A 
non-zero exit status indicates failure.   When  a  command
   terminates on a fatal signal N, bash uses the value of 128+N as the 
exit status.

   If a command is not found, the child process created to execute it 
returns a status of 127.  If a command is found but is not executable, the 
return status is 126.

   If a command fails because of an error during expansion or 
redirection, the exit status is greater than zero.

   Shell  builtin  commands  return a status of 0 (true) if successful, 
and non-zero (false) if an error occurs while they execute.  All builtins 
return an exit status of 2 to indicate incorrect
   usage.

   Bash itself returns the exit status of the last command executed, 
unless a syntax error occurs, in which case it exits with a non-zero 
value.  See also the exit builtin command below."

-- 
Palli.


>
> On Mon, Aug 15 2016, Rishabh Raghunath wrote: 
>
> > I'd like to know What the equivalent of "return 0  used in C"  is in 
> Julia 
> > while completion of program .. 
>


[julia-users] Re: Determining L1 cache size?

2016-08-17 Thread Páll Haraldsson
On Tuesday, August 16, 2016 at 9:12:51 AM UTC, Oliver Schulz wrote:
>
> I'd like to determine the optimal block size(s) for an array algorithm.
>

That is done, as I said, e.g. in BLAS. But is it still relevant? Aren't 
cache-oblivious algorithms supposed to be fast without needing to know 
cache sizes and details? They are all divide-and-conquer/recursive 
(recursion alone may not be a sufficient condition; there are some more 
requirements if I recall (or not?)..):

http://supertech.csail.mit.edu/papers/Prokop99.pdf

"*Conventional wisdom says that recursive procedures should be converted 
into iterative loops in order to improve performance [8]. While this 
strategy was effective ten years ago, many recursive programs now actually 
run faster than their iterative counterparts.* So far most of the work by 
architects and compiler writers is concentrated on loop-based iterative 
programs."

I think this applies to the last-level cache size, i.e. you need not know 
it. I still think you can exploit knowing the L1 size, and if I recall, 
this paper does not go into details such as cache-line size, which I think 
you can also exploit.

I guess the base case for your recursion can stop at the L1 size and do 
something different there; see the sketch below.
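As an illustration of that base-case idea, a minimal sketch of the 
recursive structure (the 32 KB threshold is an assumption, not a measured 
value):

```julia
const BLOCK_BYTES = 32 * 1024        # assumed L1 data cache size

# Recurse until the slice is roughly L1-sized, then run a plain loop.
function rsum(A, lo, hi)
    if (hi - lo + 1) * sizeof(eltype(A)) <= BLOCK_BYTES
        s = zero(eltype(A))
        @inbounds for i in lo:hi
            s += A[i]
        end
        return s
    end
    mid = (lo + hi) >>> 1
    return rsum(A, lo, mid) + rsum(A, mid + 1, hi)
end

rsum(A) = rsum(A, 1, length(A))
```

A plain sum is too simple to actually benefit; the point is only the 
recursion-with-base-case shape that the cache-oblivious matrix and FFT 
algorithms in the Prokop paper use.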

--
Palli



Re: [julia-users] Determining L1 cache size?

2016-08-17 Thread Páll Haraldsson
On Tuesday, August 16, 2016 at 1:41:54 PM UTC, Erik Schnetter wrote:
>
> ```Julia
> Pkg.add("Hwloc")
> using Hwloc
> topo = load_topology()
> print(topo)
> ```
>
> I just see that Hwloc has problems with Julia 0.5; I'll tag a new release.
>

Great to know about the hwloc library and its wrapper (I was going to post 
some links on finding this out dynamically.. or statically for Linux).

I did notice, however:

"Remove non-functional Windows support"

(just in the wrapper; the base library supports Windows, so you could add 
support.. and that commit helps in knowing how).

[I didn't check FreeBSD (or OS X) support; is it compatible with Linux?]


I was looking into this myself a while back (also getting the cache-line 
size). I guess a default fallback of at least 4 KB, possibly 32 KB, might 
be ok (and 16-byte cache lines, probably the safe and lowest/most common 
size) in case your library finds nothing, e.g. on Windows. [BLAS is 
compiled, I think, with cache knowledge; maybe there's a way of knowing 
dynamically what options it was built with? Probably not good to rely on; I 
think it's stripped out in the Julia-lite branch.]
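As a hedged, Linux-only sketch of the "statically for Linux" idea with such 
a fallback (the sysfs path is an assumption here, and index0 is usually, 
though not always, the L1 data cache):

```julia
# Read the L1 data cache size from sysfs on Linux, else fall back to 32 KB.
function l1_size_bytes(default = 32 * 1024)
    path = "/sys/devices/system/cpu/cpu0/cache/index0/size"
    isfile(path) || return default
    s = strip(readall(path))                 # e.g. "32K"
    endswith(s, "K") ? parse(Int, s[1:end-1]) * 1024 : default
end
```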

[I was also thinking ahead to AOT compiled Julia.. be aware of that also.. 
when cross-compiling Julia and C++ would seem to have to be conservative, 
one reason Julia seems better than C++ (even without AOT).]


See here for the safe lowest L1 size you want to support (I've never heard 
of lower than 4 KB for data; fully shared code+data caches are gone, and 
those were also 4 KB):

https://en.wikipedia.org/wiki/List_of_ARM_microarchitectures


>is there a (cross-platform) way to get the CPU L1 cache size

You assume you have an [L1] cache..? :) My/the first ARM had none.. then 
came 4 KB (unified; later split, and more levels than L1). Yes, all 
computers for decades have had a cache, except for the newest fastest 
TOP500 supercomputer (scratchpad memory; Adapteva's CPU is similar), and 
Tera just had many threads..

-- 
Palli.

 

>
> -erik
>
>
> On Tue, Aug 16, 2016 at 5:12 AM, Oliver Schulz  > wrote:
>
>> Hi,
>>
>> is there a (cross-platform) way to get the CPU L1 cache size from Julia? 
>> I'd like to determine the optimal block size(s) for an array algorithm.
>>
>>
>> Cheers,
>>
>> Oliver
>>
>>
>
>
> -- 
> Erik Schnetter > 
> http://www.perimeterinstitute.ca/personal/eschnetter/
>


[julia-users] Re: Windows subsystem for Linux

2016-08-11 Thread Páll Haraldsson
On Wednesday, August 3, 2016 at 3:07:55 PM UTC, Bill Hart wrote:
>
> I got the Windows 10 anniversary update and turned on the new Windows 
> subsystem for Linux.
>

[You mean the subsystem and bash are now a non-beta/non-Insider program; 
good to know, and useful for software that isn't already portable to 
Windows.]
 

> The Julia binaries from the website load, but unfortunately don't fully 
> work.
>

Why would it "be great to get this working"? Or at least [of interest] for 
you? I also did wander if it would work but since there is a Windows Julia 
binary, that we would want to maintain for a long time and not drop support 
of Julia that way, I do not see this as a priority, or on the horizon. Only 
that Julia version is supported back to Windows XP. Theoretically using a 
Linux ELF binary in Windows is interesting, but it could be a long time to 
a Windows 10-exclusive world(?).

>There are also double free or corruption errors with Julia 0.5.0.

I don't understand: only in the Windows subsystem, but not the same "double 
free" under Linux with the same ELF binary? How can that happen, when it's 
Julia that does the free (and malloc)?!

>I tried building Julia 0.4.6 from source and the main issue I hit was that 
pcre2 requires a stack limit of 16MB, but WSL is limited to 8MB.

And ulimit refuses to increase the limit.
>

I've never understood why there are limits [such as on file descriptors 
(except if you want to limit them artificially)], other than the 
fundamental memory limit (and, in a sense, "CPU time"). I guess the stack 
is a special case (but could it, in theory, also be unlimited?). Differing 
limits across platforms just make them worse.. [E.g. file descriptors are 
just memory to the kernel in the end?]

> The main problem seems to be the glacially slow filesystem.

Also interesting, just on the subsystem that is, otherwise ok?

-- 
Palli.




[julia-users] Re: Replacement for jl_gc_preserve() and jl_gc_unpreserve()?

2016-08-10 Thread Páll Haraldsson
On Wednesday, August 10, 2016 at 6:57:15 AM UTC, Kit Adams wrote:
>
> I am investigating the feasibility of embedding Julia in a C++ real-time 
> signal processing framework, using Julia-0.4.6 (BTW, the performance is 
> looking amazing).
>

There are other thread[s] on Julia and real-time; I'll not repeat them 
here. Are you ok with Julia for real-time work? Julia strictly isn't 
real-time; you just have to be careful.

I'm not looking into your embedding/GC issues as I'm not too familiar with 
them. It seems to me embedding doesn't fundamentally change the fact that 
the GC isn't real-time. And real-time isn't strictly about performance.


> However, for this usage I need to retain Julia state variables across c++ 
> function calls, so the stack based JL_GC_PUSH() and JL_GC_POP() are not 
> sufficient. 
> When I injected some jl_gc_collect() calls for testing purposes, to 
> simulate having multiple Julia scripts running (from the same thread), I 
> got crashes, which I was able to fix using e.g. jl_gc_preserve(mMyState); 
> and appropriate matching jl_gc_unpreserve() calls.
>
> I see these functions have been removed from the latest Julia version. 
>
> Is there an alternative that allows Julia values to be retained in a C++ 
> app across gc calls?
>

-- 
Palli.
 


[julia-users] "Recommendadions" at julialang.org [for e.g. [Py]Plot] for say Julia 0.5

2016-08-09 Thread Páll Haraldsson

A.

First a specific matter:

http://julialang.org/downloads/

I see PyPlot. I'm not asking for recommendations, just discussing what 
should be officially pointed to [in general].

I can understand that you would not want to take sides (or maybe you do; 
PyCall.jl is awesome and I guess PyPlot/matplotlib is too, I just haven't 
used any plotting packages..).


Maybe the plan is or should be to recommend (only?) Plots.jl (and not 
Gadfly since):


https://juliaplots.github.io/backends/

"Deprecated backends
Gadfly"


See also: https://juliaplots.github.io/supported/

It's clearly not a simple choice.. and since Plots.jl is just an abstract 
wrapper (that seems awesome!), pointing to it is not taking sides.

I also really like UnicodePlots :) which may be the only pure-Julia 
solution..? Maybe leave that implicit in the Plots.jl recommendation?


[Maybe the homepage just hasn't been changed yet, as Julia 0.5 isn't out 
and Plots.jl only recently became 0.5-only.]


B.

Juno is also recommended. I expect there is no change, except that the 
newer Atom-based version is now ok and recommended? I've also not tried it 
(seriously) in a while.. Any other notable ones [to add to the homepage]? I 
understand if you do not want to confuse people with choices, which is why 
A. seems safe..


C.

There's another thread on a "metapackages" proposal; should there be more 
[abstract wrapper/meta] package recommendations?

-- 
Palli.



[julia-users] What do do when @less does not work?

2016-08-09 Thread Páll Haraldsson

A.

julia> @less Base.memhash_seed
ERROR: ArgumentError: argument is not a generic function
 in methods at ./reflection.jl:140

I know what this means, but it would be nice for it to look up the source 
anyway, including when it is actually a function (sometimes I'm not sure of 
the right parameters..). Is there a way for @less, @which, @edit (I guess 
they always use the same mechanism) to work here? Maybe already in 0.5? 
Easy to add? I like reading the source code to learn, rather than having to 
google, search GitHub, or ask here..

[apropos got me nowhere.]
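For what it's worth, a small sketch of what does work today: the macros 
need a call to a generic function, so give them example arguments; a plain 
constant has no method table to look up (grepping the sources is the 
assumed workaround, the path below is a placeholder):

```julia
@which hash("abc")    # prints which method would be called
@less  hash("abc")    # opens that method's source in the pager
@edit  hash("abc")    # opens it in $EDITOR

# For a constant like Base.memhash_seed there is nothing to dispatch on;
# grepping the base/ sources is one workaround:
#   grep -rn "memhash_seed" <julia>/base/
```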


B.

Bonus points: I think I know what memhash[_seed] is.. but what is it, 
exactly?


C.

Trying to google it:

https://searchcode.com/codesearch/view/84537847/

Language  Lisp
[Clearly Julia source code, elsewhere I've seen Python autodetected..]

-- 
Palli.



Re: [julia-users] Re: MYSQL/MARIADB Big INSERT

2016-07-06 Thread Páll Haraldsson
On þri  5.júl 2016 23:49, Andrey Stepnov wrote: 

Hi Palli! 

Thanks for reply. 

Here is code with transactions example: 
mysql_query(con, "SET autocommit = 0") 

Generally you do not have to change this, rely on the default setting of 
the database(?).

mysql_query(con, "START TRANSACTION;") 

Yes, non-standard in MySQL.. One of the reasons to use some 
beginTransaction(). 

mysql_query(con, "SET autocommit = 1") 

[And you would then not change again. When your connection closes, setting 
are usually lost. I'm sure there are other ways to change globally for all 
users.] 


 > Then do your INSERTs (prepared statments can be better here for 
security but not speed, while can also help for speed in queries), e.g 
in a loop. 
Yes, you right! It works with small data but if I have: 

And "works" with big data.. 

while 

it's extremely slow. 

at least in MySQL (yes, it's also 4x faster as one statement in PostgreSQL, 
I see from the link I provide below. I've only used COPY, never the roughly 
equivalent(?) multi-row INSERT, which I now see is actually there too, and 
COPY only outside of the application). 

Solution: string with non-safe big-insert: 


[It can also be made safe; look into escape functions. They work, they are 
just more brittle to work with, as you have to take care with each 
statement. That is why prepared statements are better (and some abstraction 
layers auto-prepare for you) and can also be faster in other cases.] 


That confirms this isn't a Julia issue. I'm not an expert on MySQL. I'll go 
into a little more detail, but this belongs in some MySQL (or generic 
database) forum. 

I'm glad you're using Julia, less glad that you're not using PostgreSQL. 
That might also have helped, as it is a superior database (MySQL had a few 
things going for it over PostgreSQL, none of them I think any more; it had 
clustering when the other didn't, and maybe theirs (really from a third 
party) is still better for special cases of clustering (or not)). 


All databases will be slow if they need to commit each INSERT (or say 
UPDATE) to stable storage (disk, less slowdown for SSDs..). You could 
monitor if that is happening (with strace, in Linux/Unix), maybe by slowing 
down your loop. 

All databases will be faster if you really manage to commit more rows at a 
time, in a transaction (each statement is at least one transaction on its 
own), but they probably also have non-standard ways. 

PostgreSQL has COPY (similar to MySQL's multi-INSERT, neither are 
standardized, so it doesn't need their way, except for compatibility for 
MySQL code 
https://kaiv.wordpress.com/2007/07/19/faster-insert-for-multiple-rows/ ) as 
it's faster than the non-default that pg_dump exports (for compatibility 
with other databases; you can fairly easily migrate to and from it). 
[Informix db has non-standard dbexport and dbimport commands, then also 
HPL, high-performance loader, that I never used..] 


or even 

https://dev.mysql.com/doc/refman/5.5/en/load-data.html 

"LOAD DATA [LOW_PRIORITY | CONCURRENT]" 

might be more similar to COPY. 

https://dev.mysql.com/doc/refman/5.5/en/insert-delayed.html 

"INSERT DELAYED ... 

The DELAYED option for the INSERT statement is a MySQL extension to 
standard SQL that is very useful if you have clients that cannot or need 
not wait for the INSERT to complete." 

Might also be what you're after. 


Possibly, you need to set isolations level(?), if other users are seeing 
your INSERTs trickle in, it's a clue that they are also going to disk. They 
should see them all or none, if transactions are working. 


MySQL used to be really bad: while you could declare FOREIGN KEYs (and 
probably open transactions), they were just ignored! That might still be 
happening; I'm not sure what the default engine is. Transactions, foreign 
keys etc. only work in the sane engines, e.g. InnoDB. I didn't follow what 
came later; those engines are probably sane also. 

-- 
Palli.



Re: [julia-users] Re: MYSQL/MARIADB Big INSERT

2016-07-05 Thread Páll Haraldsson
On Tuesday, July 5, 2016 at 7:07:09 AM UTC, Andrey Stepnov wrote:
>
> Hi Ivar!
>
> There is no documentation about transactions for MySQL.jl. I understand 
> that it wrap C libs but it's not clear how to call transaction mechanism 
> from julia.
>

I do not know for sure about MySQL, as I'm a PostgreSQL guy, but it (as 
most databases, but not the standard?) starts in autocommit mode.

To start a transaction you do (these commands are issued like any "other" 
query, even though, unlike INSERT, they aren't really queries..):

BEGIN; -- OR BEGIN WORK; -- that may be in the standard way..

Then do your INSERTs (prepared statements can be better here for security 
but not speed, while they can also help with speed in queries), e.g. in a 
loop. This way you do not need individual multi-row INSERTs.

COMMIT; -- OR COMMIT WORK; -- here ROLLBACK is an alternative (I guess with 
InnoDB, but not traditional MySQL).


This is independent of how you execute queries, but some database 
abstraction layers have something like beginTransation() etc. [This might 
be available in a database abstraction layer, those are a good idea anyway, 
as you do not want to be tied to MySQL.]

That isn't really needed; it doesn't do anything more complicated, unless 
you have nested transactions, which you probably do not need. You may also 
want to look up "mini-batching" if you have a lot of inserts (it has pros 
and cons), that is, if the database does not handle the number of inserts 
well. A sketch of the whole pattern is below.
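A hedged sketch of that pattern, using mysql_query as in the code earlier 
in this thread (con and rows are placeholders; prepared statements or 
proper escaping would be safer than the plain interpolation shown):

```julia
mysql_query(con, "BEGIN;")                    # one explicit transaction
for (id, name) in rows
    mysql_query(con, "INSERT INTO t (id, name) VALUES ($id, '$name');")
end
mysql_query(con, "COMMIT;")                   # a single commit/disk sync
```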

I will be glad if you can provide very short explanation or example. 

-- 
Palli.


[julia-users] Re: Using Julia for real time astronomy

2016-06-08 Thread Páll Haraldsson
On Tuesday, May 31, 2016 at 4:44:17 PM UTC, Páll Haraldsson wrote:
>
> On Monday, May 30, 2016 at 8:19:34 PM UTC, Tobias Knopp wrote:
>>
>> If you are prepared to make your code to not perform any heap 
>> allocations, I don't see a reason why there should be any issue. When I 
>> once worked on a very first multi-threading version of Julia I wrote 
>> exactly such functions that won't trigger gc since the later was not thread 
>> safe. This can be hard work but I would assume that its at least not more 
>> work than implementing the application in C/C++ (assuming that you have 
>> some Julia experience)
>>
>
> I would really like to know why the work is hard, is it getting rid of the 
> allocations, or being sure there are no more hidden in your code? I would 
> also like to know then if you can do the same as in D language:
>
> http://wiki.dlang.org/Memory_Management
>
 

> that is would it be possible to make a macro @nogc and mark functions in a 
> similar way?
>

The @nogc macro was made a long time ago, I now see:

https://groups.google.com/forum/?fromgroups=#!searchin/julia-users/Suspending$20Garbage$20Collection$20for$20Performance...good$20idea$20or$20bad$20idea$3F/julia-users/6_XvoLBzN60/nkB30SwmdHQJ

I'm not saying disabling the GC is preferred, just that a macro to do it 
had already been made.

Karpinski has his own exception-safe variant a little down the thread, with 
"you really want to put a try-catch around it". I just changed that variant 
so it can be called recursively (and disabled the try-catch as it was 
broken):

macro nogc(ex)
    quote
        #try
            local pref = gc_enable(false)
            local val = $(esc(ex))
        #finally
            gc_enable(pref)
        #end
        val
    end
end
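Usage would look something like this (note that allocations are only 
postponed, not prevented, so keep the section short):

```julia
A = zeros(1000)

@nogc begin
    for i in eachindex(A)
        A[i] += 1.0       # in-place work; no collection can run here
    end
end
```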




[julia-users] Re: Julia Memory Management

2016-06-08 Thread Páll Haraldsson
On Thursday, May 26, 2016 at 3:06:04 PM UTC, Chris Rackauckas wrote:
>
> I see mentions like this one every once in awhile:
>
> "D language is a special case, as it has GC, but it's also optional (as 
> with Julia)"
>

I've been saying something like this [and I actually googled; that quote 
wasn't me..]

I may have been spreading misinformation, as only recently did I understand 
D better, in what (better) sense the GC is "optional" there:


D has GC by default, but better handling (@nogc) for being sure only manual 
memory allocation is allowed in a function (and I guess such a function is 
disallowed from calling a function not marked as such).

In D, if not all your functions are marked that way, then disabling the GC 
[for a long time] is about as bad as doing so in Julia.
There are two recent thread here on real-time and avoiding allocations.

You can use Libc.malloc and Libc.free directly in Julia (or call C and use 
manual memory management indirectly that way). That will only get you 
bytes, not objects.

You can use a Ptr to those bytes and "reinterpret" them (does that only 
allow some types, not "objects" in general?).. It's probably not easy(?) to 
work that way outside of the GC; I'm not sure what using those bytes as 
objects involves to make it easier, except by letting the GC take over, 
just as it can with C-allocated memory. A rough sketch is below.
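A hedged sketch of the basic idea (0.4-era API; pointer_to_array was later 
replaced by unsafe_wrap): bytes from Libc.malloc viewed as an array that 
the GC does not own, so you must free them yourself:

```julia
n = 1024
p = convert(Ptr{Float64}, Libc.malloc(n * sizeof(Float64)))
A = pointer_to_array(p, n, false)   # false: the GC must NOT free this buffer
fill!(A, 0.0)
# ... use A like any other Vector{Float64} ...
Libc.free(p)                        # manual deallocation; A is now invalid
```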


Maybe I should try harder to convince other people (and myself) that Julia 
works very well (even for non-hard real-time) just as it is with the GC 
enabled, including for games. I just like pointing out that you have the 
same powers as C/C++ without (or with) calling C/C++.

Reference counting is predictably slow, and you can get an avalanche of 
deallocations. Deferring deallocations/GC activity can be faster while it 
isn't happening, but in the end you might get an even bigger avalanche with 
naive variants of GC (I guess as in 0.3), though not with the incremental 
GC in 0.4.


Interesting quote from Oscar: "Interestingly a lot of games actually have a 
tracing gc that handles some objects,[..] or even because they have a C++ 
gc (I'm looking at you Unreal Engine)."

I know about GC being used with game engines, e.g. via Lua, but not about 
GC within game engines/C++ itself actually being used, even though I know 
of the possibility: Boehm-style GC is optional in C/C++ (and even allowed 
by the C++ standard):

https://github.com/ivmai/bdwgc


[Love this quote from Jeff, just hadn't thought of it that way: 
"Overwriting data is a form of manual memory management (you're saying you 
don't need the old 
version of the data any more). In Julia folks get speedups from this all 
the time."]




Re: [julia-users] Re: Can I somehow get Julia without standard library?

2016-06-08 Thread Páll Haraldsson
On Monday, June 6, 2016 at 5:17:16 PM UTC, Stefan Karpinski wrote:
>
> Really useless – it doesn't know about integers or how add them, for 
> example. If you want to trim down the standard library, you can try editing 
> out parts of base/sysimg.jl and rebuilding, but that's kind of a tricky 
> process.
>

Dmitry asked for "somehow", and while Karpinski is of course correct, I 
think Dmitry wanted to get rid of the files, not the "standard library" per 
se.


I think the point of Intel's Julia2C (ParallelAccelerator.jl does similar 
things via C++, but isn't meant to translate all your code) was to generate 
C code so you do not have to distribute your Julia source code. [Usually 
you would compile that C code and distribute the binary, not the C source 
code. This would not be done to gain speed, unlike ParallelAccelerator.jl, 
which compiles to C++ but could target something else later; that is just 
an implementation detail.]

I also think the compilation to C applies to Julia's standard library, the 
part that is Julia source code (there might be other ways to do this 
already..). [Parts of Julia are written in C/C++ and end up in the binary 
libjulia.dll, which I think you could never avoid. Part of that code is a 
runtime, not a "standard library", while other parts of it are the standard 
library.]

See also the AOT Julia article for gory details and exceptions.

Is Julia2C working? I've never tried it. I believe it wasn't fully 
functional (more of a demo) [and may even have been broken by later 
versions of Julia, unless Julia2C has been kept in sync.]

-- 
Palli.


> On Mon, Jun 6, 2016 at 12:43 PM, Avik Sengupta  > wrote:
>
>> Julia is pretty useless without its standard library. 
>>
>> On Monday, 6 June 2016 14:42:02 UTC+1, Dmitry wrote:
>>>
>>> I tried to remove ".jl" library files from Julia installation directory, 
>>> but it did not help. Then I tried to remove "libjulia.dll" but it does not 
>>> want to run without this file.
>>>
>>
>

Re: [julia-users] Re: Using Julia for real time astronomy

2016-06-08 Thread Páll Haraldsson
On Monday, June 6, 2016 at 9:41:29 AM UTC, John leger wrote:
>
> Since it seems you have a good overview in this domain I will give more 
> details:
> We are working in signal processing and especially in image processing. 
> The goal here is just the adaptive optic: we just want to stabilize the 
> image and not get the final image.
> The consequence is that we will not store anything on the hard drive: we 
> read an image, process it and destroy it. We stay in RAM all the time.
> The processing is done by using/coding our algorithms. So for now, no need 
> of any external library (for now, but I don't see any reason for that now)
>

I completely misread/missed point 3) about the "deformable mirror"; I see 
now it's a down-to-earth project - literally.. :)

Still, glad to help, even if it doesn't get Julia into space. :)



> First I would like to apologize: just after posting my answer I went to 
> wikipedia to search the difference between soft and real time. 
> I should have done it before so that you don't have to spend more time to 
> explain.
>
> In the end I still don't know if I am hard real time or soft real time: 
> the timing is given by the camera speed and the processing should be done 
> between the acquisition of two images.
>


From: 
https://en.wikipedia.org/wiki/Real-time_computing#Criteria_for_real-time_computing

   - *Hard* – missing a deadline is a total system failure.
   - *Firm* – infrequent deadline misses are tolerable, but may degrade the 
   system's quality of service. The usefulness of a result is zero after its 
   deadline.
   - *Soft* – the usefulness of a result degrades after its deadline, 
   thereby degrading the system's quality of service.

[Note also, real-time also applies to doing stuff too early, not only to 
not doing stuff too late.. In some cases, say in games, that is not a [big] 
problem, getting a frame ready earlier isn't a big concern.]


Are you sure "the processing should be done between the acquisition of two 
images" is a strict requirement? I assume the "atmospheric turbulence" to 
not change extremely quickly and you could have some latency with you 
calculation applying for some time/at least a few/many frames after and 
then your project seems not hard real-time at all. Maybe soft or firm, a 
category I had forgotten..


At least your timescale is much longer than the camera's speed to capture 
each frame of video?


You also said "1000 images/sec but the camera may be able to go up to 10 
000 images/sec". I'm aware of very high-speed photography, such as 
capturing a picture of a bullet from a gun, or seeing light literally 
spreading across a room. Still do you need many frames per second for 
(capturing video, that seems not your job) or for correction? Did you mix 
up camera speed for exposure time? Ordinary cameras go up to 1/1000 s 
shutter speed, but might only take video at up to 30, 60 or say 120 fps.



>I like the definition of 95% hard real time; it suits my needs. Thanks for 
this good paper.

The term/title sounds like firm real-time..

 

> We don't want to miss an image or delay the processing, I still need to 
> clarify the consequences of a delay or if we miss an image.
> For now let's just say that we can miss some images so we want soft real 
> time.
>

You could store with each frame a) how long it has been since the mirror 
was corrected, based on b) a measurement from how long ago. Also, can't you 
[easily] see from a picture whether the mirror is maladjusted? Does it then 
look blurred, with the high-frequency content missing?

How many "mirrors" are adjusted, or how many points in the mirror[s]?


> I'm making a benchmark that should match the system in term of complexity, 
> these are my first remarks:
>
> When you say that one allocation is unacceptable, I say it's shockingly 
> true: In my case I had 2 allocations done by:
> A +=1 where A is an array
> and in 7 seconds I had 600k allocations. 
> Morality :In closed loop you cannot accept any alloc and so you have to 
> explicit all loops.
>

I think you mean two (or even one) allocations are bad because they are in 
a loop, and that loop runs for each adjustment.

I meant that even just one allocation (per adjustment, or frame if you 
will) can be a problem. Well, not strictly; but say there have been many in 
the past, then it's only the last one that triggers the problem. See the 
sketch below.
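To make the point concrete, a minimal sketch of why `A += 1` allocates and 
what the explicit loop looks like:

```julia
function bump_alloc(A)
    A += 1                 # builds a brand-new array on every call
    return A
end

function bump_inplace!(A)
    @inbounds for i in eachindex(A)
        A[i] += 1          # mutates in place; no heap allocation in the loop
    end
    return A
end
```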
 

>
> I have two problems now:
>
> 1/ Many times, the first run that include the compilation was the fastest 
> and then any other run was slower by a factor 2.
> 2/ If I relaunch many times the main function that is in a module, there 
> are some run that were very different (slower) from the previous.
>
> About 1/, although I find it strange I don't really care.
> 2/ If far more problematic, once the code is compiled I want it to act the 
> same whatever the number of launch.
> I have some ideas why but no certitudes. What bother me the most is that 
> all the runs in the benchmark will be slower, it's not a temporary slowd

[julia-users] Re: Calling Fortran code?

2016-06-06 Thread Páll Haraldsson
On Monday, June 6, 2016 at 3:22:47 AM UTC, Charles Ll wrote:
>
> Dear all,
>
> I am trying to call some Fortran code in Julia, but I have a hard time 
> doing so... I have read the docs, looked at the wrapping of ARPACK and 
> other libraries... But I did not find any way to make it work.
>
> I am trying to wrap a spline function library (gcvspl.f, 
> https://github.com/charlesll/Spectra.jl/tree/master/Dependencies), which 
> I want to use in my project, Spectra.jl.
>
> I already have a wrapper in Python, but this was easier to wrap with using 
> f2py. In Julia, I understand that I have to do it properly. The function I 
> am trying to call is:
>

I think you may already have gotten the correct answer. At first I was 
expecting an fcall, not just the ccall keyword in Julia, for calling 
Fortran.. It's not strictly needed, but in fact there is an issue 
somewhere, still open, about fcall (a keyword, or was it a function?), and 
it may have been about your situation.. A hedged sketch of the usual ccall 
pattern is below.
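For reference, a hedged sketch of calling Fortran with ccall (the library 
and symbol names libfoo/foo_ are made up for illustration, not your 
gcvspl wrapper; Fortran passes all arguments by reference, and compilers 
typically append an underscore to the symbol):

```julia
n = Int32(10)
x = collect(linspace(0.0, 1.0, n))
y = zeros(n)
# hypothetical Fortran routine:  subroutine foo(n, x, y)
ccall((:foo_, "libfoo"), Void,
      (Ref{Int32}, Ptr{Float64}, Ptr{Float64}),
      n, x, y)
```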


[It might not be too helpful to know, you can call Python with PyCall.jl, 
so if you've already wrapped in Python..]

-- 
Palli.




[julia-users] Re: Reconstruct a string from a Clang.jl generated looong type

2016-06-06 Thread Páll Haraldsson
On Sunday, June 5, 2016 at 11:41:34 PM UTC, J Luis wrote:
>
> Hi,
>
> I have one of those types generated from a C struct with Clang.jl that 
> turns a stack variable into a lng list of members (for example (but I 
> have longer ones))
>
> https://github.com/joa-quim/GMT.jl/blob/master/src/libgmt_h.jl#L1246
>
> (an in interlude: isn't yet any better way of representing a C "char 
> str[256];"?)
>
> when executed I get (example)
>
> julia> hdr.x_units
> GMT.Array_80_Uint8(0x6c,0x6f,0x6e,0x67,0x69,0x74,0x75,0x64,0x65,0x20,0x5b,
> 0x64,0x65,0x67,0x72,0x65,0x65,0x73,0x5f,0x65,0x61,0x73,0x74,0x5d,0x00,0x00
> ,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
> 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00
> ,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
> 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00)
>
> but I need to transform it into a string. After some suffering I came out 
> with this solution
>
> julia> join([Char(hdr.x_units.(n)) for n=1:sizeof(hdr.x_units)])
> "longitude 
> [degrees_east]\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
>
> well, it works
>

Well, maybe not!

If you can have a multi-byte string encoding, such as UTF8, then indexing 
by bytes will not work (UTF16 is not likely in your case, as \0 can happen 
in the middle of such strings; other multi-byte encodings I know less 
about, they might or might not be safe). UTF8 and all the single-byte 
encodings I know, such as ASCII, strictly allow \0 in the middle of a 
string, while a Cstring of them assumes that can't happen.

There should be a function (maybe there is) where you can point to a 
Cstring (note it IS a type in Julia, for the C API.. at least) [or you 
could hack Cstring.data to point to the first byte..]. Note, it might 
assume the string ends with \0, but I'm not sure that will happen for sure 
in your case.. Then you need to guard it with a function that takes care of 
finding the first \0, with a fallback to the max size of x_units. A hedged 
sketch is below.
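A hedged sketch of that guard function, using 0.4-era Base helpers 
(nfields, getfield-by-index, utf8); the field layout follows your 
Array_80_Uint8 example:

```julia
# Copy the fixed-size field bytes out, stop at the first NUL, build a string.
function fixedstr(x)
    bytes = UInt8[getfield(x, i) for i in 1:nfields(x)]
    n = findfirst(bytes, 0x00)            # 0 if no NUL terminator at all
    stop = n == 0 ? length(bytes) : n - 1
    return utf8(bytes[1:stop])
end

# fixedstr(hdr.x_units)   # => "longitude [degrees_east]"
```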

Why is:

immutable Cstring <: Any

not a subtype of AbstractString? Anyway, you may want to convert the string 
then to UTF8String.


but it's kind or ugly. Is there a better way of achieving this? Namely, how 
> could I avoid creating a string with all of those \0's? I know I can remove 
> them after, but what about not copying them on first place?
>
> Thanks
>
>

[julia-users] Re: ANN: POSIXClock - writing hard real-time Julia code

2016-06-06 Thread Páll Haraldsson
On Monday, June 6, 2016 at 8:07:22 AM UTC, Islam Badreldin wrote:
>
> Hi,
>
> I'm excited to announce the first release of POSIXClock, a package that 
> provides Julia bindings to clock_*() functions from POSIX real-time 
> extensions (librt on Linux).
> https://github.com/ibadr/POSIXClock.jl
>

An accurate clock might be of interest to more than just real-time 
computing? [It's unclear to me if librt/clock_*() works without the 
PREEMPT_RT patch.] And it would be of interest to e.g. the real-time 
satellite-telescope thread in Julia.. see the link below:
 

>
> Special care was devoted to completely avoiding memory allocations in the 
> real-time section of the code by using in-place operations and 
> pre-allocating all the needed variables, as well as by disabling the 
> garbage collector. 
>

I'm not convinced disabling the GC is needed [or even helpful when there 
are allocations; see my rambling at the link below, and comments from 
others with pointers to Julia issues.]

 

>
> This package should appeal to roboticists interested in Julia (I have 
> successfully tested this package with blinking a GPIO on the BeagleBone 
> Black using the mraa library), as well as to scientists conducting 
> closed-loop experiments with soft or hard real-time requirements. (Hard 
> real-time performance requires a recent Linux kernel with the PREEMPT_RT 
> patch.)
>

Note, in the link below there is a link to Linus Torvalds on real-time 
Linux. Caches are a problem; be careful, even with this patch..

>
> In addition to this announcement, I have a couple of questions pertaining 
> to best practices for writing real-time Julia code and avoiding memory 
> allocations, as well as to sharing arrays between two Julia instances (one 
> is real-time and the other is regular). Is it best to post these questions 
> in this thread, or to create a new thread for these questions?
>

Feel free to look at this recent link on [soft, hard?] real-time and add 
questions there (since it's for a specific project, you may want to start a 
new thread):

https://groups.google.com/forum/#!topic/julia-users/qgnNbbuwMIY

[If you add to that thread, you could point there to this thread.]

-- 
Palli.




Re: [julia-users] How to build a range of -Inf to Inf

2016-06-05 Thread Páll Haraldsson
On Friday, June 3, 2016 at 2:54:57 AM UTC, Yichao Yu wrote:
>
> A Range is not a interval


Right.
 

> but rather a series of equally spaced numbers

 
Actually, they do not have to be equally spaced (for the straightforward 
definition)? Not for the abstract Range, while "equally spaced" does seem 
to fit the lower-case range function.

You are thinking of e.g. StepRange, the concrete type you are likely to get.

I do second e.g. the Unum suggestion (it is implemented, and has -Inf and 
+Inf), and Unum version 2.0 (which isn't implemented yet [in Julia], that I 
know of; it only has +-Inf) might have few enough bit-patterns that you 
could usefully have a Range that steps through them all.

I'm not sure this is a helpful comment, that you would want to go through 
every bit-pattern (or every Nth); I'm kind of asking. Unums are just very 
interesting to me, as an abstraction of the whole real line (and a better 
match for it). [Look into SORNs to have no gaps on the real line with Unums 
2.0.]

[At least nzrange goes through unequally spaced numbers, while the indexes 
to them are "spaced".]


help?> Range
search: Range range RangeIndex nzrange linrange UnitRange StepRange 
histrange FloatRange ClusterManager trailing_zeros trailing_ones 
OrdinalRange
[..]

[Why are e.g. ClusterManager and trailing_zeros displayed? I can see why 
RangeIndex is listed, based on the name, but it's actually a Union, not a 
Range.]

so a range with `Inf` as start/end is basically meaningless. 
> You should probably find another datastructure. 
>
> > 
> > Cheers, 
> > 
> > Colin 
> > 
> > 
>
>

Re: [julia-users] Re: Using Julia for real time astronomy

2016-06-03 Thread Páll Haraldsson
mented in a less powerful language 
such as C or C++

[..]
About the Author 

As Chief Technology Officer over Java at Atego Systems—a mission- and 
safety-critical solutions provider—Dr. Kelvin Nilsen oversees the design 
and implementation of the Perc Ultra virtual machine and other Atego 
embedded and real-time oriented products. Prior to joining Atego, Dr. 
Nilsen served on the faculty of Iowa State University where he performed 
seminal research on real-time Java that led to the Perc family of virtual 
machine products."



> Thank you for all these ideas !
>
>
> Le 01/06/2016 23:59, Páll Haraldsson a écrit :
>
> On Wednesday, June 1, 2016 at 9:40:54 AM UTC, John leger wrote: 
>>
>> So for now the best is to build a toy that is equivalent in processing 
>> time to the original and see by myself what I'm able to get.
>> We have many ideas, many theories due to the nature of the GC so the best 
>> is to try.
>>
>> Páll -> Thanks for the links
>>
>
> No problem.
>
> While I did say it would be cool to now of Julia in space, I would hate 
> for the project to fail because of Julia (because of my advice).
>
> I endorse Julia for all kinds of uses, hard real-time (and building 
> operating systems) are where I have doubts.
>
> A. I thought a little more about making a macro @nogc to mark functions, 
> and it's probably not possible. You could I guess for one function, as the 
> macro has access to the AST of it. But what you really want to disallow, is 
> that function calling functions that are not similarly marked. I do not 
> know about metadata on functions and if a nogc-bit could be put in, but 
> even then, in theory couldn't that function be changed at runtime..?
>
> What you would want is that this nogc property is statically checked as I 
> guess D does, but Julia isn't separately compiled by default. Note there is 
> Julia2C, and see
>
> http://juliacomputing.com/blog/2016/02/09/static-julia.html
>
> for gory details on compiling Julia.
>
> I haven't looked, I guess Julia2C does not generate malloc and free, only 
> some malloc substitute in libjulia runtime. That substitute will allocate 
> and run the GC when needed. These are the calls you want to avoid in your 
> code and could maybe grep for.. There is a Lint.jl tool, but as memory 
> allocation isn't an error it would not flag it, maybe it could be an 
> option..
>
> B. One idea I just had (in the shower..), if @nogc is used or just on 
> "gc_disable" (note it is deprecated*), it would disallow allocations (with 
> an exception if tried), not just postpone them, it would be much easier to 
> test if your code uses allocations or calls code that would. Still, you 
> would have to check all code-paths..
>
> C. Ada, or the Spark-subset, might be the go-to language for hard 
> real-time. Rust seems also good, just not as tried. D could also be an 
> option with @nogc. And then there is C and especially C++ that I try do 
> avoid recommending.
>
> D. Do tell if you only need soft real-time, it makes the matter so much 
> simpler.. not just programming language choice..
>
> *
> help?> gc_enable
> search: gc_enable
>
>   gc_enable(on::Bool)
>
>   Control whether garbage collection is enabled using a boolean argument 
> (true for enabled, false for disabled). Returns previous GC state. Disabling
>   garbage collection should be used only with extreme caution, as it can 
> cause memory use to grow without bound.
>
>
>  
>
>>
>> Le mardi 31 mai 2016 18:44:17 UTC+2, Páll Haraldsson a écrit : 
>>>
>>> On Monday, May 30, 2016 at 8:19:34 PM UTC, Tobias Knopp wrote: 
>>>>
>>>> If you are prepared to make your code to not perform any heap 
>>>> allocations, I don't see a reason why there should be any issue. When I 
>>>> once worked on a very first multi-threading version of Julia I wrote 
>>>> exactly such functions that won't trigger gc since the later was not 
>>>> thread 
>>>> safe. This can be hard work but I would assume that its at least not more 
>>>> work than implementing the application in C/C++ (assuming that you have 
>>>> some Julia experience)
>>>>
>>>
>>> I would really like to know why the work is hard, is it getting rid of 
>>> the allocations, or being sure there are no more hidden in your code? I 
>>> would also like to know then if you can do the same as in D language:
>>>
>>> http://wiki.dlang.org/Memory_Management 
>>>
>>> The most reliable way to guarantee latency is to preallocate all data 
>>> that will be needed by

[julia-users] Re: Using Julia for real time astronomy

2016-06-01 Thread Páll Haraldsson
On Wednesday, June 1, 2016 at 9:40:54 AM UTC, John leger wrote:
>
> So for now the best is to build a toy that is equivalent in processing 
> time to the original and see by myself what I'm able to get.
> We have many ideas, many theories due to the nature of the GC so the best 
> is to try.
>
> Páll -> Thanks for the links
>

No problem.

While I did say it would be cool to know of Julia in space, I would hate 
for the project to fail because of Julia (because of my advice).

I endorse Julia for all kinds of uses, hard real-time (and building 
operating systems) are where I have doubts.

A. I thought a little more about making a macro @nogc to mark functions, 
and it's probably not possible. You could I guess for one function, as the 
macro has access to the AST of it. But what you really want to disallow, is 
that function calling functions that are not similarly marked. I do not 
know about metadata on functions and if a nogc-bit could be put in, but 
even then, in theory couldn't that function be changed at runtime..?

What you would want is that this nogc property is statically checked as I 
guess D does, but Julia isn't separately compiled by default. Note there is 
Julia2C, and see

http://juliacomputing.com/blog/2016/02/09/static-julia.html

for gory details on compiling Julia.

I haven't looked, I guess Julia2C does not generate malloc and free, only 
some malloc substitute in libjulia runtime. That substitute will allocate 
and run the GC when needed. These are the calls you want to avoid in your 
code and could maybe grep for.. There is a Lint.jl tool, but as memory 
allocation isn't an error it would not flag it, maybe it could be an 
option..

B. One idea I just had (in the shower..), if @nogc is used or just on 
"gc_disable" (note it is deprecated*), it would disallow allocations (with 
an exception if tried), not just postpone them, it would be much easier to 
test if your code uses allocations or calls code that would. Still, you 
would have to check all code-paths..

C. Ada, or the Spark subset, might be the go-to language for hard 
real-time. Rust seems also good, just not as tried. D could also be an 
option with @nogc. And then there is C, and especially C++, which I try to 
avoid recommending.

D. Do tell if you only need soft real-time, it makes the matter so much 
simpler.. not just programming language choice..

*
help?> gc_enable
search: gc_enable

  gc_enable(on::Bool)

  Control whether garbage collection is enabled using a boolean argument 
(true for enabled, false for disabled). Returns previous GC state. Disabling
  garbage collection should be used only with extreme caution, as it can 
cause memory use to grow without bound.


 

>
> Le mardi 31 mai 2016 18:44:17 UTC+2, Páll Haraldsson a écrit :
>>
>> On Monday, May 30, 2016 at 8:19:34 PM UTC, Tobias Knopp wrote:
>>>
>>> If you are prepared to make your code to not perform any heap 
>>> allocations, I don't see a reason why there should be any issue. When I 
>>> once worked on a very first multi-threading version of Julia I wrote 
>>> exactly such functions that won't trigger gc since the later was not thread 
>>> safe. This can be hard work but I would assume that its at least not more 
>>> work than implementing the application in C/C++ (assuming that you have 
>>> some Julia experience)
>>>
>>
>> I would really like to know why the work is hard, is it getting rid of 
>> the allocations, or being sure there are no more hidden in your code? I 
>> would also like to know then if you can do the same as in D language:
>>
>> http://wiki.dlang.org/Memory_Management 
>>
>> The most reliable way to guarantee latency is to preallocate all data 
>> that will be needed by the time critical portion. If no calls to allocate 
>> memory are done, the GC will not run and so will not cause the maximum 
>> latency to be exceeded.
>>
>> It is possible to create a real-time thread by detaching it from the 
>> runtime, marking the thread function @nogc, and ensuring the real-time 
>> thread does not hold any GC roots. GC objects can still be used in the 
>> real-time thread, but they must be referenced from other threads to prevent 
>> them from being collected."
>>
>> that is would it be possible to make a macro @nogc and mark functions in 
>> a similar way? I'm not aware that such a macro is available, to disallow. 
>> There is a macro, e.g. @time, that is not sufficient, that shows GC 
>> actitivy, but knowing there was none could have been an accident; if you 
>> run your code again and memory fills up you see different result.
>>
>> As with D, the GC in Julia is optional. The above @nogc, is really the 
>

[julia-users] Re: can't get pyjulia to work

2016-05-31 Thread Páll Haraldsson
On Friday, May 27, 2016 at 12:07:40 PM UTC, Nirav Shah wrote:
>
> I tried this for first time today and I see same exact error :( 
> I can run PyCall and it's version is 1.6.1+. So I assume problem is in 
> pyjulia
>

From memory, in some of the issues mentioned:

You must use/pin version 1.3.0 of PyCall.jl.

At least for now.. I do not know more; I do not use pyjulia, and have only 
tried PyCall in the past to experiment.

-- 
Palli.
 

>
>
> On Tuesday, March 1, 2016 at 7:40:03 AM UTC-8, Neal Becker wrote:
>>
>> From time-to-time, I get interested in trying out moving some of my work 
>> from python to julia.  Before I can even start, I need to be able to call 
>> from python to julia.  But I've never gotten pyjulia to work on 
>> linux/fedora 
>> (currently 23).  I've tried the fedora version of julia (0.4.3), and I've 
>> built my own julia today from master, and in both cases I get: 
>>
>> j = julia.Julia (jl_init_path='/home/nbecker/julia') 
>> ERROR: UndefVarError: dlpath not defined 
>>  in eval(::Module, ::Any) at ./boot.jl:267 
>>  [inlined code] from ./sysimg.jl:14 
>>  in process_options(::Base.JLOptions) at ./client.jl:239 
>>  in _start() at ./client.jl:318 
>> Traceback (most recent call last): 
>>   File "/home/nbecker/pyjulia/julia/core.py", line 238, in __init__ 
>> """]) 
>>   File "/usr/lib64/python3.4/subprocess.py", line 620, in check_output 
>> raise CalledProcessError(retcode, process.args, output=output) 
>> subprocess.CalledProcessError: Command 
>> '['/home/nbecker/julia/usr/bin/julia', '-e', '\n 
>> println(JULIA_HOME)\n 
>> println(Sys.dlpath(dlopen("libjulia")))\n ']' 
>> returned 
>> non-zero exit status 1 
>>
>> During handling of the above exception, another exception occurred: 
>>
>> Traceback (most recent call last): 
>>   File "", line 1, in  
>>   File "/home/nbecker/pyjulia/julia/core.py", line 244, in __init__ 
>> raise JuliaError('error starting up the Julia process') 
>> julia.core.JuliaError: error starting up the Julia process 
>>
>>

[julia-users] Re: Using Julia for real time astronomy

2016-05-31 Thread Páll Haraldsson
On Monday, May 30, 2016 at 8:19:34 PM UTC, Tobias Knopp wrote:
>
> If you are prepared to make your code to not perform any heap allocations, 
> I don't see a reason why there should be any issue. When I once worked on a 
> very first multi-threading version of Julia I wrote exactly such functions 
> that won't trigger gc since the later was not thread safe. This can be hard 
> work but I would assume that its at least not more work than implementing 
> the application in C/C++ (assuming that you have some Julia experience)
>

I would really like to know why the work is hard, is it getting rid of the 
allocations, or being sure there are no more hidden in your code? I would 
also like to know then if you can do the same as in D language:

http://wiki.dlang.org/Memory_Management 

The most reliable way to guarantee latency is to preallocate all data that 
will be needed by the time critical portion. If no calls to allocate memory 
are done, the GC will not run and so will not cause the maximum latency to 
be exceeded.

It is possible to create a real-time thread by detaching it from the 
runtime, marking the thread function @nogc, and ensuring the real-time 
thread does not hold any GC roots. GC objects can still be used in the 
real-time thread, but they must be referenced from other threads to prevent 
them from being collected."

that is, would it be possible to make a macro @nogc and mark functions in a 
similar way? I'm not aware that such a macro is available to disallow 
allocations. There is a macro, e.g. @time, that shows GC activity, but it 
is not sufficient: seeing no GC activity could have been an accident; if 
you run your code again and memory fills up, you see a different result.

As with D, the GC in Julia is optional. The above @nogc is really the only 
thing I can think of that is better with their optional memory management. 
But I'm no expert on D, and I may not have looked too closely:

https://dlang.org/spec/garbage.html


> Tobi
>
> Am Montag, 30. Mai 2016 12:00:13 UTC+2 schrieb John leger:
>>
>> Hi everyone,
>>
>> I am working in astronomy and we are thinking of using Julia for a real 
>> time, high performance adaptive optics system on a solar telescope.
>>
>> This is how the system is supposed to work: 
>>1) the image is read from the camera
>>2) some correction are applied
>>3) the atmospheric turbulence is numerically estimated in order to 
>> calculate the command to be sent to the deformable mirror
>>
>> The overall process should be executed in less than 1ms so that it can be 
>> integrated to the chain (closed loop).
>>
>> Do you think it is possible to do all the computation in Julia or would 
>> it be better to code some part in C/C++. What I fear the most is the GC but 
>> in our case we can pre-allocate everything, so once we launch the system 
>> there will not be any memory allocated during the experiment and it will 
>> run for days.
>>
>> So, what do you think? Considering the current state of Julia will I be 
>> able to get the performances I need. Will the garbage collector be an 
>> hindrance ?
>>
>> Thank you.
>>
>

[julia-users] Re: Using Julia for real time astronomy

2016-05-31 Thread Páll Haraldsson
On Monday, May 30, 2016 at 12:10:39 PM UTC, Uwe Fechner wrote:
>
> I think, that would be difficult.
>
> As soon as you use any packages for image conversion or estimation you 
> have to assume that they use dynamic memory allocation.
>
> The garbage collector of Julia is fast, but not suitable for hard 
> real-time requirements. Implementing a garbage collector for hard real-time
> applications is possible, but a lot of work and will probably not happen 
> in the near future.
>
> Their was an issue on this topic, that was closed as "won't fix":
> https://github.com/JuliaLang/julia/issues/8543 
> 
>

Well, the "won't fix"-label was later taken off the issue.

Yes, the issue is still closed, but it's unclear to me what has changed 
with the GC, when. I know incremental GC was implemented at some point. No 
hard-real-time GC is available.

It would be cool to know of Julia in space so I gave this some thought..

I recall that MicroPython claimed hard-real-time GC (also available for 
Java with Metronome), that is, predictable pause times. I remember 
thinking: how can they do/claim that (and, if I recall, without changing 
the GC)? MicroPython is meant for microcontrollers (at the time only one), 
which have a known amount of memory. I can't locate the information I read 
at the time now; I think they were talking about the megabytes range. Then, 
in the worst case, you have to scan a fixed amount of memory, and the speed 
of the CPU is also known. Unlike with MicroPython, you will have an 
operating system (one that is not real-time, though Linux can be configured 
as such, but caches are a problem..). Maybe if you can limit the RAM, or 
just how much Julia will try to allocate, it helps in the same way.

Anyway, you may not strictly need hard real-time. I think, as always (in 
non-real-time, non-concurrent GC variants), garbage collection only happens 
when you try to allocate memory and the heap is full. If you preallocate 
all memory and make sure no more is allocated, I can't see the GC being a 
problem (you can also disable it for some period of time).

Libc.malloc and Libc.free are also available in Julia..
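
For completeness, a small hedged sketch of that route (Libc.malloc and 
Libc.free are in Base; the buffer size here is made up), keeping a buffer 
entirely outside the GC heap:

# Manual memory, never seen by the GC; freeing it is your responsibility.
n = 512 * 512
p = convert(Ptr{Float64}, Libc.malloc(n * sizeof(Float64)))
for i = 1:n
    unsafe_store!(p, 0.0, i)          # zero the buffer by hand
end
x = unsafe_load(p, 1)                 # read an element back
Libc.free(p)                          # no collector will ever do this for you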


[Possibly it helps to split your task into more than one process, with 
only one of them real-time? If you can have shared memory between the two 
processes, would that help? Be careful with that.. I'm not sure it's a good 
idea, or at least I would need to explain it better..]
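
A rough sketch of what I mean, using Base's SharedArray (0.4/0.5 API; the 
process split and the sizes are invented for illustration):

addprocs(1)                               # worker 2 is the non-real-time side
frame = SharedArray(Float64, (512, 512))  # one block of memory, visible to both
@spawnat 2 fill!(frame, 0.0)              # the non-critical process writes into it
# the latency-critical process only reads `frame`, allocating nothing itself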



https://github.com/micropython/micropython/wiki/FAQ
"Regarding RAM usage, MicroPython can start up with 2KB of heap. Adding 
stack and required static memory, a 4KB microcontroller could start a 
MicroController, but hardly could go further than interpreting simple 
expressions. Thus, 8KB is minimal amount to run simple scripts."

https://forum.micropython.org/viewtopic.php?t=1778
"today I painfully learned, that uPy's automatic garbage collection can 
really mess up your 500Hz feedback control loop, since it takes forever 
(>1ms  :o :shock: :cry: )."


http://entitycrisis.blogspot.is/2007/12/is-hard-real-time-python-possible.html

http://stackoverflow.com/questions/1402933/python-on-an-real-time-operation-system-rtos


>
> Uwe
>
> On Monday, May 30, 2016 at 12:00:13 PM UTC+2, John leger wrote:
>>
>> Hi everyone,
>>
>> I am working in astronomy and we are thinking of using Julia for a real 
>> time, high performance adaptive optics system on a solar telescope.
>>
>> This is how the system is supposed to work: 
>>1) the image is read from the camera
>>2) some correction are applied
>>3) the atmospheric turbulence is numerically estimated in order to 
>> calculate the command to be sent to the deformable mirror
>>
>> The overall process should be executed in less than 1ms so that it can be 
>> integrated to the chain (closed loop).
>>
>> Do you think it is possible to do all the computation in Julia or would 
>> it be better to code some part in C/C++. What I fear the most is the GC but 
>> in our case we can pre-allocate everything, so once we launch the system 
>> there will not be any memory allocated during the experiment and it will 
>> run for days.
>>
>> So, what do you think? Considering the current state of Julia will I be 
>> able to get the performances I need. Will the garbage collector be an 
>> hindrance ?
>>
>> Thank you.
>>
>

Re: [julia-users] switch superior

2016-05-12 Thread Páll Haraldsson
On Monday, May 9, 2016 at 6:10:40 AM UTC, Tamas Papp wrote:
>
> Was there more recent discussion about switch? I think I missed it, last 
> thing I am aware of is #5410. And of course see Switch.jl.
>

I took a look, and it implements C-style switch with its pros (mainly 
useful if you are translating C code line by line) and cons. While I find 
it great that implementing switch (and match) with macros is possible in 
Julia, I also see at:

https://github.com/JuliaLang/julia/issues/5410

that Go's switch fixed C's "software engineering" mistakes (though OO 
polymorphism, and Julia's multiple dispatch, are also alternatives), with 
the "fallthrough" keyword still allowing "Duff's device"-style fast code.

Something like:

https://github.com/kmsquire/Match.jl

should be emphasized over switch.
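
For example, the divisibility case from this thread could look something 
like this with Match.jl (guard syntax as I read it in that package's 
README; untested):

using Match

input = 119
@match input begin
    n, if n % 2 == 0 end => println("2 divides input")
    n, if n % 3 == 0 end => println("3 divides input")
    n, if n % 5 == 0 end => println("5 divides input")
    n, if n % 7 == 0 end => println("7 divides input")
    _                    => println("no small divisor found")
end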

-- 
Palli.


> In any case, I find the proposed syntax a bit obscure and 
> complicated. I would prefer using the result of 
>
> findfirst(x->input % x == 0,[2,3,5,7]) 
>
> or if that does not help, then explicit currying, 
>
> input = 119 
> let d(x) = input % x == 0 
> if d(2) 
> # code 
> elseif d(3) 
> # code 
> elseif d(5) 
> # code 
> elseif d(7) 
> # code 
> end 
> end 
>
> Best, 
>
> Tamas 
>
>
>
> On Mon, May 09 2016, Ford Ox wrote: 
>
> > I have a little suggestion: If julia is going to have switch, could we 
> make it a bit better? 
> > 
> > Basically the switch would take two parameters : function and variable. 
> On each case it would would 
> > call the function with those two params, and if the functions would 
> return true, it would evaluate the case 
> > block. 
> > 
> > Note: the function has to return boolean. 
> > 
> > Example 
> > 
> > function divides(a, b) 
> >   return a % b == 0 
> > end 
> > 
> > input = 119 
> > switch(divides, input) 
> >   case 2 # this can be translated as ~ if(divides(input, 2)) 
> >   case 3 # 3 divides input without remainder 
> >   case 5 # 5 divides input without remainder 
> >   case 7 # 7 divides input without remainder 
> > end 
> > 
> > Of course you could achieve the default switch behavior like this: 
> > switch(==, input) 
> > ... 
> > 
> > What do you think about it? 
>


[julia-users] Re: Julia large project example.

2016-05-12 Thread Páll Haraldsson
On Thursday, May 12, 2016 at 8:45:47 AM UTC, Ford Ox wrote:
>
> I have searched docs but didn't find any info on this.
>
> *How should one structure larger project (say minesweeper) with data 
> encapsulation on mind?*
>

Funny you should mention Minesweeper and "larger" in the same sentence, as 
the 80-line Minesweeper is my favorite example of concise Julia code to 
show:

http://escher-jl.org/ *

https://github.com/shashi/Escher.jl

That includes all the code (excluding the generic Escher library itself and 
its dependencies), and doesn't need JavaScript, HTML or CSS..


In general, similar to Python, you can access the data members of your 
types (classes in Python), and there is no "private" as in C++ (or 
"protected", which wouldn't apply here), but it's considered bad practice 
to do so.

There are very large Julia projects, so this is not a hindrance. See also:

https://en.wikipedia.org/wiki/Composition_over_inheritance
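
A minimal sketch of what that looks like in practice (0.4/0.5-era `type` 
syntax; the Minesweeper-ish names are made up): fields stay technically 
reachable, but only the small accessor functions are treated as the public 
interface, and a Board *has* Cells rather than inheriting from them.

type Cell
    mined::Bool
    revealed::Bool
end

type Board
    cells::Matrix{Cell}       # composition instead of inheritance
end

# the "public" interface, by convention: small functions, not field access
is_mined(b::Board, i, j) = b.cells[i, j].mined
reveal!(b::Board, i, j)  = (b.cells[i, j].revealed = true; b)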


* Very strangely, the website comes up, then turns blank, at least in 
Firefox. I thought I had screwed something up in the browser with some 
add-on, but in fact I happened to get a new machine and now have a clean 
profile. This used to work..

 

>
>
> Do you have link to any existing project, or could you provide an 
> interface or graph, showing, how would you implement ^?
>
>
> All projects I have written so far, feel like a mess (* pretty much like 
> android app developing* :P ).
>


[julia-users] Re: Some sort of sandboxing and scheduling and persistence of state

2016-04-17 Thread Páll Haraldsson
On Tuesday, April 12, 2016 at 3:22:50 PM UTC, Aaron R. M. wrote:
>
> Hey I'm looking for a solution to some problems that i have using Julia as 
> a general purpose language.
> Given that there are no access modifiers i cannot restrict people calling 
> functions i don't want them to call.
> Use case is a game (yeah that's somewhat new territory) that has its 
> codebase (provided by me) but allows scripting the environment (within some 
> borders also outlined by me). Examples would be spawning some 
> plants/animals etc. manipulating what happens on clicking and so on. All 
> stuff that changes your experience in the game.
>
> But i clearly don't want people do be able to accidentally harm their 
> computers by downloading/removing/creating anything in their file system.
> The other obvious reason is that i don't want people to be able to open a 
> file stream download a virus and execute it (what theoretically would be 
> possible). This is no problem in a singleplayer game but imagine it being 
> multiplayer.
>

I hope I didn't scare you off, but you wouldn't normally send code from one 
client to another.

As I said previously, Java is an environment designed to handle arbitrary 
code from the internet (though the implementations have had bugs..).

Maybe in the future Julia will have something like:

https://docs.python.org/2/library/restricted.html

"Restricted execution is the basic framework in Python that allows for the 
segregation of trusted and untrusted code."

I imagine e.g. Blender uses such.

I know that at CCP Games they use [Stackless] Python for EVE Online. I 
still think they do not use such an environment, as the clients should only 
get code from their servers. And the servers do not trust the clients; 
trusting them wouldn't really be appropriate in a multiplayer game.

What I was describing (a safe environment) is probably overkill for your 
game, or any game. And one more thing I remember: you would want to 
restrict file-system access, and/or at least the run function..

> This case could also be handled by just not sharing stuff or having no 
> multiplayer but i think you get my problem.
>
> The next problem that also results from the very same game idea was the 
> following:
> Mind you've got some automation tool that does anything on its own (like 
> protecting you like a shield) now say this shield does something scheduled 
> as well. Since this is all coded by the player there are no real hooks in 
> his code. So what can i do to preserve the current state of the execution 
> of this script in case he leaves the game or pauses it. (Means i need 
> something to make the current execution state persistent aswell as being 
> able to stop it at any given point and of course resume it later)
> This problem leads to something similar to an interpreter or even debugger 
> (or even more abstract a VM, but i think that'd be overkill) but for the 
> current existing ones they clearly don't focus on persistence and the 
> ability to yield after like 10 execution steps. A perfect solution to this 
> problem would be some sort of scheduler for the interpreter/debugger that 
> can save the state of it's scheduled program. But sadly something like that 
> doesn't exist (at least i didn't find it).
>
> So are there any tips/workarounds to come nearer to my dream of game?
> Partial solutions/solutions/ideas etc even for "only" one of the problems 
> are highly appreciated.
>
> Of course i know these problems are very special but I'm sure that having 
> more people reading it might result in some very nice ideas.
>


[julia-users] Re: Some sort of sandboxing and scheduling and persistence of state

2016-04-17 Thread Páll Haraldsson
On Tuesday, April 12, 2016 at 3:22:50 PM UTC, Aaron R. M. wrote:
>
> Hey I'm looking for a solution to some problems that i have using Julia as 
> a general purpose language.
> Given that there are no access modifiers i cannot restrict people calling 
> functions i don't want them to call.
> Use case is a game (yeah that's somewhat new territory) that has its 
> codebase (provided by me) but allows scripting the environment (within some 
> borders also outlined by me). Examples would be spawning some 
> plants/animals etc. manipulating what happens on clicking and so on. All 
> stuff that changes your experience in the game.
>
> But i clearly don't want people do be able to accidentally harm their 
> computers by downloading/removing/creating anything in their file system.
>

As I think you may know, Julia is not a sandboxed environment (such as the 
JVM, or I guess the similar one on Android).

I've been meaning to write a post on this.. There is some old one, if I 
recall. I would like there to be a safe subset of Julia. As ccall is a 
keyword, I'm not sure it can be disabled. Basically, when you can call 
another lower-level language (C) with it, all bets on safety are off (this 
is like JNI in Java, though that permission is off by default there, 
right?).

For many users ccall is critical for getting Julia popular; even if they do 
not use it directly, they use it indirectly through libraries. I find it 
amazing how self-sufficient Java programmers have gotten inside a sandbox 
(for their needs; I guess most stuff has been implemented in Java).
 

> The other obvious reason is that i don't want people to be able to open a 
> file stream download a virus and execute it (what theoretically would be 
> possible).
>

Right, without a sandbox, finding out whether your code ends up using 
ccall implies solving the "halting problem", which is provably not possible 
in general.

See here on limiting Julia's power (as must happen when compiling to 
static binaries without embedding the compiler):

http://juliacomputing.com/blog/2016/02/09/static-julia.html

"Or if a program written in a dynamic language doesn’t use eval, then it 
can be transpiled to avoid the runtime interpreter[1]."

eval is one of your problems; if you get around that one, disable ccall 
(limiting scripts to Julia-only code..) and run julia with "yes" for:

--check-bounds={yes|no}   Emit bounds checks always or never (ignoring 
declarations)

I think Julia could be made safe. Am I missing something else?
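
A hedged sketch of what such a check could look like (the helper is 
hypothetical, not an existing API, and a syntactic scan is of course much 
weaker than a real sandbox, e.g. it doesn't see through @eval or strings 
built at runtime):

# Walk a parsed user script and refuse to run it if ccall or eval shows up.
function has_forbidden(ex, forbidden = (:ccall, :eval))
    ex in forbidden && return true                        # a bare symbol
    isa(ex, Expr) || return false
    ex.head in forbidden && return true                   # or an expression head
    return any(a -> has_forbidden(a, forbidden), ex.args)
end

script = parse("x = 1 + 2")           # Base.parse on 0.4/0.5
has_forbidden(script) ? error("unsafe script") : eval(script)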

Is this a 3D game? The garbage collection could be a problem.. There are 
ways around it (as with Lua, e.g. only collecting at a safe point such as 
vblank). And there is Libc.malloc etc.
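
Something in that spirit, sketched with the 0.4/0.5 gc_enable/gc names 
(update_world and render are hypothetical stubs here, not any real API):

update_world() = nothing               # hypothetical game step (stub)
render()       = nothing               # hypothetical renderer (stub)

gc_enable(false)                       # no surprise pauses mid-frame
for frame = 1:600
    update_world()
    render()
    if frame % 60 == 0                 # a moment where a pause is acceptable
        gc_enable(true); gc(); gc_enable(false)
    end
end
gc_enable(true)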

> This is no problem in a singleplayer game but imagine it being multiplayer. 
> This case could also be handled by just not sharing stuff or having no 
> multiplayer but i think you get my problem.
>
> The next problem that also results from the very same game idea was the 
> following:
> Mind you've got some automation tool that does anything on its own (like 
> protecting you like a shield) now say this shield does something scheduled 
> as well. Since this is all coded by the player there are no real hooks in 
> his code. So what can i do to preserve the current state of the execution 
> of this script in case he leaves the game or pauses it. (Means i need 
> something to make the current execution state persistent aswell as being 
> able to stop it at any given point and of course resume it later)
> This problem leads to something similar to an interpreter or even debugger 
> (or even more abstract a VM, but i think that'd be overkill) but for the 
> current existing ones they clearly don't focus on persistence and the 
> ability to yield after like 10 execution steps. A perfect solution to this 
> problem would be some sort of scheduler for the interpreter/debugger that 
> can save the state of it's scheduled program. But sadly something like that 
> doesn't exist (at least i didn't find it).
>
> So are there any tips/workarounds to come nearer to my dream of game?
> Partial solutions/solutions/ideas etc even for "only" one of the problems 
> are highly appreciated.
>
> Of course i know these problems are very special but I'm sure that having 
> more people reading it might result in some very nice ideas.
>


[julia-users] Re: Int or Int64

2016-04-16 Thread Páll Haraldsson
On Wednesday, April 13, 2016 at 9:27:00 AM UTC, Bill Hart wrote:
>
> Int is either Int32 or Int64, depending on the machine. Int64 does still 
> seem to be defined on a 32 bit machine. In fact, even Int128 is defined.
>

Yes, this is all safe when you only have one thread, but if you plan for 
the future (threads in Julia), I wonder whether Int64 on 32-bit (and 
Int128, on both 32- and 64-bit) is unsafe, as accesses are non-atomic:

http://preshing.com/20130618/atomic-vs-non-atomic-operations/

See there "torn write". I saw a little surprised that all accesses in C/C++ 
can be non-safe.. so maybe that also applies to Julia.

Do I worry to much, as locks would be the way (or "lock-free"), to guard 
against non-atomic?

If you need big numbers, keep BigInt in mind; I think it should always be 
safe.
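
If it came to that, a hedged sketch with the (experimental in 0.5) 
threading primitives, which sidestep the torn-write worry for a shared 
Int64:

using Base.Threads

counter = Atomic{Int64}(0)
@threads for i = 1:1000
    atomic_add!(counter, 1)       # atomic read-modify-write, no torn writes
end
println(counter[])                # 1000, regardless of thread count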

But of course it is going to have to emulate processor instructions to do 
> 64 bit arithmetic unless the machine actually has such instructions. So it 
> could well be quite a bit slower.
>
> On Wednesday, 13 April 2016 11:09:27 UTC+2, vincent leclere wrote:
>>
>> Hi all,
>>
>> quick question: I am building a package and has been defining types with 
>> Int64 or Float64 properties.
>> Is there any reason why I should be using Int and Float instead ? (Does 
>> Int64 work on 32bits processors ?)
>> Will it be at the price of efficiency loss ?
>>
>> Thanks
>>
>

[julia-users] Re: Defining Python classes from Julia

2016-04-12 Thread Páll Haraldsson
On Friday, April 8, 2016 at 8:59:33 PM UTC, Steven G. Johnson wrote:
>
> Just FYI, cstjean has implemented a really cool new feature in PyCall (
> https://github.com/stevengj/PyCall.jl/pull/250  ... requires PyCall 
> master at the moment): you can now define new Python classes easily from 
> Julia, via a natural syntax:
>
 

> Multiple inheritance, special methods like __init__, and getter/setter 
> properties are all supported.  Better yet, the methods support full Julian 
> multiple dispatch, optional args, and keywords (because, at their core, 
> they are ordinary Julia functions with ordinary dispatch taking place after 
> the Python arguments are converted to Julia types).
>
> It will be interesting to see what new kinds of interoperability this 
> enables.  (I suspect that Python GUI toolkits, which often require you to 
> define your own classes, will become much easier to use.)
>

One thing I can think of: enabling the use of Django with Julia:

https://docs.djangoproject.com/en/1.9/topics/class-based-views/intro/

https://docs.djangoproject.com/es/1.9/topics/db/models/


As I expected, the [GUI/web] framework Hollywood principle ("Don't call us. 
We'll call you") seems to apply to Django too.

As I've never used Django, and more generally do not have a complete 
overview of Python, would you say most any Python library, and now 
framework, is supported with PyCall? At least the language features 
required by Django?

[Is Django preferable over web micro-frameworks, such as the Sinatra-like 
native Julia one? Or some other option, such as http://escher-jl.org/ ? I 
would just like to know that Django, and all the Python web stuff, is 
available.. Might look into all the options.]

-- 
Palli.


