[julia-users] Re: [ANN] Nemo 0.5 released

2016-07-30 Thread Alireza Nejati
Very cool! Is there a page where these changes are described in more 
detail? How has the abstract type hierarchy been changed?

Also, I heard there was supposed to be a complex root isolation method?

On Wednesday, July 27, 2016 at 9:07:44 AM UTC+12, Bill Hart wrote:
>
> Hi all,
>
> We are pleased to release version 0.5 of Nemo, our computer algebra 
> package written in Julia. 
>
> Instructions on how to get Nemo are on our website:
>
> http://nemocas.org/downloads.html
>
> Note that we have moved our repository, so existing users may need to 
> reinstall.
>
> Documentation for Nemo, including example code, how to get started etc., 
> is available online:
>
> http://nemocas.github.io/Nemo.jl/latest/
>
> The new features of Nemo 0.5 include:
>
> * Wrap Arb's arb_mat, acb_mat (matrices over R and C)
> * Wrap Arb's arb_poly, acb_poly (polys over R and C)
> * Completely rewritten, online documentation
> * Wrap Flint's fmpq_mat (matrices over Q)
> * Nullspace over the integers (using HNF)
> * Factorisations now return a Dict
> * Make caching of parent objects optional
> * Add benchmarks
> * Remove a lot of type instability
> * Integrate C libraries with Julia's counted malloc
> * Redesign abstract type hierarchy
> * Appveyor continuous integration for Windows
> * Lots of cleaning up and restructuring of the code base
> * Many bug fixes
>
> We will release a small update in the next few weeks to support Julia 0.5 
> when it comes out. However, it should work with the current nightlies right 
> now, with some warnings.
>  
> Enjoy,
>
> The Nemo Developers.
>
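
As a quick taste of the new fmpq_mat wrapper (matrices over Q), a usage sketch 
might look like the following; the `MatrixSpace`, `QQ`, and `det` names are my 
assumptions based on Nemo's general API, not taken from the announcement:

```julia
using Nemo  # assumes Nemo 0.5 or later is installed

# Build a 2x2 matrix space over Q and compute a determinant.
S = MatrixSpace(QQ, 2, 2)
M = S([1 2; 3 4])
println(det(M))  # exact rational arithmetic, backed by Flint's fmpq_mat
```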


[julia-users] Re: How to build Julia with Visual Studio ?

2016-07-30 Thread Alireza Nejati
What exactly are you trying to do? What's a "julia sample"?

On Sunday, July 31, 2016 at 6:17:13 AM UTC+12, jq...@tibco.com wrote:
>
> Hi 
>
>I am trying to build a Julia sample with Visual Studio and got a link 
> error (it works on Linux). Which library should it link to?
>
> unresolved external symbol __imp__jl_atexit_hook referenced in function
> unresolved external symbol __imp__jl_init referenced 
>
>
> Thanks for the help
> Jason
>
>   
>


[julia-users] Re: ANN: Algebraic numbers

2016-07-13 Thread Alireza Nejati
To confuse two roots, the approximation error would have to be larger than 
the minimum distance between two roots. I'm using PolynomialRoots.jl to 
calculate roots, and it has the ability to calculate roots to very high 
precision (using BigFloats) but of course it's hard to *guarantee* precision. 
If Nemo.jl has guaranteed root isolation, that would be very helpful, and 
I'll look into using it. Thanks.

As for highly clustered roots, you can take the complexity of the 
polynomial to be the sum of the number of bits required to represent its 
coefficients, in which case my point stands - you need fairly complex 
polynomials to get roots clustered in a way that would trip up my 
implementation. Again, I'll look into using Nemo's root isolation methods.
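
For illustration, the minimum-root-distance idea could be sketched roughly as 
follows with PolynomialRoots.jl, whose `roots` function takes coefficients in 
ascending order; treat the exact usage as an assumption on my part:

```julia
using PolynomialRoots  # assumes the package is installed

# Roots of x^2 - 3x + 2 = (x - 1)(x - 2); coefficients ascending.
rts = roots([2.0, -3.0, 1.0])

# Smallest pairwise distance between roots; if the approximation
# error exceeds this, two roots could be confused.
n = length(rts)
mindist = minimum(abs(rts[i] - rts[j]) for i in 1:n, j in 1:n if i < j)
```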

Cheers,

On Thursday, July 14, 2016 at 10:35:23 AM UTC+12, Fredrik Johansson wrote:
>
> On Wednesday, July 13, 2016 at 10:18:06 PM UTC+2, Alireza Nejati wrote:
>>
>> Simplification and equality testing are *exact* operations, as they work 
>> by specifying a particular root of a minimal polynomial. Two algebraic 
>> numbers are distinct if their minimal polynomials are distinct. If their 
>> minimal polynomials are equal, the two numbers are equal only if they are 
>> the same root out of a small set of distinct roots, and those roots can be 
>> distinguished exactly by calculating the smallest distance between them 
>> (this is AlgebraicNumber.prec). I'm not sure what you mean by nonrigorous 
>> numerical approximations.
>>
>
> If the roots are computed approximately, you can get wrong results when 
> you compare them without taking the approximation errors into account.
>
>
>> The worst-case scenario would be if the minimal polynomial had two roots 
>> that were too close for the root-finding procedure to distinguish. If this 
>> happens in practice though, because of the restriction of polynomial 
>> coefficients to integers, you'd be dealing with a very, very complex 
>> polynomial, and at that point you'd have more problems (it's likely that + 
>> and * would exhaust system memory or at least take a *very* long time to 
>> process).
>>
>
> Integer polynomials with highly clustered roots are not at all a rare 
> thing. Note that you don't even need high degree for this; large 
> coefficients suffice.
>
> Fredrik
>


[julia-users] Re: ANN: Algebraic numbers

2016-07-13 Thread Alireza Nejati
Simplification and equality testing are *exact* operations, as they work 
by specifying a particular root of a minimal polynomial. Two algebraic 
numbers are distinct if their minimal polynomials are distinct. If their 
minimal polynomials are equal, the two numbers are equal only if they are 
the same root out of a small set of distinct roots, and those roots can be 
distinguished exactly by calculating the smallest distance between them 
(this is AlgebraicNumber.prec). I'm not sure what you mean by nonrigorous 
numerical approximations.

The worst-case scenario would be if the minimal polynomial had two roots 
that were too close for the root-finding procedure to distinguish. If this 
happens in practice though, because of the restriction of polynomial 
coefficients to integers, you'd be dealing with a very, very complex 
polynomial, and at that point you'd have more problems (it's likely that + 
and * would exhaust system memory or at least take a *very* long time to 
process).

It's also not guaranteed that the composed_sum and composed_product will 
always return the correct answer. Not because the calculation is 
approximate (it's not) but because it's possible that there could be bugs 
in my implementation. If you find a bug please let me know on the github 
repo.

On Thursday, July 14, 2016 at 2:16:42 AM UTC+12, Fredrik Johansson wrote:
>
>
>
> On Wednesday, July 13, 2016 at 2:21:18 AM UTC+2, Alireza Nejati wrote:
>>
>> Ever wanted to do exact arithmetic, geometry, and so on? Well now you can:
>>
>> https://github.com/anj1/AlgebraicNumbers.jl
>>
>
> Looks nice, though it appears that equality testing and simplification 
> use nonrigorous numerical approximations. We want to have an 
> implementation of the field of algebraic numbers in Nemo, with interval 
> arithmetic in the underlying comparisons to guarantee correctness. The 
> necessary tools should be available already (including certified complex 
> root isolation), modulo some wrapping. SageMath has an implementation of 
> algebraic numbers (QQbar) that works similarly.
>
> Fredrik 
>


[julia-users] Re: ANN: Algebraic numbers

2016-07-13 Thread Alireza Nejati
I see. Will try it, thanks for the tip

On Thursday, July 14, 2016 at 12:49:19 AM UTC+12, Tommy Hofmann wrote:
>
> You have the same problem with QQ. Use FlintQQ instead of QQ.
>
> On Wednesday, July 13, 2016 at 1:31:23 PM UTC+2, Alireza Nejati wrote:
>>
>> Tommy: Thanks!
>>
>> About ZZ, I didn't know that. Thanks. I only use ZZ when carrying out 
>> polynomial factoring though. For most everything else I use QQ. Would it 
>> make much of a difference?
>>
>> Cheers,
>>
>> On Wednesday, July 13, 2016 at 10:13:41 PM UTC+12, Tommy Hofmann wrote:
>>>
>>> Looks cool!
>>>
>>> It is quite different to Hecke. You work in the field of all algebraic 
>>> numbers, while Hecke works with elements inside an algebraic number field.
>>>
>>> I skimmed over the code and noticed that you use ZZ. If you care about 
>>> performance, you might want to change ZZ to FlintZZ. The variable ZZ is 
>>> a global variable, while FlintZZ is a constant.
>>>
>>> On Wednesday, July 13, 2016 at 2:21:18 AM UTC+2, Alireza Nejati wrote:
>>>>
>>>> Ever wanted to do exact arithmetic, geometry, and so on? Well now you 
>>>> can:
>>>>
>>>> https://github.com/anj1/AlgebraicNumbers.jl
>>>>
>>>

[julia-users] Re: ANN: Algebraic numbers

2016-07-12 Thread Alireza Nejati
Indeed!

Hecke.jl also has some similar abilities.

On Wednesday, July 13, 2016 at 12:29:28 PM UTC+12, Jeffrey Sarnoff wrote:
>
> (another good use of Nemo!)
>
> On Tuesday, July 12, 2016 at 8:21:18 PM UTC-4, Alireza Nejati wrote:
>>
>> Ever wanted to do exact arithmetic, geometry, and so on? Well now you can:
>>
>> https://github.com/anj1/AlgebraicNumbers.jl
>>
>

[julia-users] ANN: ThinPlateSplines.jl

2016-05-23 Thread Alireza Nejati
https://github.com/anj1/ThinPlateSplines.jl


[julia-users] Re: ANN: AffineSpaces.jl (work in progress)

2016-05-16 Thread Alireza Nejati
Indeed you're right; I wrote that example in a hurry. It's been updated 
(I've taken a lot more care in the actual code - documentation was never my 
forte).

What kind of vehicle routing algorithms do you use? I'd be interested to 
know the uses people can find for stuff like this. I'm personally using it 
in a 2D and 3D geometry library I've been working on (which I'll release 
as soon as I can get it cleaned up).

On Tuesday, May 17, 2016 at 3:08:14 AM UTC+12, Evan Fields wrote:
>
> Very cool stuff; I could see this being really useful in heuristic vehicle 
> routing work I do.
>
> By the way, in the readme should the 2d example which creates the line y = 
> 1 first create the x-axis (y=0) and then offset? It looks like you're using 
> the second component vector [0,1].
>


[julia-users] ANN: AffineSpaces.jl (work in progress)

2016-05-13 Thread Alireza Nejati
https://github.com/anj1/AffineSpaces.jl

If you're doing computational geometry but are tired of copy-pasting 
fragile stackoverflow answers to do simple things like point-line distance 
and so on (or choosing between that and some huge and bloated computational 
geometry library like CGAL), this is for you.

It's still a work in progress. I'm going to add some more computational 
geometry like meshes, convex hulls, constructive solid geometry, and so on 
(I might do those in another repo that pulls base functionality from this 
one).

Cheers,
Al Nejati


[julia-users] Re: Creating a stable version of Julia + Packages for a semester long course?

2015-11-16 Thread Alireza Nejati
John Lambert: Unless you can run julia in sagemath cloud, I fail to see how 
it is relevant to the discussion at hand. Sheehan clearly wants to run 
julia, not sage.


[julia-users] Re: Google releases TensorFlow as open source

2015-11-11 Thread Alireza Nejati
> From reading through some of the TensorFlow docs, it seems to currently 
only run on one machine. This is where MXNet has an advantage (and 
MXNet.jl) as it can run across multiple machines/gpus

I think it's fair to assume that Google will soon release a distributed 
version.

> problem is, there are so many ML toolkits coming out now that things are 
already getting pretty fragmented in the space.

Let them fight it out until one wins, I say.

Anyway, the problem I'm facing right now is that even though TensorFlow's 
Python interface works fine, I can't get TensorFlow's C library to build! 
Has anyone else had any luck with this? I've had to update Java AND gcc 
just to make some progress in building (they use C++11 features, don't 
ask). Plus I had to install Google's own bizarre and buggy build manager 
(Bazel). TensorFlow.jl would be kind of pointless if everyone faced the 
same build issues...


[julia-users] JuliaML

2015-11-11 Thread Alireza Nejati
Hello,

I'd like to join the JuliaML group. My github account name is anj1.

Regards,
Al Nejati


[julia-users] Re: Google releases TensorFlow as open source

2015-11-11 Thread Alireza Nejati
Both! :)

[julia-users] Re: Google releases TensorFlow as open source

2015-11-10 Thread Alireza Nejati
If anyone draws up an initial implementation (or pathway to implementation, 
even), I'd gladly contribute. I think it's highly strategically important 
to have a julia interface to TensorFlow.


[julia-users] Re: Google releases TensorFlow as open source

2015-11-10 Thread Alireza Nejati
Randy: To answer your question, I'd reckon that the two major gaps in julia 
that TensorFlow could fill are:

1. Lack of automatic differentiation on arbitrary graph structures.
2. Lack of ability to map computations across cpus and clusters.

Funnily enough, I've been thinking about (1) for the past few weeks, and I 
think I have an idea of how to accomplish it using existing JuliaDiff libraries. 
About (2), I have no idea, and that's probably going to be the most 
important aspect of TensorFlow moving forward (and also probably the 
hardest to implement). So for the time being, I think it's definitely 
worthwhile to just have an interface to TensorFlow. There are a few ways 
this could be done. Some ways that I can think of:

1. Just tell people to use PyCall directly. Not an elegant solution.
2. A more julia-integrated interface *a la* SymPy.
3. Using TensorFlow as the 'backend' of a novel julia-based machine 
learning library. In this scenario, everything would be in julia, and 
TensorFlow would only be used to map computations to hardware.

I think 3 is the most attractive option, but also probably the hardest to 
do.
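
For illustration, option 1 could be sketched roughly as follows, assuming the 
2015-era TensorFlow 1.x Python API and PyCall's `@pyimport` / `obj[:method]` 
conventions of that time:

```julia
using PyCall  # assumes PyCall and the Python tensorflow package are installed

@pyimport tensorflow as tf          # PyCall's import macro of that era

sess = tf.Session()                 # TF 1.x-style session
a = tf.constant(2.0)
b = tf.constant(3.0)
println(sess[:run](tf.add(a, b)))   # run the graph node and print the sum
```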


[julia-users] Re: For loop = or in?

2015-10-29 Thread Alireza Nejati
I'm with Tomas here - if anything this thread is testimony to the fact that 
both '=' and 'in' should be left in.


[julia-users] Re: Moving from 0.3 to 0.4

2015-10-27 Thread Alireza Nejati
Most things in 0.3 will still work in 0.4, except with a handy deprecation 
warning that tells you exactly what to fix. Here are some tips:

- Use Compat.jl. It makes life a LOT easier. It lets you write a single 
version of your code for 0.4 (or, more generally, the latest version of 
julia) while automagically allowing your code to run in previous versions 
as well.

- Union(...) -> Union{...}

- FloatingPoint -> AbstractFloat

- [a; b] -> [a, b]

- [a, b] (if a and b are vectors) -> vcat(a, b)

- float64([...]) -> map(Float64, [...]) or just Float64[...]

These should cover most errors I've seen...

Also a few coding notes: Now that Cartesian.jl is part of the Base library, 
it makes sense to use it whenever you want to. Also, we're gradually moving 
to slices becoming views so you might want to keep that in mind.
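
The renames above can be sketched side by side (0.4-flavoured code that is 
still valid in later Julia versions):

```julia
# 0.3: Union(Int, FloatingPoint)     0.4 and later:
T = Union{Int, AbstractFloat}

# 0.3: [a, b] concatenated vectors   0.4 and later: be explicit
v = vcat([1, 2], [3, 4])             # [1, 2, 3, 4]

# 0.3: float64([1, 2, 3])            0.4 and later:
xs = map(Float64, [1, 2, 3])         # or Float64[1, 2, 3]
```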


[julia-users] Re: [ANN] MXNet.jl - Flexible and Efficient Deep Learning for Julia

2015-10-27 Thread Alireza Nejati
Nice!

Any plans on merging this functionality with that of Mocha.jl?


[julia-users] Re: How to get first number from Int64

2015-10-27 Thread Alireza Nejati
Michele's solution is preferred here, but you can also do it like this 
(note that this returns a Char, not an integer):

string(lista[3])[1]
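
An arithmetic alternative that avoids the string round-trip: `digits` returns 
the least-significant digit first, so the leading digit is the last element.

```julia
n = 1234
first_digit = digits(n)[end]   # digits(1234) == [4, 3, 2, 1]
# first_digit == 1
```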


[julia-users] Re: [Gadfly] : Help regarding plotting

2015-10-27 Thread Alireza Nejati
Also look at: https://github.com/dcjones/Gadfly.jl/blob/master/src/theme.jl 
 (line_style)


[julia-users] Re: [Gadfly] : Help regarding plotting

2015-10-27 Thread Alireza Nejati
The design philosophy of Gadfly seems to be that you should think about the 
data and let the software worry about how to present it.

That said, it is possible to change things like fonts, line thicknesses and 
dash styles, and legend placement through 
themes: http://gadflyjl.org/themes.html

More advanced functionality is possible via the Compose.jl backend.
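
For example, a theme tweak might look roughly like this; the `line_width` and 
`key_position` field names are my assumptions about Gadfly's Theme, so check 
the theme docs linked above:

```julia
using Gadfly  # assumes Gadfly is installed

# Hypothetical sketch: thicker lines and a legend along the top.
plot(x=1:10, y=rand(10), Geom.line,
     Theme(line_width=2pt, key_position=:top))
```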


[julia-users] use of @generated for specializing on functions

2015-10-26 Thread Alireza Nejati
Hi all,

I was wondering if this is a julian use of the @generated macro:

type Functor{Symbol} end

# A simple general product-sum operator;
# returns a[1]⊙b[1] ⊕ a[2]⊙b[2] ⊕ ...
@generated function dot{⊕,⊙,T}(::Type{Functor{⊕}}, ::Type{Functor{⊙}},
                               a::Array{T}, b::Array{T})
    return quote
        assert(length(a) == length(b))
        p = zero(T)
        for i = 1:length(a)
            @inbounds p = $⊕(p, $⊙(a[i], b[i]))
        end
        p
    end
end

The idea is to produce a specialized dot() operator that can work with 
arbitrary product and sum operators yet still be computationally efficient. 
Here are some examples of how to use the above function:

dot(Functor{:+},   Functor{:*}, [-1,0,1], [1,0,1])   # vector dot; returns 0
dot(Functor{:max}, Functor{:+}, [-1,0,1], [1,0,1])   # max-sum; returns 2
dot(Functor{:|},   Functor{:&}, [true,false,true], [false,true,true])
                                # constraint satisfaction; returns true

Testing shows that this is faster by about 10x than passing the functions 
directly, and runs at the same speed. It also doesn't incur any memory 
allocations.


[julia-users] Re: use of @generated for specializing on functions

2015-10-26 Thread Alireza Nejati
I missed a word in there. Meant to say, "runs at the same speed as 
hard-coding the * and + functions". Anyway, I wanted to know if there's a 
better way of doing something similar without using the @generated macro.


[julia-users] Re: A grateful scientist

2015-10-26 Thread Alireza Nejati
I've been coding in julia so much lately that I actually think my brain 
might be forgetting the other languages I used to know!

On Monday, October 26, 2015 at 4:30:26 PM UTC+13, Yakir Gagnon wrote:
>
> Hi Julia community and developers,
> I'm a postdoc researching color vision, biological optics, polarization 
> vision, and camouflage. I've always used Matlab in my research and made the 
> switch to Julia about two years ago. I just wanted to report, for what it's 
> worth, that as a researcher I think Julia is the best. I promote it 
> everywhere I think it's appropriate, and use it almost exclusively. 
> Just wanted to say a big fat thank you to all the developers and community 
> for creating this magnificence.
>
> THANK YOU! 
>


[julia-users] Re: use of @generated for specializing on functions

2015-10-26 Thread Alireza Nejati
I didn't know there was already a discussion going on this. Thanks for the 
links.

My goal here isn't to replace Dot{} but rather to figure out what the most 
julian way of doing this would be. Thanks again though.


[julia-users] Re: For loop = or in?

2015-10-26 Thread Alireza Nejati
There is no difference, as far as I know.

'=' seems to be used more for explicit ranges (i = 1:5) and 'in' seems to 
be used more for variables (i in mylist). But using 'in' for everything is 
ok too.

The '=' is there for familiarity with matlab. Remember that julia's syntax 
was in part designed to be familiar to matlab users.
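
Both forms below are parsed identically:

```julia
xs = Int[]
for i = 1:3          # '=' form, familiar from Matlab
    push!(xs, i)
end
for x in [4, 5]      # 'in' form, reads naturally over collections
    push!(xs, x)
end
# xs == [1, 2, 3, 4, 5]
```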

On Tuesday, October 27, 2015 at 8:26:07 AM UTC+13, FANG Colin wrote:
>
> Hi All
>
> I have got a stupid question:
>
> Are there any difference in "for i in 1:5" and "for i = 1:5"?
>
> Does the julia community prefer one to the other? I see use of both in the 
> documentations and source code.
>
> Personally I haven't seen much use of "for i = 1:5" in other languages.
>
> Thanks.
>


[julia-users] Re: ANN: NeuralNets.jl

2014-07-25 Thread Alireza Nejati
There were some issues with the gradient_descent method, which have now been 
fixed; thanks to Sam Lendel (https://github.com/lendle) for pointing them 
out.

On Wednesday, July 23, 2014 8:15:56 PM UTC+12, Alireza Nejati wrote:





[julia-users] ANN: NeuralNets.jl

2014-07-23 Thread Alireza Nejati
For about two weeks now, Zac Cranko, Pasquale Minervini, and I (Alireza 
Nejati a.k.a. anj1) have been working on a new package for neural networks 
in julia: NeuralNets.jl https://github.com/anj1/NeuralNets.jl.

The goal is to create a clean, modular implementation of neural networks 
that can easily be extended, while keeping it fast. This would not be 
possible in a lot of other languages but it's been pretty straightforward 
in julia so far. Currently we support a whole bunch of training methods 
including Levenberg-Marquardt, gradient descent with momentum, and Adagrad.

We have not yet released a numbered release, so a lot of things are still 
in their preliminary stages. Especially, the documentation is incomplete in 
parts (but you can find working examples in the examples directory). Any 
and all feedback welcome.




[julia-users] Re: push! function and multidimensional arrays

2014-07-23 Thread Alireza Nejati
John, just to give some explanation: push! is there as an efficient append 
operation - one that takes amortized O(1) time because it usually extends 
the vector in place rather than copying everything to a new vector. (The 
amortization comes from occasionally having to allocate a larger backing 
array.) Julia stores arrays in column-major order, so you can push a new 
column onto a matrix by pushing the column to a 1-d array and then 
reshaping, as Ivar said. But you can't do the same with rows, because 
there is no way to append a new row to a matrix in place; you would have 
to shift all the array elements around in memory.

The suggestions above are both good. Another way would be to build the 
transposed matrix by appending columns and then transpose it once at the 
end. The transpose does add O(n) overhead, but depending on what you're 
doing, a single transpose at the end can be much more efficient than 
cat'ing at each iteration.
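
The column-push-then-transpose idea could be sketched like this (modern 
syntax; a minimal illustration, not the original poster's code):

```julia
buf = Float64[]
for row in ([1.0, 2.0], [3.0, 4.0])
    append!(buf, row)                # each intended row pushed as a column
end
A = permutedims(reshape(buf, 2, :))  # reshape to 2×n, then transpose once
# A == [1.0 2.0; 3.0 4.0]
```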



On Thursday, July 24, 2014 7:22:01 AM UTC+12, john pollack wrote:

 Hi. I want to create a 2-column array. It should be empty at first, then I 
 should be able to add rows to it. I was able to do this by: 

 A = Array(Float64,(0,2)) 

 A = vcat(A , [ 1 2 ] ) 

 However, I couldn't use the push! function for this. Is it possible to use 
 the push! function for this aim, and are there any substantial performance 
 differences between push! and vcat?
 Thanks for any help.



[julia-users] Re: GSoC: Julia IDE Progress update

2014-07-01 Thread Alireza Nejati
FWIW, I did a huge upgrade on my system and now that error goes away, but I 
still don't have autocomplete. Anyway, it's not that important, everything 
else is usable and nice.

On Sunday, June 29, 2014 9:46:21 PM UTC+12, Mike Innes wrote:

 Hey all,

 I've released the latest version of the Julia environment 
 https://github.com/one-more-minute/Jupiter-LT I'm building. There are a 
 whole bunch of improvements but the main ones are:

- Support for latex completions (\alpha etc.)
- Support for graphics including Gadfly and Images.jl (though until my 
patch is released you'll need to Pkg.checkout(Gadfly) to get 
interactivity)
- Rewritten and improved autocomplete system, which now completes 
package names in Pkg functions and paths in include statements, and can be 
extended to support anything else
- Support for accessing methods and docs both while on a function and 
within its parentheses
- Auto-detection of the module you're working in
- Links and highlighted lines for error messages
- Semantic highlighting in the june night theme
- Highlighting support for string interpolation
- Full support for unicode
- More documentation
- Tabs are restored after restarting, like Sublime
- Several new and improved rough edges

 I also want to shout out to all the people who have tried this out so far, 
 given feedback, and/or sent me PRs – every bit of enthusiasm really makes a 
 big difference, so thank you.

 – Mike



[julia-users] Re: Lowering the (mostly social/psychological) barriers to sharing packages?

2014-07-01 Thread Alireza Nejati
As someone who is a relative newcomer preparing packages for submission to 
METADATA, I'm also inclined to agree with the above posts. When I first 
started using Julia I was under no illusions that what's in METADATA may 
not necessarily be sanctioned by the core julia developers and caveat 
emptor applies.

On Wednesday, July 2, 2014 1:07:47 PM UTC+12, Iain Dunning wrote:

 Hi all,

 Something that came up in some discussions I had at *JuliaCon* is that 
 people perceive packages in METADATA as being for more serious packages, 
 i.e. by being there there is an implication of a certain minimum quality. A 
 lot of my efforts in the package ecosystem have been try to help package 
 developers to live up to that expectation. A consequence of this perception 
 is that some people might be averse to list their work on METADATA, for 
 fear its not good enough/not ready.

 You can currently list a package on METADATA with:
 - a version 0.0.0, which was the preferred way originally but is now 
 discouraged. This tagged version's hash would be updated as needed (i.e. it 
 doesn't follow master)
 - a listing with no tagged version, which allows someone to do 
 Pkg.add(YourPkg) and automatically get the most up-to-date version of 
 your package.

 Of course, you pretty much need to announce your package somewhere other 
 than METADATA to let users know it exists, and users can use 
 Pkg.clone(..) almost as easily as Pkg.add(..) with a no-version 
 listing. Currently pkg.julialang.org doesn't show packages without a 
 version, so the no-version listing is of limited utility for 
 discoverability.

 A proposal that came up a few times at the conference was for some sort of 
 METADATA-EXTRA, which only has versions of packages without version numbers 
 and is open to everyone and anyone. It'd be super easy to add packages - 
 simply add a name and URL to a list. Perhaps it could be accessed through a 
 package not in Base, e.g. PkgExtra.
 It would have 
 PkgExtra.update_listing() - refresh local list of package names and URLs
 PkgExtra.add(pkgname..) - git clone a package to ~/.julia/v0.x/
 PkgExtra.update(pkgname...) - git pull the packages
 PkgExtra.rm(pkgname...) - nuke the packages
 So basically, super simple. User could even be responsible for satisfying 
 any dependencies of the packages installed this way. At the most, the 
 REQUIRE in the package should be used, to keep this system as light-weight 
 as possible.

 So this wouldn't be much work to get going, but I was more curious to see 
 whether there is actually demand for this. I'm worried it's one of those 
 things people say they want, but I'm not sure the demand is real. This 
 might be bad in that it sort of forks METADATA, which is possibly not a 
 great idea for a new package. On the plus side, it could encourage even 
 more development and sharing.

 Thoughts?



[julia-users] Re: GSoC: Julia IDE Progress update

2014-06-29 Thread Alireza Nejati
I'm on 64-bit Linux, by the way.

On Sunday, June 29, 2014 9:46:21 PM UTC+12, Mike Innes wrote:




[julia-users] Re: GSoC: Julia IDE Progress update

2014-06-29 Thread Alireza Nejati
Nice work, looks much better than my previous Sublime setup.

One issue I'm having is that autocomplete doesn't seem to be working, and 
I'm repeatedly getting:

[8725:0630/141807:ERROR:vsync_provider.cc(70)] glXGetSyncValuesOML should 
not return TRUE with a media stream counter of 0.

And it seems that whenever I type something I get this error more rapidly. 
Perhaps something is wrong with the autocomplete symbol lookup?

On Sunday, June 29, 2014 9:46:21 PM UTC+12, Mike Innes wrote:




[julia-users] Re: GSoC: Julia IDE Progress update

2014-06-29 Thread Alireza Nejati
Another issue: I'm not sure if this is Jupiter-specific, but it overrides 
my tab settings. It changes all the tabs in my files to 2 spaces, which is 
horrendous. I tried changing lt.objs.editor/tab-settings in all 3 of 
default behaviors, user behaviors, and jupiter behaviors, but no dice.



On Sunday, June 29, 2014 9:46:21 PM UTC+12, Mike Innes wrote:




[julia-users] Re: 100 Julia exercises

2014-06-23 Thread Alireza Nejati
Actually that's not a bad idea; someone should start a Julia-specific 
exercise repo.

About Expert.4, I'm not sure how you're running it, but matlist and veclist 
should obviously be lists of matrices and vectors, respectively.

matlist = Matrix[rand(4,4), rand(4,4)]
veclist = Vector[rand(4), rand(4)]
reduce(+, [A*x for A in matlist, x in veclist])

You can also try setting matlist = Matrix[rand(4,4)] and veclist = 
Vector[rand(4)] and it will still work.


On Monday, June 23, 2014 2:43:32 AM UTC+12, Michiaki Ariga wrote:

 Hi all,

 I'm a Julia newbie, and I'm trying to learn Julia by writing a Julia 
 version of rougier's 100 numpy exercises (
 http://www.loria.fr/~rougier/teaching/numpy.100/index.html):

 https://github.com/chezou/julia-100-exercises

 I'd be glad if you could suggest more idiomatic Julia ways, or point out 
 anything wrong.

 Best regards,
 Michiaki



[julia-users] Re: 100 Julia exercises

2014-06-23 Thread Alireza Nejati
Actually, a slight modification. The way I wrote it, it will compute the 
product of all matrices with all vectors (pxp mults), which is not what you 
want. You just want each matrix to multiply its respective vector (p 
mults). The solution to that is:

p = length(matlist)
reduce(+, [matlist[i]*veclist[i] for i = 1:p])
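The same pairing can also be spelled out with `zip` (a minimal sketch in current Julia; the untyped vector literals below are modern syntax, whereas the earlier post uses the typed `Matrix[...]`/`Vector[...]` literals of that era):

```julia
matlist = [rand(4, 4), rand(4, 4)]   # list of matrices
veclist = [rand(4), rand(4)]         # list of vectors

# one product per (matrix, vector) pair, then summed
s = sum(A * x for (A, x) in zip(matlist, veclist))
```

which is equivalent to `matlist[1]*veclist[1] + matlist[2]*veclist[2]`.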

On Monday, June 23, 2014 2:43:32 AM UTC+12, Michiaki Ariga wrote:

 Hi all,

 I'm a Julia newbie, and I'm trying to learn Julia by writing a Julia 
 version of rougier's 100 numpy exercises (
 http://www.loria.fr/~rougier/teaching/numpy.100/index.html):

 https://github.com/chezou/julia-100-exercises

 I'd be glad if you could suggest more idiomatic Julia ways, or point out 
 anything wrong.

 Best regards,
 Michiaki



[julia-users] Re: 100 Julia exercises

2014-06-22 Thread Alireza Nejati
Same with Apprentice.4:

[(x,y) for x in linspace(0,1,10), y in linspace(0,1,10)]

meshgrid() isn't included in Julia because it's almost never really needed.
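(For later readers: the same grid in current Julia, where `linspace` has since been replaced by `range`; the comprehension itself is unchanged:)

```julia
# 10×10 matrix of (x, y) coordinate tuples covering the unit square
pts = [(x, y) for x in range(0, 1, length = 10), y in range(0, 1, length = 10)]
size(pts)    # (10, 10)
pts[1, 1]    # (0.0, 0.0)
```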

Good work on these exercises, although I fear that the questions, being 
designed for numpy, may not accurately reflect typical Julia programming 
patterns and idioms. While numpy and Matlab both place a lot of emphasis on 
vectorization, there is no need at all to vectorize many element-wise 
operations in Julia. In fact, vectorization often makes code both harder to 
read and less efficient. That said, I see you've opted for loops in some of 
the more advanced exercises, which is good.
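To make the point concrete, here is a vectorized expression and its devectorized loop side by side (a sketch in current dot-broadcast syntax, which postdates this thread; in 2014 one would write `sin(x) .* x.^2`):

```julia
x = rand(1000)

# vectorized: concise, but builds intermediate arrays
y_vec = sin.(x) .* x .^ 2

# devectorized: one pass over the data, no temporaries
y_loop = similar(x)
for i in eachindex(x)
    y_loop[i] = sin(x[i]) * x[i]^2
end
```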

Another thing is that a more functional style of coding is also idiomatic 
in Julia. Here's a solution for Expert.4 (which I notice you've left empty):

reduce(+, [A*x for A in matlist, x in veclist])

On Monday, June 23, 2014 2:43:32 AM UTC+12, Michiaki Ariga wrote:

 Hi all,

 I'm a Julia newbie, and I'm trying to learn Julia by writing a Julia 
 version of rougier's 100 numpy exercises (
 http://www.loria.fr/~rougier/teaching/numpy.100/index.html):

 https://github.com/chezou/julia-100-exercises

 I'd be glad if you could suggest more idiomatic Julia ways, or point out 
 anything wrong.

 Best regards,
 Michiaki



[julia-users] Re: Running Julia in a sandboxed environment?

2014-06-21 Thread Alireza Nejati
The easiest way is probably to run julia in a chroot jail: 
http://docs.oracle.com/cd/E37670_01/E36387/html/ol_cj_sec.html

Note that this method is only to prevent unintentional mistakes from 
harming your system, not to defend against a determined hacker. For that, 
you'd need to look at more serious stuff like FreeBSD jails. Use of 
virtualization is heavily recommended.

On Sunday, June 22, 2014 6:43:19 AM UTC+12, Aerlinger wrote:

 So I'm looking to produce a publicly available Julia REPL that runs on a 
 server. Obviously, this is something that needs to be sandboxed and doesn't 
 expose any direct control over the OS environment. It seems like the way to 
 do this would be to override or wrap any functions in stdlib that provide 
 access to the filesystem, running processes, etc. Of course, this could 
 potentially be tedious and error prone. Is there an easier way to do this? 
 As far as I know, Julia doesn't have a safe mode or anything similar.



[julia-users] Re: A Question About `display`

2014-06-20 Thread Alireza Nejati
This is specific to the REPL display. If you try:

import Base.Multimedia.displays

then use display(displays[1], fm) instead, you will not get the new lines. 
(This might vary depending on setup; on my setup displays[1] is the text 
display and displays[2] is the REPL). It's obviously not a good idea to do 
this; it's better to just NOT use the REPL for your function.
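(A note for later readers: `writemime`, used in the quoted code below, has since been replaced in Julia by `show` with a MIME type. A minimal sketch of the same idea; the type and field names here merely mirror the post and are simplified to strings:)

```julia
struct MethodSignature
    typs::Vector{String}
    returntype::String
end

# modern equivalent of the post's writemime method: custom text/plain output
function Base.show(io::IO, ::MIME"text/plain", x::MethodSignature)
    print(io, "(", join(x.typs, ","), ")::", x.returntype)
end

repr("text/plain", MethodSignature(["Int", "Float64"], "Bool"))
# "(Int,Float64)::Bool"
```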

On Saturday, June 21, 2014 2:21:34 PM UTC+12, Leah Hanson wrote:

 My code calls `display` on a bunch of values (of a type I define). Most of 
 these values choose not to display anything; only a few of them are meant to 
 print anything at all. However, running the code currently generates a lot 
 of extra new lines for the non-printing values.

 This is the main function being run:

 ~~~
 function checkallmodule(m::Module;test=checkreturntypes,kwargs...)
   score = 0
   for n in names(m)
     f = eval(m,n)
     if isgeneric(f) && typeof(f) == Function
       fm = test(f;mod=m,kwargs...)
       score += length(fm.methods)
       display(fm)
     end
   end
   println("The total number of failed methods in $m is $score")
 end
 ~~~

 The variable `fm` will be a FunctionSignature. The two relevant custom 
 types and their `writemime` methods are below:

 ~~~
 type MethodSignature
   typs::Vector{AType}
   returntype::Union(Type,TypeVar) # v0.2 has TypeVars as returntypes; v0.3 
 does not
 end
 MethodSignature(e::Expr) = MethodSignature(argumenttypes(e),returntype(e))
 function Base.writemime(io, ::MIME"text/plain", x::MethodSignature)
   println(io,"(",join([string(t) for t in x.typs],","),")::",x.returntype)
 end

 type FunctionSignature
   methods::Vector{MethodSignature}
   name::Symbol
 end

 function Base.writemime(io, ::MIME"text/plain", x::FunctionSignature)
   for m in x.methods
 print(io,string(x.name))
 display(m)
   end
 end
 ~~~

 The call to `display` in `checkallmodule` should end up calling 
 `writemime` for `FunctionSignature`. In the case that the 
 `FunctionSignature` has no methods, the for-loop will not execute and 
 nothing should be displayed. However, there are still a lot of new lines 
 appearing when I run the code.

 Does anyone have any pointers to what might be going wrong or how I might 
 avoid these new lines?

 Thanks,
 Leah



[julia-users] Re: Cycling a list

2014-06-19 Thread Alireza Nejati
circshift
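For the examples in the question (a quick check in current Julia):

```julia
v = [1, 2, 3, 4, 5, 6]
circshift(v, 2)    # [5, 6, 1, 2, 3, 4]
circshift(v, -2)   # [3, 4, 5, 6, 1, 2]
circshift(v, 0)    # [1, 2, 3, 4, 5, 6]
```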

On Thursday, June 19, 2014 10:20:57 PM UTC+12, Paweł Biernat wrote:

 Is there a function that cycles the list as follows?

 cycle([1,2,3,4,5,6],2) -> [5,6,1,2,3,4]
 cycle([1,2,3,4,5,6],-2) -> [3,4,5,6,1,2]
 cycle([1,2,3,4,5,6],0) -> [1,2,3,4,5,6]



[julia-users] Re: Benchmarking study: C++ Fortran Numba Julia Java Matlab the rest

2014-06-17 Thread Alireza Nejati
"But for fastest transcendental function performance, I assume that one 
must use the micro-coded versions built into the processor's FPU--is that 
what the fast libm implementations do?"

Not at all. Libm's version of log() is about twice as fast as the CPU's own 
log function, at least on a modern x86_64 processor (really fast log 
implementations use optimized look-up tables). I had a look at your code 
and it seems that the 'consumption' variable is always in the very narrow 
range of 0.44950 to 0.56872. If you plot the log function in this tiny 
range, it is very flat and linear. I think that if you simply replaced it 
with a 2- or 4-part piecewise approximation, you could get significant 
speedup across the board, in Julia, C++, and the others, with only a very small 
approximation error.
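A hypothetical sketch of that suggestion: a 4-part piecewise-linear approximation of `log` on the quoted range (the breakpoint count and the interface are my assumptions, not anything from the paper):

```julia
const LO, HI = 0.44950, 0.56872
const NSEG = 4
const xs = range(LO, HI, length = NSEG + 1)   # segment breakpoints
const ys = log.(xs)                           # exact log at each breakpoint
const slopes = diff(ys) ./ step(xs)           # slope of each linear segment

# linearly interpolate between the two breakpoints bracketing x
function fastlog(x)
    i = clamp(1 + floor(Int, (x - LO) / step(xs)), 1, NSEG)
    return ys[i] + slopes[i] * (x - xs[i])
end
```

On this interval the maximum error works out to well below 1e-3, which is likely negligible next to the model's other approximations.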

On Tuesday, June 17, 2014 3:52:07 AM UTC+12, Florian Oswald wrote:

 Dear all,

 I thought you might find this paper interesting: 
 http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf

 It takes a standard model from macroeconomics and computes its solution 
 with an identical algorithm in several languages. Julia is roughly 2.6 
 times slower than the best C++ executable. I was a bit puzzled by the result, 
 since in the benchmarks on http://julialang.org/, the slowest test is 
 1.66 times C. I realize that those benchmarks can't cover all possible 
 situations. That said, I couldn't really find anything unusual in the Julia 
 code, did some profiling and removed type inference, but still that's as 
 fast as I got it. That's not to say that I'm disappointed, I still think 
 this is great. Did I miss something obvious here or is there something 
 specific to this algorithm? 

 The codes are on github at 

 https://github.com/jesusfv/Comparison-Programming-Languages-Economics




[julia-users] Re: Benchmarking study: C++ Fortran Numba Julia Java Matlab the rest

2014-06-17 Thread Alireza Nejati
Dahua: On my setup, most of the time is spent in the log function.

On Tuesday, June 17, 2014 3:52:07 AM UTC+12, Florian Oswald wrote:

 Dear all,

 I thought you might find this paper interesting: 
 http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf

 It takes a standard model from macroeconomics and computes its solution 
 with an identical algorithm in several languages. Julia is roughly 2.6 
 times slower than the best C++ executable. I was a bit puzzled by the result, 
 since in the benchmarks on http://julialang.org/, the slowest test is 
 1.66 times C. I realize that those benchmarks can't cover all possible 
 situations. That said, I couldn't really find anything unusual in the Julia 
 code, did some profiling and removed type inference, but still that's as 
 fast as I got it. That's not to say that I'm disappointed, I still think 
 this is great. Did I miss something obvious here or is there something 
 specific to this algorithm? 

 The codes are on github at 

 https://github.com/jesusfv/Comparison-Programming-Languages-Economics




[julia-users] Re: How to fork a child process and communicate low-level system calls between parent process (popen)?

2014-06-15 Thread Alireza Nejati
It's my impression that to do this sort of stuff you should use Julia's 
built-in process creation/communication facilities. Have a look at this 
page: http://docs.julialang.org/en/release-0.1/manual/parallel-computing/
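A minimal sketch of those built-in facilities: spawning a child process with both pipes connected (current Julia syntax; assumes a POSIX `sort` on the PATH):

```julia
p = open(`sort`, "r+")   # child process with stdin and stdout both piped
println(p, "banana")     # writing to p goes to the child's stdin
println(p, "apple")
close(p.in)              # close stdin so `sort` sees EOF and emits output
out = readlines(p)       # ["apple", "banana"]
```

For OS-level events like file changes there are also higher-level hooks, but for plain parent/child communication this pattern covers most popen-style uses.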

On Monday, June 16, 2014 10:57:28 AM UTC+12, Aerlinger wrote:

 I'm writing a package to allow a Julia program to asynchronously listen 
 and respond to file change events on disk, but I've hit a bit of a 
 stumbling block. I need a way to fork a Julia process and have it listen to 
 specific OS system calls such as select, and then notify the parent process 
 of the event. This is sometimes called 'popen' in other languages (
 http://www.ruby-doc.org/core-2.1.2/IO.html#method-c-popen). I'm aware 
 that there are a bunch of functions for handling general IO (
 http://julia.readthedocs.org/en/latest/stdlib/base/#i-o) but they don't 
 quite give me the control and interprocess communication that I'm looking 
 for. There was also a short discussion about this a couple of years ago: 
 https://groups.google.com/forum/#!topic/julia-dev/l-4HLYX2qSI. Was 
 wondering if there have been any developments or if anyone else has some 
 insight on this capability.

 Thanks!



[julia-users] Re: How to fork a child process and communicate low-level system calls between parent process (popen)?

2014-06-15 Thread Alireza Nejati
Kevin: Thanks, yeah, I didn't pay attention to the version.

On Monday, June 16, 2014 10:57:28 AM UTC+12, Aerlinger wrote:

 I'm writing a package to allow a Julia program to asynchronously listen 
 and respond to file change events on disk, but I've hit a bit of a 
 stumbling block. I need a way to fork a Julia process and have it listen to 
 specific OS system calls such as select, and then notify the parent process 
 of the event. This is sometimes called 'popen' in other languages (
 http://www.ruby-doc.org/core-2.1.2/IO.html#method-c-popen). I'm aware 
 that there are a bunch of functions for handling general IO (
 http://julia.readthedocs.org/en/latest/stdlib/base/#i-o) but they don't 
 quite give me the control and interprocess communication that I'm looking 
 for. There was also a short discussion about this a couple of years ago: 
 https://groups.google.com/forum/#!topic/julia-dev/l-4HLYX2qSI. Was 
 wondering if there have been any developments or if anyone else has some 
 insight on this capability.

 Thanks!