On Thu, Oct 3, 2013 at 10:59 AM, F. B. <[email protected]> wrote:
>>
>> I think once we stop supporting Python 2.7, we can start using static
>> typing
>> quite easily and I think it helps.
>>
>
> That will be a long time, maybe 2017-2018? I'm just giving a naive
> estimate.

Yeah, that might be true.

>
>> Though I think some of the most useful algorithms in SymPy that are
>> candidates for a C++ rewrite are the polynomial routines, and as far as I
>> know those don't use much duck typing, so it should be fairly easy to
>> rewrite them using fast native C++ data structures.
>> For example, here is my implementation of sparse polynomial multiplication
>> in C++:
>>
>> https://github.com/certik/csympy/blob/master/src/rings.cpp#L63
>> https://github.com/certik/csympy/blob/master/src/monomials.cpp#L7
>>
>> and I think it might be possible to speed it up further by tuning the
>> hash table and the hash function. I tried to optimize a hash function for
>> this case already, see here:
>>
>> https://github.com/certik/csympy/blob/master/src/dict.h#L48
>>
>> the idea is that the hash function needs to be fast to compute for a
>> tuple of ints, while producing as few collisions as possible.
>>
>
>
> Experiments on data types can be made in Python too. Python basically has
> three compound data structures (I mean, the most commonly used): tuples,
> lists, and dicts.
>
> But in C++ a list can be different things. For example, the STL has both
> vector and list: vector fits better for fixed-length arrays, while list is
> good if you have to append/pop elements. Python offers only one list
> object, but in many cases it would be useful to choose the array-like data
> structure that fits best. And by the way, if one already knows the size of
> the array, one should be able to pre-allocate the list to that size.

Exactly. Note that for now I use std::unordered_map for the representation
of the polynomials, but std::vector for the exponent tuple, and that might
not be optimal, since I need to add tuples together fast.
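
To make that concrete, here is a minimal sketch of such a representation
(the names Monomial, MonomialHash, Poly and the multiplier constant are
made up for illustration; the real code is in the dict.h file linked
above): the exponent tuple is the map key, and a custom hash folds the
ints together so it is cheap to compute and collides rarely.

```cpp
#include <cstddef>
#include <unordered_map>
#include <vector>

// Exponent tuple of a monomial, e.g. x^2*y^3 -> {2, 3}.
using Monomial = std::vector<int>;

// Fast, low-collision hash for small int tuples; the multiplier is a
// common odd constant, not necessarily what csympy actually uses.
struct MonomialHash {
    std::size_t operator()(const Monomial &m) const {
        std::size_t h = 0;
        for (int e : m)
            h = h * 1000003 + static_cast<std::size_t>(e);
        return h;
    }
};

// Sparse polynomial: exponent tuple -> integer coefficient.
using Poly = std::unordered_map<Monomial, long long, MonomialHash>;

// Monomial multiplication is component-wise addition of exponents,
// which is exactly the vector addition that needs to be fast.
Monomial monomial_mul(const Monomial &a, const Monomial &b) {
    Monomial c(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
        c[i] = a[i] + b[i];
    return c;
}
```

std::unordered_map accepts the hash functor as its third template
parameter, so swapping in a different hash for benchmarking is a one-line
change.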

>
> I would try first to put this in Python, that would be a good point in
> easing translation and efficiency.

Here is the Python version that I "translated" to C++:

https://github.com/sympy/sympy/blob/master/sympy/polys/rings.py#L1033
https://github.com/sympy/sympy/blob/master/sympy/polys/monomials.py#L92
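
For readers comparing the two versions, the core of the sparse
multiplication in rings.py above is a double loop over the terms of both
polynomials; here is a hedged C++ sketch of that idea (minimal made-up
types for illustration, not csympy's actual API):

```cpp
#include <cstddef>
#include <unordered_map>
#include <vector>

// Exponent tuple of a monomial, e.g. x*y -> {1, 1}.
using Monomial = std::vector<int>;

// Simple multiplicative hash over the exponents (illustrative constant).
struct MonomialHash {
    std::size_t operator()(const Monomial &m) const {
        std::size_t h = 0;
        for (int e : m)
            h = h * 1000003 + static_cast<std::size_t>(e);
        return h;
    }
};

// Sparse polynomial: exponent tuple -> integer coefficient.
using Poly = std::unordered_map<Monomial, long long, MonomialHash>;

// Sparse multiplication: for every pair of terms, add the exponent
// tuples and accumulate the coefficient product in the result map.
Poly poly_mul(const Poly &p, const Poly &q) {
    Poly r;
    for (const auto &[ma, ca] : p) {
        for (const auto &[mb, cb] : q) {
            Monomial m(ma.size());
            for (std::size_t i = 0; i < ma.size(); ++i)
                m[i] = ma[i] + mb[i];
            r[m] += ca * cb;  // the hash lookup dominates the cost here
        }
    }
    return r;
}
```

The Python and C++ versions share this structure; the speed difference
comes from the native exponent addition and the hash table implementation.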

>
>> Btw, I don't want to discourage you from trying things. I only wanted to
>> share my own (hard-earned) experience, as I have tried various
>> approaches. For example, one great project would be to implement a
>> similar symbolic core in Julia. It might get competitive with, or even
>> faster than, my Cython version above. Whether it could match my C++
>> version, I don't know.
>>
>
> It is possible that over time new programming languages with powerful
> features will emerge; in any case I would not commit too much time to
> optimization. You know, it's a continuously evolving subject, not specific
> to SymPy; maybe in the future a new technology will appear that makes
> things easier and faster. I would rather keep an eye on websites and
> papers related to optimization.
>
> Trying hard on one approach today may turn out to have been futile in the
> future.

It is possible.

In numerical computing, which is my day job, the best compromise I have
arrived at is modern Fortran, which allows very readable, NumPy-like code
but extremely fast execution. That can pretty much only be beaten by the
hard, laborious work of optimizing for a specific architecture/cache/CPU,
either in assembler or by having several kernels and choosing whichever is
fastest, which is e.g. how FFTW works.

C++ plays a similar role here for symbolics, but unlike Fortran it is much
more complex, so the benefits are not as clear; I agree with that.

But the bottom line is this: whatever programming language emerges in the
future as the winner, I will always be able to reuse Fortran or C++ code,
so it is not lost work. For example, FFTPACK, written in F77 in 1984, is
still only about 2x slower than FFTW on a *single* core on modern
architectures with the best Fortran compilers today. The advantage of
modern code like FFTW is that it consumes less memory bandwidth, so it
scales better on more cores and so on.

So my conclusion from all this is that I am not going to drop down to
assembler or anything like that. Rather, I want a single, readable C++
codebase that compiles and is as fast as or faster than other symbolic
programs/libraries. It seems this is possible. That way people can use
SymPy together with CSymPy today and do anything that is possible with
other programs.

> On the other hand, adding new features to SymPy is always good.

I agree.

Ondrej

-- 
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/sympy.
For more options, visit https://groups.google.com/groups/opt_out.
