Yes, but it is likely to fail on many of the commits (probably more than
half, depending on the functionality being tested and how backwards
compatible the benchmark is written).
There is also setup overhead time for each commit, so it will take longer
than your calculation.
Jason
moorepants.info
+01 53
On Wed, Jul 22, 2015 at 6:26 PM, Jason Moore wrote:
We will work on the documentation. But essentially I'm running:
SYMPY_CACHE_SIZE=1 asv run ALL --parallel -k -e
You can use -s to do a sparser mesh and then replace ALL with a git
commit range to narrow down to commits in specific areas. Note that `asv
find` is useful for bisecting to an o
On Wed, Jul 22, 2015 at 4:16 PM, Jason Moore wrote:
FYI, this database is building up:
http://www.moorepants.info/misc/sympy-asv/ (I'm running this with the sympy
cache set to 1).
For the integration benchmarks it has every single functioning commit and
maybe half the commits for the other benchmarks.
The repository to submit benchmarks to is
On Mon, Jul 20, 2015 at 11:29 PM, Ondřej Čertík wrote:
> It's because I didn't have fastcache installed After installing
> it, by default I got:
Nice work!
Jason
moorepants.info
+01 530-601-9791
On Mon, Jul 20, 2015 at 10:29 PM, Ondřej Čertík wrote:
It's because I didn't have fastcache installed. After installing
it, by default I got:
certik@redhawk:~/repos/symengine/benchmarks(py)$ python kane2.py
Setup
Converting to SymEngine...
SymPy Jacobian:
Total time: 0.123499155045 s
SymEngine Jacobian:
Total time: 0.00305485725403 s
Speedup: 40.4
Ondrej,
I'm not sure why you don't see a performance increase with an increased
cache. The following shows that the benchmarks do run faster with a large
cache. Interestingly, the memory doesn't seem to change (but I'm not sure I
understand how they measure mem usage). Notice that the jacobian wrt to
sy
On Mon, Jul 20, 2015 at 7:48 PM, Ondřej Čertík wrote:
On Sun, Jul 19, 2015 at 4:57 PM, Jason Moore wrote:
> I just tried this out with jacobian() and subs() over the commits since
> 0.7.3 to master.
If we increase the default cache size and it speeds things up, it might be
worthwhile to revisit the Travis test splits again.
On Monday, July 20, 2015 at 4:36:45 PM UTC-6, Jason Moore wrote:
I've got a machine that I don't use, so I'm going to periodically run the
benchmarks from Bjorn's repo and automatically publish them to:
moorepants.info/misc/sympy-asv
I'll make an initial pass and the results will likely be up by tomorrow
sometime.
Once I get the base database up and running I wi
On Mon, Jul 20, 2015 at 11:09 AM, Aaron Meurer wrote:
> Awesome. This is exactly the sort of thing I've wanted to see for a long
> time.
I can try to come up with something...but I need to get back to the day job
at the moment :(
Jason
moorepants.info
+01 530-601-9791
On Mon, Jul 20, 2015 at 12:17 PM, Aaron Meurer wrote:
How is the memory usage? We should try to find a good balance. Can you
create plots of memory usage and performance vs. cache size (in master)?
Aaron Meurer
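Aaron's request (performance vs. cache size) can be sketched generically. The snippet below is an illustrative assumption, not SymPy's actual benchmark: it uses Python's `functools.lru_cache` as a stand-in for SymPy's cache and times a repeated sweep over distinct keys at several cache bounds.

```python
# Hypothetical sketch: timing and miss counts vs. LRU cache bound,
# with functools.lru_cache standing in for SymPy's cache.
import time
from functools import lru_cache

def run_workload(cache_size, n_keys=2000, passes=3):
    """Sweep `passes` times over `n_keys` distinct arguments through a
    cache bounded at `cache_size`; return (seconds, total misses)."""
    @lru_cache(maxsize=cache_size)
    def expensive(x):
        # Stand-in for an expensive symbolic operation.
        return sum(i * i for i in range(200)) + x

    start = time.perf_counter()
    for _ in range(passes):
        for key in range(n_keys):
            expensive(key)
    return time.perf_counter() - start, expensive.cache_info().misses

for size in (500, 1000, 5000):
    secs, misses = run_workload(size)
    print(f"cache_size={size:5d}  misses={misses:6d}  time={secs:.4f}s")
```

A memory column could be added with `tracemalloc`; once the bound exceeds the working set (5000 here), the repeat passes become pure cache hits.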
On Mon, Jul 20, 2015 at 2:14 PM, Jason Moore wrote:
FYI, if I increase the cache size I can push the post-new-cache timings
down to the normal speeds.
e.g.
SYMPY_CACHE_SIZE=5000 asv run 051850f2..880f5fa6
Jason
moorepants.info
+01 530-601-9791
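The SYMPY_CACHE_SIZE variable used above is read from the environment by SymPy's caching module. A minimal sketch of that pattern (not SymPy's actual code, which lives in sympy/core/cache.py and may differ in names and details):

```python
# Hedged sketch: pick an LRU bound from an environment variable,
# mirroring the SYMPY_CACHE_SIZE idea.
import os
from functools import lru_cache

def cache_size_from_env(var="SYMPY_CACHE_SIZE", default=1000):
    """Return None for an unbounded cache, else an integer bound."""
    raw = os.environ.get(var, str(default))
    return None if raw == "None" else int(raw)

SIZE = cache_size_from_env()

@lru_cache(maxsize=SIZE)
def cached_op(x):
    return x * x  # stand-in for a cached symbolic operation

print("cache bound:", SIZE)
```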
On Mon, Jul 20, 2015 at 12:07 PM, Aaron Meurer wrote:
Regarding the Raspberry Pi 2, as far as I can tell it is a good option, but
I opened https://github.com/spacetelescope/asv/issues/292 to see if anyone
else has any suggestions.
Aaron Meurer
On Mon, Jul 20, 2015 at 1:20 PM, Björn Dahlgren wrote:
On Monday, 20 July 2015 19:09:54 UTC+2, Aaron Meurer wrote:
Awesome. This is exactly the sort of thing I've wanted to see for a long
time.
So apparently the new cache is way too slow. Can the size be increased to a
point that makes the performance comparable to the old cache? One obviously
has to balance the cache size against memory usage (which won't sho
This now has every commit from 0.7.3 on:
http://www.moorepants.info/misc/sympy-asv/#diff.TimeJacobian.time_subs
first major slowdown: new caching added
slight speedup: fastcache optional dep added
another speedup: cache increased from 500 to 1000
slight slowdown: C removed from core
This goes
Elaborating a bit, the two main additional costs of a bounded cache compared
to an unbounded one are:
1. The extra cost of managing the LRU machinery (fastcache alleviates this
by doing all the necessary management at the C level)
2. The cost of repeated cache misses because the cache size is no
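Cost 2 is easy to see in miniature with `functools.lru_cache` (used here only to illustrate the effect; SymPy's and fastcache's internals differ): a sequential sweep over a working set even slightly larger than the bound makes every access an LRU miss.

```python
from functools import lru_cache

@lru_cache(maxsize=100)
def f(x):
    return x * x  # stand-in for an expensive symbolic computation

# Two passes over 150 distinct keys: working set (150) > maxsize (100),
# so the LRU evicts each entry just before it would be reused.
for _ in range(2):
    for key in range(150):
        f(key)

info = f.cache_info()
print(info.hits, info.misses)  # 0 hits, 300 misses
```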
We visited the jacobian issue a while ago and I think the takeaway was that
a larger cache size (about 2000-3000) sped things up considerably. Not sure
if this is the same issue though.
On Monday, July 20, 2015 at 10:25:24 AM UTC-6, Ondřej Čertík wrote:
On Mon, Jul 20, 2015 at 1:02 AM, Jason Moore wrote:
> Yes, it seems that the new cache commit is the slowdown in these tests.
If this is the case, then I know that Peter Brady who wrote it will be
interested in this. We should get to the bottom of the issue.
Ondrej
On Monday, 20 July 2015 09:02:23 UTC+2, Jason Moore wrote:
> Yes, it seems that the new cache commit is the slowdown in these tests.
Running with fastcache installed seems to make a minor difference (~10-30%)
http://hera.physchem.kth.se/~sympy_asv/
I haven't yet tried running tests with SY
Yes, it seems that the new cache commit is the slowdown in these tests.
Jason
moorepants.info
+01 530-601-9791
On Sun, Jul 19, 2015 at 11:54 PM, Ondřej Čertík wrote:
On Mon, Jul 20, 2015 at 12:33 AM, Jason Moore wrote:
> Here is the last run I made:
> http://www.moorepants.info/misc/sympy-asv/
Is caching causing the massive (10x) slowdown? If so, I know you can
turn it off. We should investigate this.
Ondrej
Here is the last run I made:
http://www.moorepants.info/misc/sympy-asv/
from:
asv run sympy-0.7.3..master -s 200
Jason
moorepants.info
+01 530-601-9791
On Sun, Jul 19, 2015 at 9:38 PM, Aaron Meurer wrote:
Cool. For this benchmark, you're likely seeing the evolution in the
integration algorithms as well. The Risch algorithm doesn't handle this
integral (too many symbolic parameters), but meijerg does, and so does
heurisch.
On Sun, Jul 19, 2015 at 6:16 AM, Björn Dahlgren wrote:
> Hi all,
I'm rerunning with the dependencies installed (I think). I'll post results
once it is done.
Jason
moorepants.info
+01 530-601-9791
On Sun, Jul 19, 2015 at 4:16 PM, Ondřej Čertík wrote:
Can you post your timings results? I am doing your benchmark with
SymEngine and Sage for comparison.
On Sun, Jul 19, 2015 at 4:57 PM, Jason Moore wrote:
I just tried this out with jacobian() and subs() over the commits since
0.7.3 to master. It's showing me that the new caching is the killer
slowdown:
https://github.com/sympy/sympy/commit/a63005e4
I've submitted a PR to Björn's repo:
https://github.com/bjodah/sympy_benchmarks_bjodah/pull/1/files
On Sun, Jul 19, 2015 at 5:16 AM, Björn Dahlgren wrote:
Hi all,
On Tuesday, 14 July 2015 02:49:57 UTC+2, Aaron Meurer wrote:
>
> - Get a benchmark machine and run airspeed velocity on it. We need to
> catch performance regressions. The benchmark suite can be anything,
> although obviously well-made benchmarks are better.
>
I found a few hours and t