I updated my proposal with benchmarking and sympy-bot, any further
comments would be appreciated. 

Bi Ge

On Monday, April 22, 2013 2:14:47 AM UTC-4, Aaron Meurer wrote:
>
> Some comments: 
>
> - Anything that cannot be automated should be closely evaluated. If 
> it's not absolutely necessary, we should not do it. For example, 
> building the Sage package. We should see if this can be automated. If 
> it can, that's great. It will just be another step in the release 
> process, but it won't be any more work because everything will be done 
> automatically. Otherwise, it really isn't something that we need to 
> do. It's nice to do, because otherwise it tends not to get done (for 
> example, Sage is still including 0.7.1).  And it's good for marketing. 
> But it's not essential, and it isn't worth the time and effort when 
> every manual step makes the release process slower. 
>
> - Anything that is a test should be done all the time. This includes 
> the import speed thing. That's something that we test at each release, 
> but really we should be testing it at all times. This is actually just 
> a special case of benchmarking, which I think it would be great to 
> focus on. 
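The import-speed check mentioned above is easy to automate. A minimal sketch (the helper name `fresh_import_time` and the use of `json` as the module under test are illustrative assumptions, not SymPy's actual tooling): spawn a fresh interpreter per run, since a warm in-process import is nearly free once modules are cached.

```python
import subprocess
import sys
import time

def fresh_import_time(module="json", repeats=3):
    """Time "import <module>" in a fresh interpreter; return the best run."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        # A new process guarantees a cold sys.modules cache.
        subprocess.run([sys.executable, "-c", f"import {module}"], check=True)
        best = min(best, time.perf_counter() - start)
    return best

if __name__ == "__main__":
    print(f"best import time: {fresh_import_time():.3f}s")
```

A continuous check like this could run on every pull request rather than only at release time, which is the point being made above.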
>
> Ronan said, "our troubles with releasing don't seem to be primarily a 
> technical issue, but rather a cultural/political one." I think the 
> issues are both technical and cultural. The issue is that we have too 
> many things to do at each release, and too few of them are automated. 
> The lack of automation is the technical issue. The cultural thing that 
> needs to change is that we have to assert that we will not do anything 
> at a release unless it can be automated, or unless it is absolutely 
> necessary. 
>
> - Regarding benchmarking, look at what PyPy does. I think we may have 
> to set up a server if we want to mimic it, but that is not an issue (I 
> think Ondrej still has a server that we can use). 
>
> We really do need to do a lot more benchmarking. The current 
> benchmarks are pathetic compared to the full feature set of SymPy, and 
> the fact that we almost never run them is worse.  We need to stress 
> test everything. This is how we find the bottlenecks in the code, and 
> speed them up. I had a positive experience with this last year with 
> as_numer_denom (you can read 
> https://code.google.com/p/sympy/issues/detail?id=2607 for more info). 
> We should convert the things from that issue to benchmarks. 
>
> If we can optimize a stress test to run instantly, we should add it to 
> the regular tests, so that it never gets slow again. Then again, if we 
> can get a good benchmark system that is run as often as the tests, 
> then it should just go there instead. 
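The idea of converting an optimized stress case into a regular test could look like the following sketch. The workload and the time budget here are illustrative stand-ins, not SymPy's actual test policy: the point is just that a formerly slow operation gets a generous wall-clock ceiling so a regression fails loudly.

```python
import time

def heavy_workload(n=2000):
    # Stand-in for an expression manipulation that used to be slow
    # (e.g. the as_numer_denom cases from the issue cited above).
    total = 0
    for i in range(1, n):
        total += i * i
    return total

def test_heavy_workload_stays_fast(budget=1.0):
    """Fail if the workload ever exceeds a generous time budget again."""
    start = time.perf_counter()
    heavy_workload()
    elapsed = time.perf_counter() - start
    assert elapsed < budget, f"regression: took {elapsed:.2f}s"
```

The budget should be loose enough to tolerate CI noise but tight enough to catch an order-of-magnitude slowdown.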
>
> So I think a good proposal would be 
>
> 1. Completely automate the release process. This is the highest priority. 
>
> 2. In the process, make sure that we test everything that could 
> possibly go wrong at every pull request. This means testing in Travis 
> and/or SymPy Bot. 
>
> 3. Get a good benchmarking system going. 
>
> 4. Add benchmarks for SymPy. 
>
> 5. If you find something that's too slow, take a stab at fixing it. 
> Those of us who have been working with the code base for a long time 
> will have a good intuition about what makes things slow, so that we 
> can give you advice on where to look to fix it, or on whether it will 
> probably be too much work and you should really focus on another slow 
> thing (for example, a lot of slow stuff in SymPy right now is due to 
> slow matrices, which are only going to be fixed when Mateusz finishes 
> rewriting them to use faster arithmetic). 
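Point 1 above, fully automating the release, can be modeled as an ordered list of steps where any failure aborts the rest. This is a hedged sketch under assumed names (`release`, and the step functions you would plug in); the real checklist lives on the New-Release wiki page cited later in the thread.

```python
def release(steps):
    """Run each (name, callable) step in order; stop at the first failure.

    Returns (completed_step_names, error_message_or_None), so a driver
    script can report exactly where the release pipeline broke.
    """
    done = []
    for name, step in steps:
        try:
            step()
        except Exception as exc:
            return done, f"failed at {name}: {exc}"
        done.append(name)
    return done, None
```

A real pipeline would fill `steps` with things like running the test suite, building tarballs, and uploading artifacts; the value of the structure is that adding a step is one line, and nothing manual sits between steps.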
>
> You should also help foster a community of caring about speed and 
> benchmarks, so that people will add benchmarks of their own when they 
> add new features, and so that they will check for speed regressions on 
> each pull request.  For the latter, it is enough to just comment on 
> each pull request with the benchmark results, similar to Travis or 
> SymPy Bot. For the former, the best way is to help review pull 
> requests, and to ask people who write new features to add benchmarks 
> (especially your fellow GSoC students). 
>
> Aaron Meurer 
>
> On Sun, Apr 21, 2013 at 7:17 PM, Ronan Lamy <ronan...@gmail.com> 
> wrote: 
> > On 21/04/2013 22:50, Bi Ge wrote: 
> >> 
> >> 
> >> 
> >> On Sunday, April 21, 2013 1:17:52 PM UTC-4, Ronan Lamy wrote: 
> >> 
> >>     On 21/04/2013 05:11, Bi Ge wrote: 
> >> 
> >>      > I finished my first draft of application here 
> >>      > <https://github.com/sympy/sympy/wiki/GSOC-2013-Application-Bi-Ge:-Automating-Release-Process-and-Sympy-bot>. 
> >> 
> >> 
> >>      > I would love to hear comments from you and others. 
> >>      > 
> >>      > And another question: am I addressing enough for a GSoC term? 
> >>      > Especially for the sympy-bot part. Most of my ideas are 
> >>      > scattered and lack a systematic approach. 
> >> 
> >>     Well, I'm sorry to say that I'm not convinced that working on 
> >> sympy-bot 
> >>     is worth the effort. 
> >> 
> >> Are you suggesting I should abandon the sympy-bot part and propose 
> >> something else in the proposal? Or should I put it low on the list 
> >> of priorities? (i.e. after I finish all the proposed stuff, I will 
> >> work on sympy-bot) 
> > 
> > 
> > I'm only stating my personal feeling. Other devs may disagree with me. 
> > Anyway, it already has a low priority according to your timeline, so you 
> > don't necessarily need to change that. 
> > 
> >>     Also, I doubt that the changes to the release process will make 
> >>     much of a difference. Couldn't we drop some steps? What's the 
> >>     point of test_isolated? Why do we do the job of Sage packagers? 
> >>     Why do we bother testing import speed if we don't act on it? 
> >> 
> >> I have not done a release so I can't say anything on this. Every 
> >> step is in the wiki <https://github.com/sympy/sympy/wiki/New-Release>. 
> >> I agree with you on testing import speed. 
> > 
> > 
> > Well, those questions weren't really aimed at you. But that's part of 
> > the problem with your proposal: our troubles with releasing don't 
> > seem to be primarily a technical issue, but rather a 
> > cultural/political one. I don't think a GSoC can fix that. 
> > 
> > 
> >>     One way of expanding the scope of your project would be to 
> >>     look into testing and benchmarking. 
> >> 
> >> I'm looking at how Scipy and Numpy do their benchmarking. 
> >> I found that Numpy did some benchmarking 5-6 years ago, but their 
> >> benchmark/ directory is not present in their repository right now. 
> >> So I guess either they moved it to a different place or they 
> >> abandoned it. Is "sympy/utilities/benchmarking.py" the only 
> >> benchmarking we do now? 
> > 
> > 
> > Yes, I think so. It's probably completely broken nowadays. 
> > 
> > 
> >> What would be the critical parts in sympy to benchmark? I currently 
> >> don't have a clear idea of how to expand testing and benchmarking. 
> > 
> > 
> > Anything that real users might want to do and that takes more than a 
> > couple of seconds is probably a good benchmark. 
> > 
> > The bench_*** functions in the various benchmarks/ directories are 
> > probably still mostly meaningful, though they don't cover everything. 
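[Editor's note: the bench_* convention mentioned above, functions discovered by name prefix and timed, can be sketched roughly as follows. The harness names and the inline demo module are illustrative assumptions, not the actual SymPy benchmark runner.]

```python
import time
import types

def collect_benchmarks(module):
    """Find every callable in a module whose name starts with "bench_"."""
    return [(name, fn) for name, fn in vars(module).items()
            if name.startswith("bench_") and callable(fn)]

def run_benchmarks(module):
    """Run each discovered benchmark once; return name -> elapsed seconds."""
    results = {}
    for name, fn in collect_benchmarks(module):
        start = time.perf_counter()
        fn()
        results[name] = time.perf_counter() - start
    return results

# Illustrative stand-in module containing one benchmark function.
demo = types.ModuleType("demo")
demo.bench_sum = lambda: sum(range(10000))
```

A real runner would repeat each function and record history across commits, which is what makes regressions visible.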
> > 
> > 
> >> Can you expand on this idea of testing and benchmarking a little bit more? 
> > 
> > 
> > Testing more or less meets our current needs, but there are a few 
> > areas that could be improved: 
> > * having our own test framework (instead of using py.test) is a drag 
> >   on maintenance and causes us to miss out on a ton of features 
> > * running the test suite takes way too long 
> > 
> > The lack of benchmarking is a major problem. We basically have no 
> > idea what any change does wrt. performance. 
> > 
> > 
> >> Is there 
> >> a discussion about this? 
> > 
> > 
> > I'm not aware of any recent discussion about this. 
> > 
> > 
> > -- 
> > You received this message because you are subscribed to the Google 
> > Groups "sympy" group. 
> > To unsubscribe from this group and stop receiving emails from it, 
> > send an email to sympy+un...@googlegroups.com. 
> > To post to this group, send email to sy...@googlegroups.com. 
> > Visit this group at http://groups.google.com/group/sympy?hl=en-US. 
> > For more options, visit https://groups.google.com/groups/opt_out. 
> > 
> > 
>
