I think we need to get the system for measuring performance in place before
we can issue a mandate. The analogy is "test the application functionality
carefully before checking in" vs. "run these unit tests before checking in."
Even if everyone does their own microbenchmarks, the results likely won't be
comparable: some will have errors, some will measure different things, and
the data will vary widely (see the sketch below for how easily ad-hoc
measurements go wrong). Once we have a clear metric that everyone can use,
I'm all for a requirement to document changes to that metric. Until then,
I'm 100% behind careful scrutiny of things in the main code paths, but I
don't think we can issue a mandate that is essentially "do your own thing."
That would quickly revert to the current situation. I don't think the
problem is that nobody cares; more likely the problem is that it's hard, and
there's always a tug of war between getting things done and out where people
can benefit from the feature or fix, versus the risk that they stall out
waiting for one more thing to do. As the time to complete a task grows, the
likelihood that real life and jobs interrupt it grows, and the chance that
it lingers indefinitely or is abandoned goes up.
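
To make the comparability point concrete, here is a minimal sketch of what a
proper microbenchmark looks like under JMH (the benchmark class and the
measured operation are hypothetical, invented for illustration; this is not
part of solr-bench). JMH takes care of warmup, forking, and dead-code
elimination, which are exactly the things naive System.nanoTime() loops tend
to get wrong:

    package org.example.bench;

    import java.util.concurrent.TimeUnit;

    import org.openjdk.jmh.annotations.*;
    import org.openjdk.jmh.infra.Blackhole;

    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    @Warmup(iterations = 5, time = 1)
    @Measurement(iterations = 10, time = 1)
    @Fork(2) // separate JVMs, so JIT luck in one run doesn't skew results
    @State(Scope.Benchmark)
    public class StringJoinBenchmark {

      @Param({"10", "1000"}) // results only comparable at the same size
      int size;

      String[] parts;

      @Setup
      public void setup() {
        parts = new String[size];
        for (int i = 0; i < size; i++) {
          parts[i] = Integer.toString(i);
        }
      }

      @Benchmark
      public void join(Blackhole bh) {
        // Blackhole stops the JIT from eliminating the result as dead code.
        bh.consume(String.join(",", parts));
      }
    }

Even with a harness like this, two people measuring different operations, or
the same operation with different @Param values, produce numbers that can't
be compared, which is why a shared suite and a shared metric matter.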

On Tue, Aug 11, 2020 at 5:09 PM Mike Drob <md...@apache.org> wrote:

> Hi Ishan,
>
> Thanks for starting this conversation! I think it's important to pay
> attention to performance, but I also have some concerns about coming out
> with such a strong mandate. In the repository, I'm looking at how to run in
> local mode, and it looks like it will try to download a JDK from some
> university website? That seems overly restrictive to me; why can't we use
> the already installed JDK?
>
> Is the benchmark suite designed for master? Or for branch_8x?
>
> Mike
>
> On Tue, Aug 11, 2020 at 9:04 AM Ishan Chattopadhyaya <
> ichattopadhy...@gmail.com> wrote:
>
>> Hi Everyone!
>>    From now on, I intend to request/nag/demand/veto that code changes
>> which affect default code paths for most users be accompanied by
>> performance testing numbers (e.g. [1]). Opt-in features are fine; I won't
>> personally bother about them (but if you'd like to perf test them, it
>> would set a great precedent anyway).
>>
>> I will also work on setting up automated performance and stress testing
>> [2], but in the absence of that, let us do performance tests manually and
>> report them in the JIRA. Unless we hold ourselves to a high standard,
>> performance will become a joke: regressions will creep in without the
>> committer(s) taking any responsibility towards the users affected by them
>> (SOLR-14665).
>>
>> A benchmarking suite that I am working on is at
>> https://github.com/thesearchstack/solr-bench (SOLR-10317). A stress test
>> suite is under development (SOLR-13933). If you wish to use either of
>> these, I shall offer help and support (please ping me directly on Slack or
>> in #solr-dev, or open a GitHub issue on that repo).
>>
>> Regards,
>> Ishan
>>
>> [1] -
>> https://issues.apache.org/jira/browse/SOLR-14354?focusedCommentId=17174221&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17174221
>> [2] -
>> https://issues.apache.org/jira/browse/SOLR-14354?focusedCommentId=17174234&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17174234

-- 
http://www.needhamsoftware.com (work)
http://www.the111shift.com (play)
