I enabled the Policeman and ASF Jenkins builds of the 7.2 branch.

 

Uwe

 

-----

Uwe Schindler

Achterdiek 19, D-28357 Bremen

http://www.thetaphi.de

eMail: u...@thetaphi.de

 

From: jim ferenczi [mailto:jim.feren...@gmail.com] 
Sent: Tuesday, January 9, 2018 11:00 AM
To: dev@lucene.apache.org
Subject: Re: BugFix release 7.2.1

 

Hi,

All the issues we discussed have been backported to 7.2. I added the draft 
release notes to the wiki:

https://wiki.apache.org/lucene-java/ReleaseNote721

https://wiki.apache.org/solr/ReleaseNote721

 

I'll create the first RC later today.

2018-01-09 4:56 GMT+01:00 S G <sg.online.em...@gmail.com>:

Sorry, I missed some of the details, but this is what we did successfully in one 
of my past projects:

 

We can begin by supporting only those machines where Apache Solr's regression 
tests are run.

The aim is to identify OS-independent performance regressions, not to certify 
each OS where Solr could be run.

 

Repository-wise it is easy too: we store the results in a performance-results 
directory that lives in the GitHub repo of Apache Solr.

This directory receives metric-result file(s) whenever a Solr release is made.

If older files are present, the most recent metric file is used as the baseline 
to compare the current performance against.

When not making a release, the directory can still be used to compare the 
current code's performance, without writing to the performance-results directory.

When releasing Solr, the performance-metrics file should get updated 
automatically.
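
As a rough illustration, the comparison step could look something like this (a 
minimal sketch, assuming each release drops a properties-style file of 
metric=value pairs into performance-results/; the class name and the 10% 
tolerance are placeholders, not an existing Solr API):

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Comparator;
import java.util.Optional;
import java.util.Properties;
import java.util.stream.Stream;

public class PerfBaseline {

  /** Loads the lexicographically last metrics file in performance-results/ as the baseline. */
  static Properties loadLatestBaseline(Path dir) throws IOException {
    Optional<Path> latest;
    try (Stream<Path> files = Files.list(dir)) {
      latest = files.filter(p -> p.toString().endsWith(".properties"))
          .max(Comparator.comparing(p -> p.getFileName().toString()));
    }
    Properties baseline = new Properties();
    if (latest.isPresent()) {
      try (InputStream in = Files.newInputStream(latest.get())) {
        baseline.load(in);
      }
    }
    return baseline;
  }

  /** Fails if the current value regressed more than 10% against the baseline. */
  static void checkNoRegression(Properties baseline, String metric, double current) {
    String old = baseline.getProperty(metric);
    if (old == null) {
      return; // no baseline yet, nothing to compare against
    }
    double allowed = Double.parseDouble(old) * 1.10; // 10% tolerance, tune as needed
    if (current > allowed) {
      throw new AssertionError(metric + " regressed: " + current + " > allowed " + allowed);
    }
  }

  public static void main(String[] args) throws IOException {
    Properties baseline = loadLatestBaseline(Paths.get("performance-results"));
    checkNoRegression(baseline, "indexing.1M.millis", 95_000.0); // example current value
  }
}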

 

Further improvements can include:

1) Deleting older files from performance-results directory

2) Having performance-results directories for each OS where Solr is released 
(if we think there could be OS-dependent performance issues).

 

These ideas can be fine-tuned to ensure that they work.

Please suggest more issues if you think this would be impractical.

 

Thanks

SG

On Mon, Jan 8, 2018 at 12:59 PM, Erick Erickson <erickerick...@gmail.com> wrote:

Hmmm, I think you missed my implied point. How are these metrics collected and 
compared? There are about a dozen different machines running various operating 
systems etc. For these measurements to spot regressions and/or improvements, they 
need to have a repository where the results get published. So a report like "build 
XXX took YYY seconds to index ZZZ documents" doesn't tell us anything. You need 
to gather them for a _specific_ machine.

As for whether they should be run or not, an annotation could help here; there 
are already @Slow, @Nightly, and @Weekly, and a @Performance group could be added. 
Mike McCandless has some of these kinds of things already for Lucene, so I think 
the first thing would be to check whether they are already done; it's possible 
you'd be reinventing the wheel.
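
For reference, such a group could be declared much like the existing ones (a 
sketch modeled on how @Nightly is defined via randomizedtesting's @TestGroup; 
the "tests.performance" system property name is just a guess, not an agreed 
convention):

import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Inherited;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import com.carrotsearch.randomizedtesting.annotations.TestGroup;

/** Marks a performance test; disabled unless run with -Dtests.performance=true. */
@Documented
@Inherited
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD})
@TestGroup(enabled = false, sysProperty = "tests.performance")
public @interface Performance {}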

Best,

Erick

 

On Mon, Jan 8, 2018 at 11:45 AM, S G <sg.online.em...@gmail.com> wrote:

We can put some lower limits on CPU and memory for running a performance test.

If those lower limits are not met, then the test will just skip execution.

 

And then we put some upper bounds on the time spent by different parts of the 
test, like:

 - Max time taken to index 1 million documents

 - Max time taken to query, facet, pivot, etc.

 - Max time taken to delete 100,000 documents while reads and writes are 
happening.

 

For all of the above, we can publish metrics like 5minRate and 95thPercent, and 
assert that they stay below a chosen threshold (see the sketch below).
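
As a rough JUnit sketch (the resource limits, the time bound, and the 
indexDocuments helper are placeholders I made up, not agreed-upon values):

import static org.junit.Assert.assertTrue;
import static org.junit.Assume.assumeTrue;
import org.junit.Test;

public class IndexingPerfTest {

  private static final int MIN_CPUS = 4;                              // placeholder lower limit
  private static final long MIN_HEAP_BYTES = 4L * 1024 * 1024 * 1024; // 4 GB, placeholder
  private static final long MAX_INDEX_MILLIS = 120_000;               // placeholder upper bound

  @Test
  public void testIndexOneMillionDocs() throws Exception {
    // Skip (not fail) on machines that do not meet the lower limits.
    assumeTrue("too few CPUs", Runtime.getRuntime().availableProcessors() >= MIN_CPUS);
    assumeTrue("too little heap", Runtime.getRuntime().maxMemory() >= MIN_HEAP_BYTES);

    long start = System.nanoTime();
    indexDocuments(1_000_000); // hypothetical helper driving the actual indexing
    long elapsedMillis = (System.nanoTime() - start) / 1_000_000;

    assertTrue("indexing regressed: " + elapsedMillis + " ms > " + MAX_INDEX_MILLIS + " ms",
        elapsedMillis <= MAX_INDEX_MILLIS);
  }

  private void indexDocuments(int count) {
    // placeholder: index 'count' documents against a test cluster
  }
}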

 

I know some other software compares CPU cycles across different runs as well, 
but I am not sure how.

 

Such tests will give us more confidence when releasing/adopting new features, 
like pint compared to tint, etc.

 

Thanks

SG

On Sat, Jan 6, 2018 at 9:59 AM, Erick Erickson <erickerick...@gmail.com> wrote:

Not sure how performance tests in the unit tests would be interpreted. If I run 
the same suite on two different machines, how do I compare the numbers?

Or are you thinking of having some tests so someone can check out different 
versions of Solr and run the perf tests on a single machine, perhaps using 
bisect to pinpoint when something changed?

I'm not opposed at all, just trying to understand how one would go about using 
such tests.

Best,

Erick 

 

On Fri, Jan 5, 2018 at 10:09 PM, S G <sg.online.em...@gmail.com> wrote:

Just curious to know: does the test suite include some performance tests as well?

I would like to know the performance impact of using pints vs tints or ints, etc.

If they are not there, I can try to add some tests for the same.

 

Thanks

SG

 

 

On Fri, Jan 5, 2018 at 5:47 PM, Đạt Cao Mạnh <caomanhdat...@gmail.com> wrote:

Hi all,

 

I will work on SOLR-11771 <https://issues.apache.org/jira/browse/SOLR-11771> 
today. It is a simple fix and it will be great if it gets fixed in 7.2.1.

 

On Fri, Jan 5, 2018 at 11:23 PM Erick Erickson <erickerick...@gmail.com> wrote:

Neither of those Solr fixes is earth-shatteringly important; they've both been 
around for quite a while. I don't think it's urgent to include them.

That said, they're pretty simple and isolated, so worth doing if Jim is willing. 
But not worth straining much; I was just clearing out some backlog over 
vacation.

Strictly up to you Jim.

Erick

 

On Fri, Jan 5, 2018 at 6:54 AM, David Smiley <david.w.smi...@gmail.com> wrote:

https://issues.apache.org/jira/browse/SOLR-11809 is in progress; it should be 
easy and I think definitely worth backporting.

 

On Fri, Jan 5, 2018 at 8:52 AM Adrien Grand <jpou...@gmail.com> wrote:

+1

 

Looking at the changelog, 7.3 has 3 bug fixes for now: LUCENE-8077, SOLR-11783 
and SOLR-11555. The Lucene change doesn't seem worth backporting, but maybe the 
Solr changes should?

 

On Fri, Jan 5, 2018 at 12:40 PM, jim ferenczi <jim.feren...@gmail.com> wrote:

Hi,

We discovered a bad bug in 7x that affects indices created in 6x with the 
Lucene54DocValues format. The SortedNumericDocValues created with this format 
have a bug when advanceExact is used: the values retrieved for a doc when 
advanceExact returns true are invalid (the pointer to the values is not 
updated):

https://issues.apache.org/jira/browse/LUCENE-8117
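
For illustration, the affected access pattern looks roughly like this (a sketch 
against the 7.x doc values API; the wrapper class and method are only for the 
example):

import java.io.IOException;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.SortedNumericDocValues;

public class Lucene8117Example {
  static void readValues(LeafReader reader, String field, int doc) throws IOException {
    SortedNumericDocValues dv = reader.getSortedNumericDocValues(field);
    if (dv != null && dv.advanceExact(doc)) {   // returns true for docs with values...
      for (int i = 0; i < dv.docValueCount(); i++) {
        long value = dv.nextValue();            // ...but before the fix this could return
                                                // values belonging to a previous document
      }
    }
  }
}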

This affects all indices created in 6x with sorted numeric doc values, so I 
wanted to ask if anyone objects to a bugfix release for 7.2 (7.2.1). I also 
volunteer to be the release manager for this one if it is accepted.

 

Jim




-- 

Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker

LinkedIn: http://linkedin.com/in/davidwsmiley | Book: 
http://www.solrenterprisesearchserver.com