introduce a back-compat issue?
On Fri, Oct 13, 2017 at 4:40 PM Erick Erickson <erickerick...@gmail.com> wrote:
I'd also like to get SOLR-11297 in if there are no objections. Ditto if the
answer is no. It's quite a safe fix, though.
On Fri, Oct 13, 2017 at 1:26 PM, Alliso
Any chance we could get SOLR-11450 in? I understand if the answer is no. 😊
Thank you!
From: Ishan Chattopadhyaya [mailto:ichattopadhy...@gmail.com]
Sent: Friday, October 13, 2017 4:23 PM
To: dev@lucene.apache.org
Subject: 6.6.2 Release
Hi,
In light of [0], we need a 6.6.2 release as soon as pos
Alex,
I'm more than happy to chip in on the Tika side. Thank you for leading this
effort.
Cheers,
Tim
-----Original Message-----
From: Alexandre Rafalovitch [mailto:arafa...@gmail.com]
Sent: Sunday, March 26, 2017 9:09 PM
To: dev@lucene.apache.org
Subject:
All,
I recently blogged about some of the work we're doing with a large scale
regression corpus to make Tika, POI and PDFBox more robust and to identify
regressions before release. If you'd like to chip in with recommendations,
requests or Hadoop/Spark clusters (why not shoot for the stars), p
>> so that means that using tika metadata indexing with schemaless mode
>> is, well, useless?
Yes.
> I know of nobody using "schemaless" for production for the simple reason
> that it makes the best guess it can based on the _first_ time it sees a
> particular field. There's absolutely no way t
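The failure mode described here is easy to sketch outside Solr. The following is a hypothetical toy, not Solr's actual type-guessing code: whatever type the first value of a field happens to look like gets locked in, and a later document that disagrees can no longer be indexed.

```python
def guess_type(value):
    """Guess a field type from a single sample value, the way a
    schemaless mode might on first sight of a field."""
    for caster in (int, float):
        try:
            caster(value)
            return caster
        except ValueError:
            pass
    return str

# The first document seen decides the type for every later document.
schema = {}

def index(doc):
    for field, value in doc.items():
        caster = schema.setdefault(field, guess_type(value))
        caster(value)  # raises ValueError if a later doc disagrees

index({"price": "42"})         # "price" is now locked in as int
try:
    index({"price": "cheap"})  # a later string value no longer fits
except ValueError:
    print("type locked in by the first document")
```

This is exactly the "best guess based on the first time it sees a field" problem: nothing about the first document guarantees it is representative of the rest.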
ICU looks promising:
Μῆνιν ἄειδε, θεὰ, Πηληϊάδεω Ἀχιλλῆος ->
1. μηνιν
2. αειδε
3. θεα
4. πηληιαδεω
5. αχιλληοσ
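For illustration, the folding the ICU output above shows (lowercasing, stripping diacritics, mapping final sigma to sigma) can be approximated with the Python standard library alone. This is a sketch of the behavior, not ICU itself:

```python
import unicodedata

def fold_greek(text):
    """Approximate ICU folding for polytonic Greek: lowercase,
    strip combining diacritics, and map final sigma to sigma."""
    lowered = text.lower()
    decomposed = unicodedata.normalize("NFD", lowered)
    stripped = "".join(ch for ch in decomposed
                       if unicodedata.category(ch) != "Mn")
    return stripped.replace("\u03c2", "\u03c3")  # ς -> σ

line = "Μῆνιν ἄειδε, θεὰ, Πηληϊάδεω Ἀχιλλῆος"
tokens = [fold_greek(word.strip(",")) for word in line.split()]
print(tokens)  # the five folded tokens listed above
```

The NFD decomposition pulls each breathing, accent, and diaeresis out as a combining mark (category `Mn`) so it can be dropped; ICU's folding rules cover far more cases, but this reproduces the example.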
-----Original Message-----
From: Alexandre Rafalovitch [mailto:arafa...@gmail.com]
Sent: Friday, November 21, 2014 3:08 PM
To: dev@lucene.apache.org
Subject: Re: Lucene ancient greek normal
Thank you, Dennis:
https://issues.apache.org/jira/browse/LUCENE-5839
From: Dennis Walter [mailto:dennis.wal...@gmail.com]
Sent: Sunday, July 20, 2014 2:52 PM
To: dev@lucene.apache.org
Subject: Bug in AnalyzingQueryParser Pattern
Hi there,
While reading the source code of AnalyzingQueryParser
either..
On Wed, Mar 19, 2014 at 8:53 AM, Allison, Timothy B. wrote:
> This is similar to David Smiley's question on Feb 16th, but SuppressCodecs
> would be too broad of a solution, I think.
>
> I'm using LuceneTestCase'
This is similar to David Smiley's question on Feb 16th, but SuppressCodecs
would be too broad of a solution, I think.
I'm using LuceneTestCase's newIndexWriterConfig, and I have a test that
requires IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS. The test
passes quite often (famous la
Tommaso,
Ah, now I see. If you want to add new operators, you'll have to modify the
javacc files. For the SpanQueryParser, I added a handful of new operators and
chose to go with regexes instead of javacc...not sure that was the right
decision, but given my lack of knowledge of javacc, it wa
Hi Tommaso,
It will depend on how different your target syntax will be. If you extend
the classic parser (or, QueryParserBase), there is a fair amount of overhead
and extras that you might not want or need. On the other hand, the query
syntax and the methods will be familiar to the Lucene c
AM, Gopal Agarwal <gopal.agarw...@gmail.com> wrote:
Sounds perfect. Hopefully one of the committers picks this up and adds this to
4.7.
Will keep checking the updates...
On Fri, Jan 17, 2014 at 1:17 AM, Allison, Timothy B. <talli...@mitre.org> wrote:
And don't forge
doing the exact same changes to my QueryParserBase class,
I would be locked to the current version of Solr for the foreseeable future.
Can you comment on when the release might happen if it gets reviewed by next
week?
On Thu, Jan 16, 2014 at 11:06 PM, Allison, Timothy B. <talli...@mitr
Apologies for the self-promotion...LUCENE-5205 and its Solr cousin (SOLR-5410)
might help. I'm hoping to post updates to both by the end of next week. Then,
if a committer would be willing to review and add these to Lucene/Solr, you
should be good to go.
Take a look at the description for LUC
All,
I realize that we should be consuming all tokens from a stream. I'd like to
wrap a client's Analyzer with LimitTokenCountAnalyzer with consume=false. For
the analyzers that I've used, this has caused no problems. When I use
MockTokenizer, I run into this assertion error: "end() called b
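A toy model of that contract check (hypothetical names, not Lucene's code) shows why consume=false trips it: a MockTokenizer-style stream asserts that end() is only called once incrementToken() has returned false, i.e. once the stream has been fully drained.

```python
class StrictTokenStream:
    """Toy model of MockTokenizer's contract check: end() may only be
    called after increment_token() has returned False (stream drained)."""

    def __init__(self, tokens):
        self._it = iter(tokens)
        self.exhausted = False
        self.current = None

    def increment_token(self):
        try:
            self.current = next(self._it)
            return True
        except StopIteration:
            self.exhausted = True
            return False

    def end(self):
        # Mirrors the "end() called before incrementToken() returned
        # false" assertion quoted above.
        assert self.exhausted, "end() called before incrementToken() returned false"


def take_limited(stream, limit, consume_all):
    """Keep at most `limit` tokens; optionally drain the rest first."""
    out = []
    while stream.increment_token():
        if len(out) < limit:
            out.append(stream.current)
        elif not consume_all:
            break  # consume=false: stop early without draining
    stream.end()  # a strict stream objects if tokens remain unread
    return out


# Draining past the limit satisfies the contract...
print(take_limited(StrictTokenStream(["a", "b", "c"]), 2, consume_all=True))
# ...but stopping early (consume=false) trips the end() assertion.
try:
    take_limited(StrictTokenStream(["a", "b", "c"]), 2, consume_all=False)
except AssertionError:
    print("contract violation, as with MockTokenizer")
```

With a lenient analyzer the early break goes unnoticed, which is why consume=false "causes no problems" in practice yet fails under MockTokenizer's stricter checks.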