suggestions based on query

2014-11-08 Thread Sascha Janz
Hi,

Is there a way to build suggestions based on a query?

Greetings,
Sascha
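(Editorial note: one common route is Lucene's suggest module, e.g. AnalyzingInfixSuggester in the 4.x line, fed from the index's terms. As a plain-Java illustration of the underlying idea only — a weighted prefix lookup over candidate terms; class and method names here are hypothetical, not Lucene API:)

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class PrefixSuggester {
    // Candidate terms, sorted lexicographically so a prefix maps to a contiguous range.
    private final TreeMap<String, Long> terms = new TreeMap<>();

    public void add(String term, long weight) {
        terms.put(term, weight);
    }

    // Return up to n terms starting with the given prefix, highest weight first.
    public List<String> suggest(String prefix, int n) {
        List<Map.Entry<String, Long>> hits = new ArrayList<>();
        for (Map.Entry<String, Long> e : terms.tailMap(prefix).entrySet()) {
            if (!e.getKey().startsWith(prefix)) {
                break; // left the prefix range
            }
            hits.add(e);
        }
        hits.sort((a, b) -> Long.compare(b.getValue(), a.getValue()));
        List<String> out = new ArrayList<>();
        for (int i = 0; i < Math.min(n, hits.size()); i++) {
            out.add(hits.get(i).getKey());
        }
        return out;
    }

    public static void main(String[] args) {
        PrefixSuggester s = new PrefixSuggester();
        s.add("lucene", 10);
        s.add("lucene analyzer", 5);
        s.add("solr", 3);
        System.out.println(s.suggest("luc", 2)); // prints [lucene, lucene analyzer]
    }
}
```

In Lucene itself the weights would typically come from term or document frequencies, and the suggester handles analysis and fuzzy matching as well.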


analyzers for Thai, Telugu, Vietnamese, Korean, Urdu,...

2014-11-08 Thread Olivier Binda

Hello

What should I use for analysing languages like Thai, Telugu, Vietnamese, Korean, or Urdu?

The StandardAnalyzer? The ICUAnalyzer?

It doesn't look like they have dedicated analyzers. (I'm using Lucene 4.7.2 on Android.)


Best regards,
Olivier


-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org
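(Editorial note: for what it's worth, the 4.x analyzers-common module does ship a ThaiAnalyzer, and the analyzers-icu module's ICUTokenizer is the usual fallback for scripts without explicit word boundaries; for space-separated languages such as Vietnamese, StandardAnalyzer is often adequate. Lucene's Thai support builds on locale-aware break iteration, which the plain JDK also exposes. A minimal sketch of that mechanism — hypothetical class name, no Lucene dependency:)

```java
import java.text.BreakIterator;
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

public class WordSegmenter {
    // Split text into word tokens using the JDK's locale-aware BreakIterator.
    // For Thai, the JDK uses a dictionary-based iterator, the same mechanism
    // Lucene's Thai analysis builds on.
    static List<String> tokenize(String text, Locale locale) {
        List<String> tokens = new ArrayList<>();
        BreakIterator it = BreakIterator.getWordInstance(locale);
        it.setText(text);
        int start = it.first();
        for (int end = it.next(); end != BreakIterator.DONE; start = end, end = it.next()) {
            String token = text.substring(start, end).trim();
            // Keep only tokens that contain at least one letter or digit.
            if (!token.isEmpty() && token.codePoints().anyMatch(Character::isLetterOrDigit)) {
                tokens.add(token);
            }
        }
        return tokens;
    }

    public static void main(String[] args) {
        System.out.println(tokenize("Hello, world!", Locale.ENGLISH)); // prints [Hello, world]
        System.out.println(tokenize("สวัสดีครับ", new Locale("th")));
    }
}
```

Segmentation quality for Thai depends on the JDK's dictionary data; the ICU library behind Lucene's ICU module generally does better, which is why it is the recommended route.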



Re: analyzers for Thai, Telugu, Vietnamese, Korean, Urdu,...

2014-11-08 Thread Erick Erickson
There are a bunch of different examples in the schema file that should
point you in the right direction; whether these specific languages are
supported is an open question, though.

Best,
Erick

On Sat, Nov 8, 2014 at 2:47 AM, Olivier Binda olivier.bi...@wanadoo.fr wrote:
 Hello

 What should I use for analysing languages like Thai, Telugu, Vietnamese,
 Korean, Urdu ?
 The StandardAnalyzer ? The ICUAnalyzer ?

 It doesn't look like they have dedicated analyzers (I'm using Lucene 4.7.2
 on Android)

 Best regards,
 Olivier





Re: Caused by: java.lang.OutOfMemoryError: Map failed

2014-11-08 Thread Brian Call
I’ll try bumping up the per-process file max and see if that fixes it. Thanks
for all your help and suggestions, guys!

-Brian

On Nov 7, 2014, at 5:00 PM, Toke Eskildsen t...@statsbiblioteket.dk wrote:

 Brian Call [brian.c...@soterawireless.com] wrote:
 Yep, you guys are correct, I’m supporting a slightly older version of our 
 product based
 on Lucene 3. 
 
 In my previous email I forgot to mention that I also bumped up the maximum 
 allowable
 file handles per process to 16k, which had been working well.
 
 If you don't use compound indexes and all your indexes are handled under the 
 same process constraint, then 16K seems quite low for hundreds of indexes. 
 You could check by issuing a file count on your index folders.
 
 - Toke Eskildsen
 
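(Editorial note: Toke's suggestion to count files can be scripted. A minimal plain-Java sketch — hypothetical class name — that tallies the regular files under an index directory; summed over all open indexes this gives a lower bound on the file handles the process needs, since sockets and JARs come on top:)

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class IndexFileCount {
    // Count regular files under a directory tree, e.g. a Lucene index folder.
    static long countFiles(Path dir) throws IOException {
        try (Stream<Path> paths = Files.walk(dir)) {
            return paths.filter(Files::isRegularFile).count();
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Paths.get(args.length > 0 ? args[0] : ".");
        System.out.println(dir + ": " + countFiles(dir) + " files");
    }
}
```

Note that a non-compound Lucene index keeps many files per segment, so hundreds of indexes can easily exceed a 16K per-process handle limit.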



Re: Exceptions during batch indexing

2014-11-08 Thread Jack Krupansky
Oops... you sent this to the wrong list. This is the Lucene user list; send
it to the Solr user list.


-- Jack Krupansky

-----Original Message-----
From: Peter Keegan

Sent: Thursday, November 6, 2014 3:21 PM
To: java-user
Subject: Exceptions during batch indexing

How are folks handling Solr exceptions that occur during batch indexing?
Solr (4.6) stops parsing the docs stream when an error occurs (e.g. a doc
with a missing mandatory field), and stops indexing. The bad document is
not identified, so it would be hard for the client to recover by skipping
over it.

Peter 



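(Editorial note: one client-side workaround when the failing document is not identified is to re-send the batch by bisection, isolating the bad document in O(log n) extra requests. A plain-Java sketch — not SolrJ; it assumes a failed batch is not partially applied, or that re-adding a document with the same unique key is idempotent. All names are hypothetical:)

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Consumer;

public class BatchRecovery {
    // Send a batch to 'indexer'; on failure, split it and retry each half so
    // that only the offending documents are skipped. Returns the skipped docs.
    // 'indexer' stands in for the real client call and throws on a bad batch.
    static <T> List<T> indexWithBisect(List<T> docs, Consumer<List<T>> indexer) {
        List<T> skipped = new ArrayList<>();
        if (docs.isEmpty()) {
            return skipped;
        }
        try {
            indexer.accept(docs);
        } catch (RuntimeException e) {
            if (docs.size() == 1) {
                skipped.add(docs.get(0)); // isolated the bad document
            } else {
                int mid = docs.size() / 2;
                skipped.addAll(indexWithBisect(docs.subList(0, mid), indexer));
                skipped.addAll(indexWithBisect(docs.subList(mid, docs.size()), indexer));
            }
        }
        return skipped;
    }

    public static void main(String[] args) {
        List<String> docs = Arrays.asList("a", "b", "bad", "c");
        List<String> skipped = indexWithBisect(docs, batch -> {
            if (batch.contains("bad")) throw new RuntimeException("missing mandatory field");
        });
        System.out.println("skipped: " + skipped); // prints skipped: [bad]
    }
}
```

The trade-off is extra round trips on failure; for mostly clean batches this is far cheaper than sending documents one at a time.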