I noticed that your suggester analyzers include
<filter class="solr.PatternReplaceFilterFactory" pattern="([^\w\d\*æøåÆØÅ ])"
        replacement="" replace="all"/>
which seems like a bad idea -- this will strip all those Arabic, Russian
and Japanese characters entirely, leaving you with probably only
Erik
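One safer variant (a sketch, not taken from your schema -- the pattern here is an assumption) is to use Java regex Unicode property classes, so letters and digits from any script survive the filter:

```xml
<!-- Keeps letters (\p{L}) and digits (\p{N}) from any script,
     plus '*' and space, instead of only \w\d and Scandinavian letters -->
<filter class="solr.PatternReplaceFilterFactory"
        pattern="([^\p{L}\p{N}\* ])"
        replacement="" replace="all"/>
```

Java's regex engine (which PatternReplaceFilterFactory uses) supports these Unicode property classes, so Arabic, Russian, and Japanese text would pass through untouched.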
I have attached the screenshot of the topology. As you can see, I have
three nodes, and no two replicas of the same shard reside on the same node;
this was done so as not to affect availability.
The query that I use is a general get-all query of the form *:* to test.
The behavior I
Hi All,
I have a use case where I need to group documents that have the same value
in a field called bookName. Meaning: if there are multiple documents with
the same bookName value, and the user's input is searched via a query on
bookName, I need to be able to group all the documents by that bookName
On 12/26/2014 7:17 AM, Mahmoud Almokadem wrote:
We've installed a cluster of one collection of 350M documents on 3
r3.2xlarge (60GB RAM) Amazon servers. The size of the index on each shard is
about 1.1TB, and the maximum storage on Amazon is 1TB, so we added 2 SSD EBS
General Purpose volumes (1x1TB + 1x500GB) on
I am looking at the collection1/techproducts schema and I can't figure
out how the reversed wildcard example is supposed to work.
We define text_general_rev type and text_rev field, but we don't seem
to be populating it at any point. And running the example does not
seem to show any tokens in the
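For what it's worth, a reversed field only gets content if something copies into it at index time. A sketch of how that would look in schema.xml (the field and source names here are assumptions for illustration, not quoted from the shipped example):

```xml
<!-- A field using the reversed type, populated via copyField -->
<field name="text_rev" type="text_general_rev" indexed="true" stored="false"/>
<copyField source="text" dest="text_rev"/>
```

Without a copyField (or explicit population in the documents), the reversed field stays empty, which would explain seeing no tokens.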
On 12/28/2014 8:48 AM, S.L wrote:
I have attached the screenshot of the topology. As you can see, I have
three nodes, and no two replicas of the same shard reside on the same
node; this was done so as not to affect availability.
The query that I use is a general get all query of type
Hi,
You can use grouping in Solr. You can do this either via query parameters or
via solrconfig.xml.
*A) via query*
http://localhost:8983?your_query_params&group=true&group.field=bookName
You can limit the size of each group (how many documents you want to show);
suppose you want to show 5 documents
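Spelled out fully (assuming a core named collection1 and a sample search term -- both are placeholders), a request with a per-group limit of 5 might look like:

```
http://localhost:8983/solr/collection1/select?q=bookName:potter&group=true&group.field=bookName&group.limit=5
```

group.limit controls how many documents are returned within each group; group.field names the field whose distinct values define the groups.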
Thanks, it works for me.
Thanks Aman. The thing is, the bookName field values are not exactly
identical, but nearly identical, so at indexing time I need to figure out
which other book name this one is similar to, using NLP techniques, and then
put it in the appropriate bag, so that at retrieval
time I only
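As an illustration of that index-time bucketing idea -- a minimal sketch using simple string similarity rather than real NLP; the function name, the normalization, and the 0.85 threshold are all assumptions, not anything from Solr:

```python
import difflib

def assign_bucket(book_name, buckets, threshold=0.85):
    """Put a near-identical book name into an existing bucket, or start
    a new one. `buckets` maps a canonical (normalized) name to the list
    of raw names assigned to it. `threshold` is an assumed cutoff."""
    normalized = book_name.strip().lower()
    for canonical in list(buckets):
        # SequenceMatcher.ratio() gives a similarity score in [0, 1]
        if difflib.SequenceMatcher(None, normalized, canonical).ratio() >= threshold:
            buckets[canonical].append(book_name)
            return canonical
    buckets[normalized] = [book_name]
    return normalized

buckets = {}
assign_bucket("The Great Gatsby", buckets)
assign_bucket("the great gatsby ", buckets)   # lands in the same bucket
assign_bucket("Moby Dick", buckets)           # starts a new bucket
```

The canonical name could then be indexed into a separate field (say, bookNameGroup) and used as the group.field at query time.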
Mahmoud Almokadem [prog.mahm...@gmail.com] wrote:
We've installed a cluster of one collection of 350M documents on 3
r3.2xlarge (60GB RAM) Amazon servers. The size of the index on each shard is
about 1.1TB, and the maximum storage on Amazon is 1TB, so we added 2 SSD EBS
General Purpose volumes (1x1TB + 1x500GB)
Thanks Jack for your suggestions.
Regards,
Modassar
On Fri, Dec 26, 2014 at 6:04 PM, Jack Krupansky jack.krupan...@gmail.com
wrote:
Either you have too little RAM on each node or too much data on each node.
You may need to shard the data much more heavily so that the total work on
a single
Erick,
I am trying to do a premature optimization. *There will be no updates to my
index, so no worries about aging out or garbage collection.*
Let me get my understanding straight: when we talk about the filterCache, it
just stores the document IDs in the cache, right?
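(For reference: each filterCache entry maps a filter query to a DocSet, i.e. a set of internal Lucene document IDs, not the documents themselves. The cache is defined in solrconfig.xml; a typical definition looks roughly like the following -- the sizes here are illustrative, not a recommendation:)

```xml
<!-- Caches the DocSet of matching internal doc IDs per filter query (fq) -->
<filterCache class="solr.FastLRUCache"
             size="512"
             initialSize="512"
             autowarmCount="0"/>
```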
And my setup is as follows.
You can also use group.query or group.func to group documents matching a
query or unique values of a function query. For the latter you could
implement an NLP algorithm.
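For example, with group.query each query defines one group (the titles below are hypothetical):

```
...&group=true&group.query=bookName:"Harry Potter"&group.query=bookName:"Lord of the Rings"
```

Each group.query parameter returns a single group containing the documents that match that query, which gives you full control over the grouping criteria.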
-- Jack Krupansky
On Sun, Dec 28, 2014 at 5:56 PM, Meraj A. Khan mera...@gmail.com wrote:
Thanks Aman, the thing is the
Hi Joel,
Thanks for your reply.
It seems that the weird export results are because I removed the
<str name="wt">xsort</str> invariant of the export request handler in the
default solrconfig.xml to get csv-format output.
I don't quite understand the meaning of xsort, but I removed it because I
always
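For reference, the stock /export handler in that era's solrconfig.xml looked roughly like this (quoted from memory, so details may differ for your release):

```xml
<requestHandler name="/export" class="solr.SearchHandler">
  <lst name="invariants">
    <str name="rq">{!xport}</str>
    <str name="wt">xsort</str>
    <str name="distrib">false</str>
  </lst>
  <arr name="components">
    <str>query</str>
  </arr>
</requestHandler>
```

The wt=xsort invariant selects the special sorted-export response writer, so removing it to force another writer (like csv) changes what the handler emits and can produce the kind of odd results described above.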