Michael,
First of all, what do you mean by re-indexing?
I'm sure you are aware of binary/numeric DocValues updates, but those only work
for existing column strides. I can guess you are talking about something
like a sidecar index: http://www.youtube.com/watch?v=9h3ax5Wmxpk
On Tue, Jul 22, 2014 at 6:50 AM, Michael Ryan
Thanks, Jack, for your guidance on DSE. However, it would be great if somebody
could help me solve my use case:
So my full-text data lives in Cassandra along with an ID. Now I have a lot
of structured data linked to the ID which lives in an RDBMS (read: MySQL). I
need this structured data as it would
I did observe the same.
1. Updated an existing document, which potentially means marking the previous
document as deleted and adding a new version of it. Posted the JSON doc
using the Documents interface on the Admin UI, and left the default commitWithin
of 1000 ms there on the Documents UI.
2. NOT
Hello,
I am using Solr 4.2.1 and have the following use case:
I need to find results inside a bbox, or, if there are none, the first result
outside the bbox within a distance of 1000 km. I was wondering what the best
way to proceed is.
I was considering doing a geofilt search from the center of my bounding box
Thanks, Umesh
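A hedged sketch of the fallback step (the field name location and the example point are assumptions, not from the original message): run the bbox query first, and if numFound is 0, re-issue with a 1000 km geofilt sorted by distance, keeping only the closest hit:

```
&sfield=location&pt=48.85,2.35
&fq={!geofilt d=1000}
&sort=geodist() asc
&rows=1
```

Note that d is in kilometers, so d=1000 matches the stated radius.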
You can get the parent bitset by running the parent doc type query on
the Solr IndexSearcher.
Then get the child bitset by running the child doc type query. Then use these
together to create an int[] where int[i] = parent of i.
Can you kindly add an example? I am not quite sure how
Hi.
My Solr index (version 4.7.2) has an id field:
<field name="id" type="string" indexed="true" stored="true"/>
...
<uniqueKey>id</uniqueKey>
The index will be updated once per hour.
I use the following query to retrieve some documents:
q=id:2^2 id:1^1
I would expect that document 2 should be
I faced the same issue some time back; the root cause is docs getting deleted
and created again without the index getting optimized. Here is the discussion:
http://www.signaldump.org/solr/qpod/22731/docfreq-coming-to-be-more-than-1-for-unique-id-field
On Tue, Jul 22, 2014 at 4:56 PM, Johannes Siegert
Deleted documents remain in the Lucene index until an optimize or segment
merge operation removes them. As a result they are still counted in document
frequency. An update is a combination of a delete and an add of a fresh
document.
-- Jack Krupansky
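To see why a leftover deleted copy can flip the q=id:2^2 id:1^1 ordering, here is a standalone sketch of the classic Lucene 4.x idf formula, log(numDocs/(docFreq+1)) + 1 (a simplification; real scores also involve tf, norms, and queryNorm, and the numbers below are illustrative):

```java
public class IdfDemo {
    // Mirrors Lucene 4.x TFIDFSimilarity#idf.
    static double idfClassic(long docFreq, long numDocs) {
        return Math.log(numDocs / (double) (docFreq + 1)) + 1.0;
    }

    public static void main(String[] args) {
        long numDocs = 100_000;
        // Freshly added doc: its unique id term appears exactly once.
        double idfClean = idfClassic(1, numDocs);
        // Updated doc before a merge: the deleted old copy is still counted.
        double idfStale = idfClassic(2, numDocs);
        System.out.printf("clean=%.4f stale=%.4f%n", idfClean, idfStale);
        // The inflated docFreq lowers idf, so a ^2 boost on id:2 can be
        // outweighed when id:1 has the cleaner statistics.
    }
}
```

This is why the ordering repairs itself after an optimize or segment merge purges the deletes.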
-Original Message-
From:
I don't think the Solr Data Import Handler has a Cassandra plugin (entity
processor) yet, so the most straightforward approach is to write a Java app
that reads from Cassandra, then reads the corresponding RDBMS data, combines
the data, and then uses SolrJ to add documents to Solr.
Your best bet is to get that RDBMS data moved to Cassandra or DSE ASAP.
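The combine step Jack describes could be sketched as below, with plain maps standing in for the Cassandra and JDBC result sets (all field names here are hypothetical, and the real app would hand each combined map to a SolrInputDocument):

```java
import java.util.*;

public class CombineSketch {
    // Join full-text rows (as if read from Cassandra) with structured rows
    // (as if read via JDBC) on the shared ID, one flat document per ID.
    static List<Map<String, Object>> combine(Map<String, String> fullText,
                                             Map<String, Map<String, Object>> structured) {
        List<Map<String, Object>> docs = new ArrayList<>();
        for (Map.Entry<String, String> e : fullText.entrySet()) {
            Map<String, Object> doc = new LinkedHashMap<>();
            doc.put("id", e.getKey());
            doc.put("text", e.getValue());
            // Pull in whatever structured fields exist for this ID.
            doc.putAll(structured.getOrDefault(e.getKey(), Collections.emptyMap()));
            docs.add(doc);
        }
        return docs;
    }

    public static void main(String[] args) {
        Map<String, String> cassandraRows = Map.of("42", "full text body");
        Map<String, Map<String, Object>> mysqlRows =
                Map.of("42", Map.of("category", "news"));
        System.out.println(combine(cassandraRows, mysqlRows));
    }
}
```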
I mean re-adding all of the documents in my index. The DocValues wiki page says
that this is necessary, but I wanted to know if there was a way around it.
-Michael
-Original Message-
From: Mikhail Khludnev [mailto:mkhlud...@griddynamics.com]
Sent: Tuesday, July 22, 2014 2:14 AM
To:
Exactly. Thanks a lot, Jack. +1 for "Your best bet is to get that RDBMS data
moved to Cassandra or DSE ASAP."
On Tue, Jul 22, 2014 at 5:15 PM, Jack Krupansky j...@basetechnology.com
wrote:
I don't think the Solr Data Import Handler has a Cassandra plugin (entity
processor) yet, so the most
On 7/22/2014 6:14 AM, Michael Ryan wrote:
I mean re-adding all of the documents in my index. The DocValues wiki page
says that this is necessary, but I wanted to know if there was a way around
it.
If your index meets the strict criteria for Atomic Updates, you could
update all the documents
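For reference, atomic updates rebuild each document from its stored fields, so (roughly) every field must be stored, except copyField destinations, which must not be. A minimal sketch of the JSON payload, POSTed to /update (field names hypothetical):

```
[ {"id":      "doc1",
   "myfield": {"set": "new value"}} ]
```

Everything not mentioned in the payload is carried over from the stored copy of doc1.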
So by using the SimplePostTool I can define the application type and handling
of specific documents (such as Word, PowerPoint, XML, PNG, etc.). I have
defined these and they are handled based on their type. In my file system,
however, I have a large number of files that can be read as plain
I am copy-pasting the file extensions /from/ the text document /into/ the
source code, not /from/ the source code. My typing mistake.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Edit-Example-Post-jar-to-read-ALL-file-types-tp4148312p4148567.html
Sent from the Solr -
Query parentFilterQuery = new TermQuery(new Term("document_type", "parent"));
int[] childToParentDocMapping = new int[searcher.maxDoc()];
DocSet allParentDocSet = searcher.getDocSet(parentFilterQuery);
DocIterator iter = allParentDocSet.iterator();
int prevParent = -1; // children precede their parent within a block
while (iter.hasNext()) {
    int parentDoc = iter.nextDoc();
    for (int doc = prevParent + 1; doc <= parentDoc; doc++) {
        childToParentDocMapping[doc] = parentDoc;
    }
    prevParent = parentDoc;
}

public static DocSet mapChildDocsToParentOnly(DocSet childDocSet) {
    DocSet mappedParentDocSet = new BitDocSet();
    DocIterator childIterator = childDocSet.iterator();
    while (childIterator.hasNext()) {
        int childDoc = childIterator.nextDoc();
        int parentDoc = childToParentDocMapping[childDoc];
        mappedParentDocSet.add(parentDoc);
    }
    return mappedParentDocSet;
}
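The mapping logic above, stripped of the Solr classes so it can be run standalone (java.util.BitSet stands in for the parent DocSet; doc numbers are illustrative):

```java
import java.util.BitSet;

public class ChildToParent {
    // Lucene block join stores children immediately BEFORE their parent,
    // so every doc up to and including a parent doc maps to that parent.
    static int[] buildMapping(BitSet parents, int maxDoc) {
        int[] map = new int[maxDoc];
        int prev = -1;
        for (int p = parents.nextSetBit(0); p >= 0; p = parents.nextSetBit(p + 1)) {
            for (int d = prev + 1; d <= p; d++) {
                map[d] = p; // children (and the parent itself) map to p
            }
            prev = p;
        }
        return map;
    }

    public static void main(String[] args) {
        BitSet parents = new BitSet();
        parents.set(2); // docs 0,1 are children of 2
        parents.set(5); // docs 3,4 are children of 5
        int[] map = buildMapping(parents, 6);
        System.out.println(java.util.Arrays.toString(map)); // [2, 2, 2, 5, 5, 5]
    }
}
```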
Hi Gopal,
I just started a repository on github
(https://github.com/tballison/tallison-lucene-addons) to host a standalone
version of LUCENE-5205 (with other patches to come). SOLR-5410 is next (Solr
wrapper of the SpanQueryParser), and then I'll try to add LUCENE-5317
(concordance) and
Hi
I am running into a Java heap space issue. Please see the log below.
ERROR - 2014-07-22 11:38:59.370; org.apache.solr.common.SolrException;
null:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space
at
On 7/22/2014 11:37 AM, Ameya Aware wrote:
i am running into java heap space issue. Please see below log.
All we have here is an out of memory exception. It is impossible to
know *why* you are out of memory from the exception. With enough
investigation, we could determine the area of code where
So can I get around this exception by increasing the heap size somewhere?
Thanks,
Ameya
On Tue, Jul 22, 2014 at 2:00 PM, Shawn Heisey s...@elyograg.org wrote:
On 7/22/2014 11:37 AM, Ameya Aware wrote:
i am running into java heap space issue. Please see below log.
All we have here is an out of
Hello!
Yes, just edit your Jetty configuration file and add -Xmx and -Xms
parameters. For example, the file you may be looking for is
/etc/default/jetty.
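A minimal sketch of that change, assuming a Debian-style /etc/default/jetty (the exact variable name and file path depend on how Jetty was packaged, and the sizes below are examples, not recommendations):

```
# /etc/default/jetty -- give the JVM a 512 MB initial / 2 GB max heap
JAVA_OPTIONS="$JAVA_OPTIONS -Xms512m -Xmx2g"
```

Restart Jetty afterwards for the new heap settings to take effect.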
--
Regards,
Rafał Kuć
Performance Monitoring * Log Analytics * Search Analytics
Solr Elasticsearch Support * http://sematext.com/
So can
What field type or filters do I use to get something like the word “Lacuma” to
return results with “Lucuma” in it? The word “Lucuma” has been indexed in a
field with field type text_en_splitting that came with the original Solr
examples.
Thanks,
Warren
<fieldType name="text_en_splitting"
Hi
I am running into the below error while indexing a file in Solr.
Can you please help fix this?
ERROR - 2014-07-22 16:40:32.126; org.apache.solr.common.SolrException;
null:java.lang.RuntimeException: java.lang.NoClassDefFoundError:
com/uwyn/jhighlight/renderer/XhtmlRendererFactory
at
Hi Warren,
Check out the section about fuzzy search here
https://cwiki.apache.org/confluence/display/solr/The+Standard+Query+Parser.
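Concretely, a fuzzy query such as q=field:Lacuma~1 (field name hypothetical) matches terms within edit distance 1, and Lacuma -> Lucuma is a single substitution. A standalone sketch of plain Levenshtein distance to show why (Solr 4.x actually evaluates fuzzy terms with Damerau-Levenshtein automata, but the distance for this pair is the same):

```java
public class EditDistance {
    // Classic two-row dynamic-programming Levenshtein distance.
    static int levenshtein(String a, String b) {
        int[] prev = new int[b.length() + 1];
        int[] curr = new int[b.length() + 1];
        for (int j = 0; j <= b.length(); j++) prev[j] = j;
        for (int i = 1; i <= a.length(); i++) {
            curr[0] = i;
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                curr[j] = Math.min(Math.min(curr[j - 1] + 1,  // insertion
                                            prev[j] + 1),      // deletion
                                   prev[j - 1] + cost);        // substitution
            }
            int[] tmp = prev; prev = curr; curr = tmp;
        }
        return prev[b.length()];
    }

    public static void main(String[] args) {
        System.out.println(levenshtein("Lacuma", "Lucuma")); // 1
    }
}
```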
On Tue, Jul 22, 2014 at 1:29 PM, Warren Bell warr...@clarksnutrition.com
wrote:
What field type or filters do I use to get something like the word
“Lacuma” to
Or possibly use the synonym filter at query or index time for common
misspellings or misunderstandings about the spelling. That would be
automatic, without the user needing to add the explicit fuzzy query
operator.
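A sketch of the synonym approach (the entry below is a hypothetical example): list known misspellings in synonyms.txt and reference the file from the field type's analyzer with something like <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true"/>:

```
# synonyms.txt -- map common misspellings to the indexed form
lacuma => lucuma
```

Whether you apply this at index or query time, a reindex or reload is needed for the change to take effect on existing data.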
-- Jack Krupansky
-Original Message-
From: Anshum Gupta
Sent:
I think I found the issue!
I actually missed mentioning a very important step that I did, which is
CORE SWAP;
otherwise, it's not replicating the full index.
When we do a CORE SWAP, doesn't it do the same checks of copying only deltas?
On 7/22/2014 5:00 PM, Robin Woods wrote:
I think, I found the issue!
I actually missed to mention a very important step that I did, which is,
CORE SWAP
otherwise, it's not replicating the full index.
when we do CORE SWAP, doesn't it do the same checks of copying only deltas?
Yes, it will
Hi,
Is it possible to execute queries using the doc Id as a query parameter?
For example, query docs whose doc Id is between 100 and 200.
Thanks & Regards,
Mukund
I guess you can use these two params in your query:
rows=100&start=100
which will give you 100 documents after the 100th document.
On Wed, Jul 23, 2014 at 10:19 AM, Mukundaraman Valakumaresan
muk...@8kmiles.com wrote:
Hi,
Is it possible to execute queries using doc Id as a query parameter
Do you mean something different from docId:[100 TO 200] ?
Regards,
Alex.
Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853
On Wed, Jul 23, 2014 at 11:49 AM,
Solr is trying to load com/uwyn/jhighlight/renderer/XhtmlRendererFactory,
but that is not a class which is shipped or used by Solr. I think you have
some custom plugins (a highlighter, perhaps?) which use that class, and the
classpath is not set up correctly.
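One common fix, sketched here with a hypothetical directory, is to point solrconfig.xml at the jars that contain the missing class so they are added to the core's classloader:

```
<lib dir="/path/to/plugin/libs" regex=".*\.jar" />
```

Alternatively, drop the jar into a directory Solr already scans (such as the core's lib/ directory) and reload the core.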
On Wed, Jul 23, 2014 at 2:20 AM, Ameya
Same here, in a multi-core Master/Slave setup.
11:17:30.476 [snapPuller-8-thread-1] INFO o.a.s.h.SnapPuller - Master's
generation: 87
11:17:30.476 [snapPuller-8-thread-1] INFO o.a.s.h.SnapPuller - Slave's
generation: 3
11:17:30.476 [snapPuller-8-thread-1] INFO o.a.s.h.SnapPuller - Starting