Thanks Rick. Unfortunately we don't have that converter, so we have to count
characters in the rich text.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Is-there-a-way-to-retrieve-the-a-term-s-position-offset-in-Solr-tp4326931p4328859.html
Sent from the Solr - User mailing list archive at Nabble.com.
Unfortunately the rich text is not an HTML/XML/DOC/PDF or any other popular
rich text format. And we would like to show the highlighted text in the
doc's own specific viewer. That's why I eagerly want the offset.
The /tvrh (term vector component) handler and tv.offsets/tv.positions can give us
such
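For reference, term vectors with positions and offsets have to be enabled on the field at index time; a minimal schema.xml sketch (the field name "body" and its type are assumptions for illustration):

```xml
<!-- termVectors/termPositions/termOffsets let /tvrh return
     per-term position and character-offset data for this field. -->
<field name="body" type="text_general" indexed="true" stored="true"
       termVectors="true" termPositions="true" termOffsets="true"/>
```

After re-indexing, a request like /tvrh?q=id:1&tv.fl=body&tv.positions=true&tv.offsets=true should include the position and offset sections in the response.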
Thanks All!
Actually we are going to show the highlighted words in a rich text format
instead of the plain text which was indexed. So hl.fragsize=0 does not seem
to work for me.
And as for the patch (SOLR-4722), I haven't tried it. Hope it can return the
position/offset info.
Thanks!
Thanks Eric.
Actually the Solr highlighting function does not meet my requirement. My
requirement is not to show the highlighted words in snippets, but to show them
in the whole opened document. So I would like to get the term's
position/offset info from Solr. I went through the highlight feature, but
We are going to implement a feature:
When opening a document whose body field is already indexed in Solr, if we
issued a keyword search before opening the doc, highlight the keyword in the
opening document.
That needs the position/offset info of the keyword in the doc's index, which
I think
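As a client-side illustration of the offsets this feature needs (a minimal sketch, assuming the plain-text body is available in the viewer; the function and variable names are hypothetical):

```python
import re

def keyword_offsets(body: str, keyword: str):
    """Return (start, end) character offsets of each case-insensitive
    keyword match, so a viewer can highlight the opened document."""
    return [(m.start(), m.end())
            for m in re.finditer(re.escape(keyword), body, re.IGNORECASE)]

# Example: find where "solr" occurs in an opened document body.
print(keyword_offsets("Solr is fast; we love solr.", "solr"))  # [(0, 4), (22, 26)]
```

This is what tv.offsets would give us from the index directly, without re-scanning the body on the client.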
Yeah, I'm curious why this thread is used to talk about that topic.
I'll start a new thread for my questions.
Sorry, my memory was wrong. The swap is 16GB.
Thanks a lot, PushKar! And sorry for the late response.
Our OS RAM is 128GB. And we have 2 Solr nodes on one machine. Each Solr node
has a max heap size of 32GB.
And we do not have swap.
Thanks a lot, Shawn.
We'll consider your suggestion for tuning our Solr servers and will let you
know the result.
Thanks!
Thanks, Shawn!
We are doing indexing on the same HTTP endpoint. But as we have numShards=1 and
replicationFactor=1, each collection only has one core. So there should be no
distributed update/query, as we are using SolrJ's CloudSolrClient, which will
get the target URL of the Solr node when requesting to
Hi,
I posted this issue to a JIRA. Could anyone help comment? Thanks!
https://issues.apache.org/jira/browse/SOLR-9741
The details:
When we run a batch of index and search operations against SolrCloud v5.3.2, we
usually see a CPU spike lasting about 10 min.
We have 5 physical servers, 2 Solr
Hi Erick, Mark and Varun,
I'll use this mail thread to track the issue in
https://issues.apache.org/jira/browse/SOLR-9829 .
@Erick, for your question:
I'm sure the Solr node is still in the live_nodes list.
The logs are from the Solr log. And the most likely root cause I can see here
is the IndexWriter
Besides, will those JVM options make it better?
-XX:+UnlockExperimentalVMOptions -XX:G1NewSizePercent=10
As you can see in the GC log, the long GC pause is not a full GC. It's a
young generation GC instead.
In our case, full GC is fast and young GC got some long stop-the-world pauses.
Do you have any comments on that? We usually believe a full GC may cause
longer pauses, but the young generation should be OK.
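For diagnosing such pauses, these JDK 8-era HotSpot flags break each pause down and report the total time the application was stopped (a sketch; the log path is illustrative, and on JDK 9+ the unified -Xlog:gc* form replaces them):

```
-Xloggc:/var/log/solr/gc.log
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
-XX:+PrintGCApplicationStoppedTime
-XX:+PrintAdaptiveSizePolicy
```

PrintGCApplicationStoppedTime in particular shows whether the stop-the-world time matches the reported young GC time or includes extra safepoint wait.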
Hi Shawn,
Thanks a lot for your response!
I'll use this mail thread to track the issue in JIRA:
https://issues.apache.org/jira/browse/SOLR-9828 .
We have a 3-node SolrCloud. Each Solr collection has only 1 shard and 1
replica.
When we restart the 3 Solr nodes, we found all the cores in one Solr node
are in the down state and never change to another state. The Solr node is shown
in /live_nodes in ZooKeeper.
After restarting all the ZK servers and
We have a 5-node SolrCloud. When a Solr node's disk had an issue and the RAID5
degraded, a recovery on the node was triggered. But then a hang
happens: the node disappears from the live_nodes list.
Could anyone help comment on why this happens? Thanks!
The only meaningful call stacks are:
Thanks! That's very helpful!
We have a SolrCloud with Solr v5.3.2.
collection1 contains 1 shard with 2 replicas on solr nodes: solr1 and solr2
respectively.
In solrconfig.xml, there is an updateLog config, uploaded to ZK and
effective:
<updateLog>
  <str name="dir">${solr.ulog.dir:}</str>
  <int name="numVersionBuckets">${solr.ulog.numVersionBuckets:65536}</int>
</updateLog>
1000
We have a field named "attachmentnames":
We POST to add data to Solr v4.7 and Solr v5.3.2 respectively. The
attachmentnames are in the sequence 789, 456, 123:
{
  "add": {
    "overwrite": true,
    "doc": {
      "id": "1",
I have read the articles below, but did not find jetty.home/start.ini in the
solr/server folder, and there is no etc/jetty-jmx.xml config file.
http://www.eclipse.org/jetty/documentation/current/jmx-chapter.html
http://wiki.apache.org/solr/SolrJmx
I'm using Solr v8.5.1 in SolrCloud mode, enabled it in solrconfig.xml,
and added these variables in solr.in.sh to enable JMX:
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.local.only=false
-Dcom.sun.management.jmxremote.ssl=false
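For a remote JMX connection, a fixed port usually has to be set as well; a sketch with illustrative values (the port number and hostname here are assumptions, not from the original message):

```
-Dcom.sun.management.jmxremote.port=18983
-Dcom.sun.management.jmxremote.rmi.port=18983
-Dcom.sun.management.jmxremote.authenticate=false
-Djava.rmi.server.hostname=your.solr.host
```

Pinning the RMI port to the same value keeps only one port to open in the firewall, and java.rmi.server.hostname matters on multi-homed hosts.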
https://issues.apache.org/jira/browse/SOLR-7982
We have a SolrCloud with 3 ZooKeeper and 5 Solr servers.
We created collection1 and collection2, each with 80 shards, in the
cloud; replicationFactor is 2.
But after creation, we found that within the same collection, the coreNodeName
has some duplicates in
Opened a JIRA - https://issues.apache.org/jira/browse/SOLR-7947
A SolrCloud with 2 Solr nodes in Tomcat servers on 2 VM servers. After
restarting one Solr node, the cores on it turn to the down state and the logs
show the errors below.
Logs are in the attachment: solr.zip
For the _version_ field in schema.xml, do we need to set it to
docValues=true?
<field name="_version_" type="long" indexed="true" stored="true"/>
As we noticed there are FieldCache entries for _version_ in the Solr stats:
http://lucene.472066.n3.nabble.com/file/n4212123/IMAGE%245A8381797719FDA9.jpg
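A sketch of the docValues variant (this requires a full re-index of the collection; the other attributes mirror the field definition quoted above):

```xml
<!-- With docValues, sorting and version lookups read the on-disk
     column store instead of populating the FieldCache on the heap. -->
<field name="_version_" type="long" indexed="true" stored="true" docValues="true"/>
```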
For the fieldCache, what determines the entries_count?
Does each search request containing a sort on a non-docValues field
contribute one entry to the entries_count?
For example, search A ( q=owner:1&sort=maildate asc ) and search B (
q=owner:2&sort=maildate asc ) will contribute 2 field cache
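As far as I understand, Lucene's FieldCache is keyed per field and per index segment, not per query, so both example sorts on maildate should share the same uninverted entries. A schema sketch that avoids the FieldCache for sorting entirely (the field and type names are assumptions, and changing this requires re-indexing):

```xml
<!-- docValues lets sorting read column-oriented values from disk
     instead of uninverting maildate into the heap-resident FieldCache. -->
<field name="maildate" type="tdate" indexed="true" stored="true" docValues="true"/>
```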
We have the same issue as this JIRA:
https://issues.apache.org/jira/browse/SOLR-6156
I have posted my query, response and Solr logs to the JIRA.
Could anyone please take a look? Thanks!
Can we have [core name] in each log entry?
It's hard for us to know the exact core having such an issue, and the
sequence, if there are too many cores in a Solr node in a SolrCloud env.
I posted the request to this JIRA ticket.
https://issues.apache.org/jira/browse/SOLR-7434
Thanks Ramkumar!
Understood. We will try 100, 10.
But with our original steps, under which we found the exception, can we say
that the patch has some issue?
1. Put the patch on all 5 running Solr servers (Tomcat) by replacing
tomcat/webapps/solr/WEB-INF/lib/solr-core-4.7.0.jar with the patched
https://issues.apache.org/jira/browse/SOLR-6359
I also posted the questions to the JIRA ticket.
We have a SolrCloud with 5 Solr servers running Solr 4.7.0. There is one
collection with 80 shards (2 replicas per shard) on those 5 servers. And we
made a patch by merging the patch
Yes, I also suspect the patch. When I reverted the patch back to the original
.jar file, that issue no longer occurred.
But if the values can only be 100, 10, is there any difference from not having
that patch? Can we enlarge those 2 values? Thanks!
Hi, all.
We have some questions about commit/softCommit and the caches.
We understand that a soft commit will open a new searcher. Will the
filterCache be invalidated after a soft commit is done?
And also for a hard commit: if we do a commit with openSearcher=true, will the
filterCache be invalidated?
Thanks!
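For reference, a sketch of the relevant solrconfig.xml knobs: any commit that opens a new searcher starts that searcher with an empty filterCache unless entries are autowarmed (the sizes and times here are illustrative, not recommendations):

```xml
<!-- autowarmCount re-executes the most recently used filter entries
     against the new searcher after each searcher-opening commit. -->
<filterCache class="solr.FastLRUCache" size="512"
             initialSize="512" autowarmCount="128"/>

<!-- Soft commits always open a new searcher; hard commits only do so
     when openSearcher is true. -->
<autoSoftCommit>
  <maxTime>5000</maxTime>
</autoSoftCommit>
<autoCommit>
  <maxTime>60000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
```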
I have 2 Solr nodes (solr1 and solr2) in a SolrCloud.
After some issue happened, solr2 is in the recovering state. The peersync
cannot finish within about 15 min, so it turns to snappull.
But when it's doing the snap pull, it always hits the issue below. Meanwhile,
there are still update requests sent to
Thanks.
My env is 2 VMs with a good network connection, so I'm not sure why it
happened. We are trying to reproduce it. The peersync failure log is:
2014-07-25 06:30:48 AM
WARN
SnapPuller
Error in fetching packets
java.io.EOFException
at
I have opened a JIRA for it:
https://issues.apache.org/jira/browse/SOLR-6333
I have 2 Solr nodes (solr1 and solr2) in a SolrCloud.
After this issue happened, solr2 is in the recovering state. And after it takes
a long time to finish recovery, the issue occurs again, and it turns to
recovery again. It happens again and again.
ERROR - 2014-08-04 21:12:27.917;