Hi Peter,
> Hi Andrea,
> I changed it to:
> When I run full-import 0 documents are indexed, but no errors in the
> console.
That's the reason why you can't see facets and errors: 0 documents are
indexed.
> When I run my query via MySQL Workbench the statement executes correctly.
Once I spent a who
Thanks Shalin.
I am facing an issue while replicating: while the replication (very large
index, ~100 GB) is happening, I am also indexing, and I believe the
segments_N file is changing because of new commits. So would the
replication fail if the filename is different from what it found
w
On 1/2/2014 10:22 PM, gpssolr2020 wrote:
> Caused by: java.lang.RuntimeException: Invalid version (expected 2, but 60)
> or the data in not in 'javabin' format
> (Account:123+AND+DATE:["2013-11-29T00:00:00Z"+TO+"2013-11-29T23:59:59Z"])+OR+
> (Account:345+AND+DATE:["2013-11-29T00:00:00Z"+TO+"2013
60 in ASCII is '<'. Is it returning XML? Or maybe an error message?
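If the client is SolrJ and the endpoint really is returning XML, one
workaround (a sketch, assuming SolrJ 4.x and a core named "collection1";
URL and core name are placeholders) is to switch the client off the default
javabin response parser:

import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.impl.XMLResponseParser;

public class DeleteRecords {
    public static void main(String[] args) throws Exception {
        HttpSolrServer server =
            new HttpSolrServer("http://localhost:8983/solr/collection1");
        // SolrJ expects javabin by default; parse XML responses instead.
        server.setParser(new XMLResponseParser());
        server.deleteByQuery(
            "(Account:123 AND DATE:[2013-11-29T00:00:00Z TO 2013-11-29T23:59:59Z])");
        server.commit();
        server.shutdown();
    }
}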
wunder
On Jan 2, 2014, at 9:22 PM, gpssolr2020 wrote:
> Hi,
>
> We are getting the below error message while trying to delete 30k records
> from solr.
>
> Error occured while invoking endpoint on Solr:
> org.apache.solr.cli
Hi,
I am hitting this error on replication. Can somebody please tell me
what's wrong here and what can be done to correct it:
[explicit-fetchindex-cmd] ERROR
org.apache.solr.handler.ReplicationHandler- SnapPull failed
:org.apache.solr.common.SolrException: Unable to download _av3.f
Hi,
We are getting the below error message while trying to delete 30k records
from solr.
Error occured while invoking endpoint on Solr:
org.apache.solr.client.solrj.SolrServerException: Error executing query
at
org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:
This is a neat idea, but could be too close to lucene/etc.
You could jump up one level in the stack and use Redis/memcache as a
distributed HTTP cache in conjunction with Solr's HTTP caching and a proxy.
I tried doing this myself with Nginx, but I forgot what issue I hit - I
think "misses" needed
Hi Chris,
> but also exactly what response you got
I didn't get any response. Even with debug=true, there was nothing at all
printed after the curl command. Nothing in the Solr log file either. (Are
there higher debug levels for the Solr log?) That was the reason I thought I
needed to add
JoinQParse
Hi Suren,
Unfortunately what you did is not the way to activate it. You need to add
ComplexPhrase-4.2.1.jar inside the solr.war file and re-package solr.war.
There is a detailed ReadMe.txt inside the zip-ball. Please follow the
instructions written in ReadMe.txt.
Ahmet
On Friday,
The defType parameter applies only to the q parameter, not to fq, so you
will need to explicitly give the query parser for fq:
fq={!queryparsername}filterquery
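For example, with the parser name used elsewhere in this thread:
fq={!unorderedcomplexphrase}"some phrase here"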
-- Jack Krupansky
-Original Message-
From: suren
Sent: Thursday, January 02, 2014 7:32 PM
To: solr-user@lucene.apache.org
Su
Ahmet,
It did not solve the problem. I added
1) ComplexPhrase-4.2.1.jar to my local solr
"solr-4.3.1\example\solr\collection1\lib"
2) added the below content to "solrconfig.xml":
<queryParser name="unorderedcomplexphrase"
             class="org.apache.solr.search.ComplexPhraseQParserPlugin">
  <bool name="inOrder">false</bool>
</queryParser>
3) restarted Solr, appended the query param
"defType=unorderedcomplexphrase" and ran the query
Hi,
I have decided to upgrade my index to Solr 4.6.0. It's quite small now
(~40,000 simple docs). It should be optimized for efficient search and
regular inserts (10-20 new docs every few minutes).
I followed the manual and have prepared an index divided into 2 shards, 4
replicas each.
When I'
: KeepIndexed
Unless you are using Solr 3.1 (or earlier) "update.processor" is a
meaningless request param ... the param name you need to specify is
"update.chain"...
https://wiki.apache.org/solr/UpdateRequestProcessor
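For example, to pick a chain per request (the chain name here is
hypothetical):
http://localhost:8983/solr/update?update.chain=mychain
It can also be set as a default on the /update handler in solrconfig.xml.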
If you can tell us where you saw i
Hi Andrea,
I changed it to:
When I run full-import 0 documents are indexed, but no errors in the
console.
When I run my query via MySQL Workbench the statement executes correctly.
How else can I debug the index process?
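One option, assuming the dataimport handler path from your earlier URL, is
DIH's debug mode, which echoes the documents and SQL it processes:
http://localhost:8983/solr/wordpress/dataimport?command=full-import&debug=true&verbose=true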
Then that means DIH is not populating the field.
I guess if you set required=true on your field you will get an error
during indexing.
Try to debug the index process and/or run the queries outside Solr in order
to see the results, whether field names match, and so on.
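For example, a hypothetical schema.xml declaration (the string type is an
assumption):
<field name="cat_name_raw" type="string" indexed="true" stored="true" required="true"/>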
Best,
Andrea
I get "Sorry, no Term Info
I get "Sorry, no Term Info available :("
: I managed to assign the individual cores to a collection using the collection
: API to create the collection and then the solr.xml to define the core(s) and
: its collection. This *seemed* to work. I even test-indexed a set of documents
: checking totals before and after as well as content. Aga
Hi Peter,
Go to your Solr admin page and select your core. Hit the schema-browser URL
and select the cat_name_raw field. Example URL:
http://localhost:8983/solr/#/crawl/schema-browser
Push the 'Load Term Info' button; do you see some data there?
ahmet
On Thursday, January 2, 2014 9:23 PM, PeterK
Hi Ahmet,
I tried this URL:
http://localhost:8983/solr/wordpress/select/?indent=on&facet=true&sort=post_modified%20desc&q=*:*&start=0&rows=10&fl=id,post_title,cat_name*&facet.field=cat_name_raw&facet.mincount=1
and this URL:
http://localhost:8983/solr/wordpress/select/?indent=on&facet=true&sort
Hi Peter,
I think you meant faceting on the cat_name_raw field:
facet=true&facet.field=cat_name_raw
Second, check whether that field is actually populated, either via the URL
below or at the Schema Browser link.
q=*:*&fl=cat_name*
Ahmet
On Thursday, January 2, 2014 8:45 PM, PeterKerk wrote:
I've set up Solr with MySQL.
My data import is successful:
http://localhost:8983/solr/wordpress/dataimport?command=full-import
However, when trying to get the cat_name facets all facets are empty:
http://localhost:8983/solr/wordpress/select/?indent=on&facet=true&sort=post_modified%20desc&q=*:*&sta
On 01/02/2014 12:44 PM, Chris Hostetter wrote:
: Not really ... uptime is irrelevant because they aren't in production. I just
: don't want to spend the time reloading 1TB of documents.
terminology confusion: you mean you don't want to *reindex* all of the
documents ... in Solr, "reloading" a c
: Not really ... uptime is irrelevant because they aren't in production. I just
: don't want to spend the time reloading 1TB of documents.
terminology confusion: you mean you don't want to *reindex* all of the
documents ... in Solr, "reloading" a core means something specific &
different from w
On 1/2/2014 9:48 AM, elmerfudd wrote:
> ok I'm really lost...
> I'm trying to set a custom update handler,
> then trying to index a file - but the field won't change!
> how come?
At a quick glance, it all looks right, but I could be completely wrong
about that. I made my custom update chain the def
ok I'm really lost...
I'm trying to set a custom update handler and
ran into this:
package my.solr;
// imports were here - deleted for brevity.
public class myPlugin extends UpdateRequestProcessorFactory
{
@Override
public UpdateRequestProcessor getInstance(SolrQueryRequest req,
SolrQueryResponse rsp, UpdateRequestProcessor next)
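For reference, a complete factory along these lines might look like the
sketch below (the marker field set in processAdd is hypothetical); it also
needs to be registered in an updateRequestProcessorChain in solrconfig.xml
and selected via update.chain or default="true":

package my.solr;

import java.io.IOException;

import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.response.SolrQueryResponse;
import org.apache.solr.update.AddUpdateCommand;
import org.apache.solr.update.processor.UpdateRequestProcessor;
import org.apache.solr.update.processor.UpdateRequestProcessorFactory;

public class myPlugin extends UpdateRequestProcessorFactory
{
    @Override
    public UpdateRequestProcessor getInstance(SolrQueryRequest req,
            SolrQueryResponse rsp, UpdateRequestProcessor next)
    {
        return new UpdateRequestProcessor(next) {
            @Override
            public void processAdd(AddUpdateCommand cmd) throws IOException {
                SolrInputDocument doc = cmd.getSolrInputDocument();
                // Hypothetical example: stamp a marker field on every document.
                doc.setField("processed_b", true);
                super.processAdd(cmd); // hand off to the rest of the chain
            }
        };
    }
}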
I am trying to set up Solr with HDFS following this wiki:
https://cwiki.apache.org/confluence/display/solr/Running+Solr+on+HDFS
My Setup:
***
VMWare: Cloudera Quick Start VM 4.4.0-1 default setup (only hdfs1,
hive1, hue1, mapreduce1 and zookeeper1 are running)
http://www.cloudera.com/conten
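From the cwiki page above, the heart of the setup is starting Solr with the
HDFS directory factory; roughly (the NameNode host/port and path are
placeholders for the QuickStart VM):

java -Dsolr.directoryFactory=HdfsDirectoryFactory
     -Dsolr.lock.type=hdfs
     -Dsolr.hdfs.home=hdfs://localhost:8020/solr
     -jar start.jar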
You touched on an interesting point. I am really wondering whether a
quick-win scenario is even possible. But what would be the advantage of
using Redis to keep the Solr cache if each node kept its own Redis cache?
2013/12/29 Upayavira
> On Sun, Dec 29, 2013, at 02:35 PM, Alexander Ramos Jardim wrote
Hi,
Is there a way to do an exact-match search on a tokenized field?
I have a scenario in which I need a field to be indexed and searchable
regardless of the case or white space used. For this, I created a custom
field type with the following configuration:
Even usi
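A common field type for this, sketched here as an assumption rather than
the poster's actual configuration, keeps the whole value as a single token
and normalizes case and surrounding whitespace:

<fieldType name="string_exact_ci" class="solr.TextField" sortMissingLast="true">
  <analyzer>
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.TrimFilterFactory"/>
  </analyzer>
</fieldType>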
On 01/02/2014 08:29 AM, michael.boom wrote:
> Hi David,
> "They are loaded with a lot of data so avoiding a reload is of the utmost
> importance."
> Well, reloading a core won't cause any data loss. Is 100% availability
> during the process what you need?
Not really ... uptime is irrelevant becaus
If I want to do it, do I have to work with the source-code version of Solr?
If so, where is the default search handler located?
Hi David,
"They are loaded with a lot of data so avoiding a reload is of the utmost
importance."
Well, reloading a core won't cause any data loss. Is 100% availability
during the process what you need?
-
Thanks,
Michael
Hi,
I have a few cores on the same machine that share the schema.xml and
solrconfig.xml from an earlier setup. Basically from the older
distribution method of using
shards=localhost:1234/core1,localhost:1234/core2[,etc]
for searching.
They are unique sets of documents, i.e., no overlap of
Replications won't run concurrently. They are scheduled at a fixed
rate and if a particular pull takes longer than the time period then
subsequent executions are delayed until the running one finishes.
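For context, that fixed rate is the slave's pollInterval in solrconfig.xml;
a sketch (the master URL is a placeholder):

<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master-host:8983/solr/replication</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>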
On Tue, Dec 31, 2013 at 4:46 PM, anand chandak wrote:
> Quick question about solr replication :
Hi,
Absolutely!
In case of SPM, you'd put the little SPM Client on each node you want to
monitor. For shipping logs you can use a number of methods -
https://sematext.atlassian.net/wiki/display/PUBLOGSENE/
Otis
--
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Sup
Is it possible to monitor all the nodes in a SolrCloud together, that way?