I got this working - the errors were due to a mistake in letter case - I was
using 'datasource' instead of 'dataSource' in the entity that uses
XPathEntityProcessor. The attribute was therefore being ignored and the
entity was inheriting the JDBC dataSource of the parent entity.
I am pasting the complete
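In case it helps anyone hitting the same thing, the corrected shape looks
roughly like this (entity, field, and data-source names here are invented for
illustration, not the original config) - note the camel-case dataSource
attribute on the inner entity:

```xml
<dataConfig>
  <!-- JDBC source for the parent entity, field-reader source for the XML column -->
  <dataSource name="jdbcSource" driver="org.postgresql.Driver"
              url="jdbc:postgresql://localhost/db" user="solr" password="secret"/>
  <dataSource name="fieldSource" type="FieldReaderDataSource"/>
  <document>
    <entity name="parent" dataSource="jdbcSource"
            query="SELECT id, xml_col FROM docs">
      <!-- Must be dataSource (camel case); a lowercase 'datasource' attribute
           is silently ignored and the entity inherits the parent's JDBC source -->
      <entity name="xmlPart"
              processor="XPathEntityProcessor"
              dataSource="fieldSource"
              dataField="parent.xml_col"
              forEach="/record">
        <field column="title" xpath="/record/title"/>
      </entity>
    </entity>
  </document>
</dataConfig>
```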
I had the very same issue,
because I had some document with a redundant field, and I was using the
Infix Suggester as well.
Because the Infix Suggester returns the whole field content, if you have
duplicated fields across your docs, you will see duplicate suggestions.
Do you have any intermediate
Our web site is created using PaperThin's CommonSpot CMS in a ColdFusion 10 and
Windows Server 2008 R2 environment, using Apache Solr 4.10.4 instead of CF
Solr. We create collections through the CMS interface and they do appear in
both the CMS and the Solr dashboard when created. However, when
Hi,
I created a *token concat filter* to concatenate all the tokens from the
token stream. It creates the concatenated token as expected.
But when I post an XML file containing more than 30,000 documents, only
the first document has data in that field.
Schema:
<field name="titlex"
This is the error cause reported. I also see that it has been reported earlier
(http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201103.mbox/%3cd0f0d26c-3ac0-4982-9e2b-09dc96937...@535consulting.com%3E)
but could not find a solution.
I am nesting the FieldReaderDataSource within the
Hi Advait,
First of all I suggest you study Solr a little bit [1], because your
requirements are actually really simple:
1) You can simply use more than one suggest dictionary if you want to keep
the suggestions separated (tracking whether a term is coming from the name
or from the category)
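A sketch of what two separate dictionaries might look like in solrconfig.xml
(component, dictionary, and field names are made up):

```xml
<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">nameSuggester</str>
    <str name="lookupImpl">AnalyzingInfixLookupFactory</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <str name="field">name</str>
    <str name="suggestAnalyzerFieldType">text_general</str>
  </lst>
  <lst name="suggester">
    <str name="name">categorySuggester</str>
    <str name="lookupImpl">AnalyzingInfixLookupFactory</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <str name="field">category</str>
    <str name="suggestAnalyzerFieldType">text_general</str>
  </lst>
</searchComponent>
```

You would then pick a dictionary per request with
suggest.dictionary=nameSuggester (or categorySuggester).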
Hi,
We run an ecommerce company and would like to use Solr for our product database
searches.
We have products along with the categories that they belong to. In case the
product belongs to more than 1 category, we have a comma separated field of
categories.
How do we do auto complete on -
On 6/18/2015 8:05 AM, Bence Vass wrote:
Is there any documentation on how to start Solr 5.2.1 on Solaris (Solaris
10)? The script (solr start) doesn't work out of the box; is anyone running
Solr 5.x on Solaris?
I think the biggest problem on Solaris will be the options used on the
ps
Please help - what am I doing wrong here? Please guide me.
With Regards
Aman Tandon
On Thu, Jun 18, 2015 at 4:51 PM, Aman Tandon amantandon...@gmail.com
wrote:
Hi,
I created a *token concat filter* to concatenate all the tokens from the
token stream. It creates the concatenated token as expected.
Hello,
Is there any documentation on how to start Solr 5.2.1 on Solaris (Solaris
10)? The script (solr start) doesn't work out of the box; is anyone running
Solr 5.x on Solaris?
- Thanks
We would like more information, but the first thing I notice is that it
would hardly make any sense to use a string type for file content.
Can you give more details about the exception ?
Have you debugged a little bit ?
How does the solr input document look before it is sent to Solr ?
Furthermore
Hi,
I want to log Solr search query/response times and the Solr indexing log
separately, in different sets of log files.
Is there any convenient framework/way to do it?
Thanks
Bharath
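One hedged sketch of how the split could be done with the log4j.properties
file that Solr 5.x ships with (the logger names below are my guess at the
relevant ones - confirm them against the class names you actually see in your
solr.log before relying on this):

```properties
# Separate appenders for query and indexing logs (file paths are illustrative)
log4j.appender.QUERY=org.apache.log4j.RollingFileAppender
log4j.appender.QUERY.File=logs/solr_query.log
log4j.appender.QUERY.layout=org.apache.log4j.PatternLayout
log4j.appender.QUERY.layout.ConversionPattern=%d{ISO8601} %-5p %c %m%n

log4j.appender.INDEX=org.apache.log4j.RollingFileAppender
log4j.appender.INDEX.File=logs/solr_index.log
log4j.appender.INDEX.layout=org.apache.log4j.PatternLayout
log4j.appender.INDEX.layout.ConversionPattern=%d{ISO8601} %-5p %c %m%n

# Request/response lines are logged by SolrCore; updates by the update package
log4j.logger.org.apache.solr.core.SolrCore=INFO, QUERY
log4j.additivity.org.apache.solr.core.SolrCore=false
log4j.logger.org.apache.solr.update=INFO, INDEX
log4j.additivity.org.apache.solr.update=false
```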
Hi everyone,
I just upgraded from 5.1.0 to 5.2.1 and noticed a behavior change which I
consider a bug.
In my solrconfig.xml, I have the following:
<!-- <schemaFactory class="ClassicIndexSchemaFactory"/> -->
<schemaFactory class="ManagedIndexSchemaFactory">
  <bool name="mutable">true</bool>
  <str
Hello,
I'm using Solr to pull information from a Database and a file system
simultaneously. The database houses the file path of the file in the file
system. It pulls all of those just fine. In fact, it combines the metadata
from the database and the metadata from the file system great. The
On 6/18/2015 8:10 AM, Steven White wrote:
In 5.1.0 (and maybe prior versions?) when I enable the managed schema per
the above, the existing schema.xml file is left as-is, a copy of it is
created as schema.xml.bak, and a new one is created based on the name I
gave it (my-schema.xml).
With 5.2.1
Thanks :)
exactly what I was looking for... as I only need to create the signature
once, this works perfectly for me :)
Cheers,
Markus
Sent from my iPhone
On 17.06.2015, at 20:32, Shalin Shekhar Mangar shalinman...@gmail.com wrote:
Comments inline:
On Wed, Jun 17, 2015 at 3:18 PM,
USING Solr 5.1.0
This is the schema file
<?xml version="1.0" encoding="UTF-8" ?>
<schema name="example" version="1.5">
<field name="_version_" type="long" indexed="true" stored="true"/>
<field name="_root_" type="string" indexed="true" stored="false"/>
<field name="id" type="string" indexed="true" stored="true"
See particularly the ADDREPLICA command and the
node parameter. You might not even need the node
parameter since when you add a replica Solr does its
best to put the new replica on an underutilized node.
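For example, an ADDREPLICA call looks like this (collection, shard, and node
names are placeholders):

```
http://host:8983/solr/admin/collections?action=ADDREPLICA&collection=mycollection&shard=shard1&node=192.168.1.5:8983_solr
```

Leaving off the node parameter lets Solr pick the node itself.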
Best,
Erick
On Thu, Jun 18, 2015 at 2:58 PM, Shawn Heisey apa...@elyograg.org wrote:
On
The stack trace is what gets returned to the client, right? It's often
much more informative to look at the Solr log output; the error message
there is usually more helpful. By the time the exception bubbles up
through the various layers, vital information is sometimes not returned
to the client in
No clue whatsoever - you haven't provided nearly enough details. I rather
doubt that many people
on this list really understand the interactions of that technology
stack, I certainly don't.
I'd ask on the ColdFusion list, as they're (apparently) the ones
who've integrated a Solr
connector of sorts.
Hi Steve,
you never set exhausted to false, and when the filter got reused, *it
incorrectly carried state from the previous document.*
Thanks for replying, but I am not able to understand this.
With Regards
Aman Tandon
On Fri, Jun 19, 2015 at 10:25 AM, Steve Rowe sar...@gmail.com wrote:
Aman,
My version won’t produce anything at all, since incrementToken() always returns
false…
I updated the gist (at the same URL) to fix the problem by returning true from
incrementToken() once and then false until reset() is called. It also handles
the case when the concatenated token is
I'm implementing an auto-suggest feature in Solr, and I'd like to achieve
the following:
For example, if the user enters mp3, Solr might suggest mp3 player,
mp3 nano and mp3 music.
When the user enters mp3 p, the suggestion should narrow down to mp3
player.
Currently, when I type mp3 p, the
Hi Aman,
The admin UI screenshot you linked to is from an older version of Solr - what
version are you using?
Lots of extraneous angle brackets and asterisks got into your email and made
for a bunch of cleanup work before I could read or edit it. In the future,
please put your code somewhere
Hello,
I'm a Solr user with a question. I want to append new data to the
existing index. Does Solr support appending new data to an index?
Thanks for any reply.
Best wishes.
Jason
Yes I just saw.
With Regards
Aman Tandon
On Fri, Jun 19, 2015 at 10:39 AM, Steve Rowe sar...@gmail.com wrote:
Aman,
My version won’t produce anything at all, since incrementToken() always
returns false…
I updated the gist (at the same URL) to fix the problem by returning true
from
Aman,
Solr uses the same Token filter instances over and over, calling reset() before
sending each document through. Your code sets "exhausted" to true and then
never sets it back to false, so the next time the token filter instance is
used, its "exhausted" value is still true, so no input
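The fix Steve describes follows the standard Lucene TokenFilter pattern; as a
sketch only (this is not the original filter, and the actual concatenation
logic is elided):

```java
public final class ConcatFilter extends TokenFilter {
  private boolean exhausted = false;

  protected ConcatFilter(TokenStream input) {
    super(input);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (exhausted) {
      return false;
    }
    // ... consume input.incrementToken() in a loop and build the
    // concatenated token here ...
    exhausted = true;
    return true;
  }

  @Override
  public void reset() throws IOException {
    super.reset();
    exhausted = false; // without this line, a reused instance emits nothing
  }
}
```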
You've repeated your original statement. Shawn's
observation is that 10M docs is a very small corpus
by Solr standards. You either have very demanding
document/search combinations or you have a poorly
tuned Solr installation.
On reasonable hardware I expect 25-50M documents to have
sub-second
Hi,
We would probably like to shard the data, since the response time for
demanding queries at 10M records is reaching 1 second even in a
single-request scenario.
I have not done any data sharding before. What are some recommended ways
to shard the data? For example, maybe by a criteria with a list
Hi Dmitry,
It’s weird that start and end offsets are the same - what do you see for the
start/end of ‘$’, i.e. if you take out MCFF? (I think it should be start:5,
end:6.)
As far as offsets “respecting the remapped token”, are you asking for offsets
to be set as if ‘dollarsign' were part of
Just adding a little more information as it comes in. I changed the
field type in the schema to text_general and that didn't change a thing.
Another thing is that it's consistently submitting/not submitting the same
documents. I will run over it one time and it won't index a set of
Hi,
Let's say I have a zookeeper ensemble with several Solr nodes connected to it.
I've created a collection successfully and all is well.
What happens when I want to add another solr node?
I've tried spinning one up and connecting it to zookeeper, but the new node
doesn't join the
The query without load is still under 1 second, but under load the
response time can be much longer due to queued-up queries.
We would like to shard the data to something like 6M documents per shard,
which should still give an under-1-second response time under load.
What are some best practices to shard the
10M doesn't sound too demanding.
How complex are your queries?
How complex is your data - like number of fields and size, like very large
documents?
Are you sure you have enough RAM to fully cache your index?
Are your queries compute-bound or I/O bound? If I/O-bound, get more RAM. If
On 6/18/2015 3:23 PM, Jim.Musil wrote:
Let's say I have a zookeeper ensemble with several Solr nodes connected to
it. I've created a collection successfully and all is well.
What happens when I want to add another solr node?
I've tried spinning one up and connecting it to zookeeper, but the
Hi Erick,
In that issue you forwarded to me, they want to make one token from all
the tokens received from the token stream, but in my case I want to keep
the tokens the same and create an extra new token which is the
concatenation of all the tokens.
I'd guess, is the case
here. I mean do you really want to
Hi,
It looks like MappingCharFilter sets the start and end offsets to the same
value. Can this be affected by some setting?
For the string "test $ test2" and the mapping "$" => "dollarsign" (we
insert an extra space to separate $ into its own token)
we get: http://snag.gy/eJT1H.jpg
Ideally, we would like to
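For context, the analysis chain being described presumably looks something
like this (the field type and mapping file names are invented):

```xml
<fieldType name="text_mapped" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <!-- mapping.txt contains a rule like: "$" => " dollarsign " -->
    <charFilter class="solr.MappingCharFilterFactory" mapping="mapping.txt"/>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
  </analyzer>
</fieldType>
```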
Hi,
We created a new phonetic filter. It is working great on our products;
most of our suppliers are Indian, and it is quite helpful for us to
provide exact results, e.g.:
1) rikshaw - still able to find the suppliers of rickshaw
2) telefone - still able to find the suppliers of telephone
We
Hello,
I have a question about the extended dismax query parser. If the default
operator is changed to AND (q.op=AND) then the search results seem to be
incorrect. I will explain it with some examples. For this test I use Solr
v5.1 and the tika core from the example directory.
== Preparation ==
Hi Aman,
https://wiki.apache.org/solr/HowToContribute
HTH
On Thu, Jun 18, 2015 at 12:11 PM, Aman Tandon amantandon...@gmail.com
wrote:
Hi,
We created the new phonetic filter, It is working great on our products,
mostly of our suppliers are Indian, it is quite helpful for us to provide
the
http://localhost:8983/solr/col/select?q=*:*&sfield=geolocation&pt=26.697,83.1876&facet.query={!frange%20l=0%20u=50}geodist()&facet.query={!frange%20l=50.001%20u=100}geodist()&wt=json
I am not getting facet results.
schema:
<field name="geolocation" type="location" indexed="true" stored="true"/>
<dynamicField
isn't facet=true necessary?
On Thu, Jun 18, 2015 at 12:03 PM, Midas A test.mi...@gmail.com wrote:
http://localhost:8983/solr/col/select?q=*:*&sfield=geolocation&pt=26.697,83.1876&facet.query={!frange%20l=0%20u=50}geodist()&facet.query={!frange%20l=50.001%20u=100}geodist()&wt=json
I am not
Hi,
I am using Solr 5.1. I'm getting duplicate suggestions when using my Solr
suggester. I'm using AnalyzingInfixLookupFactory with
DocumentDictionaryFactory. Can I configure it to return only distinct
suggestions?
here are details about my configuration:
from schema.xml: <searchComponent
If he has not put any appends or invariants in the request handler,
facet=true is mandatory to activate facets.
I haven't tried those specific facet queries.
I hope the problem is not simply that he didn't activate faceting...
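Concretely, if that is the cause, the fix is just adding facet=true to the
request:

```
http://localhost:8983/solr/col/select?q=*:*&sfield=geolocation&pt=26.697,83.1876&facet=true&facet.query={!frange%20l=0%20u=50}geodist()&facet.query={!frange%20l=50.001%20u=100}geodist()&wt=json
```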
2015-06-18 10:35 GMT+01:00 Mikhail Khludnev