All of the text_en, text_es entries in schema.xml are field
types. These field types are different ways of parsing and searching
free text, appropriate for English, Spanish, etc.
You have to use text_en instead of text as the field type. This
will do a good job of searching English-language text.
The files in solr/example/solr/conf are an example of how to do a
schema. You want the 'location' type for your lat/long data. This is
one field storing both values with some custom geographic search code.
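For example, the location type in that example schema looks roughly like this (the field name latlng and the tdouble subfield type are assumptions for illustration):

```xml
<!-- One logical lat/long field; LatLonType splits the value into
     *_coordinate subfields internally for range queries -->
<fieldType name="location" class="solr.LatLonType" subFieldSuffix="_coordinate"/>

<field name="latlng" type="location" indexed="true" stored="true"/>
<dynamicField name="*_coordinate" type="tdouble" indexed="true" stored="false"/>
```

You would then index a value like "45.17,-93.87" into latlng and query it with the spatial functions.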
On Wed, May 9, 2012 at 10:28 PM, Jack Krupansky j...@basetechnology.com wrote:
And you
Hi,
Apache servers are returning my post with the status messages
HTML_FONT_SIZE_HUGE,HTML_MESSAGE,HTTP_ESCAPED_HOST,NORMAL_HTTP_TO_IP,RCVD_IN_DNSWL_LOW,SPF_NEUTRAL,URI_HEX,WEIRD_PORT.
I've tried clearing all formatting and a re-post, but the same thing
occurred. What to do?
Regards,
Hi,
I've applied the patch from
https://issues.apache.org/jira/browse/SOLR-2604 to Solr 3.5. It works,
but noticeably slows down the query time. Has anyone already solved
this problem?
Cheers,
Valeriy
Hi,
Thanks, Sujatha, for your response.
I tried to create the core as per the blog URL that you gave. But in
that
mkdir -p /etc/solr/conf/$name/conf
cp -a /etc/solr/conftemplate/* /etc/solr/conf/$name/conf/
sed -i "s/CORENAME/$name/" /etc/solr/conf/$name/conf/solrconfig.xml
curl
Right, for Long/Lat I found this information:
-Long / Lat Field Type-
<fieldType name="location" class="solr.LatLonType"
    subFieldSuffix="_coordinate"/>
-Fields-
<field name="latlng" type="location" indexed="true" stored="true"/>
<field name="latlng_0_coordinate" type="double" indexed="true"
Hi Andre,
qs is used when you have an explicit phrase query (you need to use quotes
for this) in your search string, e.g.:
q="lisboa tipos"&qs=1
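To illustrate, a sketch of a full request using qs with the edismax parser (the host, port, and qs value are illustrative):

```
http://localhost:8983/solr/select?defType=edismax&q="lisboa tipos"&qs=2
```

With qs=2, the terms of the quoted phrase may appear up to 2 positions apart and still match.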
--- On Wed, 5/9/12, André Maldonado andre.maldon...@gmail.com wrote:
From: André Maldonado andre.maldon...@gmail.com
Subject: EDisMax and Query Phrase Slop
I am a newbie in this Solr thing, but with your advice I am on track now
(sort of).
It seems that the Lucene community is responsive, and fortunately it
doesn't turn its back on newbies!
Thank you guys,
Tom
--
View this message in context:
Hi sujatha,
Basically I just want to explain the use case. The use case is
described below:
1. Create a VM running solr, with one core per customer
2. Index all of each customer's data (config text, metadata, etc) into
a single core
3. Create one fake partner per 30
I have full text in my database and I am indexing it using Solr. Now at
runtime, i.e. while the indexing is going on, can I extract certain
parameters based on a regex and have Solr create another field/column on
the fly for that extracted text?
For example my DB has just 2 columns (DocId
You can use Regex Transformer to extract from a source field.
See:
http://wiki.apache.org/solr/DataImportHandler#RegexTransformer
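For illustration, a minimal data-config.xml sketch of the RegexTransformer (the table, column names, and regex here are made up, not from the original question):

```xml
<entity name="doc" transformer="RegexTransformer"
        query="SELECT DocId, FullText FROM docs">
  <field column="DocId" name="id"/>
  <!-- At index time, copy the first capture group matched in FullText
       into a new field named extracted_param -->
  <field column="extracted_param" regex="param=(\w+)" sourceColName="FullText"/>
</entity>
```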
-- Jack Krupansky
-Original Message-
From: Husain, Yavar
Sent: Thursday, May 10, 2012 6:04 AM
To: solr-user@lucene.apache.org
Subject: Solr On Fly Field
Thanks Jack.
I tried it (the RegexTransformer) out and the indexing has become really
slow. Is it (the RegexTransformer) slower than N-Gram indexing? They may
be apples and oranges, but what I mean is: after extracting the field I
want to NGram-index it. So it seems going in for NGram
Hi,
The whole idea of a score threshold is flawed in this situation.
Chris, you say yourself that you plan to let people subscribe to searches which
are known to have crappy results for perhaps the majority of hits, and there is
no automatic way of rectifying that.
Imagine a search for the
Dear,
I can't find how to define in my schema.xml a field with this format.
My original format is:
<exch:inventors>
<exch:inventor>
<exch:inventor-name>
<name>WEBER WALTER</name>
</exch:inventor-name>
<residence>
<country>CH</country>
</residence>
</exch:inventor>
<exch:inventor>
<exch:inventor-name>
<name>ROSSI
Hi :)
You could just add a field called country and then add the information
to your document.
Regards,
Gary L.
Le 10/05/2012 14:25, Bruno Mannina a écrit :
Dear,
I can't find how to define in my schema.xml a field with this format.
My original format is:
<exch:inventors>
<exch:inventor>
like that:
<field name="inventor-country">CH</field>
<field name="inventor-country">FR</field>
but in this case I lose the link between an inventor and its country?
If I search for an inventor named ROSSI with CH:
q=inventor:rossi AND inventor-country:CH
then I will get this result, but it's not correct because
Hello,
I am using Nutch 1.4 with Solr 3.6.0 and would like to get the HTML keywords
and description metatags indexed into Solr. On the Nutch side I have
followed http://wiki.apache.org/nutch/IndexMetatags to get Nutch parsing
and extracting the metatags (using index-metatags and
When you add data into Solr, you add documents which contain fields.
In your case, you should create a document for each of your inventors
with every attribute they could have.
Here is an example in Java:
SolrInputDocument doc = new SolrInputDocument();
doc.addField("inventor", "Rossi");
Am 10.05.2012 14:33, schrieb Bruno Mannina:
like that:
<field name="inventor-country">CH</field>
<field name="inventor-country">FR</field>
but in this case I lose the link between an inventor and its country?
Of course, you need to index the two inventors into two distinct documents.
Did you mark those
But I have more than 80,000,000 documents, with many fields with this
kind of description?!
i.e:
inventor
applicant
assignee
attorney
Must I create 4 documents for each document??
Le 10/05/2012 14:41, G.Long a écrit :
When you add data into Solr, you add documents which contain fields.
In
Did you mark those fields as multi-valued?
yes, I did.
You don't have to create a document per field. You have to create a
document per person.
If inventors, applicants, assignees and attorneys have properties in
common, you could have a model like :
<field name="name" ... />
<field name="country" ... />
<field name="occupation" ... />
...
Then you create a
I don't know the details of your schema, but I would create fields like
name, country, street etc., and a field named role, which contains
values like inventor, applicant, etc.
How would you do it otherwise? Create only four documents, each field
containing 80 million values?
Greetings,
Kuli
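To make the suggestion concrete, a schema.xml sketch of the one-document-per-person model (the field names follow the suggestions above; the types are assumptions):

```xml
<!-- One document per person; "role" distinguishes inventor, applicant,
     assignee, attorney. patent-number links people to their patent. -->
<field name="patent-number" type="string" indexed="true" stored="true"/>
<field name="name" type="text_general" indexed="true" stored="true"/>
<field name="country" type="string" indexed="true" stored="true"/>
<field name="role" type="string" indexed="true" stored="true"/>
```

A query like q=name:rossi AND country:CH AND role:inventor then matches only when the attributes belong to the same person.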
Hi all,
we've been running Solr 1.4 for about a year with no real problems. As
of Monday it became impossible to do a full import on our master
because of an OOM. Now what I think is strange is that even after we
more than doubled the available memory there would still always be an
OOM. We seem
Perhaps I am missing the obvious but our slaves tend to run out of
disk space. The index sizes grow to multiple times the size of the
master. So I just toss all the data and trigger a replication.
However, can't solr handle this for me?
I'm sorry if I've missed a simple setting which does this
I think I see what the problem is.
Correct me if I'm wrong but I guess your schema does not represent a
person but something which can contain a list of persons with different
attributes, right?
The problem is that you can't easily reproduce the hierarchy of
structured data. There is no
Actually I have documents like this one; the country of the inventor is
inside the field inventor.
It's not exactly an inventor notice, it's a patent notice with several
fields.
The patent-number field is the fieldkey.
Should I split my document and use the fieldkey to link them (like in a
normal database)?
Le 10/05/2012 15:12, G.Long a écrit :
I think I see what the problem is.
Correct me if I'm wrong but I guess your schema does not represent a
person but something which can contain a list of persons with
different attributes, right?
Yes exactly what I have ! (see my next message)
I don't know what the best solution is. You could indeed split your
documents and link them with the patent-number inside the same index. Or
you could also use different cores with a specific schema (one core with
the schema for the patent and one core with the schema for the inventor)
and
The problem is that you can't easily reproduce the hierarchy of
structured data. There is no attribute in a Lucene index as there can be
in an XML document. If your structured data is not too complex, you
could try to add a field to your schema called person and
concatenate all properties
You need to perform garbage collection tuning on your JVM to handle the OOM.
Sent from my iPhone
On May 10, 2012, at 21:06, Jasper Floor jasper.fl...@m4n.nl wrote:
Hi all,
we've been running Solr 1.4 for about a year with no real problems. As
of Monday it became impossible to do a full
Hi :)
In what unit of time is the QTime of a QueryResponse expressed? Is it
milliseconds?
Gary
Hi Jasper,
Solr does handle that for you. Some more stuff to share:
* Solr version?
* JVM version?
* OS?
* Java replication?
* Errors in Solr logs?
* deletion policy section in solrconfig.xml?
* merge policy section in solrconfig.xml?
* ...
You may also want to look at your Index report in SPM
Yes, milliseconds. --wunder
On May 10, 2012, at 8:57 AM, G.Long wrote:
Hi :)
In what unit of time is the QTime of a QueryResponse expressed? Is it
milliseconds?
Gary
Gary - milliseconds, right.
Otis
Performance Monitoring for Solr / ElasticSearch / HBase -
http://sematext.com/spm
- Original Message -
From: G.Long jde...@gmail.com
To: solr-user@lucene.apache.org
Cc:
Sent: Thursday, May 10, 2012 11:57 AM
Subject: question about solr
Yes
On Thu, May 10, 2012 at 4:57 PM, G.Long jde...@gmail.com wrote:
Hi :)
In what unit of time is the QTime of a QueryResponse expressed? Is it
milliseconds?
Gary
Thank you both =)
Gary
Le 10/05/2012 17:59, Otis Gospodnetic a écrit :
Gary - milliseconds, right.
Otis
Performance Monitoring for Solr / ElasticSearch / HBase - http://sematext.com/spm
Yes, milliseconds. --wunder
- Original Message -
From: G.Longjde...@gmail.com
To:
Jasper,
The simple answer is to increase -Xmx :)
What is your ramBufferSizeMB (solrconfig.xml) set to? Default is 32 (MB).
That autocommit you mentioned is a DB commit, not a Solr one, right? If
so, why is a commit needed when you *read* data from the DB?
Otis
Performance Monitoring for Solr
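For reference, the setting mentioned above lives in solrconfig.xml; a sketch (32 is the default):

```xml
<indexDefaults>
  <!-- RAM the index writer may buffer before flushing a segment to disk -->
  <ramBufferSizeMB>32</ramBufferSizeMB>
</indexDefaults>
```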
Hi,
My requirement is to calculate the sum of a certain field in the result
set. StatsComponent does what I need, e.g.
results in
Question.
1. I don't need to calculate min, max, sumOfSquares etc. Is there a way to
limit the stats to the sum and nothing else?
2. Is there going to be a
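For reference, a sketch of a StatsComponent request (the field name price and the host are assumptions):

```
http://localhost:8983/solr/select?q=*:*&rows=0&stats=true&stats.field=price
```

The stats section of the response then includes sum, along with min, max, count, missing, sumOfSquares, mean, and stddev.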
I am attempting to index a DB schema that has a many:one relationship. I
assume I would index this within Solr as a 'multiValued=true' field; is
that correct?
I am currently populating the Solr index w/ a stored procedure in which each DB
record is flattened into a single document in Solr. I
On 5/10/2012 2:02 AM, Tolga wrote:
Apache servers are returning my post with the status messages
HTML_FONT_SIZE_HUGE,HTML_MESSAGE,HTTP_ESCAPED_HOST,NORMAL_HTTP_TO_IP,RCVD_IN_DNSWL_LOW,SPF_NEUTRAL,URI_HEX,WEIRD_PORT.
I've tried clearing all formatting and a re-post, but the same thing
occurred.
I cleaned the entire index and re-indexed it with SolrJ 3.6. Still I get
the same error every single day. How can I see if the container
returned a partial/nonconforming response, since it may be hidden by
SolrJ?
Thanks
Ravi Kiran Bhaskar
On Mon, May 7, 2012 at 2:16 PM, Ravi Solr ravis...@gmail.com
Hi James,
I just pulled down the newest nightly build of 4.0 and it solves an issue I had
been having with solr ignoring the caching of the child entities. It was
basically opening a new connection for each iteration even though everything
was specified correctly. This was present in my
Hello,
Solr accepts the fq parameter like:
localhost:8080/solr/select/?q=blah+blah&fq=model:member+model:new_member
Is it possible to pass the fq parameter with an alternative syntax, like
fq=model=member&model=new_member, or in some other way?
Thank you,
Tom
Hi,
I've been reading
http://lucene.apache.org/solr/api/doc-files/tutorial.html and in the
section Deleting Data, I've edited schema.xml to include a field named
id, issued the command for f in *; do java -Ddata=args -Dcommit=no -jar
post.jar "<delete><id>$f</id></delete>"; done, and went on to the stats page
Sorry, commit=no should have been commit=yes in my previous post.
Regards,
Hi Guys!
I've removed the two largest documents. One of them consisted of a
single field and was around 4 MB of text.
This fixed my issue.
Kind regards,
Bram Rongen
On Fri, Apr 20, 2012 at 2:09 PM, Bram Rongen m...@bramrongen.nl wrote:
Hmm, reading your reply again I see
Is it possible to see what terms are indexed for a field of a document
that is stored=false?
One way is to use http://wiki.apache.org/solr/LukeRequestHandler
I have a search that doesn't work with quotes, like
this: field:"TEXT Nº 1098". When I remove the quotes the search finds
the document
(using
On 5/10/2012 12:27 PM, Ravi Solr wrote:
I cleaned the entire index and re-indexed it with SolrJ 3.6. Still I get
the same error every single day. How can I see if the container
returned a partial/nonconforming response, since it may be hidden by
SolrJ?
If the server is sending a non-javabin error
I am trying to import data from my DB, but I have dynamic fields whose
names I don't always know. Can someone tell me why something like this
doesn't work:
<entity name="dynamicfield" processor="CachedSqlEntityProcessor"
    datasource="database" query="select optionname, datatype, optionvalue FROM
Thanks for responding, Mr. Heisey... I don't see any parsing errors in
my log, but I see a lot of exceptions like the one listed below... Once
an exception like this happens, weirdness ensues. For example, to
check sanity I queried for uniquekey:111 from the Solr admin GUI and it
gave back numFound equal
Is there any way to set the Expires header dynamically on the Solr
response?
Thanks.
I have created a custom transformer for dynamic fields but it doesn't seem to
be working correctly and I'm not sure how to debug it with a live running
solr instance.
Here is my transformer
package org.build.com.solr;
import org.apache.solr.handler.dataimport.Context;
import
Also here is my schema
<dynamicField name="*_string" type="facetstring" indexed="true"
    stored="false" multiValued="true"/>
<dynamicField name="*_numeric" type="tfloat" indexed="true"
    stored="false" multiValued="true"/>
<dynamicField name="*_boolean" type="boolean" indexed="true"
    stored="false"/>
Instead of hitting the Solr server directly from the client, I think I would go
through your application server, which would have access to all the users' data
and can forward that to the Solr server, thereby hiding it from the client.
Mike
-Original Message-
From: Anupam Bhattacharya
Anyone at all?
Original Message
Subject:Delete documents
Date: Thu, 10 May 2012 22:59:49 +0300
From: Tolga to...@ozses.net
To: solr-user@lucene.apache.org
Hi,
I've been reading
http://lucene.apache.org/solr/api/doc-files/tutorial.html and in the
section
Hi,
You've restarted Solr after editing the schema?
And checked the logs? Paste?
Otis
Performance Monitoring for Solr / ElasticSearch / HBase -
http://sematext.com/spm
- Original Message -
From: Tolga to...@ozses.net
To: solr-user@lucene.apache.org
Cc:
Sent: Friday, May
Hi Sohail,
http://search-lucene.com/?q=Joinfc_project=Solr
Hit #1.
Otis
Performance Monitoring for Solr / ElasticSearch / HBase -
http://sematext.com/spm
- Original Message -
From: Sohail Aboobaker sabooba...@gmail.com
To: solr-user@lucene.apache.org
Cc:
Sent:
Yes, I agree with you.
But the Ajax-Solr framework doesn't fit in that manner. Any alternative
solution?
Anupam
On Fri, May 11, 2012 at 9:41 AM, Klostermeyer, Michael
mklosterme...@riskexchange.com wrote:
Instead of hitting the Solr server directly from the client, I think I
would go through
Try using the actual id of the document rather than the shell substitution
variable - if you're trying to delete one document.
To delete all documents, use delete by query:
<delete><query>*:*</query></delete>
See:
http://wiki.apache.org/solr/FAQ#How_can_I_delete_all_documents_from_my_index.3F
--
On 5/10/2012 4:17 PM, Ravi Solr wrote:
Thanks for responding, Mr. Heisey... I don't see any parsing errors in
my log, but I see a lot of exceptions like the one listed below... Once
an exception like this happens, weirdness ensues. For example, to
check sanity I queried for uniquekey:111 from the
Hey all,
I don't know about you, but most of the Solr URLs I issue are fairly
lengthy, full of parameters on the query string, and browser location
bars aren't long enough / don't have multi-line capabilities. I tried to
find something that does this but couldn't, so I wrote a Chrome
extension to help.