Hello,
The most promising approach for doing this is BlockJoinQuery. Here is a great intro:
http://blog.mikemccandless.com/2012/01/searching-relational-content-with.html
This query and the low-level indexing support are implemented in Lucene. Some work is in progress for Solr.
Dear all,
I want to generate a compound-file index instead of separate files containing fdt, fdx etc.
I followed the suggestion to change the useCompoundFile parameter to true
(both in indexDefaults and mainIndex) in solrconfig.xml, but when I use
post.jar to post the example xml file (solr.xml), I find the index is
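For reference, a minimal sketch of the settings described above (element names as in the Solr 3.x example solrconfig.xml; values are illustrative). Note that segments already written in the multi-file format are not rewritten until they are merged or the index is optimized, so old .fdt/.fdx files can linger after the switch:

```xml
<indexDefaults>
  <!-- write new segments as a single .cfs compound file -->
  <useCompoundFile>true</useCompoundFile>
</indexDefaults>
<mainIndex>
  <useCompoundFile>true</useCompoundFile>
</mainIndex>
```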
Hi Tomás,
I cannot use Solr replication in my scenario. My requirement is to gzip the
Solr index folder and send it to a dotnet system through a webservice.
Then in dotnet the same index folder should be unzipped, and that folder
should be used as an index folder through solrnet.
Whether my
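Since GZIP on its own only compresses a single stream, the index folder first has to be packed into an archive. A minimal sketch in plain Java using java.util.zip (class and method names here are illustrative, not part of Solr or SolrNet):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class IndexZipper {
    // Pack every regular file under indexDir into zipFile, storing
    // entry names relative to the index directory so the archive can
    // be unzipped directly into a new index folder on the other side.
    public static void zipDirectory(Path indexDir, Path zipFile) throws IOException {
        try (ZipOutputStream zos = new ZipOutputStream(Files.newOutputStream(zipFile))) {
            Files.walk(indexDir)
                 .filter(Files::isRegularFile)
                 .forEach(p -> {
                     try {
                         zos.putNextEntry(new ZipEntry(indexDir.relativize(p).toString()));
                         Files.copy(p, zos);
                         zos.closeEntry();
                     } catch (IOException e) {
                         throw new RuntimeException(e);
                     }
                 });
        }
    }
}
```

Make sure Solr is stopped (or the index is not being written to) while the folder is zipped, otherwise the archive can capture a half-written segment.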
Hi folks,
I think I found a bug in the spellchecker but am not quite sure.
This is the query I send to Solr:
http://lh:8983/solr/CompleteIndex/select?rows=0&echoParams=all&spellcheck=true&spellcheck.onlyMorePopular=true&spellcheck.extendedResults=no&q=a+bb+ccc++
and this is the result:
Can you try spellcheck.q ?
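For example (an illustrative URL against the same core as above, passing the raw terms to the spellchecker directly instead of through q):

```
http://lh:8983/solr/CompleteIndex/select?rows=0&spellcheck=true&spellcheck.q=a+bb+ccc
```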
On Thu, 22 Mar 2012 09:57:19 +0100, tom dev.tom.men...@gmx.net wrote:
Hi folks,
I think I found a bug in the spellchecker but am not quite sure.
This is the query I send to Solr:
http://lh:8983/solr/CompleteIndex/select?rows=0&echoParams=all&spellcheck=true
Maybe you don't use special characters such as '?' in your query, but other people do.
If someone wants to search for '? the mysterians', it's impossible if you don't encode it.
As the admin interface may be used by anyone, the query has to be URL-encoded.
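To illustrate, a small sketch using the JDK's standard URLEncoder (the class name QueryEncoder is just illustrative; the query string is the example above):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class QueryEncoder {
    // Percent-encode a raw user query for use in a URL query string:
    // '?' becomes %3F and spaces become '+'.
    public static String encode(String rawQuery) throws UnsupportedEncodingException {
        return URLEncoder.encode(rawQuery, "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(encode("? the mysterians")); // %3F+the+mysterians
    }
}
```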
Franck
Le mercredi 21 mars
Same result.
On 22.03.2012 10:00, Markus Jelsma wrote:
Can you try spellcheck.q ?
On Thu, 22 Mar 2012 09:57:19 +0100, tom dev.tom.men...@gmx.net wrote:
Hi folks,
I think I found a bug in the spellchecker but am not quite sure.
This is the query I send to Solr:
First question: what's taking the time? The data acquisition or the
actual indexing process? Until you answer that question, you don't
know where to spend your efforts.
Best
Erick
On Wed, Mar 21, 2012 at 4:10 AM, ravicv ravichandra...@gmail.com wrote:
Hi
I am using Oracle Exadata as my DB.
You probably want to provide a custom Similarity class. Here's
a start:
http://wiki.apache.org/solr/SolrPlugins#Similarity
That's just a brief hint, but it should get you started. From there
you'll have to dig into the docs.
Do take some care here. I believe this is called for _every_ doc
that
Here's a sample of indexing with SolrJ instead of DIH, you
could consider partitioning your problem to N copies of this
and running in parallel.
http://www.lucidimagination.com/blog/2012/02/14/indexing-with-solrj/
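The partition-and-parallelize idea can be sketched with plain java.util.concurrent. The actual SolrJ add() calls are left out, and the class and method names below are illustrative, not from the linked article:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelFeeder {
    // Split the documents into nWorkers slices so each slice can be
    // fed to Solr by its own thread.
    public static List<List<String>> partition(List<String> docs, int nWorkers) {
        List<List<String>> slices = new ArrayList<>();
        for (int i = 0; i < nWorkers; i++) slices.add(new ArrayList<>());
        for (int i = 0; i < docs.size(); i++) slices.get(i % nWorkers).add(docs.get(i));
        return slices;
    }

    // Run one feeder task per slice, wait for all of them, and return
    // the total number of documents processed. In a real feeder each
    // task would build SolrInputDocuments and call server.add(...).
    public static int feedAll(List<String> docs, int nWorkers) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(nWorkers);
        List<Future<Integer>> results = new ArrayList<>();
        for (List<String> slice : partition(docs, nWorkers)) {
            results.add(pool.submit(slice::size));
        }
        int fed = 0;
        for (Future<Integer> f : results) fed += f.get();
        pool.shutdown();
        return fed;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(feedAll(java.util.Arrays.asList("d1", "d2", "d3", "d4"), 2)); // prints 4
    }
}
```

This only pays off if the bottleneck is on the feeding side; if a single Solr instance is already CPU-bound on indexing, more client threads won't help much.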
But you haven't indicated whether your speed issue is on the
query side or the
Why stop at 1G? But no, it's really all-or-nothing when you blast a file
at Solr. But be sure you're bumping the _solr_ heap, not just Tomcat's
heap.
Best
Erick
On Wed, Mar 21, 2012 at 5:42 PM, vybe3142 vybe3...@gmail.com wrote:
While waiting for someone to help answer my multicore config issue
When you say "send to dotnet system through webservice", you mean that
the client will be dotnet, but Solr is still going to be Solr, in Java,
right?
I'm sure that if you stop Solr, change the index directory (like if you
unzip the one you brought from the other server) and start Solr again,
I meant, how many values in total? A single document may have 20, but are
those 20 shared with other documents (even if they have different scores), or
will each document have 10-20 completely different values? I think Solr
could handle a couple hundred fields, but I don't know how it would
behave
The index is not directory-related; there is no path information in the
index. You can create an index and then move it anywhere (or merge it with
another one).
I often do this; there is no issue.
Olivier
2012/3/22 ravicv ravichandra...@gmail.com
Hi Tomás,
I cannot use Solr replication in my
That's correct. Solr4 will read your existing index and let you use it with the
feature set it already has.
But in order for you to use new fieldTypes, you need to re-index your data.
--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com
Solr Training - www.solrtraining.com
Good morning:
I have a problem with the results Solr returns for a search string (e.g. 'caso'):
it returns records with similar terms (in this example it would return the same
results as if searching for 'casa').
The version of Solr is 1.4.1.
The definition of the 'text' type in schema.xml is:
<fieldtype name="text"
Hello!
The probable cause is the use of solr.PorterStemFilterFactory. You can check it
using the Solr admin or by removing that filter and reindexing your data.
--
Regards,
Rafał Kuć
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Good morning:
I have problems with the results
Remove the stemmer filter. 'Caso' and 'casa' are both transformed into 'cas' if you
use the stemmer filter.
In Spanish: remove the stemmer filter, which is used to extract the root of words,
but in your case the root of 'casa' and 'caso' is the same: 'cas'.
Regards.
From: PINA
Thanks for the replies,
that cleared things up for me, and I now have something to implement :o)
I will try to do it with 2 requests:
- 1 grouped by source to retrieve the documents to boost
- 1 with a FunctionQuery to add the boosts computed during the first
request
It won't be easy to do that with 1
Hi Tomas:
These fields are for searching only.
Currently we have around 1.8M docs indexed. Assuming each doc has about
20 of these additional fields to be created as dynamic fields (worst-case
scenario), and there are about 6K of these different values (i.e., if
we were to create static
Or if you still want to have stemming, you could use a Spanish stemmer,
like:
<filter class="solr.SnowballPorterFilterFactory" language="Spanish"/>
or
<filter class="solr.SpanishLightStemFilterFactory"/>
Tomás
On Thu, Mar 22, 2012 at 11:09 AM, Juan Pablo Mora jua...@informa.es wrote:
Remove the stemmer
My Solr server is running, and the following is my client code:
File file = new File("1.pdf");
String urlString = constant.getUrl();
StreamingUpdateSolrServer solr = new StreamingUpdateSolrServer(
Hi all,
I'm having some trouble wrapping my head around boosting
StandardQueries. It looks like the function: query(subquery, default)
http://wiki.apache.org/solr/FunctionQuery#query is what I want, but
the examples seem to focus on just returning a score (e.g. product of
popularity and
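One common way to combine a boost function with a standard query is the {!boost} parser together with the query() function from the linked wiki page (a sketch; the parameter name qq and both query strings are illustrative):

```
q={!boost b=query($qq,0.1)}main query terms
qq=category:popular
```

Here the score of the main query is multiplied by the score of the qq subquery, with 0.1 used as the default when a document doesn't match qq.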
I post a *.doc file to the Solr server, but I always get the error:
org.apache.solr.common.SolrException: parsing error
at
org.apache.solr.client.solrj.impl.BinaryResponseParser.processResponse(BinaryResponseParser.java:43)
at
: The admin screen is made for doing a quick query against the default field
: with the settings defined in the default search handler. To that end, it
: assumes that all entered characters should be part of the search string, so it
: encodes them accordingly.
correct ... that text box in
Hi Chris and Hoss,
Thanks for the feedback. This is useful to hear. This seems like a bug to
me, but not a very important one.
I'm new to Solr, and it seems like you have a great community here.
-Aaron
On Thu, Mar 22, 2012 at 1:34 PM, Chris Hostetter
hossman_luc...@fucit.orgwrote:
: The admin
On Mar 21, 2012, at 9:37 PM, I-Chiang Chen wrote:
We are currently experimenting with the SolrCloud functionality in Solr 4.0.
The goal is to see if Solr 4.0 trunk in its current state is able to
handle roughly 200 million documents. The document size is not big: around 40
fields, no more than a
Hi,
It was mentioned before that SolrCloud has all the capabilities of
regular Solr (including handlers), with the exception of the MLT handler.
As this is a key capability for Solr, is there work planned to include
the MLT handler in SolrCloud? If so, when? Our efforts greatly depend on it. As
such, I'm
On Mar 22, 2012, at 5:22 PM, Darren Govoni wrote:
Hi,
It was mentioned before that SolrCloud has all the capability of
regular solr (including handlers) with the exception of the MLT handler.
As this is a key capability for Solr, is there work planned to include
the MLT in SolrCloud? If so
Ok, I'll do what I can to help!
As always, appreciate the hard work Mark.
On Thu, 2012-03-22 at 17:31 -0400, Mark Miller wrote:
On Mar 22, 2012, at 5:22 PM, Darren Govoni wrote:
Hi,
It was mentioned before that SolrCloud has all the capability of
regular solr (including handlers) with
: I am trying to get Solr installed using Apache Solr 3.5.0, Java 1.6.0, and
: Drupal 7. I am able to log in through ssh, navigate to
: apache-solr-3.5.0/example, and run java -jar start.jar. After that, however,
: trying to access either localhost:8983/solr/admin or localhost:8983/solr just
:
I'm looking at the following. I want to (1) map some query fields to
some other query fields and add some things to FL, and then (2)
rescore.
I can see how to do it as a RequestHandler that makes a parser to get
the fields, or I could see making a SearchComponent that was stuck
into the list just
At this time we are not leveraging the NRT functionality. This is the
initial data-load process, where the idea is to just add all 200 million
records first, then do a single commit at the end to make them searchable.
We actually disabled auto commit at this time.
We have tried to leave auto
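In solrconfig.xml, that setup corresponds to leaving autoCommit commented out and issuing one explicit commit after the load finishes (a sketch; the maxDocs/maxTime values are just illustrative defaults):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- autoCommit disabled for the bulk load; one explicit commit afterwards -->
  <!--
  <autoCommit>
    <maxDocs>10000</maxDocs>
    <maxTime>60000</maxTime>
  </autoCommit>
  -->
</updateHandler>
```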