least somehow - to
tackle those problems in Unit-Tests instead of noticing such problems
after a deployment to Solr.
So my question is:
How can I (Unit-)test a TokenFilter with an Analyzer which reuses the
same TokenFilter instance for more than one Input-TokenStream?
Kind regards,
Em
On 01.10.20
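To sketch the kind of test I mean: Lucene's test framework drives an Analyzer through the reuse path when you feed the same instance several times. This is only a sketch assuming lucene-test-framework is on the classpath; MyAnalyzer and the expected token arrays are placeholders for your own classes and analysis output, not real ones.

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.BaseTokenStreamTestCase;

public class MyTokenFilterTest extends BaseTokenStreamTestCase {

  public void testAnalyzerReuse() throws Exception {
    Analyzer analyzer = new MyAnalyzer(); // placeholder: wraps your custom TokenFilter

    // Calling assertAnalyzesTo twice on the SAME Analyzer instance exercises
    // the reuse path (reset() + reused token streams) instead of always
    // creating a fresh TokenStream, which is what Solr does in production.
    assertAnalyzesTo(analyzer, "first input", new String[] { "first", "input" });
    assertAnalyzesTo(analyzer, "second input", new String[] { "second", "input" });

    // checkRandomData additionally hammers the analyzer with random text and
    // verifies the TokenStream contract (reset/end/close, attribute handling).
    checkRandomData(random(), analyzer, 1000);
  }
}
```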
Hi Mikhail,
thanks for your feedback.
If so, how can I write UnitTests which respect the Reuse strategy?
What's the recommended way when creating custom Tokenizers and TokenFilters?
Kind regards,
Em
On 01.10.2012 10:54, Mikhail Khludnev wrote:
> Hello,
>
> Analyzers are reuse
only now and again - but not on every
request.
Is this a bug in Solr 4.0-BETA or is this expected behaviour?
If it is expected, what could be wrong with the TokenFilter?
Kind regards,
Em
-container/environment
instead of Solr only.
Kind regards,
Em
On 04.03.2012 15:21, Ramo Karahasan wrote:
> Hi,
>
>
>
> i'm somehow unable to "secure" my solr instance that runs on a dedicated
> server. I have a webapplication that needs this solr instance,
.
Sometimes a multiple-JVM setup moved to production, sometimes not.
In short:
> So should I add new instances of solr or should I use the multicore feature
where
> each core has its own index.
As I tried to explain above: it depends.
Kind regards,
Em
On 24.02.2012 23:16, Umesh_ wrote:
> Hi Em,
>
Hi Umesh,
what does your access-pattern look like?
Do you need to combine results from different indices?
Kind regards,
Em
On 24.02.2012 18:37, Umesh_ wrote:
> All,
>
> I was trying to find information about plus/deltas with using
> - Solr multicore where each core has a dif
their
relationships and/or lifecycles are good starting points that do not need
much time to write them down (take "How to write distributed
SearchComponents" as an example).
Kind regards,
Em
On 24.02.2012 15:18, Yonik Seeley wrote:
> On Fri, Feb 24, 2012 at 8:59 AM, Per Steffensen
ly to be compatible with your
changes.
This greatly reduces maintenance-costs, while it leaves more resources
for new innovations and features that make your product great.
Kind regards,
Em
> We might make it
> "outside" Solr/Lucene but I hope to be able to convince my ProductOwner
&
uncommitted document is not guaranteed to be persisted in the index.
So if you return a Duplicate-Key-Error because there is a pending
document with that key, and afterwards the server goes down for any
reason, you might end up without that document inside of Solr.
You need a log for failover.
Kind regards,
ses.
I am currently unaware of how things change in 4.0.
Kind regards,
Em
http://wiki.apache.org/solr/SolrTomcat
Kind regards,
Em
On 23.02.2012 18:15, Frederic Bouchery wrote:
> hello,
>
> I'm using Solr 3.5 over Tomcat 6 and I have some problems with Unicode queries.
>
> Here is my text field configuration
>
&
dated
> (implemented as "old deleted and new added"). Correct?
Exactly.
If you really want to get your hands on that topic I suggest you
learn more about Lucene's IndexWriter:
http://lucene.apache.org/core/old_versioned_docs/versions/3_5_0/api/all/index.html?org/apache/lucene/index/IndexWriter.html
Kind Regards,
Em
s
per index/SolrCore/Shard (depending on your use-case).
Does this help you?
Kind regards,
Em
On 23.02.2012 15:34, Per Steffensen wrote:
> Em wrote:
>> Hi Per,
>>
>> Solr provides the so called "UniqueKey"-field.
>> Refer to the Wiki to learn more:
>> htt
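For reference, the schema.xml part the wiki describes boils down to something like this (the field name `id` is only an example):

```xml
<!-- the field that serves as the unique key must be indexed and single-valued -->
<field name="id" type="string" indexed="true" stored="true" required="true"/>
<uniqueKey>id</uniqueKey>
```

With this in place, adding a document with an existing key replaces the old one instead of creating a duplicate.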
That's strange.
Could you provide a sample dataset?
I'd like to try it out.
Kind regards,
Em
On 22.02.2012 19:17, Yury Kats wrote:
> On 2/22/2012 1:05 PM, Em wrote:
>> Yury,
>>
>> are you sure your request has a proper url-encoding?
>
> Yes
>
Yury,
are you sure your request has a proper url-encoding?
Kind regards,
Em
On 22.02.2012 18:25, Yury Kats wrote:
> I'm running into a problem with queries that contain forward slashes and more
> than one field.
>
> For example, these queries work fine:
> fiel
Btw.:
Solr has no downtime while reloading the core.
It loads the new core and while loading the new one it still serves
requests with the old one.
When the new one is ready (and warmed up) it finally replaces the old core.
Best,
Em
On 22.02.2012 17:56, Xavier wrote:
> I'm not
keep and remove everything else. Therefore their content for your
keepWordField contains only
doc1: {"indexedContent":"codeword"}
doc2: {"indexedContent":"keyword"}
However, if you add the word "gang" to your keywordlist AND reload your
SolrCore, doc
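A minimal sketch of the field type I mean: the KeepWordFilter keeps only tokens listed in the file and drops everything else. Type name and file name here are made up:

```xml
<fieldType name="keepword_text" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- keeps only tokens that appear in keywordlist.txt; all others are dropped -->
    <filter class="solr.KeepWordFilterFactory" words="keywordlist.txt" ignoreCase="true"/>
  </analyzer>
</fieldType>
```

Remember that the word list is read when the core loads, which is why a core reload (and a reindex of affected docs) is needed after changing it.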
Erick,
> You'll *really like* the SolrCloud stuff going into trunk when it's baked
> for a while
How stable is SolrCloud at the moment?
I can not wait to try it out.
Kind regards,
Em
On 22.02.2012 14:45, Erick Erickson wrote:
> You'll *really like* the SolrClou
) and proper relevancy due to unbalanced shards by
design.
Kind regards,
Em
On 22.02.2012 09:25, eks dev wrote:
> Yes, I consciously let my slaves run away from the master in order to
> reduce update latency, but every now and then they sync up with master
> that is doing heavy lifting.
>
Erick,
damn!
The NOW of now isn't the same NOW a second later. So obvious! How
could I overlook it?
Kind regards,
Em
On 22.02.2012 00:17, Erick Erickson wrote:
> Be a little careful here. Any "fq" that references NOW will probably
> NOT be effectively cached. Think
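Erick's point can be illustrated with date rounding (the field name `date` is made up). An fq containing a raw NOW changes every millisecond, so it never produces a filterCache hit; rounding it makes the string stable:

```
fq=date:[NOW-7DAYS TO NOW]            new cache entry on every request (NOW differs each time)
fq=date:[NOW/DAY-7DAYS TO NOW/DAY]    rounded to day granularity, cacheable for the whole day
```

The trade-off is precision: the rounded filter only moves forward once per day (or hour, with NOW/HOUR).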
Hi Ramo,
sorry for confusing you.
Forget everything that I said after "However" - it was wrong (I mixed
something here).
Yes, you can index documents via any UpdateRequestHandler you like while
using the DIH.
Kind regards,
Em
On 21.02.2012 23:41, Ramo Karahasan wrote:
> Hi,
>
tener could do for you, if one
would exist?
Kind regards,
Em
On 21.02.2012 21:10, eks dev wrote:
> Thanks Mark,
> Hmm, I would like to have this information asap, not to wait until the
> first search gets executed (depends on user) . Is solr going to create
> new searcher as a par
Hi Spadez,
MySQL, as well as any other SQL-database, needs the same amount of work
to integrate its data into Solr.
Choose your favorite database and get started!
Best,
Em
On 21.02.2012 18:32, Spadez wrote:
> Thank you for the information Damien.
>
> Is there a better database to u
Hi Ramo,
yes, it's possible.
However, keep in mind that your cURL, CSV, XML, JSON etc. update-requests
store the information that is needed to do delta-updates with your DIH
(if needed!).
Kind regards,
Em
On 21.02.2012 23:18, Ramo Karahasan wrote:
> Hi,
>
>
>
> curren
-io.
Did you try my advice of adjusting the precisionStep of your
TrieDateFields and reindex your documents afterwards?
Kind regards,
Em
On 21.02.2012 22:52, ku3ia wrote:
> Hi,
>
>>> First: I am really surprised that the difference between explicit
>>> Date-Val
configured well?
How many documents are part of that test-index?
I suggest you adjust the precisionStep-definition of your TrieDateField.
Furthermore, consider whether you really need 500 rows
per request.
Kind regards,
Em
On 21.02.2012 21:49, ku3ia wrote:
> Hi, Em, thanks
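The suggestion above as a schema.xml sketch. The precisionStep attribute sits on the field type; a smaller value indexes more terms per value and usually speeds up range queries at the cost of index size (the stock example schema of that era used 6 for tdate, but the right value depends on your data):

```xml
<fieldType name="tdate" class="solr.TrieDateField"
           precisionStep="6" omitNorms="true" positionIncrementGap="0"/>
```

After changing precisionStep you must reindex, since the extra precision terms are created at index time.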
You have to do the versioning on your own (and keep in mind concurrent
updates).
Kind regards,
Em
On 21.02.2012 13:50, Per Steffensen wrote:
> Hi
>
> Does solr/lucene provide any mechanism for "unique key constraint" and
> "optimistic locking (versioning)"?
&g
match your new keywordslist but
currently do not have those keywords assigned).
Kind regards,
Em
On 21.02.2012 19:53, Xavier wrote:
> In a way I agree that it would be easier to do that but I really want to
> avoid this solution because I prefer to work "harder" on preparing
Hi,
1) and 2) should have equal performance, given that several searches are
performed with the same fq-param.
Since the filters are cached, 1) and 2) perform better.
Kind regards,
Em
On 21.02.2012 19:06, ku3ia wrote:
> Hi all!
>
> Please advice me:
> 1) q=test&fq=date:[
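To illustrate why the cached filter pays off (the dates are made up): as long as the fq string is byte-identical across requests, later requests read the filter from the filterCache instead of re-executing the range query.

```
q=test&fq=date:[2012-01-01T00:00:00Z TO 2012-02-01T00:00:00Z]
q=other&fq=date:[2012-01-01T00:00:00Z TO 2012-02-01T00:00:00Z]
```

The second request has a different q but reuses the cached date filter, so only the main query is re-executed.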
Wouldn't it be easier to store both types in different fields?
At query-time you are able to do a facet on both and can combine the
results client-side to present them within the GUI.
Kind regards,
Em
On 21.02.2012 17:52, Xavier wrote:
> Sure, the difference between my 2 fa
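A sketch of what I mean, with made-up field names; both facets come back in one response and can be merged client-side:

```
q=*:*&facet=true&facet.field=type_a&facet.field=type_b
```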
that field (the unanalyzed raw
input) is getting copied.
Could you explain the difference between your text_tag_facets
and your predefined facets?
Kind regards,
Em
On 21.02.2012 17:11, Xavier wrote:
> Hi everyone,
>
> Like explained in this post :
> http://lucene.472066.
ields and the KeepWordFilter you are able to achieve what
you want.
Kind regards,
Em
On 20.02.2012 17:28, Xavier wrote:
> Hi everyone,
>
> I'm a new Solr User but i used to work on Endeca.
>
> There is a modul called "TextTagger" with Endeca that is auto indexing
>
are
and which search-server provides you the tools to go.
I am pretty sure that both projects provide PHP client libraries
etc. for indexing and searching (Solr does).
Kind regards,
Em
On 20.02.2012 16:20, Spadez wrote:
> I am creating what is effectively a search engine. Content is collec
a trade-off between precision and performance.
You could even improve the above by setting the doc's boost equal to
log(popularity) at indexing time.
What do you think about that?
Regards,
Em
On 20.02.2012 15:37, Carlos Gonzalez-Cadenas wrote:
> Hi Em:
>
> The HTTP request is not gon
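At query time the same idea can be expressed with Solr's boost parser (field and query string are made up). Note that log() of a popularity of 1 is 0 and would zero out the score, so in practice you usually offset the value:

```
q={!boost b=log(sum(popularity,10))}text:solr
```

The index-time variant (setting the doc boost) bakes the factor in once and is cheaper per query, but requires a reindex whenever popularity changes.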
Could you please provide me the original request (the HTTP-request)?
I am a little bit confused about what "query_score" refers to.
As far as I can see it isn't a magic-value.
Kind regards,
Em
On 20.02.2012 14:05, Carlos Gonzalez-Cadenas wrote:
> Yeah Em, it helped a lot :)
>
Carlos,
nice to hear that the approach helped you!
Could you show us what your query-request looks like after reworking?
Regards,
Em
On 20.02.2012 13:30, Carlos Gonzalez-Cadenas wrote:
> Hello all:
>
> We've done some tests with Em's approach of putting a BooleanQuery in
Hi Adam,
I made a quick review of the DIH-code in your exception and it seems
like it is not possible to resolve a multi-column PK at the moment.
Maybe I am wrong, but can anyone confirm having problems with DIH
whenever a multi-column PK is used?
Kind regards,
Em
On 18.02.2012 02:52
Hi Torsten,
did you have a look at WordDelimiterFilter?
Sounds like it fits your needs.
Regards,
Em
On 17.02.2012 15:14, Torsten Krah wrote:
> Hi,
>
> is it possible to extend the standard tokenizer or use a custom one
> (possible via extending the standard one) to add
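For reference, a typical (made-up) configuration of that filter; which flags you want depends on whether you split, catenate, or preserve the original token:

```xml
<filter class="solr.WordDelimiterFilterFactory"
        generateWordParts="1"
        generateNumberParts="1"
        catenateWords="1"
        splitOnCaseChange="1"
        preserveOriginal="1"/>
```

It runs after a tokenizer, so you do not need to extend the StandardTokenizer; you just add it to the analyzer chain of your field type.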
nctionQuery's AllScorer re-iterates twice over the
0th and the 1st doc and the reason for that seems to be the construction
of two AllScorers), but as far as I can see the performance of your
queries *should* increase if you construct your query as I explained in
my last eMail.
Kind regar
ucting an every-time-match-all-function-query.
Can you validate whether your QueryParser constructs a query in the form
I drew above?
Regards,
Em
On 16.02.2012 20:29, Carlos Gonzalez-Cadenas wrote:
> Hello Em:
>
> 1) Here's a printout of an example DisMax query (as you can see mostly M
erefore (as shown by the numbers I included above) even
> if the query is very complex, we're getting very fast answers. The only
> situation where the response time explodes is when we include a
> FunctionQuery.
Could you give us some details about how/where you plugged in the
Collecto
Hi Jamie,
nice to hear.
Maybe you can share what kind of bug you ran into, so that other
developers with similarly buggy components can benefit from your
experience. :)
Regards,
Em
On 16.02.2012 19:23, Jamie Johnson wrote:
> please ignore this, it has nothing to do with the faceting compon
Hi Jamie,
what version of Solr/SolrJ are you using?
Regards,
Em
On 16.02.2012 18:42, Jamie Johnson wrote:
> I am attempting to execute a query with the following parameters
>
> q=*:*
> distrib=true
> facet=true
> facet.limit=10
> facet.field=manu
>
docs specified by your inner-query (although I might
be wrong about that).
What are you trying to achieve by your request?
Regards,
Em
On 16.02.2012 16:24, Carlos Gonzalez-Cadenas wrote:
> Hello Em:
>
> The URL is quite large (w/ shards, ...), maybe it's best if I paste the
> relev
Hello Carlos,
could you show us what your Solr-call looks like?
Regards,
Em
On 16.02.2012 14:34, Carlos Gonzalez-Cadenas wrote:
> Hello all:
>
> We'd like to score the matching documents using a combination of SOLR's IR
> score with another application-specific score th
Hello Mike,
have a look at Solr's Schema Browser. Click on "FIELDS", select "label"
and have a look at the number of distinct (term-)values.
Regards,
Em
On 15.02.2012 23:07, Mike Hugo wrote:
> Hello,
>
> We're building an auto suggest component based o
rce") to filter on it or create a core per database which
would completely separate both indices from each other.
It depends on your usecase and access-patterns. To tell you more, you
should provide us more information.
Regards,
Em
On 15.02.2012 16:23, Radu Toev wrote:
> Hello,
>
&
Hi Mikhail,
> it's just how org.apache.lucene.search.CachingWrapperFilter works. The
> first out-of-the box stuff which I've found.
Thanks for your explanation and snippets - I thought this was configurable.
Regards,
Em
On 15.02.2012 06:16, Mikhail Khludnev wrote:
> On Tue,
troy the benefit of caching).
Don't you agree?
Kind regards,
Em
On 14.02.2012 22:33, Erick Erickson wrote:
> Whoa!
>
> fq=id(1 OR 2)
> is not the same thing at all as
> fq=id:1&fq=id:2
>
> Assuming that any document had one and only one ID, the second clause
> w
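Erick's distinction, spelled out (assuming `id` is unique per document):

```
fq=id:(1 OR 2)     one cached filter matching docs whose id is 1 OR 2
fq=id:1&fq=id:2    two filters that are INTERSECTED - empty if ids are unique
```

Multiple fq parameters are always ANDed together, and each fq string gets its own filterCache entry.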
Hi Mark,
did you already have a look at http://wiki.apache.org/solr/FunctionQuery ?
Regards,
Em
On 14.02.2012 20:09, Mark wrote:
> Or better yet an example in solr would be best :)
>
> Thanks!
>
> On 2/14/12 11:05 AM, Mark wrote:
>> Would you mind throwing out an exam
reuse it again.
> it will use per segment bitset at contrast to Solr's fq which caches for
> top level reader.
Could you explain why this bitset would be per-segment based, please?
I don't see a reason why this *has* to be so.
What is the benefit you are seeing?
Kind regards,
Em
Hi,
have a look at:
http://search-lucene.com/m/Z8lWGEiKoI
I think not much has changed since then.
Regards,
Em
On 13.02.2012 20:17, spr...@gmx.eu wrote:
> Hi,
>
> how efficent is such an query:
>
> q=some text
> fq=id:(1 OR 2 OR 3...)
>
> Should I better use q:som
Hi Anderson,
you will need to rearrange the JSPs a little bit to do what you want.
If you do so, you can create rules via .htaccess.
Otherwise I would suggest looking for a commercial distribution of
Solr which might fit your needs.
Regards,
Em
On 13.02.2012 16:48, Anderson wrote
earch component.
The last approach is better in my opinion.
The third option requires you to modify/fork existing code. Keep in mind
that this means that you have to maintain your modification/fork on
every update.
Regards,
Em
On 05.10.2011 09:01, elisabeth benoit wrote:
> Hello,
>
> I
Hi Sid,
unfortunately not; as far as I know it is not possible to realize your
requirements with Solr's SpellCheck packages (I am talking about v1.4, since
there are some changes in 3.1).
Regards,
Em
--
View this message in context:
http://lucene.472066.n3.nabble.com/SpellCheckComponent-
blocky,
Shingles should be your way.
Regards,
Em
--
View this message in context:
http://lucene.472066.n3.nabble.com/Suggester-with-multi-terms-tp2859547p2860419.html
Sent from the Solr - User mailing list archive at Nabble.com.
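A sketch of the analyzer part I have in mind for a suggest-style field; maxShingleSize and outputUnigrams are the knobs to play with, and the type name is made up:

```xml
<fieldType name="shingle_suggest" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- emits word n-grams up to 3 terms; unigrams kept for single-word suggestions -->
    <filter class="solr.ShingleFilterFactory" maxShingleSize="3" outputUnigrams="true"/>
  </analyzer>
</fieldType>
```

"please divide this" then produces tokens like "please", "please divide", "please divide this", which is what lets the suggester complete multi-term phrases.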
lines:
org.eclipse.jdt.core.javanature
org.eclipse.wst.common.project.facet.core.nature
Now Lucene and Solr should be normal java-perspective projects. Be sure to
refresh after you change the configuration. Perhaps a restart could be
necessary, depending on your Eclipse-version.
Regards,
Em
--
View
?
Regards,
Em
Em wrote:
>
> Hello list,
>
> there is a problem with the SVN-Checkout of the current Solr-version, I
> think.
> I can run ant eclipse, it does not show any errors (needed 20 seconds the
> first time and 0.9 seconds afterwards).
>
> However, the c
expected changes.
I use Linux Mint 10 (Julia).
On Windows-Systems everything works as expected.
Did you also recognize such a problem?
Regards,
Em
--
View this message in context:
http://lucene.472066.n3.nabble.com/Ant-is-not-working-in-Eclipse-tp2852641p2852641.html
Hi,
did you have a look at the query()-function mentioned in the Wiki?
It sounds like something you should give a try!
Regards,
Em
Bill Bell wrote:
>
> I know that the _val_ is the only thing influencing the score.
>
> The fq is just to limit also by those queries.
>
> W
Thank you Hoss.
I will try the comma-separated thing out. It seems to be what I searched
for. :)
Regards,
Em
Chris Hostetter-3 wrote:
>
> : I watched an online video with Chris Hostetter from Lucidimagination.
> He
> : showed the possibility of having some Facets that exclude
olr.
Regards,
Em
--
View this message in context:
http://lucene.472066.n3.nabble.com/Multiple-Tags-and-Facets-tp2843130p2848085.html
nd a tie-break of 0 -
so only the better scoring stemmed-field contributes to the total score of
your document.
Regards,
Em
Robert Petersen-3 wrote:
>
> Adding another field with another stemmer and searching both??? Wow never
> thought of doing that. I guess that doesn't really double
Are there no ideas of how to use multiple tags per filter or to combine some
tags for excluding more than one filter per facet?
Regards,
Em
--
View this message in context:
http://lucene.472066.n3.nabble.com/Multiple-Tags-and-Facets-tp2843130p2847569.html
Additionally, there is an already set up example for a multicore-setup in the
example directory of your Solr-distribution.
Regards,
Em
--
View this message in context:
http://lucene.472066.n3.nabble.com/Need-to-create-dyanamic-indexies-base-on-different-document-workspaces-tp2845919p2846417
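For reference, the solr.xml from that example directory looks roughly like this in the pre-4.x layout (core names and directories are placeholders); each core gets its own conf/ and data/ under its instanceDir:

```xml
<solr persistent="true">
  <cores adminPath="/admin/cores">
    <core name="core0" instanceDir="core0"/>
    <core name="core1" instanceDir="core1"/>
  </cores>
</solr>
```

With persistent="true", cores created or changed at runtime are written back to solr.xml and survive a restart.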
As Iboutrus mentioned, if you can summarize it in a query, then yes, Solr can
handle it.
Take a step backward: Do not think of Solr. Write a query (one! query) that
shows exactly the output you expect. Afterwards, implement this query as a
source for DIH.
Regards,
Em
--
View this message in
Yes, have a look at the wiki-page. It explains some configurations and
REST-API-methods to create cores dynamically and if/how they are persisted.
Regards,
Em
Gaurav Shingala wrote:
>
> Is it possible to create solr core dyanamically?
>
> In our case we want each workspace to
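As a sketch, the CoreAdmin call from that wiki page looks like this (host, port, and names are made up; the core's instanceDir with a conf/ directory must already exist on disk):

```
http://localhost:8983/solr/admin/cores?action=CREATE&name=workspace1&instanceDir=workspace1
http://localhost:8983/solr/admin/cores?action=STATUS
```

Whether the new core survives a restart is controlled by the persistent attribute in solr.xml, not by the CREATE call itself.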
Hi Stockii,
how did you configure your segments-number in solrconfig.xml?
Decrease the number to speed things up.
Regards,
Em
--
View this message in context:
http://lucene.472066.n3.nabble.com/how-to-abort-a-running-optimize-tp2838721p2846369.html
rewrite your config to make
the second entity a sub-entity and to add a WHERE-clause.
If this is really not possible for you, just a guess: what happens if you
remove the OS06Y-field from your second entity?
Regards,
Em
--
View this message in context:
http://lucene.472066.n3.nabble.com/The-issue-of
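A sketch of what I mean by a sub-entity with a WHERE clause in data-config.xml (table and column names are made up); the inner query is re-run per row of the outer entity via the `${item.id}` placeholder:

```xml
<document>
  <entity name="item" query="SELECT id, name FROM item">
    <field column="id" name="id"/>
    <field column="name" name="name"/>
    <!-- executed once per outer row, joined through the WHERE clause -->
    <entity name="detail"
            query="SELECT description FROM detail WHERE item_id = '${item.id}'">
      <field column="description" name="description"/>
    </entity>
  </entity>
</document>
```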
tools, especially the WDF, wrong and the queryParser creates an unexpected
result which leads to unmatched but still relevant documents.
Please show us your debugging-output and the field-definition so that we
can provide you some help!
Regards,
Em
Robert Petersen-3 wrote:
>
> I hav
Hi Tjong,
seems like your XML was invalid.
Try the following and compare it to your original config:
Regards,
Em
--
View this message in context:
http://lucene.472066.n3.nabble.com/entity-name-issue-tp2843812p2846326.html
so. It sounds like unnecessary effort without a
win.
Regards,
Em
Bill Bell wrote:
>
> I would like to influence the score but I would rather not mess with the
> q=
> field since I want the query to dismax for Q.
>
> Something like:
>
> fq={!type=dismax qf=$qqf v=$qspec
This really helps on the mailing lists.
If you send your mails with Thunderbird, be sure to check that you enforce
plain-text emails. If not, it will often send HTML mails.
Regards,
Em
Marvin Humphrey wrote:
>
> On Thu, Apr 21, 2011 at 12:30:29AM -0400, Trey Grainger wrote:
>> (F
:
http://wiki.apache.org/solr/DIHQuickStart#Index_data_from_multiple_tables_into_Solr
I think it will help you rewrite your queries to fit your use case.
Regards,
Em
--
View this message in context:
http://lucene.472066.n3.nabble.com/The-issue-of-import-data-from-database-using-Solr-DIH
Hello,
I watched an online video with Chris Hostetter from Lucidimagination. He
showed the possibility of having some Facets that exclude *all* filter while
also having some Facets that take care of some of the set filters while
ignoring other filters.
Unfortunately the Webinar did not explain h
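For the archives, the tag/ex mechanics behind that look roughly like this (field names are made up). Each fq gets a tag, and each facet.field can exclude one or several tagged filters via a comma-separated list:

```
q=*:*&facet=true
  &fq={!tag=pr}price:[0 TO 100]
  &fq={!tag=cat}category:electronics
  &facet.field={!ex=pr}price          <- this facet ignores only the price filter
  &facet.field={!ex=pr,cat}category   <- this facet ignores both tagged filters
```

A facet.field with no {!ex=...} local param respects all set filters, which gives you the mixed behavior described above.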
Hi,
I have to jump into this topic.
I cannot find the mentioned replies, Markus, but I still noticed that
problem, too.
What could be the cause?
Regards,
Em
Markus Jelsma-2 wrote:
>
> You opened the same thread this monday and got two replies.
>
>> Hi,
>> Has anyone
ut's ( a few weeks ago ) Solr repository,
the CommonsHttpSolrServer uses a multi-threaded connection manager with 32
connections per host and 128 total connections.
Hope this helps.
Regards,
Em
Hi Paul,
what do you mean by "extra parameters"?
Regards
Paul Libbrecht-4 wrote:
>
>
> Hello Solr-friends,
>
> I want to implement a query-expander, one that enriches the input by the
> usage of extra parameters that, for example, a form may provide.
>
> Is the right way to su
Push again.
Regards
Em wrote:
>
> Just wanted to push that topic.
>
> Regards
>
>
> Em wrote:
>>
>> Hi Peter,
>>
>> I must jump in this discussion: From a logical point of view what you are
>> saying makes only sense if both instances d
make some components
pluggable?
Since Real-Time-Search is an issue where I read about the idea of making
things like the searcher pluggable, this could be beneficial to the
community.
Regards
Yonik Seeley-2-2 wrote:
>
> On Wed, Feb 9, 2011 at 1:18 PM, Em wrote:
>> How do they &
re out how
they make it accessible for the similarity that is in use.
Thank you!
Regards
Yonik Seeley-2-2 wrote:
>
> On Wed, Feb 9, 2011 at 12:16 PM, Em wrote:
>> For the current usecase we want to experiment with different values for
>> the
>> idf based on differen
Hello folks,
I have a question regarding a custom QueryWeight implementation for a special
use case.
For the current use case we want to experiment with different values for the
idf based on different algorithms and how they affect the scoring.
Is there a way to plug in a custom Weight implementation
consists of at least two segmentReaders.
If you are committing three times at the same moment, you will warm 3 new
SolrIndexSearchers that contain 3, 4 and 5 segmentReaders. Your old
SolrIndexSearcher contains 2 segmentReaders and is valid until the newer
SolrIndexSearcher based on 3 segmentReaders is warmed.
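For reference, solrconfig.xml caps how many of those warming searchers may exist at once; commits that would exceed the limit fail rather than piling up warming searchers:

```xml
<!-- inside solrconfig.xml: at most two searchers may be warming concurrently -->
<maxWarmingSearchers>2</maxWarmingSearchers>
```

This is why committing three times "at the same moment" is usually a sign that commits should be batched or throttled.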
Hi Tavi,
if you want to use multiple tokenization strategies (different tokenizers so
to speak) you have to use different fieldTypes.
Maybe you have to create your own tokenizer for doing what you want or a
PatternTokenizer might help you.
However, your examples for the different positions of s
Hi Tavi,
could you please provide an example query for your problem and the
debugQuery's output?
It confuses me that you write "score(query
"apple") = max(score(field1:apple), score(field2:apple))"
I think your problem could come from the norms of your request, but I am not
sure.
If you can, sh
Hi list,
I wanted to create a Jira-issue because of the CSVUpdateHandler-topic I
started a few days ago. However, I cannot create a Jira account - I do not
receive any mail or anything like that.
Are there any troubles with the Jira?
Regards
--
View this message in context:
http://lucene.47
Just wanted to push that topic.
Regards
Em wrote:
>
> Hi Peter,
>
> I must jump in this discussion: From a logical point of view what you are
> saying makes only sense if both instances do not run on the same machine
> or at least not on the same drive.
>
> When both
imizing an index?
How do you inform R about the finished commit?
Thank you for your explanation, it's a really interesting topic!
Regards,
Em
Peter Sturge-2 wrote:
>
> Hi,
>
> We use this scenario in production where we have one write-only Solr
> instance and 1 read-only, p
Hello Gustavo,
well, I did not use Nutch at all, but I got some experience with using Solr.
In Solr you could use a multicore-setup where each core points to
another hard-drive of your server. For other Solr-Servers (and cores as
well) each core is a separate index, so to query all drives of on
Hi,
sorry for the late feedback. Everything seems to be fine now.
Thank you!
Koji Sekiguchi wrote:
>
> (11/01/31 3:11), Em wrote:
>>
>> Hello list,
>>
>> I build an application that uses SolrJ to communicate with Solr.
>>
>> What did I do?
>>
Hi Hoss,
actually I thought this would be necessary for the SolrInputDocument to map
against a special FieldType, but this isn't true. The mapping comes
sometime after the UpdateProcessor has finished its work.
So yes, there is no reason to force the CSVRequestHandler to throw an
Exception if the f
hema.xml, if there are no other suggestions.
Thank you for your help!
Regards
On 31.01.2011 16:58, Em wrote:
> Okay, I added some Logging-Stuff to both the processor and its factory.
> It turned out that there IS an updateProcessor returned and it is NOT null.
> However, my logging-m
guchi:
> (11/01/31 23:33), Em wrote:
>> Hi Koji,
>>
>> following is the solrconfig:
>>
>>
>>
>> throwAway
>>
>>
>>
>>
>> > class="solr.experiments.solr.update.
Hi Koji,
following is the solrconfig:
throwAway
Do you see any mistake here?
Regards
Hi list,
I am not sure whether this behaviour is intended or not.
I am experimenting with the UpdateRequestProcessor-feature of Solr (V: 1.4)
and there occurred something I find strange.
Well, when I send csv-data to the CSV-UpdateHandler with some fields
specified that are not part of the Schem
Hi,
I will give you feedback today. There occurred another issue with our current
Solr-installation that I have to fix.
Thanks for your effort!
Regards
--
View this message in context:
http://lucene.472066.n3.nabble.com/SolrJ-Trunk-Invalid-version-or-the-data-in-not-in-javabin-format-tp2384421
Hello list,
I build an application that uses SolrJ to communicate with Solr.
What did I do?
Well, I deleted all the solrj-lib stuff from my application's
Webcontent-directory and inserted the solrj-lib from the freshly compiled
solr 4.0 - trunk.
However, when trying to query Solr 4.0 it shows m
Hi,
no, you misunderstood me, I only said that Solr does not care about the
positions *usually*.
Lucene has SpanNearQuery, which considers the position of the query's terms
relative to each other.
Furthermore there exists a SpanFirstQuery, which boosts occurrences of a term
at the beginning of a speci
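To sketch the two Lucene queries mentioned (field name and slop/position values are made up; this assumes lucene-core on the classpath):

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.spans.SpanFirstQuery;
import org.apache.lucene.search.spans.SpanNearQuery;
import org.apache.lucene.search.spans.SpanQuery;
import org.apache.lucene.search.spans.SpanTermQuery;

public class SpanExamples {
  public static void main(String[] args) {
    SpanQuery apple = new SpanTermQuery(new Term("content", "apple"));
    SpanQuery pie   = new SpanTermQuery(new Term("content", "pie"));

    // matches docs where "apple" and "pie" occur within 3 positions, in order
    SpanQuery near = new SpanNearQuery(new SpanQuery[] { apple, pie }, 3, true);

    // matches docs where "apple" occurs within the first 5 positions of the field
    SpanQuery first = new SpanFirstQuery(apple, 5);

    System.out.println(near + " / " + first);
  }
}
```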
Hi,
excuse me for pushing this for a second time, but I can't figure it out by
looking at the source code...
Thanks!
> Hi Lance,
>
> thanks for your explanation.
>
> As far as I know in distributed search I have to tell Solr what other
> shards it has to query. So, if I want to query a sp
Hi Cyang,
usually Solr isn't looking at the position of a term. However, there are
solutions out there for considering the term's position when calculating a
doc's score.
Furthermore: If two docs get the same score, I think they are ordered the
way they were found in the index.
Does this answer