On 2/15/2013 3:55 PM, Kiran J wrote:
How can I start Solr from a different folder in Windows? I tried
*java -cp "c:\\start.jar" -jar start.jar*
I think you'd want:
java -Djetty.home=c:\path -jar c:\path\start.jar
Note that the solr.solr.home property will default to .\solr ... in
which it
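If you also need the Solr home somewhere other than the working directory, something like this should do it (the paths are just placeholders):

java -Djetty.home=c:\path -Dsolr.solr.home=c:\path\solr -jar c:\path\start.jar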
For 4.2, I'll try and put in https://issues.apache.org/jira/browse/SOLR-4078
soon.
Not sure about the behavior you're seeing - you might want to file a JIRA issue.
- Mark
On Feb 15, 2013, at 8:17 PM, Gary Yngve wrote:
> Hi all,
>
> I've been unable to get the collections create API to work with
Hi all,
I've been unable to get the collections create API to work with
createNodeSet containing hostnames, both localhost and external hostnames.
I've only been able to get it working when using explicit IP addresses.
It looks like zk stores the IP addresses in the clusterstate.json and
live_nodes
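For example, a CREATE call like the one below works for me with raw IPs but fails with hostnames (collection name and addresses are made up):

http://localhost:8983/solr/admin/collections?action=CREATE&name=test&numShards=2&createNodeSet=10.0.0.1:8983_solr,10.0.0.2:8983_solr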
:
http://www.slideshare.net/thelabdude/boosting-documents-in-solr-lucene-revolution-2011
...
: > Start by looking at Solr's external file field and
Rather than using ExternalFileField as inspiration, i would suggest you
look at implementing a custom ValueSourceParser...
http://mail-arc
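a rough sketch of the shape of it (the class and function names here are made up, and i haven't compiled this):

import org.apache.lucene.queries.function.ValueSource;
import org.apache.lucene.queries.function.valuesource.DoubleConstValueSource;
import org.apache.lucene.queryparser.classic.ParseException;
import org.apache.solr.search.FunctionQParser;
import org.apache.solr.search.ValueSourceParser;

// registered in solrconfig.xml with something like:
//   <valueSourceParser name="myboost" class="com.example.MyBoostParser"/>
public class MyBoostParser extends ValueSourceParser {
  @Override
  public ValueSource parse(FunctionQParser fp) throws ParseException {
    // myboost(someKeyField) -- read the argument, then return a
    // ValueSource that computes the external boost for that key
    String keyField = fp.parseArg();
    // placeholder: a real impl would look keyField up in your external
    // data and wrap it in a custom ValueSource; constant 1.0 here
    return new DoubleConstValueSource(1.0);
  }
}

once registered it can be used anywhere function queries work, ie: bf=myboost(id) in a dismax handler, or {!func}myboost(id).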
Hi Hoss,
Yup. My plan for dealing with that is to post this on LinkedIn next week;
there, I *assume*, a slightly less on-the-bleeding-edge crowd hangs out.
Maybe you have some other ideas, too?
Thanks!
Otis
--
Solr & ElasticSearch Support
http://sematext.com/
On Fri, Feb 15, 2013 at 6:12 PM, C
Sounds like you should file a JIRA issue.
- Mark
On Feb 15, 2013, at 6:07 PM, "Charton, Andre"
wrote:
> Hi,
>
> I upgraded Solr from 3.6 to 4.1. Since then, replication does a full copy
> of the index from the master.
> The master does a delta import via DIH every 10 min; the slave poll interval is 10 sec.
> After
Hi,
I upgraded Solr from 3.6 to 4.1. Since then, replication does a full copy
of the index from the master.
The master does a delta import via DIH every 10 min; the slave poll interval is 10 sec.
After debugging and searching, I found the patch in SOLR-4413.
The problem was that the slave checks the wrong directory (index/ instead of
index
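For reference, the slave side of our replication config looks roughly like this (master URL anonymized):

<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master:8983/solr/core/replication</str>
    <str name="pollInterval">00:00:10</str>
  </lst>
</requestHandler>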
: I think the subject is self-explanatory and the results will be very
: interesting to see!
one caveat to keep in mind is that the results will most likely be biased
in favor of people actively participating in the broader community (ie: on
the mailing list, or following people related to solr
Alvaro - still thinking ... will reply when I have more ;-)
On Fri, Feb 15, 2013 at 6:31 AM, Á_o wrote:
> Á_o wrote
>> As I see (and I may be wrong) Solr's external file fields are some kind of
>> maps, aren't they?
>
> Actually I was wrong ;)
> The key does not necessarily have to be
Markus,
I wonder why you need access to it. I'd have thought that the current searcher's
methods (getDocSet(), cacheDocSet()) are enough to do everything. Anyway,
if you wish: I just looked in the code and see that it's available via
core.getInfoRegistry().get("filterCache"), though it can lead to some problems,
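i.e. something like this (untested, just from reading the code):

import org.apache.solr.core.SolrCore;
import org.apache.solr.search.SolrCache;

public class FilterCacheAccess {
  // the info registry maps MBean names to registered components, and the
  // filter cache registers itself under the name "filterCache"
  @SuppressWarnings("rawtypes")
  public static SolrCache getFilterCache(SolrCore core) {
    return (SolrCache) core.getInfoRegistry().get("filterCache");
  }
}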
: I need to get the filterCache for SOLR-4280. I can create a new issue
: patching SolrIndexSearcher and adding the missing caches (non-user
: caches) to the cacheMap so they can be returned using getCache(String)
: but I'm not sure this is intended. It does work, but is this the right
: path?
Good feedback. Kept them, but reordered. Thanks,
Otis
--
SOLR Performance Monitoring - http://sematext.com/spm/index.html
Search Analytics - http://sematext.com/search-analytics/index.html
On Fri, Feb 15, 2013 at 1:31 PM, Walter Underwood wrote:
> Seems like there is no way to change your vo
Seems like there is no way to change your vote. I saw the "... but upgrading"
options at the bottom after I'd already voted.
I would just remove those from the poll. They only complicate things.
wunder
On Feb 15, 2013, at 10:27 AM, Otis Gospodnetic wrote:
> Hi,
>
> I think the subject is self
Hello,
Thanks for the suggestion.
I actually solved the issue using POST and xmlhttp.send(query). However, I think
the most important question is: do I really need to query an object to obtain
its WKT polygon and then send that polygon back to query Solr
geographically? Wouldn't it be perfect if
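For example, right now my second call ends up looking something like this (the field name is from my schema, and it assumes a spatial RPT field type with JTS available):

fq=geo:"Intersects(POLYGON((10 30, 40 40, 20 20, 10 30)))"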
Amazing. Thanks!
On Fri, Feb 15, 2013 at 7:07 PM, Robert Muir wrote:
> For 3.4, extend ReusableAnalyzerBase
>
> On Fri, Feb 15, 2013 at 12:06 PM, Dmitry Kan wrote:
> > Thanks a lot, Robert.
> >
> > I need to study the link you sent a bit more closely. I have tried to
> > override the Ana
Hi,
Trying to find a better approach for searching keywords.
We have about 100K documents indexed in Solr 3.5, and each doc has a
title field per country.
The field "title" is dynamically defined as title.* for about 20 countries.
It's not necessary that each document will have a title for all 20
For 3.4, extend ReusableAnalyzerBase
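Something like this (untested sketch; the tokenizer and filter here are just examples):

import java.io.Reader;

import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.ReusableAnalyzerBase;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.WhitespaceTokenizer;
import org.apache.lucene.util.Version;

public class MyAnalyzer extends ReusableAnalyzerBase {
  @Override
  protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
    // the base class handles per-thread reuse of these components
    Tokenizer source = new WhitespaceTokenizer(Version.LUCENE_34, reader);
    TokenStream filtered = new LowerCaseFilter(Version.LUCENE_34, source);
    return new TokenStreamComponents(source, filtered);
  }
}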
On Fri, Feb 15, 2013 at 12:06 PM, Dmitry Kan wrote:
> Thanks a lot, Robert.
>
> I need to study the link you sent a bit more closely. I have tried to
> override the Analyzer class, but couldn't find a method
> createComponents(String
> fieldName,Reader r
Thanks a lot, Robert.
I need to study the link you sent a bit more closely. I have tried to
override the Analyzer class, but couldn't find a method createComponents(String
fieldName,Reader reader) in LUCENE_34. Instead, there is a method required
to override: tokenStream(String fieldName, Rea
Mark Miller-3 wrote
> On Feb 15, 2013, at 6:04 AM, o.mares <ota.mares+nabble@> wrote:
>
>> Hey when running a solr cloud setup with 4 servers, managing 3 cores each
>> split across 2 shards, what are the proper steps to do a full index
>> import?
>>
>> Do you have to import the index on al
In step 4, once node 1 gets all the responses, it merges and sorts
them. Let's say you requested 15 docs from each shard (because the rows
parameter is 15); at this point node 1 merges the results from all the
responses and keeps the "top 15" across all of them. The second request is
only to get
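A toy illustration of that merge step (this is not the actual Solr code, just the idea):

import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class ShardMergeSketch {
  static class Hit {
    final String id;
    final float score;
    Hit(String id, float score) { this.id = id; this.score = score; }
  }

  // collect the (id, score) pairs returned by every shard, sort by score,
  // and keep the global top `rows`; only those ids are then asked for
  // their stored fields in the second phase
  static List<Hit> mergeTop(List<List<Hit>> perShard, int rows) {
    List<Hit> all = new ArrayList<Hit>();
    for (List<Hit> shardHits : perShard) {
      all.addAll(shardHits);
    }
    Collections.sort(all, new Comparator<Hit>() {
      public int compare(Hit a, Hit b) {
        return Float.compare(b.score, a.score); // descending by score
      }
    });
    return all.subList(0, Math.min(rows, all.size()));
  }
}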
> i *think* you are saying that you want the sum of term frequencies for all
> terms in all matching documents -- but i'm not sure, because i don't see
> how TermVectorComponent is helping you unless you are iterating over every
> doc in the result set (ie: deep paging) to get the TermVectors fo
On Feb 15, 2013, at 6:04 AM, o.mares wrote:
> Hey when running a solr cloud setup with 4 servers, managing 3 cores each
> split across 2 shards, what are the proper steps to do a full index import?
>
> Do you have to import the index on all of the solr instances? Or is it
> sufficient to
Hi,
Fixed the issue with document and formatting.
My schema is like below.
My need is to search only three subject fields and boost those subjects that
have a higher Mark (a Mark can be between 1 and 10).
Again, top subjects will get a higher boost than the preceding
Á_o wrote
> As I see (and I may be wrong) Solr's external file fields are some kind of
> maps, aren't they?
Actually I was wrong ;)
The key does not necessarily have to be the docID. It can be some other
field. Anyway, even in that case, it's still a 'docKey' which I can't see
h
I've seen references to upping the packet limit that your servlet container
allows, but I don't have the details offhand. It's possible that you're never
even getting to Solr; looking at the Solr log and seeing whether anything
gets there when you issue that request should help.
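If memory serves, it's something like the multipartUploadLimitInKB attribute on the requestParsers element in solrconfig.xml, but double-check that against your version:

<requestDispatcher>
  <requestParsers enableRemoteStreaming="false"
                  multipartUploadLimitInKB="2048" />
</requestDispatcher>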
Best
Erick
On Thu, Fe
Hi,
I need to get the filterCache for SOLR-4280. I can create a new issue patching
SolrIndexSearcher and adding the missing caches (non-user caches) to the
cacheMap so they can be returned using getCache(String), but I'm not sure this
is intended. It does work, but is this the right path?
https:
OK, "problem" solved...
In my tests I only reloaded the core "master" and queried the core "slave",
so config changes on "slave" were not in place :-\
Sorry guys!
Kai
It was/is a client issue. The Googlebot executed a request where the search
term was CP1252-encoded; maybe the broken URL got indexed from a third-party site.
We solved the issue by filtering all invalid UTF-8 chars from the search
input string, because there is no proper way to check what encodi
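The filter itself is basically a decode that drops anything invalid, along these lines (a simplified sketch, not our exact code):

import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.Charset;
import java.nio.charset.CodingErrorAction;

public class Utf8Cleaner {
  // decode the raw query bytes as UTF-8, silently dropping malformed
  // and unmappable sequences instead of failing on them
  public static String clean(byte[] rawQueryBytes) {
    try {
      return Charset.forName("UTF-8").newDecoder()
          .onMalformedInput(CodingErrorAction.IGNORE)
          .onUnmappableCharacter(CodingErrorAction.IGNORE)
          .decode(ByteBuffer.wrap(rawQueryBytes))
          .toString();
    } catch (CharacterCodingException e) {
      // cannot happen with IGNORE set, but decode() declares it
      return "";
    }
  }
}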
Hey when running a solr cloud setup with 4 servers, managing 3 cores each
split across 2 shards, what are the proper steps to do a full index import?
Do you have to import the index on all of the solr instances? Or is it
sufficient to perform the import on one instance and it will get
replic
> I tried patching my SOLR 4.1 source , as well as a freshly downloaded
> SOLR trunk, to no avail. I guess I just need some tips on how and what
> to patch. I tried to patch the base directory as well as the lucene
> directory. If there's something I need to hack in the patch, do let
> me know.
T
Query-time boost functions seem to be loaded via local params, while boost
functions defined in solrconfig.xml get added to a request globally. See
QParser.getParam:

public String getParam(String name) {
  String val;
  if (localParams != null) {
    val = localParams.get(name);
    if (val != null) return val;
  }
  // otherwise fall back to the ordinary request params
  return params.get(name);
}
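So a request-time boost through local params looks something like this (assuming a popularity field):

q={!boost b=log(popularity)}ipod

while a boost function configured in solrconfig.xml applies to every request that handler serves.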
Hi Tim!
Thank you for shedding some light ;)
I have read your slides (in fact, I had already read them in the last few days),
but I'm still missing something.
So, let's see...
As I see it (and I may be wrong), Solr's external file fields are some kind of
maps, aren't they? I understand the power of