Hi
I want to unsubscribe from the Solr and Lucene mailing lists, so please do so.
Regards,
Kishan Parmar
Software Developer
+91 95 100 77394
Jay Shree Krishnaa !!
On 2/26/2015 11:14 PM, Damien Kamerman wrote:
I've run into an issue with starting my solr cloud with many collections.
My setup is:
3 nodes (solr 4.10.3 ; 64GB RAM each ; jdk1.8.0_25) running on a single
server (256GB RAM).
5,000 collections (1 x shard ; 2 x replica) = 10,000 cores
1 x
On 27 February 2015 at 12:10, Kishan Parmar kishan@gmail.com wrote:
Hi
I want to unsubscribe from the Solr and Lucene mailing lists, so please do so.
Please follow the standard procedure for unsubscribing from most
mailing lists, and send a mail to
solr-user-unsubscr...@lucene.apache.org .
Oh, and I was wondering if 'leaderVoteWait' might help in Solr4.
On 27 February 2015 at 18:04, Damien Kamerman dami...@gmail.com wrote:
This is going to push SolrCloud beyond its limits. Is this just an
exercise to see how far you can push Solr, or are you looking at setting
up a production
This is going to push SolrCloud beyond its limits. Is this just an
exercise to see how far you can push Solr, or are you looking at setting
up a production install with several thousand collections?
I'm looking towards production.
In Solr 4.x, the clusterstate is one giant JSON structure
On 2/26/2015 11:41 PM, Danesh Kuruppu wrote:
My application is a standalone application. I thought of embedding the Solr
server so I can pack it inside my application.
In Solr 5.0.0, Solr is no longer distributed as a war file. How can I
find the war file in the distribution?
I am glad to see
Thanks shawn,
I am doing some feasibility studies for moving directly to Solr 5.0.0.
One more thing: it is related to the standalone server.
How is security handled in a standalone Solr server? Let's say I configured my
application to use a remote standalone Solr server.
1. How would I enable secure
Hi all,
I need to embed a Solr server in my Maven project. I am going to
use the latest Solr 5.0.0.
I need to know which dependencies to include in my project. As I
understand it, I need solr-core [1] and solr-solrj [2]. Do I need to
include a Lucene dependency in my project? If so,
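A minimal sketch of the Maven dependencies in question, assuming the Solr 5.0.0 artifacts on Maven Central; solr-core pulls in the Lucene artifacts transitively, so an explicit Lucene dependency should not normally be needed:

```xml
<!-- sketch, not a verified build; versions assume Solr 5.0.0 from Maven Central -->
<dependencies>
  <dependency>
    <groupId>org.apache.solr</groupId>
    <artifactId>solr-core</artifactId>
    <version>5.0.0</version>
  </dependency>
  <dependency>
    <groupId>org.apache.solr</groupId>
    <artifactId>solr-solrj</artifactId>
    <version>5.0.0</version>
  </dependency>
</dependencies>
```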
On 2/26/2015 10:07 PM, Danesh Kuruppu wrote:
I need to embed a Solr server in my Maven project. I am going to
use the latest Solr 5.0.0.
I need to know which dependencies to include in my project. As I
understand it, I need solr-core [1] and solr-solrj [2]. Do I need to
include
Hi,
I want to get suggestions for each term/word in a query.
Conditions:
i) The word/term may be either correct or incorrect.
ii) The word/term may have either high or low frequency.
Whatever the condition of the term/word, I need suggestions every time.
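One way to get suggestions even for terms that are correct (i.e. already in the index) is the standard spellcheck parameters below; a sketch only, and the handler name and counts are assumptions:

```xml
<!-- solrconfig.xml sketch: spellcheck suggestions regardless of
     correctness or frequency of the query term -->
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="spellcheck">true</str>
    <!-- also suggest for terms that exist in the index ("correct" words) -->
    <str name="spellcheck.alternativeTermCount">5</str>
    <!-- do not restrict suggestions to more-frequent terms -->
    <str name="spellcheck.onlyMorePopular">false</str>
    <str name="spellcheck.count">5</str>
  </lst>
  <arr name="last-components">
    <str>spellcheck</str>
  </arr>
</requestHandler>
```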
I've run into an issue with starting my solr cloud with many collections.
My setup is:
3 nodes (solr 4.10.3 ; 64GB RAM each ; jdk1.8.0_25) running on a single
server (256GB RAM).
5,000 collections (1 x shard ; 2 x replica) = 10,000 cores
1 x Zookeeper 3.4.6
Java arg -Djute.maxbuffer=67108864 added
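For reference, a sketch of where that flag typically has to be applied: jute.maxbuffer is read by both the ZooKeeper server JVM and every ZooKeeper client JVM (i.e. each Solr node), so setting it on only one side is not enough. Paths and start commands below are assumptions:

```shell
# jute.maxbuffer must match on the ZooKeeper server as well as on
# every Solr node, or large clusterstate reads can still fail.
SERVER_JVMFLAGS="-Djute.maxbuffer=67108864" zookeeper/bin/zkServer.sh start
java -Djute.maxbuffer=67108864 -jar start.jar
```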
Hi,
Only return suggestions that result in more hits for the query
than the existing query
What does the existing query mean in the sentence above for
spellcheck.onlyMorePopular?
What happens when I set spellcheck.onlyMorePopular to true, and what
happens when I set it to false? Any
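As an illustration of where the parameter sits (a sketch; the handler context is an assumption, the semantics paraphrase the quoted documentation):

```xml
<lst name="defaults">
  <str name="spellcheck">true</str>
  <!-- true: only suggest terms that occur more frequently in the index
       than the term the user actually typed ("the existing query"), and
       that would yield more hits; false: suggest regardless of relative
       frequency -->
  <str name="spellcheck.onlyMorePopular">true</str>
</lst>
```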
Thanks Shawn,
My application is a standalone application. I thought of embedding the Solr
server so I can pack it inside my application.
In Solr 5.0.0, Solr is no longer distributed as a war file. How can I
find the war file in the distribution?
I need some advanced features like synonyms search,
I’m sorry, I’m not following exactly.
Somehow you no longer have a gettingstarted collection, but it is not clear how
that happened.
Could you post the exact script steps you used that got you this error?
What collections/cores does the Solr admin show you have? What are the
results
Sorry, I'm afraid I have not encountered such errors when launching.
Something seems wrong around Pivot, but I have no idea about it.
Could you tell me the Java version you're using?
Tomoko
2015-02-26 21:15 GMT+09:00 Dmitry Kan solrexp...@gmail.com:
Thanks, Tomoko, it compiles ok!
Now launching
Hello,
Giving
http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201301.mbox/%3c711daae5-c366-4349-b644-8e29e80e2...@gmail.com%3E
you can add shards.qt to the handler defaults/invariants.
On Thu, Feb 26, 2015 at 5:40 PM, Benson Margulies bimargul...@gmail.com
wrote:
A query I posted
I was hoping that Benson was hinting at adding a shards.qt.auto=true
parameter so that it would magically use the path from the incoming
request - and that this would be the default, since that's what most people
would expect.
Or, maybe just add a commented-out custom handler that has the
Below is the field definition that we used; it's just a basic definition:
<analyzer type="index">
  <tokenizer class="solr.ClassicTokenizerFactory"/>
  <filter class="solr.StopFilterFactory" ignoreCase="true"
          words="stopwords.txt"/>
  <filter class="solr.LowerCaseFilterFactory"/>
  <filter
What is the best backup and restore strategy for Solr 3.6.1?
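One common approach on the 3.x line (a sketch, not a complete strategy; the handler config and URL host/core are assumptions) is the ReplicationHandler's backup command:

```xml
<!-- solrconfig.xml: enable the replication handler (Solr 3.x) -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <!-- optionally snapshot automatically after each optimize -->
    <str name="backupAfter">optimize</str>
  </lst>
</requestHandler>
<!-- trigger a snapshot manually:
       http://localhost:8983/solr/replication?command=backup
     restore on 3.x is manual: stop Solr and copy the snapshot.* directory
     back over the core's index directory -->
```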
Oh, I see. I used the start -e cloud command, then ran through a setup with
one core and default options for the rest, then tried to post the json
example again, and got another error:
ubuntu@ubuntu-VirtualBox:~/crawler/solr$ bin/post -c gettingstarted
example/exampledocs/*.json
What data did you have in the 1.3 version? Because the bin/solr -e techproducts
process only indexes 30+ docs total. So if your 1.3 installation is
returning more docs, as your note seems to imply, you somehow have a lot more docs indexed.
There is no mention of hotel in any of the sample docs that
I’ll be working on this at some point:
https://issues.apache.org/jira/browse/SOLR-6237
- Mark
http://about.me/markrmiller
On Feb 25, 2015, at 2:12 AM, longsan longsan...@sina.com wrote:
We used HDFS as our Solr index storage and we really have a heavy update
load. We ran into many problems
Hi
Thanks for your quick reply.
since your time_to_live_s and expire_at_dt fields are both
stored, can you confirm that the expire_at_dt field is getting populated by
the update processor by doing a simple query for your doc (i.e.
q=id:10seconds)?
No, the expire_at_dt field does not get populated
Alex,
Same results on recursive=true / recursive=false.
I also tried importing plain text files instead of epub (still using
TikaEntityProcessor though) and get exactly the same result - i.e. all
files fetched, but only one document indexed in Solr.
With verbose output, I get a row for each
On 2/26/2015 12:11 AM, Nitin Solanki wrote:
Why is Solr taking so much time to start all nodes/ports?
Very slow Solr startup is typically caused by one of two things. Both
are described here:
https://wiki.apache.org/solr/SolrPerformanceProblems#Slow_startup
There could be
Alex,
That's great. Thanks for the pointers. I'll try and get more info on
this and file a JIRA issue.
Kind regards,
Gary.
On 26/02/2015 14:16, Alexandre Rafalovitch wrote:
On 26 February 2015 at 08:32, Gary Taylor g...@inovem.com wrote:
Alex,
Same results on recursive=true /
A query I posted yesterday amounted to me forgetting that I have to
set shards.qt when I use a URL other than plain old '/select' with
SolrCloud. Is there any way to configure a query handler to automate
this, so that all queries addressed to '/RNI' get that added in?
If your expire_at_dt field is not populated automatically, let's step
back and recheck a sanity setting. You said it is a managed schema? Is
it a schemaless as well? With an explicit processor chain? If that's
the case, your default chain may not be running AT ALL.
So, recheck your
Sure, it is:
java version 1.7.0_76
Java(TM) SE Runtime Environment (build 1.7.0_76-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.76-b04, mixed mode)
On Thu, Feb 26, 2015 at 2:39 PM, Tomoko Uchida tomoko.uchida.1...@gmail.com
wrote:
Sorry, I'm afraid I have not encountered such errors when
How did you start Solr? If you started with `bin/solr start -e cloud` you’ll
have a gettingstarted collection created automatically, otherwise you’ll need
to create it yourself with `bin/solr create -c gettingstarted`
—
Erik Hatcher, Senior Solutions Architect
http://www.lucidworks.com
On 26 February 2015 at 08:32, Gary Taylor g...@inovem.com wrote:
Alex,
Same results on recursive=true / recursive=false.
I also tried importing plain text files instead of epub (still using
TikaEntityProcessor though) and get exactly the same result - i.e. all files
fetched, but only one
Hi Alex,
Thanks for the reply.
Yes, we have already tried setting autoDeletePeriodSeconds to a low
value like 5 seconds and tried checking the document expiration
after 30 seconds, a minute, or even an hour. But the result is the same and
the document does not get expired automatically.
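For comparison, a sketch of the expiration setup being discussed, using the field names from this thread; the chain name is an assumption, and (per the schemaless caveat raised in the thread) the chain must actually be the one your updates run through:

```xml
<updateRequestProcessorChain name="expire-docs" default="true">
  <processor class="solr.processor.DocExpirationUpdateProcessorFactory">
    <!-- how often the background delete of expired docs runs -->
    <int name="autoDeletePeriodSeconds">5</int>
    <str name="ttlFieldName">time_to_live_s</str>
    <str name="expirationFieldName">expire_at_dt</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```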
: If your expire_at_dt field is not populated automatically, let's step
: back and recheck a sanity setting. You said it is a managed schema? Is
: it a schemaless as well? With an explicit processor chain? If that's
: the case, your default chain may not be running AT ALL.
yeah ... my only
Great! Thank you!
I had a 4 shard setup - no replicas. Index size was 2.0TBytes stored in
HDFS with each node having approximately 500G of index. I added four
more shards on four other machines as replicas. One thing that happened
was the 4 replicas all ran out of HDFS cache size
This is very, very strange. How are you indexing the docs? SolrJ? XML? DIH?
What happens if you _only_ index the doc?
You say nothing comes out in the log file indicating an error, but
what _does_ come out? Particularly at the end?
And in general note that attachments don't come through the
Hi everybody,
I have a very strange issue in my Solr (version 4.10.2) installation (on
Windows Server 2008, Java JDK 1.7).
I am sure nobody has met this problem before (at least, Googling around I found
nothing). I have a simple configuration with a basic text_general field. I
was able to index
All,
I am currently using 4.10.3 running Solr Cloud.
I have configured my index analyzer to leverage the
solr.ReversedWildcardFilterFactory with various settings for the
maxFractionAsterisk, maxPosAsterisk,etc. Currently I am running with the
defaults (ie not configured)
Using the Analysis
Please post your field type... or at least confirm a comparison to the
example in the javadoc:
http://lucene.apache.org/solr/4_10_3/solr-core/org/apache/solr/analysis/ReversedWildcardFilterFactory.html
-- Jack Krupansky
On Thu, Feb 26, 2015 at 2:38 PM, jaime spicciati jaime.spicci...@gmail.com
Most of the magic is done internal to the query parser which actually
inspects the index analyzer chain when a leading wildcard is present. Look
at the parsed_query in the debug response, and you should see that special
prefix query.
-- Jack Krupansky
On Thu, Feb 26, 2015 at 3:49 PM, jaime
Thanks for the quick response.
The index I am currently testing with has the following configuration which
is the default for the text_general_rev
The field type is solr.TextField
maxFractionAsterisk=.33
maxPosAsterisk=3
maxPosQuestion=2
withOriginal=true
Through additional review I think it
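A sketch of the index-side analyzer being discussed, using the values quoted above; the tokenizer and query-side analyzer are assumptions:

```xml
<fieldType name="text_general_rev" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- indexes reversed tokens so leading-wildcard queries can be
         rewritten into prefix queries by the query parser -->
    <filter class="solr.ReversedWildcardFilterFactory" withOriginal="true"
            maxPosAsterisk="3" maxPosQuestion="2" maxFractionAsterisk="0.33"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```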
Hi Tom,
Thanks for your inputs.
I was planning to use the stopword filter, but will definitely make sure they are
unique and do not step over each other. I think for our system even going with a
length of 50-75 should be fine; I will definitely tune that number after doing some
analysis on our input.
Hi,
I thought that we were using the edismax query parser, but it seems that
we had configured the dismax parser.
I have made some tests with the edismax parser and it works fine, so I'll
change it in our production Solr.
Regards,
David Dávila
DIT - 915828763
De: Alvaro Cabrerizo
Thank you for your replies; I added q and it works! I agree the examples are
a bit confusing. It also turned out that points are clustered around the
center, and I had to increase d as well.
On Wed, Feb 25, 2015 at 11:46 PM, Alexandre Rafalovitch arafa...@gmail.com
wrote:
In the examples it used to
Hi, I've just installed Solr (I'll be controlling it with Solarium and using
it to search Nutch crawls). I'm working through the starting tutorials
described here:
https://cwiki.apache.org/confluence/display/solr/Running+Solr
When I try to run $ bin/post -c gettingstarted example/exampledocs/*.json,
Thank you for checking it out!
Sorry, I forgot to note some important information...
The Ivy jar is needed to compile. The packaging process needs to be organized, but
for now, I'm borrowing it from Lucene's tools/lib.
In my environment, Fedora 20 and OpenJDK 1.7.0_71, it can be compiled and
run as
Hi,
I am new to Solr and using the Solr 5.0.0 search server. After installing, when
I search for any keyword in Solr 5.0.0 it does not give any results
back. But when I was using a previous version of Solr (1.3.0) (previously
installed) it gave results for every queried keyword.
Thanks, Tomoko, it compiles ok!
Now launching produces some errors:
$ java -cp dist/* org.apache.lucene.luke.ui.LukeApplication
Exception in thread main java.lang.ExceptionInInitializerError
at org.apache.lucene.luke.ui.LukeApplication.main(Unknown Source)
Caused by:
Does a query for *:* return all documents? Pick one of those documents and
try a query using a field name and the value of that field for one of the
documents and see if that document is returned.
Maybe you skipped a step in the tutorial process or maybe there was an
error that you ignored.
Hello,
I'm deploying Solr5.0.0. on Windows 2008 server.
I'm planning to add a task to the task scheduler to start the Solr server at
system boot time.
So I call bin\solr.bat start with options from the task scheduler.
Is this the preferred method on Windows? Because I read that running under
I apparently am feeling dense; the following does not work:
<requestHandler name="/RNI" class="solr.SearchHandler" default="false">
  <lst name="defaults">
    <str name="shards.qt">/RNI</str>
  </lst>
  <arr name="components">
    <str>name-indexing-query</str>
    <str>name-indexing-rescore</str>
Hi,
I've got a weird situation. Since yesterday's restart, I've had an issue with log encoding.
My log looks like:
DEBUG - 2015-02-27 10:47:01.432;
If I'm reading your suggestion right, Tim fixed this for 5.1 with
http://issues.apache.org/jira/browse/SOLR-6311
On Thu, Feb 26, 2015 at 10:03 PM, Jack Krupansky jack.krupan...@gmail.com
wrote:
I was hoping that Benson was hinting at adding a shards.qt.auto=true
parameter so that it would
Hi Benson,
Do not use shards.qt with a leading '/'. See
https://issues.apache.org/jira/browse/SOLR-3161 for details. Also note that
shards.qt will not be necessary with 5.1 and beyond because of SOLR-6311
On Fri, Feb 27, 2015 at 8:16 AM, Benson Margulies bimargul...@gmail.com
wrote:
I
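Following that advice, a sketch of the corrected handler: register it with the leading slash, but reference it in shards.qt without one (component names are the ones from the original config):

```xml
<requestHandler name="/RNI" class="solr.SearchHandler">
  <lst name="defaults">
    <!-- no leading '/' here, per SOLR-3161 -->
    <str name="shards.qt">RNI</str>
  </lst>
  <arr name="components">
    <str>name-indexing-query</str>
    <str>name-indexing-rescore</str>
  </arr>
</requestHandler>
```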