...@sloan.mit.edu wrote:
You can update the document in the index quite frequently. I don't know what
your requirement is; another option would be to boost at query time.
On Sun, Jan 22, 2012 at 5:51 AM, Bing Li lbl...@gmail.com wrote:
Dear Shashi,
Thanks so much for your reply!
However, I think the value
Kant sk...@sloan.mit.edu wrote:
Lucene has a mechanism to boost up/down documents using your custom
ranking algorithm. So if you come up with something like Pagerank
you might do something like doc.SetBoost(myboost), before writing to index.
On Sat, Jan 21, 2012 at 5:07 PM, Bing Li lbl
Dear all,
I am using SolrJ to implement a system that needs to provide users with
searching services. I have some questions about Solr searching as follows.
As far as I know, Lucene retrieves data according to the degree of keyword
matching on a text field (partial matching).
But, if I search data by
data you have
got.
Sent from my iPad
On Jan 21, 2012, at 1:33 PM, Bing Li lbl...@gmail.com wrote:
Dear all,
I am using SolrJ to implement a system that needs to provide users with
searching services. I have some questions about Solr searching as
follows.
As I know, Lucene
Dear all,
I have a question about sorting data retrieved from Solr. As far as I know, Lucene
retrieves data according to the degree of keyword matching on a text field
(partial matching).
If I search data by a string field (complete matching), how does Lucene sort
the retrieved data?
If I add some filters,
Hi, there,
I am trying to look into the performance impact of data replication on
query response time. To get a clear picture, I would like to know how
to get the size of the data being replicated for each commit. Through the
admin UI, you may read that x of y GB of data is being replicated; however,
y is
Modify catalina.sh (or catalina.bat on Windows),
adding the Java startup parameter:
-Dsolr.solr.home=/your/path
On Mon, Oct 31, 2011 at 8:30 PM, 刘浪 liu.l...@eisoo.com wrote:
Hi,
After I start Tomcat, I input http://localhost:8080/solr/admin. It
displays fine. But in the Tomcat log, I find an exception like Can't find
set JAVA_OPTS=%JAVA_OPTS% -Dsolr.solr.home=c:\xxx
On Mon, Oct 31, 2011 at 9:14 PM, 刘浪 liu.l...@eisoo.com wrote:
Hi Li Li,
I don't know where I should add it in catalina.bat. I know how
to do it on Linux, but my OS is Windows.
Thank you very much.
Sincerely,
Amos
We have implemented one supporting "did you mean" and prefix suggestion
for Chinese. But we based our work on Solr 1.4 and made many
modifications, so it will take time to integrate it into current Solr/Lucene.
Here is our solution; glad to hear any advice.
1. offline words and
For indexing, you can make use of multiple cores easily by calling
IndexWriter.addDocument from multiple threads.
As far as I know, for searching, if there is only one request, you can't
make good use of the CPUs.
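A minimal sketch of the multi-threaded feeding pattern described above, with a stand-in addDocument (a plain counter here; real code would share one thread-safe Lucene IndexWriter among the workers):

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ParallelFeeder {
    // Stand-in for IndexWriter.addDocument, which is thread-safe in Lucene;
    // here we only count the calls.
    private static final AtomicInteger indexed = new AtomicInteger();

    private static void addDocument(String doc) {
        indexed.incrementAndGet();
    }

    // Feed all docs through a fixed pool of worker threads sharing one "writer".
    public static int feed(List<String> docs, int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (String doc : docs) {
            pool.submit(() -> addDocument(doc));
        }
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return indexed.get();
    }
}
```

The pool size would normally match the number of CPU cores available for indexing.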
On Sat, Oct 15, 2011 at 9:37 PM, Rob Brown r...@intelcompute.com wrote:
Hi,
I'm running Solr
hi
I suggest you set up an environment and test it yourself; 1M documents is no problem at all.
2011/9/30 秦鹏凯 qinpeng...@yahoo.cn:
Hi all,
Now I'm doing research on Solr distributed search, and it
is said that with more than one million documents it is reasonable to use
distributed search.
So I want to know: does anyone have test
results (such as time cost)
anything.
How can I generate a dump otherwise to see, why solr hangs?
Regards,
Rohit
--
Best Regards.
Jerry. Li | 李宗杰
hi all,
I am using spellcheck in Solr 1.4. I found that spell check is not
implemented the way SolrCore is. SolrCore uses a reference count to track the
current searcher: the old searcher and the new searcher will both exist while the old searcher
is still servicing some query. But in FileBasedSpellChecker
public void
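A minimal sketch of the reference-counting idea described above (invented class and method names; SolrCore's actual implementation differs in detail):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RefCountedSearcher {
    private final AtomicInteger refCount = new AtomicInteger(1); // the core holds one ref
    private volatile boolean closed = false;

    // A query thread takes a reference before using the searcher.
    public RefCountedSearcher incref() {
        refCount.incrementAndGet();
        return this;
    }

    // Releasing the last reference closes the searcher, so an old searcher
    // survives until every in-flight query against it has finished.
    public void decref() {
        if (refCount.decrementAndGet() == 0) {
            closed = true; // real code would close the underlying IndexReader here
        }
    }

    public boolean isClosed() {
        return closed;
    }
}
```

This is why oldSearcher and newSearcher can coexist: the old one is only closed once its count drops to zero.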
hi all
I am interested in a vertical crawler. But it seems this project is not
very active; its last update was 11/16/2009.
hi all,
I tested it following the instructions in
http://wiki.apache.org/solr/SpellCheckComponent, but it seems something is
wrong.
the sample url in the wiki is
hi all,
I followed the wiki http://wiki.apache.org/solr/SpellCheckComponent
but there is something wrong.
The URL given by the wiki is
this may need something like language models to suggest.
I found an issue https://issues.apache.org/jira/browse/SOLR-2585
what's going on with it?
On Thu, Aug 18, 2011 at 11:31 PM, Valentin igorlacro...@gmail.com wrote:
I'm trying to configure a spellchecker to autocomplete full sentences
directly, not in url, but should
work the same.
Maybe an issue in your spell request handler.
2011/8/19 Li Li fancye...@gmail.com
hi all,
I follow the wiki http://wiki.apache.org/solr/SpellCheckComponent
but there is something wrong.
the URL given by the wiki is
http://solr:8983/solr
I haven't used suggest yet. But in spell check, if you don't
provide spellcheck.q, it will
analyze the q parameter with a query converter, which tokenizes your query;
otherwise it will use the field's analyzer to process the parameter.
If you don't want your query tokenized, you should pass spellcheck.q
NullPointerException? Do you have the full exception stack trace?
On Fri, Aug 19, 2011 at 6:49 PM, Valentin igorlacro...@gmail.com wrote:
Li Li wrote:
If you don't want to tokenize query, you should pass spellcheck.q
and provide your own analyzer such as keyword analyzer.
That's already
Line 476 of SpellCheckComponent.getTokens of mine is assert analyzer != null;
It seems our code versions don't match. Could you decompile your
SpellCheckComponent.class?
On Fri, Aug 19, 2011 at 7:23 PM, Valentin igorlacro...@gmail.com wrote:
My beautiful NullPointer Exception :
SEVERE:
Or is your analyzer null? Any other exceptions or warnings in your log file?
On Fri, Aug 19, 2011 at 7:37 PM, Li Li fancye...@gmail.com wrote:
Line 476 of SpellCheckComponent.getTokens of mine is assert analyzer !=
null;
It seems our code versions don't match. Could you decompile your
hi all,
I read the Apache Solr 3.1 release notes today and found that
MMapDirectory is now the default implementation on 64-bit systems.
I am now using Solr 1.4 with a 64-bit JVM on Linux. How can I use
MMapDirectory? Will it improve performance?
NIOFSDir. I'm pretty sure in trunk/4.0 it's the default for Windows and
maybe Solaris. On Windows, there is a definite advantage to using
MMapDirectory on a 64-bit system.
James Dyer
E-Commerce Systems
Ingram Content Group
(615) 213-4311
-Original Message-
From: Li Li
I am asking the question again; hope someone knows the answer. Basically I
just don't want the boost score (generated from the formula) to be
normalized. Can I do this without hacking the source code?
Elaine
On Wed, Jul 20, 2011 at 3:07 PM, Elaine Li elaine.bing...@gmail.com wrote:
Hi Folks
Hi Folks,
My boost function is bf=div(product(num_clicks,0.3),sum(num_clicks,25)).
I would like to add its score directly to the final score instead of
letting it be normalized by the queryNorm value.
Is there any way to do it?
Thanks.
Elaine
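For what it's worth, that bf evaluates to (num_clicks * 0.3) / (num_clicks + 25), which grows toward a cap of 0.3 and so is easily swamped once queryNorm rescales it. A quick standalone check (my own sketch, not Solr code):

```java
public class BoostFn {
    // bf=div(product(num_clicks,0.3),sum(num_clicks,25))
    static double bf(double numClicks) {
        return (numClicks * 0.3) / (numClicks + 25.0);
    }

    public static void main(String[] args) {
        System.out.println(bf(0));         // no clicks, no boost
        System.out.println(bf(25));        // halfway to the 0.3 cap
        System.out.println(bf(1_000_000)); // approaches the asymptote
    }
}
```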
Hi,
I am trying to trace the exception I get from the deletedPkQuery I am
running.
When I kick off the delta-import, the statusMessage has the following
message after 2 hours, but not a single document was modified or deleted:
<str name="Total Rows Fetched">2813450</str>
and then it bailed out when I
, Elaine Li elaine.bing...@gmail.com
wrote:
Hi Folks,
I am trying to use the deletedPkQuery to enable deltaImport to remove
the inactive products from Solr.
I keep getting a syntax error saying the query syntax is not
right. I have tried many alternatives to the following query
Hi Folks,
I am trying to use the deletedPkQuery to enable deltaImport to remove
the inactive products from Solr.
I keep getting a syntax error saying the query syntax is not
right. I have tried many alternatives to the following query, although
all of them work at the mysql prompt
Hi Folks,
I need to filter out results where the document's field activated = false.
I tried the following in order to retrieve similar products which are
all activated products, but this does not work:
http://localhost:8983/solr/mlt?q=(id:2043144 AND activated:true)&mlt.count=10
Can you
Thanks for the link. Using the fq with MLT request handler works.
Elaine
On Tue, Jul 12, 2011 at 12:55 PM, Koji Sekiguchi k...@r.email.ne.jp wrote:
(11/07/13 1:40), Elaine Li wrote:
Hi Folks,
I need to filter out the returns if the document's field activated =
false.
I tried
Guan and Koji, thank you both!
After I changed to termVectors=true, it returns the results as expected.
I flipped stored=true|false for two fields, text and category_text,
and compared the results; I don't see any difference. The
documentation seems to suggest having stored=true for the
will take.
My 'text' field (a product's description) can be very long; it can be
paragraphs 2K long.
Is it crazy to create term vectors out of it?
Elaine
On Fri, Jul 8, 2011 at 11:08 AM, Elaine Li elaine.bing...@gmail.com wrote:
Guan and Koji, thank you both!
After I changed to termVectors = true
Hi Folks,
This is my configuration for mlt in solrconfig.xml:
<requestHandler name="/mlt" class="org.apache.solr.handler.MoreLikeThisHandler">
  <lst name="defaults">
    <str name="mlt.fl">name,text,category_text</str>
    <int name="mlt.mintf">2</int>
    <int name="mlt.mindf">1</int>
    <int name="mlt.minwl">3</int>
hi all,
I want to provide full-text searching for some small websites.
It seems cloud computing is popular now, and it will save costs
because it doesn't require employing an engineer to maintain
the machines.
For now, there are many services such as amazon s3, google app
engine, ms azure etc. I am
Can you post the data-config.xml? Probably you didn't use batchSize.
Sent from my iPhone
On Apr 21, 2011, at 5:09 PM, Scott Bigelow eph...@gmail.com wrote:
Thanks for the e-mail. I probably should have provided more details,
but I was more interested in making sure I was approaching the
Looks like dependencies. Did you or he include the dependencies in the
solrconfig?
Sent from my iPhone
On Apr 19, 2011, at 8:35 AM, Oleg Tikhonov o...@apache.org wrote:
Hello everybody,
Recently, I got a message from a guy who was asking about
TikaEntityProcessor.
He uses Solr 1.4 and
Hello guys, how do you output the Solr data to a frontend? I know you
have 30M documents. Are you writing an application to do it, or
are you using a CMS with Solr integration? Thanks
@cominvent.com wrote:
Hi Li,
Who are you referring to in your question, having 30M docs?
Solr can be integrated in tons of different ways. Perhaps if you
describe your use case and requirements, we can suggest the best way for
your particular situation. Please elaborate on what you
You should just ask me.
Sent from my iPhone
On Apr 13, 2011, at 11:27 AM, soumya rao soumrao...@gmail.com wrote:
Thanks for the reply Josh.
And where should I make changes in ruby to add filters?
Soumya
On Wed, Apr 13, 2011 at 11:20 AM, Joshua Bouchair
Hey guys, how do you curl-update all the XML files inside a folder, from A-D?
Example: curl http://localhost:8080/solr update
Sent from my iPhone
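One approach is to select the matching files in code and post each one to /solr/update individually; a sketch of the selection step (hypothetical helper, plain Java, filenames and URL are the ones from the question):

```java
import java.util.ArrayList;
import java.util.List;

public class SelectDocs {
    // Filter file names down to .xml files starting with a-d (case-insensitive).
    // Each survivor would then be posted one by one, e.g.
    //   curl "http://localhost:8080/solr/update" -H "Content-Type: text/xml" --data-binary @file.xml
    static List<String> xmlFilesAtoD(String[] names) {
        List<String> out = new ArrayList<>();
        for (String n : names) {
            String lower = n.toLowerCase();
            if (lower.endsWith(".xml") && lower.charAt(0) >= 'a' && lower.charAt(0) <= 'd') {
                out.add(n);
            }
        }
        return out;
    }
}
```

The names themselves could come from File.list() on the folder.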
I have 1 master and 2 slaves set up with 1.30 collection distribution. My
frontend web application queries the master; do I need to change any
code in the web application to query the slaves, or does the master
request queries from the slaves automatically? Please help, thanks.
On Apr 12, 2011, at 11:47 AM, Erick Erickson erickerick...@gmail.com wrote:
Yes. You need to put, say, a load balancer in front of your slaves
and distribute the requests to the slaves.
Best
Erick
On Tue, Apr 12, 2011 at 2:20 PM, Li Tan litan1...@gmail.com wrote:
I have 1 master, and 2
I have 1 master and 3 slaves. The master holds the Solr index. How do I
connect the slaves to the master? I have the script in the bin folders. I
have rsyncd installed and snapshooter enabled on the master. Thanks; please
help.
post.jar only supports UTF-8; you must do the conversion yourself.
2011/4/1 Jan Høydahl jan@cominvent.com:
Hi,
Testing the new Solr 3.1 release under Windows XP and Java 1.6.0_23
When trying to post example\exampledocs\gb18030-example.xml using post.jar I
get this error:
% java -jar post.jar
There are 3 conditions that will trigger an auto flush in Lucene:
1. the size of the index in RAM is larger than the RAM buffer size;
2. the number of documents in memory is larger than the number set by setMaxBufferedDocs;
3. the number of deleted terms is larger than the value set by
setMaxBufferedDeleteTerms.
auto flushing by
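The three triggers can be condensed into one predicate; this is a sketch with invented names (Lucene's actual flush logic lives inside IndexWriter), not real Lucene code:

```java
public class FlushPolicy {
    final long ramBufferBytes;        // cf. IndexWriter's setRAMBufferSizeMB
    final int maxBufferedDocs;        // cf. setMaxBufferedDocs
    final int maxBufferedDeleteTerms; // cf. setMaxBufferedDeleteTerms

    FlushPolicy(long ramBufferBytes, int maxBufferedDocs, int maxBufferedDeleteTerms) {
        this.ramBufferBytes = ramBufferBytes;
        this.maxBufferedDocs = maxBufferedDocs;
        this.maxBufferedDeleteTerms = maxBufferedDeleteTerms;
    }

    // Auto flush fires when any one of the three limits is exceeded.
    boolean shouldFlush(long ramUsedBytes, int bufferedDocs, int bufferedDeleteTerms) {
        return ramUsedBytes > ramBufferBytes
                || bufferedDocs > maxBufferedDocs
                || bufferedDeleteTerms > maxBufferedDeleteTerms;
    }
}
```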
Has the master updated the index during replication?
This could occur when it failed to download a file because of a network problem.
209715200!=583644834 means the size of the file the slave should fetch is 583644834 bytes
but it only downloaded 209715200 bytes; maybe the connection timed out.
2011/2/16 Markus Jelsma
That's a job your analyzer should handle.
2011/3/17 Andy angelf...@yahoo.com:
Hi,
For my Solr server, some of the query strings will be in Asian languages such
as Chinese or Japanese.
For such query strings, would the Standard or Dismax request handler work? My
understanding is that
Will UseCompressedOops be useful? For an application using less than 4GB of
memory, it will be better than 64-bit references. But for an application
using more memory, it will not be as cache friendly.
JRockit: The Definitive Guide says: Naturally, 64 GB isn't a
theoretical limit but just an example. It was
hi
It seems my mail was judged as spam.
Technical details of permanent failure:
Google tried to deliver your message, but it was rejected by the recipient
domain. We recommend contacting the other email provider for further
information about the cause of this error. The error that the other
to use
some synchronization mechanism to allow only 1 or 2 ReplicationHandler
threads to execute the CMD_GET_FILE command.
Is that solution feasible?
2011/3/11 Li Li fancye...@gmail.com
hi
it seems my mail is judged as spam.
Technical details of permanent failure:
Google tried to deliver
, in most Internet systems, the amount of mutable data is much
less than that of immutable data.
What do you think of my solution?
Best,
LB
On Sat, Mar 5, 2011 at 2:45 AM, Michael McCandless
luc...@mikemccandless.com wrote:
On Fri, Mar 4, 2011 at 10:09 AM, Bing Li lbl...@gmail.com wrote
results I need.
I guess the operation's performance is lower than a relational database's, right?
Could you please give me an explanation of that?
Best regards,
Li Bing
Dear Lance,
Could you tell me where I can find the unit test code?
I appreciate your help so much!
Best regards,
LB
On Sat, Jan 22, 2011 at 3:58 PM, Lance Norskog goks...@gmail.com wrote:
The unit tests are simple and show the steps.
Lance
On Fri, Jan 21, 2011 at 10:41 PM, Bing Li
Dear all,
I started to learn how to use Solr three months ago, so my experience is
still limited.
Now I crawl Web pages with my crawler and send the data to a single Solr
server. It runs fine.
Since the number of potential users is large, I decided to scale Solr. After
configuring replication, a single
- the fieldcache - that can't be
commented out. This cache will always jump into the picture.
If I need to do such things, I restart the whole tomcat6 server to flush ALL
caches.
2011/2/11 Li Li fancye...@gmail.com
Do you mean queryResultCache? You can comment out the related section
Dear all,
I need to build a site which supports searching over a large index. I
think scaling Solr is required. However, I haven't found a tutorial which helps
me do that step by step. I only have two resources as references, but both
of them fail to tell me the exact operations.
1)
Do you mean queryResultCache? You can comment out the related section in
solrconfig.xml.
see http://wiki.apache.org/solr/SolrCaching
2011/2/8 Isan Fulia isan.fu...@germinait.com:
Hi,
My solrConfig file looks like
<config>
  <updateHandler class="solr.DirectUpdateHandler2" />
  <requestDispatcher
Dear Adam,
I also got the OutOfMemory exception. I changed JAVA_OPTS in catalina.sh
as follows.
...
if [ -z "$LOGGING_MANAGER" ]; then
  JAVA_OPTS="$JAVA_OPTS -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager"
else
JAVA_OPTS=$JAVA_OPTS -server -Xms8096m -Xmx8096m
Dear all,
I got an exception when querying the index within Solr. It told me that too
many files are open. How can I handle this problem?
Thanks so much!
LB
[java] org.apache.solr.client.solrj.SolrServerException: java.net.SocketException: Too many open files
[java] at
Dear all,
I ran into a weird problem. The number of matching documents is much more than
10. However, the size of the SolrDocumentList is 10, while getNumFound() is the
exact count of results. When I iterate the results as follows, only
10 are displayed. How can I get the remaining ones?
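Solr returns only rows documents per response (10 by default) even when getNumFound() is larger; you get the rest by paging with the start parameter. A sketch of the offset arithmetic (plain Java, hypothetical helper; real code would pass each offset to SolrQuery.setStart alongside SolrQuery.setRows):

```java
import java.util.ArrayList;
import java.util.List;

public class Pager {
    // Compute the sequence of start offsets needed to walk numFound results
    // rows at a time, issuing one query per offset.
    static List<Integer> startOffsets(long numFound, int rows) {
        List<Integer> starts = new ArrayList<>();
        for (long start = 0; start < numFound; start += rows) {
            starts.add((int) start);
        }
        return starts;
    }
}
```

Each query then returns at most rows documents, and the loop stops once start passes getNumFound().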
:
The unit tests are simple and show the steps.
Lance
On Fri, Jan 21, 2011 at 10:41 PM, Bing Li lbl...@gmail.com wrote:
Hi, all,
In the past, I always used SolrNet to interact with Solr. It works great.
Now, I need to use SolrJ. I think it should be easier to do that than
SolrNet since
Hi, all,
In the past, I always used SolrNet to interact with Solr. It works great.
Now, I need to use SolrJ. I think it should be easier to use than
SolrNet, since Solr and SolrJ should be homogeneous. But I cannot find a
tutorial that is easy to follow. No tutorials explain the SolrJ
Hi, all,
Now I cannot search the index when querying with Chinese keywords.
Before using Solr, I used Lucene for some time. Since I need to crawl
some Chinese sites, I used ChineseAnalyzer in the code that ran Lucene.
I know Solr is a server for Lucene. However, I have no idea how to
Dear all,
After reading some pages on the Web, I created the index with the following
schema.
..
<fieldtype name="text" class="solr.TextField"
  positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer
Dear Jelsma,
My servlet container is Tomcat 7. I think it should accept Chinese
characters, but I am not sure how to configure it. From the console of
Tomcat, I saw that the Chinese characters in the query are not displayed
normally. However, it is fine on the Solr Admin page.
I am not sure
Dear Jelsma,
After configuring the Tomcat URIEncoding, Chinese characters are
processed correctly. I appreciate your help so much!
Best,
LB
On Wed, Jan 19, 2011 at 3:02 AM, Markus Jelsma
markus.jel...@openindex.iowrote:
Hi,
Yes, but Tomcat might need to be configured to accept it; see
or
http://localhost:8080/myindex/update?stream.body=<optimize/>
Thanks.
On Mon, Dec 27, 2010 at 7:12 AM, Li Li fancye...@gmail.com wrote:
Maybe you can consult the log files; they may show you something.
BTW, how do you post your command?
Do you use curl 'http://localhost:8983/solr/update?optimize
See maxMergeDocs (maxMergeSize) in solrconfig.xml. If a segment's
document count is larger than this value, it will not be merged.
2010/12/27 Rok Rejc rokrej...@gmail.com:
Hi all,
I have created an index, committed the data, and after that I ran
optimize with default parameters:
Maybe you can consult the log files; they may show you something.
BTW, how do you post your command?
Do you use curl 'http://localhost:8983/solr/update?optimize=true',
or post an XML file?
2010/12/27 Rok Rejc rokrej...@gmail.com:
On Mon, Dec 27, 2010 at 3:26 AM, Li Li fancye...@gmail.com wrote
I think it will not, because the default configuration can only have 2
newSearcher threads, but the delay will grow longer and longer. The
newer newSearcher will wait for the 2 earlier ones to finish.
2010/12/1 Jonathan Rochkind rochk...@jhu.edu:
If your index warmings take longer than two minutes, but
write the document
into a log file,
and after flushing, we delete the corresponding lines in the log file.
If the program crashes, we replay the log and add the documents back into the RAMDirectory.
Has anyone done similar work?
2010/12/1 Li Li fancye...@gmail.com:
you may implement your own MergePolicy to keep
indexversion returned by the indexversion command is 0 while the
same information from the details command is 292192351652 ...
This only happens on a slave machine. For a master machine,
indexversion returns the same number as the details command.
On Mon, Dec 13, 2010 at 11:06 AM, Ralf Mattes
Did you double check
http://machine:port/solr/website/admin/replication/ to see that the
master is indeed a master?
On Mon, Dec 13, 2010 at 1:01 PM, Ralf Mattes r...@seid-online.de wrote:
On Mon, 13 Dec 2010 12:31:27 -0500, Xin Li wrote:
indexversion returned by the indexversion command is 0 while
the indexes are put under
$TOMCAT_HOME/bin. This is NOT what I expected; I want the indexes to be under
SolrHome.
Could you please give me a hand?
Best,
Bing Li
numbers although the replication handler's
source code seems to agree with you judging from the comments.
On Monday 06 December 2010 17:49:16 Xin Li wrote:
I think this is expected behavior. You have to issue the details
command to get the real indexversion for slave machines.
Thanks,
Xin
I think this is expected behavior. You have to issue the details
command to get the real indexversion for slave machines.
Thanks,
Xin
On Mon, Dec 6, 2010 at 11:26 AM, Markus Jelsma
markus.jel...@openindex.io wrote:
Hi,
The indexversion command in the replicationHandler on slave nodes returns
Dear all,
I am a new user of Solr, and I am just trying some basic samples.
Solr starts correctly with Tomcat.
However, when putting a new schema.xml under SolrHome/conf and starting
Tomcat again, I got the following two exceptions.
Solr cannot be started correctly unless
wish to import the Lucene indexes into Solr, are there any other
approaches? I know that Solr is a serverized Lucene.
Thanks,
Bing Li
For Solr replication, we can send a command to disable replication. Does
anyone know where I can verify the replication enabled/disabled
setting? I cannot seem to find it on the dashboard or in the details command
output.
Thanks,
Xin
Does anyone know?
Thanks,
-Original Message-
From: Xin Li [mailto:xin.li@gmail.com]
Sent: Thursday, December 02, 2010 12:25 PM
To: solr-user@lucene.apache.org
Subject: disabled replication setting
For solr replication, we can send command to disable replication. Does
anyone know
PM, Gora Mohanty g...@mimirtech.com wrote:
On Wed, Dec 1, 2010 at 10:56 AM, Jerry Li zongjie...@gmail.com wrote:
Hi team
My solr version is 1.4
There is an ArrayIndexOutOfBoundsException when I sort on one field. The
following is my code and log info;
any help will be appreciated.
Hi
It seems to work fine again after I changed the author field type from text to
string. Could anybody give me some info about this? Much appreciated.
<field name="author" type="string" indexed="true" stored="true"
  required="true" default="" />
On Wed, Dec 1, 2010 at 5:20 PM, Jerry Li zongjie...@gmail.com wrote
.27t_Sorting_Working_on_my_Text_Fields.3F
And also see Erick's explanation
http://search-lucene.com/m/7fnj1TtNde/sort+on+a+tokenized+fieldsubj=Re+Solr+sorting+problem
--
Best Regards.
Jerry. Li
1. Make sure the port in <Server port="8005" shutdown="SHUTDOWN"> is not already in use.
2. Run ./bin/shutdown.sh and tail -f logs/xxx to see what the server is doing.
If you just fed data or modified the index and didn't flush/commit,
it will do something when shutting down.
2010/12/1 Robert Petersen rober...@buy.com:
You may implement your own MergePolicy to keep one large index and
merge all the other small ones,
or simply set the merge factor to 2 and keep the largest index from being merged
by setting maxMergeDocs to less than the number of docs in the largest one.
So there is one large index and a small one. When adding a few
docs, they
)
at
org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
at java.lang.Thread.run(Thread.java:619)
--
Best Regards.
Jerry. Li | 李宗杰
. After transmission, how do I append them to the old indexes? Does
the appending block searching?
Thanks so much for your help!
Bing Li
indexes?
Does the appending affect the querying?
I am learning Solr. But it seems that Solr does that for me. However, I have
to set up Tomcat to use Solr. I think it is a little bit heavy.
Thanks!
Bing Li
, the queries must be answered instantly. That's
what I mean by appending. Does that happen in Solr?
Best,
Bing
On Sat, Nov 20, 2010 at 1:58 AM, Gora Mohanty g...@mimirtech.com wrote:
On Fri, Nov 19, 2010 at 10:53 PM, Bing Li lbl...@gmail.com wrote:
Hi, all,
Since I didn't find that Lucene presents
successful replication.
Older versions of Solr used rsync, etc.
Best
Erick
On Fri, Nov 19, 2010 at 10:52 AM, Bing Li lbl...@gmail.com wrote:
Hi, all,
I am working on a distributed searching system. Now I have one server
only.
It has to crawl pages from the Web, generate indexes
large indexes in a large scale distributed environment, right?
Thanks!
Bing
On Sat, Nov 20, 2010 at 3:01 AM, Gora Mohanty g...@mimirtech.com wrote:
On Sat, Nov 20, 2010 at 12:05 AM, Bing Li lbl...@gmail.com wrote:
Dear Erick,
Thanks so much for your help! I am new to Solr, so I have no idea
hi all
I ran into a strange problem when feeding data to Solr. I started
feeding and then pressed Ctrl+C to kill the feed program (post.jar). Because the
XML stream was terminated abnormally, DirectUpdateHandler2 threw
an exception. I went to the index directory and sorted it by date;
the newest files are
a few users.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Adding-new-field-after-data-is-already-indexed-tp1862575p1862575.html
Sent from the Solr - User mailing list archive at Nabble.com.
--
Best Regards.
Jerry. Li | 李宗杰
I don't think current Lucene offers what you want.
There are 2 main tasks in a search process.
One is understanding users' intentions. Because natural language
understanding is difficult, current Information Retrieval systems
force users to input some terms to express their needs.
If you just want a quick way to query a Solr server, the Perl module
WebService::Solr is pretty good.
On Mon, Nov 1, 2010 at 4:56 PM, Lance Norskog goks...@gmail.com wrote:
Yes, you can write your own app to read the file with SVNkit and post
it to the ExtractingRequestHandler. This would be
Is there anyone who could help me?
2010/10/11 Li Li fancye...@gmail.com:
hi all,
I want to know the details of IndexReader in SolrCore. I have read a
little of SolrCore's code. Here is my understanding; is it correct?
Each SolrCore has many SolrIndexSearchers and keeps them in
_searchers
As we know, we can use a browser to check if Solr is running by going to
http://$hostName:$portNumber/$masterName/admin, say
http://localhost:8080/solr1/admin. My question is: is there any way to check
it from the command line? I used curl "http://localhost:8080" to check my Tomcat,
and it worked
Thanks Bob and Ahmet,
curl "http://localhost:8080/solr1/admin/ping" works fine :)
Xin
-Original Message-
From: Ahmet Arslan [mailto:iori...@yahoo.com]
Sent: Monday, October 25, 2010 4:03 PM
To: solr-user@lucene.apache.org
Subject: Re: command line to check if Solr is up running
My
Hi,
I am looking for a quick solution to improve a search engine's spell checking
performance. I was wondering if anyone has tried to integrate the Google SpellCheck API
with the Solr search engine (if possible). Google spellcheck came to my mind
for two reasons. First, it is costly to clean up