Anyone?
On Mon, Jan 17, 2011 at 7:48 PM, Salman Akram
salman.ak...@northbaysolutions.net wrote:
Hi,
I am trying to use CommonGrams with the SOLR-1604 patch but it doesn't seem to
work.
If I don't add {!complexphrase} it uses CommonGramsQueryFilterFactory and
proper bi-grams are made but of
Real NRT in Solr is not implemented yet. But you can configure a near
NRT search.
http://lucene.472066.n3.nabble.com/Tuning-Solr-caches-with-high-commit-rates-NRT-td1461275.html
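The usual approximation is a frequent autoCommit in solrconfig.xml, something
like this (untested sketch, tune the thresholds to your commit rate):

<updateHandler class="solr.DirectUpdateHandler2">
  <!-- commit pending documents automatically so they become searchable
       without an explicit commit from the client -->
  <autoCommit>
    <maxDocs>10000</maxDocs> <!-- commit after this many pending docs -->
    <maxTime>60000</maxTime> <!-- or after this many milliseconds -->
  </autoCommit>
</updateHandler>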
Why don't you want to restart? It's a downtime of 1 minute ... !?
-
--- System
One Server, 12 GB RAM, 2 Solr Instances, 7 Cores,
1 Core with 31 million documents, the others under 100,000
- Solr1 for Search-Requests - commit
Hello..
I don't know how I can index zip documents. Rich text, PDF and Office
documents work pretty fine, but from the zip files I only get the name of
the zipped documents, not the content.
Maybe I have to do something else when indexing zip, but I have read that
Tika can read zip and jar and
Hello,
With reference to the links below, I haven't found Hebrew support in Solr.
http://wiki.apache.org/solr/LanguageAnalysis
http://lucene.apache.org/java/3_0_3/api/all/index.html
If I want to index and search Hebrew files/data then how would I achieve
this?
Thanks,
Prasad
You may need to use a Hebrew analyzer.
http://www.findbestopensource.com/search/?query=hebrew
Regards
Aditya
www.findbestopensource.com
On Tue, Jan 18, 2011 at 2:34 PM, prasad deshpande
prasad.deshpand...@gmail.com wrote:
Hello,
With reference to the links below, I haven't found Hebrew support
Thanks for answers,
So could I do something like this:
<fieldType name="string" class="solr.TextField" sortMissingLast="true"
omitNorms="true">
  <analyzer>
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter
Both solutions are working fine for me. I guess the fq performance is
slower though, or?
Thanks for your feedback.
On 1/17/11 7:51 PM, Erick Erickson wrote:
As Ahmet says, this is what dismax does. You could also append a
filter query (fq=crawl:DIGITALDATA) to your query.
eDismax supports
On Mon, Jan 17, 2011 at 11:10 PM, Dennis Gearon gear...@sbcglobal.net wrote:
First of all, seems like a good book,
Solr-14-Enterprise-Search-Server.pdf
Question, is it possible to choose locale at search time? So if my customer is
querying across cultural/national/linguistic boundaries and I
Hey,
here are my needs :
- a query that has tagged and untagged contents
- facets that ignore the tagged contents
I tried :
q=({!tag=toExclude} ignored) taken into account
q={tag=toExclude v='ignored'} take into account
Both resulted in an error.
Is this possible or do I have to try another
-- Forwarded message --
From: kun xiong xiongku...@gmail.com
Date: 2011/1/18
Subject: HTTP Status 400 - org.apache.lucene.queryParser.ParseException
To: solr-user@lucene.apache.org
Hi all,
I got a ParseException when I query solr with Lucene BooleanQuery
expression
Hi
A simple solution to this could be: for all such searches (foo and bar), search
them as-is on the 1st (primary) index, and while sending these queries to the
secondary index replace "and" with "or".
But in this particular scenario you could also have problems with proximity and
phrase queries that
Hi all,
I got a ParseException when I query solr with Lucene BooleanQuery
expression (toString()).
I use the default parser: LuceneQParserPlugin, which should support the whole
Lucene syntax, right?
Java Code:
BooleanQuery bq = new BooleanQuery();
Query q1 = new TermQuery(new Term(I_NAME_ENUM,
Hi,
Can anyone help me to solve the error:
Class org.carrot2.util.pool.SoftUnboundedPool does not implement the
requested interface org.carrot2.util.pool.IParameterizedPool
at
org.carrot2.core.PoolingProcessingComponentManager.<init>(PoolingProcessingComponentManager.java:77)
at
Hi,
I think the exception is caused by the fact that you're trying to use the
latest version of Carrot2 with Solr 1.4.x. There are two alternative
solutions here:
* as described in http://wiki.apache.org/solr/ClusteringComponent,
invoke ant get-libraries
to get the compatible JAR files.
or
*
take a look at :
http://github.com/synhershko/HebMorph with more info at
http://www.code972.com/blog/hebmorph/
On Tue, Jan 18, 2011 at 11:04 AM, prasad deshpande
prasad.deshpand...@gmail.com wrote:
Hello,
With reference to the links below, I haven't found Hebrew support in Solr.
Ahhh, I see. I don't know of any way to do what you want.
Best
Erick
On Mon, Jan 17, 2011 at 7:25 PM, 5 Diamond IT
i...@smallbusinessconsultingexperts.com wrote:
I want to start at row 1000, 2000, and 3000 and retrieve those 3 rows ONLY
from the result set of whatever search was used. Yes, I
That should work, but do take a look at solr/admin, the
schema browser (or use Luke) to verify that what you get
is what you expect.
Oh, and please don't name it "string", it'll cause you
endless confusion <G>...
Best
Erick
On Tue, Jan 18, 2011 at 4:16 AM, Philippe Vincent-Royol
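For example, something along these lines (a sketch based on the declaration in
the question, just with a less confusing name):

<fieldType name="lowercase_sort" class="solr.TextField" sortMissingLast="true"
  omitNorms="true">
  <analyzer>
    <!-- keep the whole value as a single token, then lowercase it so
         sorting is case-insensitive -->
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>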
Hi folks,
I've noticed an unexpected behavior while working with the various
built-in integer field types (int, tint, pint). It seems as if the first
two are subject to type checking, while the latter is not.
I'll give you an example based on the example schema that is shipped out
Why do you want to do this? Because toString has never been
guaranteed to be re-parsable, even in Lucene, so it's not
surprising that taking a Lucene toString() clause and submitting
it to Solr doesn't work.
Best
Erick
On Tue, Jan 18, 2011 at 4:49 AM, kun xiong xiongku...@gmail.com wrote:
I suspect you missed this comment in the schema file:
***
Plain numeric field types that store and index the text
value verbatim (and hence don't support range queries, since the
lexicographic ordering isn't equal to the numeric ordering)
***
So what's happening is that the field is
Hi all,
I got the following error on solr with m/c configuration 4GB RAM and Intel
Dual Core Processor. Can you please help me out.
java.lang.OutOfMemoryError: Java heap space
2011-01-18 18:00:27.655:WARN::Committed before 500 OutOfMemoryError likely
caused by the Sun VM Bug described in
Hi
I haven't seen one like this before. Please provide JVM settings and Solr
version.
Cheers
On Tuesday 18 January 2011 15:08:35 Isan Fulia wrote:
Hi all,
I got the following error on solr with m/c configuration 4GB RAM and
Intel Dual Core Processor. Can you please help me out.
what's the alternative?
--- On Tue, 1/18/11, Erick Erickson erickerick...@gmail.com wrote:
From: Erick Erickson erickerick...@gmail.com
Subject: Re: HTTP Status 400 - org.apache.lucene.queryParser.ParseException
To: solr-user@lucene.apache.org
Date: Tuesday, January 18, 2011, 5:24 AM
Why do
Hi,
Maybe I'm missing something obvious.
I'm trying to use the dismax parser and it doesn't seem like I'm using it
properly.
When I do this:
http://localhost:8080/solr/cs/select?q=(poi_id:3)
I get a row returned.
When I incorporate dismax and say mm=1, no results get returned.
I ran other tests: when I execute checkIndex on the master I get
random errors, but when I scp the file to another server (exactly the same
software) no error occurs...
We will start using another server.
Just one question concerning checkIndex:
What does tokens mean?
How is it possible
With dismax you must specify the fields to query upon in the qf parameter and
the value for which you want to search through those fields in q.
defType=lucene&q=poi_id:3
defType=dismax&q=3&qf=poi_id
See the DisMaxQParser wiki for more
On Tuesday 18 January 2011 15:50:34 Tri Nguyen wrote:
Hi,
Hi
I am using pivots extensively in my search, and they work well for searching and
displaying. But I find the need to be able to sort by the sum of a certain
pivot, after it is collapsed.
So if my pivot term is: student_id,test_grade
I'd want to be able to sort on the number of tests a student
near Near Real Time? Is that even less real time than NRT? --wunder
On Jan 18, 2011, at 12:34 AM, stockii wrote:
Real NRT in Solr is not implemented yet. But you can configure a near
NRT search.
Hi Erick,
I see the point. But what is pint (plong, pfloat, pdouble) actually
intended for (sorting is not possible, no type checking is performed)?
Seems to me as if it is something very similar to the string type (both
store and index the value verbatim).
-Sascha
On 18.01.2011 14:38, Erick
So if my pivot term is: student_id,test_grade
I'd want to be able to sort on the number of tests a
student has taken, and also get an average. Something like:
:sort = sum( student_id,test_grade )/ count(
student_id,test_grade )
where the values would be summed and counted over all of
the
Both solutions are working fine for
me. I guess the fq performance is
slower though, or?
http://wiki.apache.org/solr/FilterQueryGuidance
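For reference, the filtered form of the earlier example would look something
like this (assuming the default example URL and the crawl field mentioned
earlier in the thread):

http://localhost:8983/solr/select?q=your+query&fq=crawl:DIGITALDATA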
what's the alternative?
q=kfc+mdc&defType=dismax&mm=1&qf=I_NAME_ENUM
See more: http://wiki.apache.org/solr/DisMaxQParserPlugin
Hi,
Is there an example of how to use dismax with embedded Solr? I am currently
creating my query like this:
QueryParser parser = new QueryParser(Version.LUCENE_CURRENT, "content",
    new StandardAnalyzer(Version.LUCENE_CURRENT));
Query q = parser.parse(query);
searcher.search(q,
What version of Solr are you on?
On Jan 13, 2011, at 8:23 PM, Adam Estrada wrote:
According to the documentation here:
http://wiki.apache.org/solr/SpatialSearch the field that identifies the
spatial point data is sfield. See the console output below.
Jan 13, 2011 6:49:40 PM
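For what it's worth, a geofilt query against the example schema's store field
would look roughly like this (sfield, pt and d as described on that wiki page):

http://localhost:8983/solr/select?q=*:*&fq={!geofilt}&sfield=store&pt=45.15,-93.85&d=5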
Hi Marc,
Have you looked at the grouping stuff that has been committed?
http://wiki.apache.org/solr/FieldCollapsing
-Grant
On Jan 17, 2011, at 5:11 AM, Marc Sturlese wrote:
I need to dive into search grouping / field collapsing again. I've seen there
are lots of issues about it now.
Is there an example of how to use dismax with embedded
Solr? I am currently
creating my query like this:
QueryParser parser = new QueryParser(Version.LUCENE_CURRENT, "content",
    new StandardAnalyzer(Version.LUCENE_CURRENT));
Query q = parser.parse(query);
Hi,
I would like to make a search on two cores with different schemas.
Sample :
Schema Core1
- ID
- Label
- IDTaxon
...
Schema Core2
- IDTaxon
- Label
- Hierarchy
...
The schemas are very different, I can't group them. Do you have an idea how to
realize this search?
Thanks,
Damien
Search on two cores but combine the results afterwards to present them in
one group, or what exactly are you trying to do Damien?
On Tue, Jan 18, 2011 at 5:04 PM, Damien Fontaine dfonta...@rosebud.frwrote:
Hi,
I would like to make a search on two cores with different schemas.
Sample :
Schema
Thanks Otis
Dennis Gearon
Signature Warning
It is always a good idea to learn from your own mistakes. It is usually a
better
idea to learn from others’ mistakes, so you do not have to make them yourself.
from 'http://blogs.techrepublic.com.com/security/?p=4501&tag=nl.e036'
Thanks Robert.
Dennis Gearon
On my first schema, there is information about a document like title,
lead, text etc. and many UUIDs (each UUID is a taxon's ID).
My second schema contains my taxonomies with auto-complete and facets.
On 18/01/2011 17:06, Stefan Matheis wrote:
Search on two cores but combine the results
Thanks Ofer :-)
Dennis Gearon
OK thanks for bringing closure!
The tokens output is the total number of indexed tokens (ie, as if
you had a counter that counted all tokens produced by analysis as the
indexer consumes them).
My guess is the faulty server's hardware problem also messed up this count?
Mike
On Tue, Jan 18, 2011
If you're trying to get to a dismax parser (named dismax in
solrconfig.xml),
you need to specify qt=dismax. NOTE: the Wiki is a bit confusing on this
point, the fact that the dismax parser is *named* dismax in the
solrconfig.xml
file is coincidence, you could name it erick and specify qt=erick and
These are legacy types that aren't, frankly, very useful in recent Solr. So
you can probably safely ignore them.
BTW, you probably want to go with Trie fields (tint, tfloat, etc) as a first
choice unless you have a definite reason not to.
Hope this helps
Erick
On Tue, Jan 18, 2011 at 10:35 AM,
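For example, the trie-based declaration from the example schema looks like this
(precisionStep=8 is just the example default; the quantity field below is made
up for illustration):

<fieldType name="tint" class="solr.TrieIntField" precisionStep="8"
  omitNorms="true" positionIncrementGap="0"/>
<!-- a hypothetical int field: proper numeric sorting and fast range queries -->
<field name="quantity" type="tint" indexed="true" stored="true"/>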
Erick,
The qt parameter does not specify the parser but the request handler to use.
Except for the confusion between parser and request handler, you're entirely right.
Cheers
On Tuesday 18 January 2011 17:37:41 Erick Erickson wrote:
If you're trying to get to a dismax parser (named dismax in
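To make that concrete, such a handler in solrconfig.xml might look like the
sketch below (the qf/mm defaults are only illustrative); it is reached with
qt=dismax purely because of its name attribute:

<requestHandler name="dismax" class="solr.SearchHandler">
  <lst name="defaults">
    <!-- the query parser that is actually used -->
    <str name="defType">dismax</str>
    <!-- fields the user query is run against, and minimum-should-match -->
    <str name="qf">I_NAME_ENUM</str>
    <str name="mm">1</str>
  </lst>
</requestHandler>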
Okay .. and .. now .. you're trying to do what? Perhaps you could give us an
example, with real data .. sample queries - results.
Because actually I cannot imagine what you want to achieve, sorry
On Tue, Jan 18, 2011 at 5:24 PM, Damien Fontaine dfonta...@rosebud.frwrote:
On my first schema,
I want to execute this query:
Schema 1 :
<field name="id" type="string" indexed="true" stored="true" required="true"/>
<field name="title" type="string" indexed="true" stored="true" required="true"/>
<field name="UUID_location" type="string" indexed="true" stored="true" required="true"/>
Schema 2 :
<field name="UUID_location"
Solr can't do that. Two cores are two separate cores, you have to do two
separate queries, and get two separate result sets.
Solr is not an RDBMS.
On 1/18/2011 12:24 PM, Damien Fontaine wrote:
I want to execute this query:
Schema 1 :
<field name="id" type="string" indexed="true" stored="true"
Hi,
I have a solr server that is failing to acquire a lock with the exception
below. I think that the server has a lot of uncommitted data (I am not sure
how to verify this) and if so I would like to salvage it.
Any suggestions on how to proceed?
(BTW I tried removing the lock file but it did not
Hi, all,
Now I cannot search the index when querying with Chinese keywords.
Before using Solr, I ever used Lucene for some time. Since I need to crawl
some Chinese sites, I use ChineseAnalyzer in the code to run Lucene.
I know Solr is a server for Lucene. However, I have no idea how to
On 18/01/2011 18:31, Jonathan Rochkind wrote:
Solr can't do that. Two cores are two separate cores, you have to do
two separate queries, and get two separate result sets.
Solr is not an RDBMS.
Yes, Solr can't do that, but if I want this:
1. Core 1 calls Core 2 to get the label
2. Core 1
Schemas are very different, I can't group them.
In contrast to what you're saying above, you may rethink the option of
combining both types of documents in a single core.
It's a perfectly valid approach to combine heterogeneous documents in a
single core in Solr. (and use a specific field - say
Whoops, picked the wrong email to reply thanks to. Wasn't actually in this
thread.
Dennis Gearon
- Original Message
From: Dennis Gearon gear...@sbcglobal.net
To: solr-user@lucene.apache.org
Sent: Tue, January 18, 2011 8:25:04 AM
Subject: Re: Does Solr supports indexing search for
I would like to use the following field declaration to store my own COMB UUIDs
(same length and format, a kind of cross between version 1 and version 4). If I
leave out the default value in the declaration, would that work? I.e.:
<fieldType name="id_uuid" class="solr.UUIDField" indexed="true"
Sorry, never did find a solution to that.
If you do happen to figure it out, please post a reply to this thread. Thanks
Hi,
This is a slave polling the master for its index version but it seems the
master fails to respond.
From the javadoc:
public class NoHttpResponseException
extends IOException
Signals that the target server failed to respond with a valid HTTP
response.
Cheers,
I see a large number
Oh, and this should not have the INFO level in my opinion. Other log lines
indicating a problem with the master (such as a time out or unreachable host)
are not flagged as INFO.
Maybe you could file a Jira ticket? Don't forget to specify your Solr version.
Also, please check the master log
Dear all,
After reading some pages on the Web, I created the index with the following
schema.
..
<fieldtype name="text" class="solr.TextField"
positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer
Bing Li,
You can configure different analyzers in your Solr's schema.xml. Have a look
at
the example Solr schema.xml to see how that's done.
http://search-lucene.com/?q=%2Bchinese+analyzer+schema&fc_project=Solr&fc_type=wiki
There is also SmartCN Analyzer in Lucene that you could configure in
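For example, a field type using the CJK tokenizer that ships with Solr could
look like this (only a sketch; the SmartCN factories would be plugged in the
same way):

<fieldType name="text_cjk" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <!-- CJKTokenizer indexes Chinese/Japanese/Korean text as overlapping
         character bigrams -->
    <tokenizer class="solr.CJKTokenizerFactory"/>
  </analyzer>
</fieldType>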
Why create two threads for the same problem? Anyway, is your servlet
container capable of accepting UTF-8 in the URL? Also, is SolrNet capable of
handling those characters? To confirm, try a tool like curl.
Dear all,
After reading some pages on the Web, I created the index with the
Bing Li,
Go to your Solr Admin page and use the Analysis functionality there to enter
some Chinese text and see how it's getting analyzed at index and at search
time. This will tell you what is (or isn't) going on.
Here it looks like you just defined index-time analysis, so you should see your
It's FFRT (pronounced ...) - Far From Real Time.
To help the o.p., there is a page on Solr Wiki about what one can do with Solr
and NRT search today.
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/
- Original
Udi,
Hm, don't know off the top of my head, but sounds like an interesting problem.
Are you getting this error while still writing to the index or did you stop all
writing?
Do you get this error when you issue a commit or?
Is the index on the local disk or?
Otis
Sematext ::
Dear Jelsma,
My servlet container is Tomcat 7. I think it should accept Chinese
characters. But I am not sure how to configure it. From the console of
Tomcat, I saw that the Chinese characters in the query are not displayed
normally. However, it is fine in the Solr Admin page.
I am not sure
Hi,
Yes, but Tomcat might need to be configured to accept it; see the wiki for more
information on this subject.
http://wiki.apache.org/solr/SolrTomcat#URI_Charset_Config
Cheers,
Dear Jelsma,
My servlet container is Tomcat 7. I think it should accept Chinese
characters. But I am not sure how
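The relevant bit is the URIEncoding attribute on the HTTP connector in
Tomcat's server.xml, roughly (keep whatever port and other attributes your
connector already has):

<Connector port="8080" protocol="HTTP/1.1"
           URIEncoding="UTF-8"/> <!-- decode URI/query parameters as UTF-8 -->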
I have not stopped writing so I am getting this error all the time.
The commit actually seems to go through with no errors but it does not seem
to write anything to the index files (I can see this because they are old
and I cannot see new stuff in search results).
My index folder is on an Amazon
Dear Jelsma,
After configuring the Tomcat URIEncoding, Chinese characters can be
processed correctly. Thank you so much for your help!
Best,
LB
On Wed, Jan 19, 2011 at 3:02 AM, Markus Jelsma
markus.jel...@openindex.iowrote:
Hi,
Yes but Tomcat might need to be configured to accept, see
Udi,
It's hard for me to tell from here, but it looks like your writes are really
not
going in at all, in which case there may be nothing (much) to salvage.
The EBS volume is mounted? And fast (try listing a bigger dir or doing
something that involves some non-trivial disk IO)?
No errors
Too bad for me I guess! I was hoping there was a hidden field, perhaps an
offset, one could query on. That one thing would have made this possible to do
by simply querying on it.
On Jan 18, 2011, at 7:06 AM, Erick Erickson wrote:
Ahhh, I see. I don't know of any way to do what you want.
Best
The EBS volume is operational and I cannot see any error in dmesg etc.
The only errors in catalina.out are the lock-related ones (even though I
removed the lock file) and when I do a commit everything looks fine in the
log.
I am using the following for the commit:
curl
: <fieldType name="id_uuid" class="solr.UUIDField" indexed="true"
: required="true"/>
:
: The above won't generate a UUID on its own, right?
correct.
-Hoss
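(For completeness: if you did want Solr to generate the value when none is
supplied, the usual trick is a default of NEW on the field, something like the
sketch below.)

<fieldType name="uuid" class="solr.UUIDField" indexed="true"/>
<!-- default="NEW" makes UUIDField mint a fresh UUID for documents that
     don't provide one -->
<field name="id" type="uuid" indexed="true" stored="true" default="NEW"/>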
: problem, disk space is cheap. What I wanted to know was whether it is best
: to make the single field multiValued=true or not. That is, should my
: 'content' field hold data like:
...
: or would it be better to make it a concatenated, single value field like:
functionally, the only
BTW, where will I find the writes that have not been committed? Are they all
in memory or are they in some temp files somewhere?
The writes'll be gone if they haven't been committed yet and the
process fails.
org.apache.lucene.store.LockObtainFailedException: Lock obtain timed
If it's removed
Hello and Thanks for the reply.
I've been over that page, and it doesn't seem like it helps with the pivoting
aspect.
That is, if I am sorting via an existing pivot 'sum(student_id,test_grade)', I
want my groups of student_id sorted by the sum of test_grade for that
student_id.
The data is all
Hi,
You get an error because LocalParams need to be at the beginning of a
parameter's value. So no parenthesis first. The second query should not give an
error because it's a valid query.
Anyway, i assume you're looking for :
http://wiki.apache.org/solr/SimpleFacetParameters#Multi-
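What that section describes is tagging a filter query and excluding it per
facet field, along these lines (the field names here are made up for
illustration):

q=the untagged part of the query
&fq={!tag=toExclude}some_field:ignored
&facet=true
&facet.field={!ex=toExclude}category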
I have not restarted the process yet.
If I restart it, will I lose any data that is in memory? If so, is there a
way around it?
Is there a way to know if there is any data waiting to be written? (If not,
I will just restart...)
Thanks.
On Tue, Jan 18, 2011 at 12:23 PM, Jason Rutherglen
If I restart it, will I lose any data that is in memory? If so, is there a
way around it?
Usually I've restarted the process, and on restart Solr, using the
<unlockOnStartup>true</unlockOnStartup> setting in solrconfig.xml, will
automatically remove the lock file (actually I think it may be removed
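For reference, that setting lives in the index section of solrconfig.xml, e.g.:

<mainIndex>
  ...
  <!-- remove a stale lock file left behind by a crashed writer -->
  <unlockOnStartup>true</unlockOnStartup>
</mainIndex>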
As devs of Lucene/Solr, due to the way ASF mirrors, etc. works, we really don't
have a good sense of how people get Lucene and Solr for use in their
application. Because of this, there has been some talk of dropping Maven
support for Lucene artifacts (or at least making them external). Before
[X] ASF Mirrors (linked in our release announcements or via the Lucene
website)
[] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[X] I/we build them from source via an SVN/Git checkout.
[] Other (someone in your company mirrors them internally or via a
downstream
And here's mine:
On Jan 18, 2011, at 4:04 PM, Grant Ingersoll wrote:
Where do you get your Lucene/Solr downloads from?
[] ASF Mirrors (linked in our release announcements or via the Lucene website)
[x] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[x] I/we build them
Ahmet Arslan iorixxx at yahoo.com writes:
I've got a DataImportHandler set up
with 5 entities. I would like to do a full
import on just one entity. Is that possible?
Yes, there is a parameter named entity for that.
solr/dataimport?command=full-import&entity=myEntity
That seems
Where do you get your Lucene/Solr downloads from?
[X] ASF Mirrors (linked in our release announcements or via the Lucene website)
[] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[X] I/we build them from source via an SVN/Git checkout.
[] Other (someone in your company
[] ASF Mirrors (linked in our release announcements or via the Lucene website)
[X] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[] I/we build them from source via an SVN/Git checkout.
[] Other (someone in your company mirrors them internally or via a downstream
project)
Where do you get your Lucene/Solr downloads from?
[x] ASF Mirrors (linked in our release announcements or via the Lucene website)
[] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[] I/we build them from source via an SVN/Git checkout.
-Glen Newton
--
-
[X] ASF Mirrors (linked in our release announcements or via the Lucene
website)
[] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[] I/we build them from source via an SVN/Git checkout.
[] Other (someone in your company mirrors them internally or via a
downstream project)
[] ASF Mirrors (linked in our release announcements or via the Lucene
website)
[X] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[] I/we build them from source via an SVN/Git checkout.
[] Other (someone in your company mirrors them internally or via a
downstream project)
[X] ASF Mirrors (linked in our release announcements or via the Lucene
website)
[] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[] I/we build them from source via an SVN/Git checkout.
[] Other (someone in your company mirrors them internally or via a downstream
[x] ASF Mirrors (linked in our release announcements or via the Lucene website)
[] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[] I/we build them from source via an SVN/Git checkout.
On Tue, Jan 18, 2011 at 1:24 PM, Glen Newton glen.new...@gmail.com wrote:
Where do you
Where do you get your Lucene/Solr downloads from?
[] ASF Mirrors (linked in our release announcements or via the Lucene website)
[X] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[X] I/we build them from source via an SVN/Git checkout.
[] Other (someone in your company
[x] ASF Mirrors (linked in our release announcements or via the Lucene
website)
[x] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[x] I/we build them from source via an SVN/Git checkout.
Where do you get your Lucene/Solr downloads from?
[] ASF Mirrors (linked in our release announcements or via the Lucene
website)
[X] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[] I/we build them from source via an SVN/Git checkout.
[] Other (someone in your company mirrors
That seems to delete the entire index and replace it with
only the contents of
that one entity. Is there no way to leave the index
alone for the other
entities and just redo that one?
Yes, there is a parameter named clean for that.
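So a per-entity import that leaves the rest of the index alone would be
something like:

solr/dataimport?command=full-import&entity=myEntity&clean=false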
Where do you get your Lucene/Solr downloads from?
[x] ASF Mirrors (linked in our release announcements or via the Lucene
website)
[] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[x] I/we build them from source via an SVN/Git checkout.
[] Other (someone in your
Depending on the project, I either pull from ASF Mirrors or Source. However, I
do reference Maven repository when writing Java code that is built by Maven.
And it's often a pain getting it to work!
On Jan 18, 2011, at 4:23 PM, Ryan Aylward wrote:
[X] ASF Mirrors (linked in our release
Where do you get your Lucene/Solr downloads from?
[] ASF Mirrors (linked in our release announcements or via the Lucene website)
[X] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[X] I/we build them from source via an SVN/Git checkout.
On 18.01.2011, at 22:04, Grant Ingersoll wrote:
As devs of Lucene/Solr, due to the way ASF mirrors, etc. works, we really
don't have a good sense of how people get Lucene and Solr for use in their
application. Because of this, there has been some talk of dropping Maven
support for Lucene
[X] ASF Mirrors (linked in our release announcements or via the Lucene website)
[] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[X] I/we build them from source via an SVN/Git checkout.
[] Other (someone in your company mirrors them internally or via a downstream
project)
THX, Chris!
Dennis Gearon