red LUN, which can be started on either of two hosts.
>
> Can you explain the benefits of two Solr instances via servlet containers?
> Maybe more performance?
>
> Regards,
> Nikola
>
> --
> Nikola Garafolic
> SRCE, Sveucilisni racunski centar
> tel: +385 1 6165 804
> email: nikola.garafo...@srce.hr
>
>
>
--
Lance Norskog
goks...@gmail.com
't
> running correctly. I will go get a copy of the default XML files, and if I find it
> there, I will try to merge them. Does it sound like I am on the right path now?
>
> -Original Message-
> From: Lance Norskog [mailto:goks...@gmail.com]
> Sent: Sunday, Novemb
java -Dsolr.clustering.enabled=true -jar start.jar
>
>
>
> Nov 7, 2010 11:35:16 AM org.apache.solr.common.SolrException log
>
> SEVERE: java.lang.RuntimeException: [solrconfig.xml] requestHandler: missing
> mandatory attribute 'class'
>
>
>
> Anyone run into issues with Carrot2?
>
>
>
> Eric
>
>
--
Lance Norskog
goks...@gmail.com
der execute
> INFO: Time taken = 0:0:0.0
> 2010-nov-04 12:32:17 org.apache.solr.core.SolrCore execute
> INFO: [] webapp=/solr path=/datapush
> params={clean=false&entity=suLIBRIS&command=full-import} status=0 QTime=0
>
>
> What am I doing wrong?
>
> Regards
> Theodor Tolstoy
> Developer Stockholm university library
>
>
--
Lance Norskog
goks...@gmail.com
>> balanced with auto-warming, leading to overlapping warming, leading to
>> spiraling RAM/CPU usage -- but NOT an exception being thrown or HTTP error
>> delivered.
>>
>> I can't find it on the wiki, but here's a listserv post with someone
>> reporting findings t
Ken Stanley:
>
>> On Fri, Oct 22, 2010 at 11:52 PM, wrote:
>>
>>>
>>>
>>>
>>>
>>>
>>> <entity name="f" processor="FileListEntityProcessor" fileName=".*xml"
>>>   recursive="true" baseDir="C:\data\sample_records\mods\starr">
>>>   <entity name="x" processor="XPathEntityProcessor"
>>>     url="${f.fileAbsolutePath}" stream="false" forEach="/mods"
>>>     transformer="DateFormatTransformer,RegexTransformer,TemplateTransformer">
>>>     ...
>>>   </entity>
>>> </entity>
>>
>>
>> The documentation says you don't need a dataSource for your
>> XPathEntityProcessor entity; in my configuration, I have mine set to the
>> name of the top-level FileListEntityProcessor. Everything else looks fine.
>> Can you provide one record from your data? Also, are you getting any
>> errors
>> in your log?
>>
>> - Ken
>>
>
>
>
--
Lance Norskog
goks...@gmail.com
How can I match PART of the cityname just like the SQL LIKE command,
>> cityname LIKE '%'
>>
>>
>> Thanks!
>> --
>> View this message in context:
>> http://lucene.472066.n3.nabble.com/Solr-like-for-autocomplete-field-tp1829480p1829480.html
>> Sent from the Solr - User mailing list archive at Nabble.com.
>>
>
--
Lance Norskog
goks...@gmail.com
; parameter for testing) "Overlapping warming queries; you're committing too
> fast" or something, because it's easy to make this happen without realizing
> it, and then your Solr does what Simon says: runs out of RAM and/or uses a
> whole lot of CPU and disk I/O.
>
rceLoader.java:390)
>>>>>>> (...)
>>>>>>>
>>>>>>> The question is: how do I build and use that
>>>>>>> Stempel? :)
>>>>>>>
>>>>>>> Cheers,
>>>>>>> Jakub Godawa.
>>>>>>>
>>>>>>> 2010/10/29 Bernd Fehling :
>>>>>>>> Hi Jakub,
>>>>>>>>
>>>>>>>> I have ported the KStemmer for use in the most recent Solr trunk version.
>>>>>>>> My stemmer is located in the lib directory of Solr
>>>>>>>> ("solr/lib/KStemmer-2.00.jar") because it belongs to Solr.
>>>>>>>>
>>>>>>>> Write it as a FilterFactory and use it as a Filter like:
>>>>>>>> <filter class="KStemFilterFactory" protected="protwords.txt" />
>>>>>>>>
>>>>>>>> This is how my fieldType looks:
>>>>>>>>
>>>>>>>> <fieldType name="text" class="solr.TextField" positionIncrementGap="100">
>>>>>>>>   <analyzer type="index">
>>>>>>>>     <tokenizer class="solr.WhitespaceTokenizerFactory" />
>>>>>>>>     <filter class="solr.StopFilterFactory"
>>>>>>>>       words="stopwords.txt" enablePositionIncrements="false" />
>>>>>>>>     <filter class="solr.WordDelimiterFilterFactory"
>>>>>>>>       generateWordParts="1" generateNumberParts="1" catenateWords="1"
>>>>>>>>       catenateNumbers="1" catenateAll="0" splitOnCaseChange="1" />
>>>>>>>>     <filter class="solr.LowerCaseFilterFactory" />
>>>>>>>>     <filter class="KStemFilterFactory" protected="protwords.txt" />
>>>>>>>>   </analyzer>
>>>>>>>>   <analyzer type="query">
>>>>>>>>     <tokenizer class="solr.WhitespaceTokenizerFactory" />
>>>>>>>>     <filter class="solr.StopFilterFactory" words="stopwords.txt" />
>>>>>>>>     <filter class="solr.WordDelimiterFilterFactory"
>>>>>>>>       generateWordParts="1" generateNumberParts="1" catenateWords="0"
>>>>>>>>       catenateNumbers="0" catenateAll="0" splitOnCaseChange="1" />
>>>>>>>>     <filter class="solr.LowerCaseFilterFactory" />
>>>>>>>>     <filter class="KStemFilterFactory" protected="protwords.txt" />
>>>>>>>>   </analyzer>
>>>>>>>> </fieldType>
>>>>>>>> Regards,
>>>>>>>> Bernd
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Am 28.10.2010 14:56, schrieb Jakub Godawa:
>>>>>>>>> Hi!
>>>>>>>>> There is a polish stemmer http://www.getopt.org/stempel/ and I have
>>>>>>>>> problems connecting it with solr 1.4.1
>>>>>>>>> Questions:
>>>>>>>>>
>>>>>>>>> 1. Where EXACTLY do I put the "stempel-1.0.jar" file?
>>>>>>>>> 2. How do I register the file, so I can build a fieldType like:
>>>>>>>>> <fieldType name="text_pl" class="solr.TextField">
>>>>>>>>>   <analyzer>
>>>>>>>>>     <tokenizer class="solr.WhitespaceTokenizerFactory"/>
>>>>>>>>>     <filter class="org.getopt.solr.analysis.StempelTokenFilterFactory"/>
>>>>>>>>>   </analyzer>
>>>>>>>>> </fieldType>
>>>>>>>>> 3. Is that the right approach to make it work?
>>>>>>>>>
>>>>>>>>> Thanks for verbose explanation,
>>>>>>>>> Jakub.
>>
>
--
Lance Norskog
goks...@gmail.com
; - Muneeb
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Disk-usage-per-field-tp934765p1827739.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
--
Lance Norskog
goks...@gmail.com
t
> nothing definitive)
>
> Our team would rather use the "out of the box" solr rather than manually
> apply patches and worry about consistency during upgrades...
>
> Thanks in advance,
>
> will
>
--
Lance Norskog
goks...@gmail.com
choose to
> use.
>
> I've installed the Solr package under Ubuntu Lucid.
> I've completed steps 1-3.
>
> Where do I put the jar files?
> How do I make Solr use the analyzer?
>
> Thanks
>
--
Lance Norskog
goks...@gmail.com
t.
I start solr again, and it cannot open the index because of the write lock.
Why is there a write lock file when I have not tried to index anything?
--
Lance Norskog
goks...@gmail.com
his? I'm reluctant to increase the
> heap space since I suspect that will mean that there's just a longer
> period between failures. Might Zoie help here? Or should we just query
> against the Master?
>
>
> Thanks,
>
> Simon
>
--
Lance Norskog
goks...@gmail.com
-1][SnapPuller.java(1037)]readFully1048576 cost
> 979
>
>
> It's saying it costs about 1000 ms to transfer 1 MB of data every second time. I
> used Jetty as the server and embedded Solr in my app. I'm so confused. What have I
> done wrong?
>
>
> At 2010-11-01 10:12:38,"L
ection-method would suffice.
>
> On 11/01/2010 03:23 AM, Lance Norskog wrote:
>>
>> 2.
>> The SolrJ library handling of content streams is "pull", not "push".
>> That is, you give it a reader and it pulls content when it feels like
>> it. If your s
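(A minimal sketch of that pull model with SolrJ's ContentStream API; the URL,
handler path, and file name are placeholders:)

  SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
  ContentStreamUpdateRequest req = new ContentStreamUpdateRequest("/update/extract");
  req.addContentStream(new ContentStreamBase.FileStream(new File("doc.pdf")));
  server.request(req);  // SolrJ reads (pulls) from the stream here, not earlier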
at
>> > >
>> >
>> org.apache.solr.request.XMLResponseWriter.write(XMLResponseWriter.java:34)
>> > > at
>> > >
>> >
>> org.apache.solr.servlet.SolrDispatchFilter.writeResponse(SolrDispatchFilter.java:325)
>> >
make a
> database query in an entity with 4 threads that would select 1 row per
> thread?
>
> Thanks,
> Mark
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/How-does-DIH-multithreading-work-tp1776111p1776111.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
--
Lance Norskog
goks...@gmail.com
activeThreads=50,
> spinWaiting=49, fetchQueues.totalSize=2500
> Can anyone help me out? Did I miss something? Should I be using Tomcat? One
> interesting part of this is that when I try to change the Nutch settings for post url
> and urls by score to 1, they stay at 10 no matter what I do.
>
>
would cause performance issues apart from making no sense anyway.
>
> 3. Are there any disadvantages to using SolrJ over some other HTTP-based
> solution, e.g. creating & sending my own HTTP requests? Do I even
> have to use HTTP?
> I see that EmbeddedSolrServer exists. Any drawbacks to using that?
>
> Any hints are welcome, Thanks!
>
--
Lance Norskog
goks...@gmail.com
ur to replicate a 6G index for Solr in my env. But my
>> network can transfer files at about 10-20M/s using scp. So Solr's HTTP
>> replication is too slow; is that normal, or am I doing something wrong?
>>
>
>
--
Lance Norskog
goks...@gmail.com
How many items for each query?
On Sat, Oct 30, 2010 at 7:34 PM, Chamnap Chhorn wrote:
> Well, I use Solr 1.4.
>
> There are 30698 lines in my elevation file. I need only 20 results response
> back at a time.
>
> On Sun, Oct 31, 2010 at 9:12 AM, Lance Norskog wrote:
&
max'
>
> That's to be expected. Dismax doesn't even support fielded queries
> (where you specify the fieldname in the query itself) so this clause
> is treated all as text:
>
> (location_details_s:dngythdb25fu^1.0
>
> and dismax QP will be looking for tokens like "location_details_s"
> "dngythdb25fu" (assuming tokenization would split on the
> non-alphanumeric chars) in your text fields.
>
> -Yonik
> http://www.lucidimagination.com
>
--
Lance Norskog
goks...@gmail.com
t; > -Original Message-
>> > From: Michael Sokolov [mailto:soko...@ifactory.com]
>> > Sent: Thursday, October 28, 2010 9:55 PM
>> > To: 'solr-user@lucene.apache.org'
>> > Subject: Ensuring stable timestamp ordering
>> >
>> > I'm
ing on.
>>
>>>
>>> Searches are very common in this system, and it's very rare
>>> that someone actually opens up one of these attachments
>>> so I'm not really worried about the time it takes to fetch
>>> them when someone does actually want one.
>>
>>
>> You would be adding some overhead to the system in that Solr now has to
>> manage these files as stored fields. I guess I would do some benchmarking
>> to see.
>
>
--
Lance Norskog
goks...@gmail.com
tion Component is taking too much time which is
> unacceptable. The size of elevation file is only 1 Mb. I wonder other
> people using this component without problems (related to speed)? Am I
> using it the wrong way or there is a limit when using this component?
>
> On 10/29/10, Lance N
gt;> As you could see, QueryElevationComponent takes quite a lot of time. Any
>>> suggestion how to improve this?
>>>
>>> --
>>> Chhorn Chamnap
>>> http://chamnapchhorn.blogspot.com/
>>>
>>
>>
>>
>> --
>> Chhorn Chamnap
>> http://chamnapchhorn.blogspot.com/
>>
>
>
> --
> Chhorn Chamnap
> http://chamnapchhorn.blogspot.com/
>
--
Lance Norskog
goks...@gmail.com
heap size to java or to tomcat where the solr is running
>
>
> Regards,
> satya
>
--
Lance Norskog
goks...@gmail.com
significantly, I see no reason to create a new mailing list.
>
>
--
Lance Norskog
goks...@gmail.com
g issues that you should definitely take a look at...
>
> https://issues.apache.org/jira/browse/SOLR-2202
>
> -Hoss
>
--
Lance Norskog
goks...@gmail.com
e.post=%3C%2Fb%3E&hl.mergeContiguous=false
>
> It involves highlighting on a multivalued field with more than 600 short
> values inside. It takes 200 or 300 ms because of highlighting.
>
> After restarting tomcat all went fine again.
>
> I'm trying to understand why I had to restart Tomcat and Solr, and what
> I should do to keep it working 24/7.
>
> Xavier
>
>
>
--
Lance Norskog
goks...@gmail.com
Revision 414 - Directory Listing
> Modified Mon Nov 20 10:49:29 2006 UTC (3 years, 11 months ago) by martin
> 'arsen' as exceptional p1 position, to prevent 'arsenic' and
> 'arsenal' conflating
>
> In my opinion, it would be best to re-index.
>
--
Lance Norskog
goks...@gmail.com
Am I understanding the 'literal.title' processing correctly? Does anybody
> have experience/suggestions on how to handle this?
>
>
> Thanks - Tod
>
>
--
Lance Norskog
goks...@gmail.com
I'm not quite sure what Tika exceptions mean in this context.
You can give the 'fl=field1,field2' option to only return some fields in
a query.
You can get google-like results using highlighting and 'snippetizing'.
These are documented on the wiki.
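(For example, a sketch with made-up field names:)

  http://localhost:8983/solr/select?q=solr&fl=id,title&hl=true&hl.fl=content&hl.snippets=3

fl limits the stored fields returned; hl.fl picks the field to snippetize.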
satya swaroop wrote:
Hi ,
Can the resu
Yes, you can declare each field with the Spanish, French, etc. types.
The _t and other types are "dynamic" and don't have to be declared. This
feature is generally used when you have hundreds or thousands of fields.
It is more clear to declare your fields.
You're right - that error should not be happening.
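(A sketch of explicit declarations, assuming text_es and text_fr fieldTypes
exist in schema.xml:)

  <field name="title_es" type="text_es" indexed="true" stored="true"/>
  <field name="title_fr" type="text_fr" indexed="true" stored="true"/>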
The XPathEntityProcessor does not do full XPath. It is a very limited
set intended to be very fast.
You can add code in any scripting language, but that is not really
performant.
Is it possible to use the RegexTransformer to find your records with
regular expressions?
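(A sketch of that idea; the entity, columns, and regex are made up:)

  <entity name="doc" processor="XPathEntityProcessor" forEach="/record"
          url="${f.fileAbsolutePath}" transformer="RegexTransformer">
    <field column="raw"  xpath="/record/body"/>
    <field column="year" sourceColName="raw" regex="(\d{4})"/>
  </entity>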
Ken Stanley wrote:
On Fr
CLOB is probably better for what you want.
Also, make sure the table is declared UTF-8 (or Unicode or whatever
mysql calls it.)
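(A sketch, with a hypothetical table and column:)

  ALTER TABLE docs MODIFY body LONGTEXT CHARACTER SET utf8;
  ALTER TABLE docs CONVERT TO CHARACTER SET utf8;  -- or convert the whole table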
virtas wrote:
As it turns out, the issue was somewhere in MySQL. Not sure exactly where, but
something to do with BLOB.
Now, I changed the text field from BLOB to varchar
Please start new threads for new topics.
Xin Li wrote:
As we know we can use browser to check if Solr is running by going to
http://$hostName:$portNumber/$masterName/admin, say http://localhost:8080/solr1/admin. My question
is: are there any ways to check it using the command line? I used "curl
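(One way from a shell, assuming the ping handler is enabled in solrconfig.xml:)

  curl -sf "http://localhost:8080/solr1/admin/ping" > /dev/null && echo up || echo down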
These directories are shown at the top of the admin/index.jsp page.
Check out all of the pages off of admin/index.jsp- there is a lot of
information there about what solr is doing.
Israel Ekpo wrote:
The Solr home is the -Dsolr.solr.home Java System property
Also make sure that -Dsolr.data.di
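(A sketch of setting both properties, with placeholder paths:)

  java -Dsolr.solr.home=/opt/solr/home -Dsolr.data.dir=/opt/solr/data -jar start.jar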
Did you restart all of these slave servers? That would help.
What garbage collection options do you use?
Which release of Solr?
How many Searchers are there in admin/stats.jsp?
Searchers hold open all kinds of memory. They are supposed to cycle out.
These are standard questions, but- what you are
It requires all of the jars that are packed into solr.war. It is a full
and complete implementation of indexing and searching.
Tharindu Mathew wrote:
Hi everyone,
Do we need all lucene jars in the class path for this? Seems that the
solr-solrj and solr-core jars are not enough
(http://wiki.apa
You may not sort on a tokenized field. You may not sort on a multiValued
field. A sort field can hold only one term per document.
If the field has more terms than documents, A) sorting doesn't mean
anything and B) Lucene will throw an exception.
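(The usual workaround is to copy into an untokenized string field and sort on
that; field names here are made up:)

  <field name="title"      type="text"   indexed="true" stored="true"/>
  <field name="title_sort" type="string" indexed="true" stored="false"/>
  <copyField source="title" dest="title_sort"/>

Then sort with &sort=title_sort+asc.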
Erick Erickson wrote:
In general, the behavior when so
There is also a feature called a 'filter'. If you use certain words a
lot, you can make filter queries with just those words. Look for
'filter' and 'fq=' on the wiki.
But really you can have hundreds of words in a query and not have a
performance problem. Solr/Lucene is very fast. In benchmar
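(For example, with hypothetical fields:)

  q=solar+panels&fq=category:energy&fq=year:[2000 TO 2010]

Each fq's result set is cached in the filterCache and reused across queries.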
de off
>> is you get speed, and flexible ways to set up relevancy (that still perform
>> well). Took a couple decades for rdbms to get as brainless to use as they
>> are, maybe in a couple more we'll have figured out ways to make indexing
>> engines like solr equally brainless, but not yet -- but it's still pretty
>> damn easy for what it is, the lucene/Solr folks have done a remarkable job.
>>
>
>
>
> --
> Regards,
>
> Tharindu
>
--
Lance Norskog
goks...@gmail.com
.com/FieldCollapsing-and-Stats-or-Sum-tp1773842p1773842.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
--
Lance Norskog
goks...@gmail.com
Very important: do not make a spelling or autosuggest index from a
text field which some people can see and other people can't.
On Tue, Oct 26, 2010 at 12:06 AM, Lance Norskog wrote:
> Filter queries are a set of bits which is ANDed against query results
> at a very early stage of query proc
> however you want it, using whatever kind of query parser you
>> want too (dismax, whatever), and just add on the 'fq'
>> without touching the 'q'. This is a lot
>> easier to do, and especially when you're using it for access
>> control like this, a lot harder for a bug to creep in.
>>
>> Jonathan
>>
>>
>>
>
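(A sketch of that pattern, with a hypothetical acl_groups field; the user's
groups get appended as a filter while q stays untouched:)

  q=user+query&defType=dismax&fq=acl_groups:(public OR staff)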
--
Lance Norskog
goks...@gmail.com
you think you need a separate
>> core
>> >> for every user?
>> >> mike anderson wrote:
>> >>>
>> >>> I'm exploring the possibility of using cores as a solution to "bookmark
>> >>> folders" in my solr a
" in my solr application. This would mean I'll need tens of
>>> thousands
>>> of cores... does this seem reasonable? I have plenty of CPUs available for
>>> scaling, but I wonder about the memory overhead of adding cores (aside
>>> from
>>> needing to fit the new index in memory).
>>>
>>> Thoughts?
>>>
>>> -mike
>>>
>>>
>>
>
>
>
> --
> Regards,
>
> Tharindu
>
--
Lance Norskog
goks...@gmail.com
failed.
>
> check tomcat or jetty logs
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Import-From-MYSQL-database-tp1738753p1745246.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
--
Lance Norskog
goks...@gmail.com
rming
> queries are happening at the same time 'add's are continuing to stream,
> could that be enough to somehow increase memory usage enough to run into
> OOM?
>
--
Lance Norskog
goks...@gmail.com
ed in 3.1 compatible with both 3.1 and 1.4.1? If not, that's going to
> make a graceful upgrade of my replicated distributed installation a little
> harder.
>
> Thanks,
> Shawn
>
>
--
Lance Norskog
goks...@gmail.com
n arbitrary function is
> powerful, and I am comfortable with the idea that many others would
> appreciate it. Especially for BI needs and so on... :-D
> Is there a way to do it easily that I would have not been able to
> find, or is it actually impossible ?
>
> Thank you very much in advance for your help.
>
> --
> Tanguy
>
--
Lance Norskog
goks...@gmail.com
Correct. We used the Latin1 filter back then.
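(For reference, the 1.3-era Latin-1 filter was declared something like:)

  <filter class="solr.ISOLatin1AccentFilterFactory"/>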
Chris Hostetter wrote:
: I am using solr 1.3. I get the below mentioned error when included the
: solr.ASCIIFoldingFilterFactory on 'text' field while index and query
: time:
I'm fairly certain ASCIIFoldingFilterFactory did not exist in Solr 1.3.
-Ho
2007-06-19 09:08:48
Solr's input format is '2007-06-19T09:08:48Z'.
More to the point: you are creating a string and passing that in. The
date type will accept this, but the DIH has code to accept Java JDBC
datetime values directly. So, in your select you want to somehow cast
your field data
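(A sketch of the cast in the DIH select, MySQL flavor, with hypothetical
column names:)

  SELECT id, CAST(reg_date AS DATETIME) AS reg_date FROM users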
>
>
--
Lance Norskog
goks...@gmail.com
the implications of using approach #2? Will I have to constantly
>> check around for code with security checks since only a single index is
>> used?
>>
>> Any feedback for the above concerns would be really appreciated.
>>
>> Thanks in advance.
>>
>> --
>> Regards,
>>
>> Tharindu
>>
>
>
>
> --
> Regards,
>
> Tharindu
>
--
Lance Norskog
goks...@gmail.com
> hello,
> Is there any API in SolrJ that calls the DataImportHandler to execute
> commands like full-import and delta-import? Please help.
>
--
Lance Norskog
goks...@gmail.com
at 14.19, Allistair Crossley wrote:
>> >>>>> Morning all,
>> >>>>>
>> >>>>> I would like to ngram a company name field in our index. I have read
>> >>>>> about
>> >>
>> >> the costs of doing so in the great David Smiley Solr 1.4 book and just
>> >> to get started I have followed his example in setting up an ngram field
>> >> type as
>> >>
>> >> follows:
>> >>>>> <fieldType name="nGramText" class="solr.TextField"
>> >>>>>   positionIncrementGap="100" stored="false"
>> >>>>>   multiValued="true">
>> >>>>>   <analyzer type="index">
>> >>>>>     <tokenizer class="solr.StandardTokenizerFactory" />
>> >>>>>     <filter class="solr.LowerCaseFilterFactory" />
>> >>>>>     <filter class="solr.NGramFilterFactory" minGramSize="4"
>> >>>>>       maxGramSize="15" />
>> >>>>>   </analyzer>
>> >>>>>   <analyzer type="query">
>> >>>>>     <tokenizer class="solr.StandardTokenizerFactory" />
>> >>>>>     <filter class="solr.LowerCaseFilterFactory" />
>> >>>>>   </analyzer>
>> >>>>> </fieldType>
>> >>>>> I have restarted/reindexed everything but I still cannot search
>> >>>>>
>> >>>>> hoot
>> >>>>>
>> >>>>> and get back the company named Shooter. searching shooter is fine.
>> >>>>>
>> >>>>> I have followed other examples on the internet regards an ngram field
>> >>>>> type. Some examples seem to use an index analyzer that has an ngram
>> >>>>> tokenizer rather than filter if this makes a difference. But in all
>> >>>>> cases I am not seeing the expected result, just 0 results.
>> >>>>>
>> >>>>> Is there anything else I should be considering here? I feel like I
>> >>>>> must be very close, it doesn't seem complicated but yet it's not
>> >>>>> working like everything else I have done with solr to date :)
>> >>>>>
>> >>>>> Any guidance appreciated,
>> >>>>>
>> >>>>> Allistair
>
> --
> Markus Jelsma - CTO - Openindex
> http://www.linkedin.com/in/markus17
> 050-8536600 / 06-50258350
>
--
Lance Norskog
goks...@gmail.com
yParser.ParseException: Cannot parse
> 'term OR': Encountered "<EOF>" at line 1, column 7.
> Was expecting one of:
> <NOT> ...
> "+" ...
> "-" ...
> "(" ...
> "*" ...
> <QUOTED> ...
> <TERM> ...
> <PREFIXTERM> ...
> <WILDTERM> ...
> <REGEXPTERM> ...
> "[" ...
> "{" ...
> <NUMBER> ...
> <TERM> ...
> "*" ...
> Powered by Jetty://
>
>
--
Lance Norskog
goks...@gmail.com
ultiple spellcheck indexes.
>
> -Peter
>
> --
> Peter M. Wolanin, Ph.D. : Momentum Specialist, Acquia. Inc.
> peter.wola...@acquia.com : 978-296-5247
>
> "Get a free, hosted Drupal 7 site: http://www.drupalgardens.com";
>
--
Lance Norskog
goks...@gmail.com
the performance implications of a script transform vs.
> the same transform done in java.
>
>
> thanks,
> Tim
>
--
Lance Norskog
goks...@gmail.com
.mysql.com/doc/refman/5.1/en/time-zone-support.html
> for how to set the timezone for mysql. (It is also possible for
> the client connection to set a connection-specific timezone,
> but I do not think that is what is happening here.)
> * The type of the columns is different, e.g., one could be a
> DATETIME, and the other a TIMESTAMP. The mysql timezone
> link above also explains how these are handled.
>
> Without going through the above could you not just set the timezone
> for "reg_date" to UTC to get the result that you expect?
>
> Regards,
> Gora.
>
--
Lance Norskog
goks...@gmail.com
p working after schema
>> change?
> [...]
>
> You will need to reindex if the schema is changed.
>
> Regards,
> Gora
>
--
Lance Norskog
goks...@gmail.com
z wrote:
>>
>> > Good Evening and Morning.
>> >
>> > I noticed that if I do a facet search on a field whose value contains
>> > umlauts (öäü),
>> > the facet list returned converts the value of the field into a normal
>> > charact
out of filehandles and that merging 10,000 segments during an
>> optimize might not be efficient.
>>
>> We would like to find some optimum mergeFactor somewhere between 0 (noMerge
>> merge policy) and 1,000. (We are also planning to raise the ramBufferSizeMB
>> significantly).
>>
>> What experience do others have using a large mergeFactor?
>>
>> Tom
>>
>>
>>
>>
>
--
Lance Norskog
goks...@gmail.com
From another thread:
Spake Grant Ingersoll:
You can have multiple lat/lons per document, you just can't have multiple per
field.
Is this a temporary limitation, a quirk of the LatLon type, or is it an
architectural limitation in the compound type design?
Lance Norskog
Start a new thread.
Dennis Gearon wrote:
What's the difference between the filters/analyzers that have 'factory' in their
name, and the ones that don't?
Dennis Gearon
Signature Warning
EARTH has a Right To Life,
otherwise we all die.
Read 'Hot, Flat, and Crowded'
Laugh at
n in Solr, i.e. I could feed it something like:
>
> 2010-10-15T23:59:59
>
> And it's indexable of course :-)
>
>
> Dennis Gearon
>
> Signature Warning
>
> EARTH has a Right To Life,
> otherwise we all die.
>
> Read 'Hot, Flat, an
l use 8 bytes per document in your index, rather than 4 bytes
> per doc + an array of unique string-date values per index.
>
> Trunk (4.0-dev) is also much more efficient at storing string-based
> fields in the FieldCache - but that will only help you if you're
> comfortable with using development versions.
>
> -Yonik
> http://lucenerevolution.org Lucene/Solr Conference, Boston Oct 7-8
>
--
Lance Norskog
goks...@gmail.com
Oracle has a bunch of functions you can use in the SELECT statement to
translate types. You may want to translate a NULL into an empty string.
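(For example, NVL does the NULL-to-string translation; table and column are
hypothetical:)

  SELECT id, NVL(description, ' ') AS description FROM docs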
harrysmith wrote:
Anyone ever see this error on an import?
Caused by: java.lang.NullPointerException
at
oracle.jdbc.driver.DBConversion._CHARB
Please start a new email thread for this instead of replying to an
existing one with a new subject and question.
Sharma, Raghvendra wrote:
I have been able to load around a million rows/docs in around 5+ minutes. The
schema contains around 250+ fields. For the moment, I have kept everything
Please start a new email thread instead of replying to an existing one
with a new subject and question.
Sharma, Raghvendra wrote:
Is there a way to specify a xslt at the server side, and make it default, i.e.
whenever a response is returned, that xslt is applied to the response
automatically.
Yes. stream.file and stream.url are independent of the request handler.
They do their magic at the very top level of the request.
There are no unit tests for these features, but they are widely
used.
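(Sketches of both forms; the file path is a placeholder:)

  http://localhost:8983/solr/update?stream.body=%3Ccommit/%3E
  http://localhost:8983/solr/update?stream.file=/tmp/docs.xml&commit=true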
Tod wrote:
I can do this using GET:
http://localhost:8983/solr/update?stream.body=
The score of a document has no scale: it only has meaning against other
score in the same query.
Solr does not rank these documents correctly. Without sharing the TF/DF
information across the shards, it cannot.
If the shards each have "a lot" of the same kind of document, this
problem averag
ungodly amount of RAM, so it is. It's done some basic cleanup of
>> young gen (ParNew), but because the heap size has never gone above 50GB,
>> it hasn't found any reason to actually start a CMS GC to look for dead
>> objects in Old Gen that it can clean up.
>>
>>
>> (Can someone who understands GC and JVM tunning better then me please
>> sanity check me on that?)
>>
>>
>> -Hoss
>>
>> --
>> http://lucenerevolution.org/ ... October 7-8, Boston
>> http://bit.ly/stump-hoss ... Stump The Chump!
>>
>>
>
--
Lance Norskog
goks...@gmail.com
sed about the query performance. Do you have any suggestions?
>
> btw, the cache setting is as follows:
>
> filterCache: 256, 256, 0
> queryResultCache: 1024, 512, 128
> documentCache: 16384, 4096, n/a
>
> Thanks.
>
>
>
--
Lance Norskog
goks...@gmail.com
>
> --
> Grant Ingersoll
> http://lucenerevolution.org Apache Lucene/Solr Conference, Boston Oct 7-8
>
>
--
Lance Norskog
goks...@gmail.com
that will be the next thing I
> try. Any help would be appreciated.
>
> Thanks,
>
> -Jeff
>
--
Lance Norskog
goks...@gmail.com
> the CPUs, you _can_ have improved performance due to better locality, cache
> hits, etc. It takes some tuning and experimentation. YMMV
>
> -Glen
> http://zzzoot.blogspot.com/
>
> [1]http://linuxmanpages.com/man8/numactl.8.php
>
>
>
> --
>
> -
>
--
Lance Norskog
goks...@gmail.com
pure text into different fields of the index?
>> How
>> do I make nutch/solr understand these different parts belong to different
>> fields? Maybe I can use existing content in the fields in my index?
>> Thanks.
>>
>>
>>
>
>
>
>
--
Lance Norskog
goks...@gmail.com
ecchi wrote:
>>
>> Hi everybody,
>>
>> I'm implementing my first Solr engine for conceptual tests. I'm crawling
>> my
>> wiki intranet to make some searches; the engine is working fine already,
>> but
>> I need some interface for my searches.
&g
index but only 2.8 million
>>documents. My search query times on a smaller box than you specify are 6533
>>milliseconds on an unwarmed (newly rebooted) instance.
>>--
>>View this message in context:
>>http://lucene.472066.n3.nabble.com/Re-The-search-response-time-is-too-loong-tp1587395p1588554.html
>>Sent from the Solr - User mailing list archive at Nabble.com.
>>
--
Lance Norskog
goks...@gmail.com
As it turns out, the problem is not trivial in general, and shoehorning
it into an existing search system nicely is also not simple. I think the
current spatial stuff is the third go-round on doing GIS in Lucene/Solr.
PeterKerk wrote:
It would be such a shame if there's no way to get it now
al
The TikaEntityProcessor is the class in the DIH that calls the Tika
libraries.
TikaEntityProcessor is not in Solr 1.4 or 1.4.1. It is in the trunk and
the 3.x branch.
I have set it up from the 3.x branch. I discovered that the
"DefaultParser" does not work, and you have to explicitly name the
In answer to your actual question: "geo" is a "request handler" that is
configured in solrconfig.xml. I don't know what it needs. The LocalSolr
stuff should supply samples of how to change solrconfig.xml and schema.xml.
Lance
PeterKerk wrote:
I've configured LocalSolr according to description
The XPath implementation only supports a few features of xpath.
But, you can give it an XSL script that transforms the incoming XML
before it hits the XPathEntityProcessor. You can use this to boil down
the incoming to a simple format with the things you want.
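(A sketch; the stylesheet path is a placeholder:)

  <entity name="x" processor="XPathEntityProcessor"
          url="${f.fileAbsolutePath}" xsl="xslt/flatten.xsl"
          forEach="/record">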
yklxmas wrote:
Hi guys,
I'm tr
Developers, like marketers, have this confusing habit of speaking both
in the present and an imaginary utopian future.
Dennis Gearon wrote:
This is what made me think it was doable now.
http://wiki.apache.org/solr/SpatialSearch
Dennis Gearon
Signature Warning
EARTH has a Ri
Yes. Look at the JSP page solr/admin/analysis.jsp. It makes calls to
Solr that do exactly what you want. They use the AnalysisComponent.
Lance
zackko wrote:
Hi to all the forum from a new subscriber,
I'm working on the server-side search solution of the company where I'm
currently employed
Does this do what you want?
http://wiki.apache.org/solr/StatsComponent
I can see that "group by" is a possible enhancement to this component.
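(For example, with a hypothetical numeric field:)

  http://localhost:8983/solr/select?q=*:*&rows=0&stats=true&stats.field=price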
Kjetil Ødegaard wrote:
Hi all,
we're currently using Solr 1.4.0 in a project for statistical data, where we
group and sum a number of "double" values.
There is a third-party add-on for Solr 1.4 called LocalSolr. It has a
different API than the upcoming SpatialSearch stuff, and will probably
not live on in future releases.
The LatLonType stuff is definitely only on the trunk, not even 3.x.
PeterKerk wrote:
Hi Dennis,
Good suggestion, but I
and ad_description
>> fields to make them
>> > more important to your search.
>> >
>> > My guess is, although I haven't done this myself, the
>> default Scoring
>> > algorithm can be augmented or replaced with your own.
>> That may be a route
>> > to
>> > take if you are comfortable with java.
>> > --
>> > View this message in context:
>> >
>> > http://lucene.472066.n3.nabble.com/Can-i-do-relavence-and-sorting-together-t
>> > p1516587p1516691.html
>> > Sent from the Solr - User mailing list archive at
>> Nabble.com.
>> >
>> >
>>
>
--
Lance Norskog
goks...@gmail.com
ontents" it still highlights risk as well as 1, because
>> it is specified in the
>> query.. now if i split the query as "+Contents:risk" is
>> given as main query and
>> "+Form:1" as filter query and specify "Contents" as
>> highlighting field, it works
>> fine, can any body tell me the reason.
>>
>>
>> Regards
>> Ahsan
>>
>>
>>
>>
>
--
Lance Norskog
goks...@gmail.com
ay to search through the archive.
>>>>
>>>> Thanks for your help
>>>>
>>> Markus Jelsma - Technisch Architect - Buyways BV
>>> http://www.linkedin.com/in/markus17
>>> 050-8536620 / 06-50258350
>>
>>
>
>
--
Lance Norskog
goks...@gmail.com
ler null5m
> nullanullimale nullualitätnull
> • für innen und aunullen
> • langlebig und nulletterfest
> • nullarm und pnullegeleicht
> nullunullenfensterbanknullnullnull,null cm
> 1nullnullnullnullnulllfm
> nullelnullpal cnullnullnullacnullminullnullnullfacnulls cnullnullnullnull
> fnull m anullernullrnullnullFassanulle nullFenullsnuller
>
> Thanks for your time
>
--
Lance Norskog
goks...@gmail.com
solr.add(docs)
>>>>> solr.commit
>>>>> end
>>>>>
>>>>> Scott
>>>>>
>>>>> On Thu, Sep 16, 2010 at 2:27 PM, Shashi Kant wrote:
>>>>>>
>>>>>> Start with q=*:*; then the "numFound" attribute of the <result>
>>>>>> element should give you the rows to fetch in a 2nd request.
>>>>>>
>>>>>>
>>>>>> On Thu, Sep 16, 2010 at 4:49 PM, Christopher Gross
>>>>>> wrote:
>>>>>>> That will still just return 10 rows for me. Is there something else in
>>>>>>> the configuration of solr to have it return all the rows in the
>>>>>>> results?
>>>>>>>
>>>>>>> -- Chris
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Thu, Sep 16, 2010 at 4:43 PM, Shashi Kant
>>>>>>> wrote:
>>>>>>>> q=*:*
>>>>>>>>
>>>>>>>> On Thu, Sep 16, 2010 at 4:39 PM, Christopher Gross
>>>>>>>> wrote:
>>>>>>>>> I have some queries that I'm running against a solr instance (older,
>>>>>>>>> 1.2 I believe), and I would like to get *all* the results back (and
>>>>>>>>> not have to put an absurdly large number as a part of the rows
>>>>>>>>> parameter).
>>>>>>>>>
>>>>>>>>> Is there a way that I can do that? Any help would be appreciated.
>>>>>>>>>
>>>>>>>>> -- Chris
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>
>>>>
>>
>>
>>
>>
>>
>>
>
--
Lance Norskog
goks...@gmail.com
ll-indexing-by-MSSQL-or-MySQL-tp1515572p1516763.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
--
Lance Norskog
goks...@gmail.com
Rolling logfiles is configured in the servlet container, not Solr.
Indexing logfiles is a pain because of multiline log outputs like
Exceptions.
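(With Jetty's java.util.logging, for instance, a rolling-file setup in
logging.properties would look something like this; sizes and paths are
placeholders:)

  handlers=java.util.logging.FileHandler
  java.util.logging.FileHandler.pattern=logs/solr_%g.log
  java.util.logging.FileHandler.limit=10485760
  java.util.logging.FileHandler.count=10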
Vladimir Sutskever wrote:
Can SOLR be configured out of the box to handle rolling log files?
Kind regards,
Vladimir Sutskever
Investment Bank - Te
Andrew, you should download Solr from the apache site. This packaging is
wrong-headed.
As for Java: a Linux person would know the system mechanism for picking
which Java is the default.
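(On Debian/Ubuntu, for instance, that mechanism is update-alternatives:)

  sudo update-alternatives --config java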
andrewdps wrote:
Also, the Solr Java properties look like this using GCJ, despite setting
JAVA_HOME in /etc/profile
Good eye, Thomas! Yes, GCJ is a non-starter. You're best off downloading
Java 1.6 yourself, but I understand that it is easier to use the public
package repositories.
Thomas Joiner wrote:
My guess would be that Jetty has some configuration somewhere that is
telling it to use GCJ. Is it possib