Does anyone know of sample code that illustrates how to use the
DocumentsWriterPerThread class in indexing?
Thanks,
Mike
Thanks for your reply :)
I have some new questions now:
1. How stable is the trunk version? Has anyone used it on any kind of high-load
project in production?
2. Does version 3.6 support near-real-time index updates?
3. What is the scheme of Solr index storage? Is it all in memory for each shard
or on disk w
I created https://issues.apache.org/jira/browse/SOLR-3362 to track this.
On Mon, Apr 16, 2012 at 11:18 PM, Jamie Johnson wrote:
> Doing some debugging, this is the relevant block in FacetComponent:
>
> String name = shardCounts.getName(j);
> long count = ((Number)shardCounts.getV
Doing some debugging, this is the relevant block in FacetComponent:
String name = shardCounts.getName(j);
long count = ((Number)shardCounts.getVal(j)).longValue();
ShardFacetCount sfc = dff.counts.get(name);
sfc.count += count;
The issue is that sfc is null. I d
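For illustration only (this is a sketch, not the actual patch attached to SOLR-3362), a defensive variant of that block would skip shard-returned values the coordinator has no entry for, instead of throwing the NPE:

// Sketch: guard against facet values returned by a shard for which the
// coordinating node has no ShardFacetCount entry, instead of NPE-ing.
String name = shardCounts.getName(j);
long count = ((Number) shardCounts.getVal(j)).longValue();
ShardFacetCount sfc = dff.counts.get(name);
if (sfc != null) {
  sfc.count += count;
}
// else: the shard returned a constraint the coordinator didn't expect;
// skipping it here avoids the NullPointerException described above.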
Worth noting: the error goes away at times depending on the number
of facets asked for.
On Mon, Apr 16, 2012 at 10:38 PM, Jamie Johnson wrote:
> I found (what appears to be) the issue I am experiencing here
> http://lucene.472066.n3.nabble.com/NullPointerException-with-distributed-facets-td3528
Hi, all,
I am working on Solr 3.3. Recently I found a new feature (Field
Aliasing/Renaming) in Solr 3.6, and I want to use it in Solr 3.3. Can I do
that, and how?
Thank you.
Best Regards,
Bing
One of the big weaknesses of SolrCloud (and ES?) is the lack of the
ability to redistribute shards across servers. Meaning, as a single
shard grows too large, splitting the shard while taking live updates.
How do you plan on elastically adding more servers without this feature?
Cassandra and HBase handle
Is there some way to index docs (extracted from the main document) in a second
core when Solr is indexing the main document in a first core?
I guess it could be done by an UpdateProcessor in /core0 that prepares the new
docs and just calls /core1/update, but maybe someone has already done this in
a bett
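In case it helps, here is a minimal sketch of that idea (untested, SolrJ 3.x API; the core URL, the factory wiring in solrconfig.xml, and the field names for the extracted document are all assumptions):

import java.io.IOException;

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.update.AddUpdateCommand;
import org.apache.solr.update.processor.UpdateRequestProcessor;

public class ForwardExtractedDocsProcessor extends UpdateRequestProcessor {
  private final SolrServer core1;

  public ForwardExtractedDocsProcessor(UpdateRequestProcessor next) throws Exception {
    super(next);
    // In a real setup you would create this once in the processor factory.
    core1 = new CommonsHttpSolrServer("http://localhost:8983/solr/core1");
  }

  @Override
  public void processAdd(AddUpdateCommand cmd) throws IOException {
    SolrInputDocument main = cmd.getSolrInputDocument();

    // Build the derived document from fields of the main document
    // (field names here are made up).
    SolrInputDocument extracted = new SolrInputDocument();
    extracted.addField("id", main.getFieldValue("id") + "-extracted");
    extracted.addField("text", main.getFieldValue("extracted_text"));

    try {
      core1.add(extracted);      // send the derived doc to the second core
    } catch (SolrServerException e) {
      throw new IOException("Failed to forward extracted doc to core1", e);
    }

    super.processAdd(cmd);       // continue normal indexing in core0
  }
}

You would still need an UpdateRequestProcessorFactory registered in core0's solrconfig.xml to put this into the update chain, and you may want to handle commits on core1 separately.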
2012/4/16 Tomás Fernández Löbbe :
> I'm wondering if Solr is the best tool for this kind of usage. Solr is a
> "text search engine"
Well, Lucene is a "full-text search library", but Solr has always been far more.
Dating back to its first use at CNET, it was used as a browse engine
(faceted search
I'm wondering if Solr is the best tool for this kind of usage. Solr is a
"text search engine", so even if it supports all those features, it is
designed for text search, which doesn't seem to be what you need. What are
the reasons for moving from a DB implementation to Solr?
Don't misunderstand me,
> Hi everyone :)
Hi :)
> So, these are my 3 questions:
> 1. Does Solr provide searching among different count fields with different
> types like in WHERE condition?
Yes. As long as these are not full-text fields, you should use filter queries
for these, e.g.
&q=*:*
&fq=country:USA
&fq=language:SPA
&fq=
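For instance, a full request mixing exact-match and range filters could look like this (host, port, and the numeric age field are illustrative assumptions):

curl "http://localhost:8983/solr/select?q=*:*&fq=country:USA&fq=language:SPA&fq=age:[25%20TO%2035]&rows=10"

Each fq is cached independently in the filter cache, so commonly reused filters stay cheap.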
Wow, I have nothing else to say. It worked perfectly once I fixed
that, thanks.
On Mon, Apr 16, 2012 at 4:16 PM, Yonik Seeley
wrote:
> On Mon, Apr 16, 2012 at 4:13 PM, Jamie Johnson wrote:
>> I tried to execute the following on my cluster, but it had no results.
>> Should this work?
>>
>> cu
Hi everyone :)
Our company is very interested in the Solr engine for searching people.
I have 3 questions below about the extended capabilities of Solr, but first I'd
like to present the problem to you.
Let's say we have ~100 million users with many characteristics - some of them
described below.
We want to sear
On Mon, Apr 16, 2012 at 4:13 PM, Jamie Johnson wrote:
> I tried to execute the following on my cluster, but it had no results.
> Should this work?
>
> curl http://host:port/solr/collection1/update/?commit=true -H
> "Contenet-Type: text/xml" --data-binary
> '*:*'
Is this a cut-n-paste of what you
I tried to execute the following on my cluster, but it had no results.
Should this work?
curl http://host:port/solr/collection1/update/?commit=true -H
"Contenet-Type: text/xml" --data-binary
'*:*'
I found myself wanting to write ...
OR _query_:{!lucene fq=\"a:b\"}c:d
And then I started looking at query trees in the debugger, and found
myself thinking that there's no possible representation for this -- a
subquery with a filter, since the filters are part of the
RequestBuilder, no
Hello,
First of all, Solr intends to use a top-level reader, in contrast to Lucene's
per-segment reader strategy. The closest analog in Lucene which I'm aware
of is
new CachingWrapperFilter(new
QueryWrapperFilter(yourBooleanQuery)).getDocIdSet(reader)
It creates per-segment bitsets. Not really stra
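In code, the Lucene 3.x version of that looks roughly like this (a sketch, untested; the field and term are made up):

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.CachingWrapperFilter;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.QueryWrapperFilter;
import org.apache.lucene.search.TermQuery;

public class CachedFilterSketch {
  // Wrap a boolean query in a filter whose DocIdSet is cached per reader.
  public static DocIdSet docIdSetFor(IndexReader segmentReader) throws Exception {
    BooleanQuery bq = new BooleanQuery();
    bq.add(new TermQuery(new Term("country", "USA")), Occur.MUST);

    Filter filter = new CachingWrapperFilter(new QueryWrapperFilter(bq));
    // Called once per segment reader, this yields (and caches) a per-segment bitset.
    return filter.getDocIdSet(segmentReader);
  }
}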
I just started trying to integrate Solr with Nagios, and I have it
reading the JMX information from Solr, but I ran into an issue where
the Solr JMX data is all strings (as identified here:
https://issues.apache.org/jira/browse/SOLR-3083). Was this a recent
change? Is there a reason this would be th
> Not really - it changes what tokens are indexed for the numbers, and
> range queries won't work correctly.
> Sorting (FieldCache), function queries, etc, would still work, and
> exact match queries would still work.
Thanks. So it is just range queries that won't work correctly? That's okay for
On Mon, Apr 16, 2012 at 12:12 PM, Michael Ryan wrote:
> Is it safe to change the precisionStep for a TrieField without doing a
> re-index?
Not really - it changes what tokens are indexed for the numbers, and
range queries won't work correctly.
Sorting (FieldCache), function queries, etc, would s
Is it safe to change the precisionStep for a TrieField without doing a re-index?
Specifically, I want to change a field from this:
to this:
By "safe", I mean that searches will return the correct results, a FieldCache
on the field will still work, clowns won't eat me...
-Michael
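The field definitions didn't survive the list archive, so purely for illustration (the type and field names and the step value are assumptions), a typical Solr 3.x declaration where precisionStep lives looks like:

<fieldType name="tlong" class="solr.TrieLongField" precisionStep="8"
           omitNorms="true" positionIncrementGap="0"/>
<field name="myNumber" type="tlong" indexed="true" stored="true"/>

Changing precisionStep (say from 8 to 4) changes which extra tokens get indexed per value, which is why, as discussed above, range queries against documents indexed with the old step can return wrong results until a re-index.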
On 04/16/2012 06:45 PM, Roman K wrote:
On 04/16/2012 04:31 PM, Jan Høydahl wrote:
Hi,
Solr 3.6 is just out with Tika 1.0. Can you try that? Also, Solr TRUNK
now has Tika 1.1...
I recommend downloading Tika-App and testing your offending files
directly with that http://tika.apache.org/1.1/getti
On 04/16/2012 04:31 PM, Jan Høydahl wrote:
Hi,
Solr 3.6 is just out with Tika 1.0. Can you try that? Also, Solr TRUNK now has
Tika 1.1...
I recommend downloading Tika-App and testing your offending files directly with
that http://tika.apache.org/1.1/gettingstarted.html
--
Jan Høydahl, search s
Hi,
If anyone is interested, I am available for full-time assignments; I have been
involved in the Hadoop/Lucene/Solr world since 2005 (Nutch). I recently
implemented a Lily-Framework-based distributed task executor which is
currently used for vertical search by leading insurance companies and media:
RSS, CVS, Web
You could check the index version. See
http://wiki.apache.org/solr/SolrReplication. Every time a commit is issued,
the index version is incremented.
But you could also use the "backupAfter" feature, also explained at
http://wiki.apache.org/solr/SolrReplication
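A minimal sketch of that configuration on the master side (the exact handler block in your solrconfig.xml may differ; see the wiki page above for the full set of options):

<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="backupAfter">commit</str>
  </lst>
</requestHandler>

With backupAfter set to commit, a snapshot of the index is taken automatically after each commit, which matches the once-a-day indexing described below.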
Tomás
On Mon, Apr 16, 2012 at 11:07
Hi there,
we have a Solr index which is mostly used for search queries. We add new
documents to the index only once a day. We want to back up the index files once
the new documents have been indexed and committed.
Our current solution with Solr 1.4 is to monitor the "optimized" flag of the
inde
Hi,
Solr 3.6 is just out with Tika 1.0. Can you try that? Also, Solr TRUNK now has
Tika 1.1...
I recommend downloading Tika-App and testing your offending files directly with
that http://tika.apache.org/1.1/gettingstarted.html
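For example, something like this on the command line dumps the extracted plain text so you can see what Tika itself produces (the jar and file names here are just placeholders):

java -jar tika-app-1.1.jar --text /path/to/offending.docx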
--
Jan Høydahl, search solution architect
Cominvent AS - www.cominven
Hi,
There is no geocoding API in Solr as of now, so you could handle that in your
PHP app.
But check out https://issues.apache.org/jira/browse/SOLR-2833 for potential
solution. I have a first version of the processor which I can upload to that
JIRA if you're interested.
--
Jan Høydahl, search
Hello,
When I have seen this, it usually means the Solr you are trying to connect to is
not available.
Do you have it installed at:
http://localhost:8080/solr
Try opening that address in your browser. If you're running the example Solr
using the embedded Jetty, you won't be on 8080 :D
Hope that
Hello,
I am running some tests to see whether we can use Solr in our organization.
I have to be able to process MS Word .docx files and then be able to
search them as if they were simple plain text.
The problem is that when processing the docx files, the result that I
get while running the *:* q
Hello!
What is the context your web application is available at? Because I
see you try to connect to:
String url = "http://localhost:8080/solr";
Which may be different in your case.
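As a quick connectivity check from Java, a minimal SolrJ sketch (the URL and port are assumptions; the stock Jetty example runs on 8983):

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.SolrPingResponse;

public class SolrPingCheck {
  public static void main(String[] args) throws Exception {
    // Point this at wherever the Solr webapp is actually deployed.
    SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
    SolrPingResponse ping = server.ping();
    System.out.println("Solr ping status: " + ping.getStatus());
  }
}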
--
Regards,
Rafał Kuć
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
> Hi All,
> I am trying to
Hi All,
I am trying to integrate Solr with my Spring application.
I have performed the following steps:
1) Added the below list of jars to my webapp lib folder.
apache-solr-cell-3.5.0.jar
apache-solr-core-3.5.0.jar
apache-solr-solrj-3.5.0.jar
commons-codec-1.5.jar
commons-httpclient-3.1.jar
lucene-analy
Hi,
If I want to do a proximity search and they have provided me with the name of
a city, for example “London”, how do I search this by proximity within
Solr?
I am assuming I first need a process to convert the city name to a long and
lat, so that Solr can understand where London is. Is this somet
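Yes - geocoding (city name to latitude/longitude) has to happen outside Solr, for example via a geocoding service in your application. Once you have the coordinates, a Solr 3.x spatial filter looks roughly like this (assuming a LatLonType field named store, which is a made-up name; London is roughly 51.51,-0.13):

&q=*:*
&sfield=store
&pt=51.51,-0.13
&d=10
&fq={!geofilt}
&sort=geodist() asc

The d parameter is the radius in kilometers; see http://wiki.apache.org/solr/SpatialSearch for details.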
have a look at
http://wiki.apache.org/solr/SimpleFacetParameters#facet.query_:_Arbitrary_Query_Faceting
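For example, with a price field and buckets along the lines of those in the question below (the exact ranges are up to you), arbitrary buckets can be requested as:

&facet=true
&facet.query=price:[0 TO 1000]
&facet.query=price:[1000 TO 5000]
&facet.query=price:[5000 TO *]

Each facet.query returns its own count, so the buckets do not have to follow a steady increment.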
Hello,
Just wondering if the following is possible:
We need to produce facets on ranges, but they do not follow a steady increment,
which is all I can see Solr can produce. I'm looking for a way to produce
facets on a price field:
0-1000
1000-5000
5000-1
1-2
Any suggestions with out
I'm trying to get the default operator of a schema in Solr 3.6, but unfortunately
everything is deprecated.
The Solr 3.6 API says:
getQueryParserDefaultOperator() - Method in class
org.apache.solr.schema.IndexSchema
Deprecated.
use getSolrQueryParser().getDefaultOperator()
getSolrQueryP
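A sketch of the non-deprecated path the javadoc points at (untested; whether getSolrQueryParser takes a default-field argument, and passing null for it, are assumptions to verify against the 3.6 javadoc):

import org.apache.lucene.queryParser.QueryParser;
import org.apache.solr.schema.IndexSchema;

public class DefaultOperatorCheck {
  // Returns the operator configured via <solrQueryParser defaultOperator="..."/>
  // in schema.xml (AND or OR).
  public static QueryParser.Operator defaultOperator(IndexSchema schema) {
    return schema.getSolrQueryParser(null).getDefaultOperator();
  }
}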