I have a collection containing n shards.
Now I want to create a new collection and perform a data import from old to
new one.
How can I make the hash ranges of the new collection the same as the old
one's, so that the data import stays local (on the same machine)?
I mean , if shard#3 of old
First of all, make sure you use docvalues for facet fields with many unique
values.
If that still does not help you can try the following.
My colleague Toke Eskildsen has made a huge improvement to faceting when the
number of results in the facets is less than 8% of the total number of
documents.
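For reference, a minimal schema sketch enabling docValues on a high-cardinality facet field (the field name "author" is just an illustration; adding docValues requires a reindex):

```xml
<!-- Hypothetical facet field: docValues="true" moves facet data off the Java heap -->
<field name="author" type="string" indexed="true" stored="false"
       docValues="true" multiValued="true"/>
```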
Thanks for your reply.
How can I support suffix search?
Name: Hello_world
Search: *world
And I'll get hello_world as a result.
Thanks in advance.
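One common approach for leading-wildcard searches like *world is ReversedWildcardFilterFactory, which additionally indexes each token reversed so the leading wildcard becomes an efficient prefix query. A schema sketch (the field type name and tokenizer choice are illustrative, modeled on the stock example schema):

```xml
<fieldType name="text_rev" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- withOriginal="true" keeps the forward token, so normal queries still work -->
    <filter class="solr.ReversedWildcardFilterFactory" withOriginal="true"
            maxPosAsterisk="3" maxPosQuestion="2" maxFractionAsterisk="0.33"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

With this field type, a query like *world is rewritten against the reversed terms (dlrow*) at query time.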
-Original Message-
From: Shawn Heisey [mailto:s...@elyograg.org]
Sent: Wednesday, June 11, 2014 5:47 PM
To: solr-user@lucene.apache.org
Hi guys,
After I upgraded to Solr 4.8.1 I got a few warning messages in the log
at startup:
WARN o.a.s.c.SolrResourceLoader - Solr loaded a deprecated
plugin/analysis class [solr.ThaiWordFilterFactory]. Please consult
documentation how to replace it accordingly.
I fixed this with
q.op=OR
On 2014-06-06 20:48, ?? wrote:
hi,
I have two docs,
a) aa bb cc and,
b) aa cc bb.
The query is aa bb. What I expected is that doc a comes first, with a higher
score than doc b, because the term proximity in doc a is closer to that of the query.
After google
According to the CWIKI page
https://cwiki.apache.org/confluence/display/solr/Installing+Solr ,
the Jetty server shipped in the Solr example folder is optimized and is
recommended for optimal Solr performance.
1. What settings are optimized in the example folder, and
how a standalone Jetty installation can
Hi,
I have integrated Solr 4.6 with Apache ManifoldCF 1.5 to crawl SharePoint and
shared drives. Now I am able to index content from these sources along
with ACL details, which are stored in the Solr index. Now I want to perform
search queries on the Solr index to get search results containing these ACLs. E.g.
Hello,
Is it possible, in Solr 4.2.1, to split a multivalued field with a JSON
update, as it is possible to do with a CSV update?
with csv
/update/csv?f.address.split=true&f.address.separator=%2C&commit=true
with json (using a post)
/update/json
Thanks,
Elisabeth
There is always UpdateRequestProcessor.
Regards,
Alex
On 12/06/2014 7:05 pm, elisabeth benoit elisaelisael...@gmail.com wrote:
Hello,
Is it possible, in Solr 4.2.1, to split a multivalued field with a JSON
update, as it is possible to do with a CSV update?
with csv
Hi Lalitjangra,
The MCF in Action book is publicly available to anyone:
https://manifoldcfinaction.googlecode.com/svn/trunk/pdfs/
You need to download/use mcf-solr4x-plugin to filter results. There are two
separate options, SearchComponent and QParserPlugin.
Thanks Ahmet ,
I have already set up mcf-solr4x-plugin in MCF 1.5.1 and I can see ACLs
indexed into the Solr indexes.
But now I assume I need to write a Solr query that puts a user's permission
details into it, which can be compared to the ACLs stored in Solr. This is why
I have posted it here. Also I have
Thanks for the info. I will look at that.
On Wed, Jun 11, 2014 at 3:47 PM, Joel Bernstein joels...@gmail.com wrote:
In Solr 4.9 there is a feature called RankQueries, that allows you to
plugin your own ranking collector. So, if you wanted to write a
ranking/sorting collector that used a thread
I'm having a SolrCloud setup using Solr 4.6 with several configuration sets
and multiple collections, some sharing the same config set.
I would like now to update the schema inside a config set, adding a new
field.
1. Can I do this directly by downloading the schema file and re-uploading it after
Hi,
Can anyone please look into this issue. I want to implement this query in
solr.
Thanks,
Vivek
-- Forwarded message --
From: Vivekanand Ittigi vi...@biginfolabs.com
Date: Thu, Jun 12, 2014 at 11:08 AM
Subject: Implementing Hive query in Solr
To: solr-user@lucene.apache.org
you might want to take a look at the rpm building scripts i have here
https://github.com/boogieshafer/jetty-solr-rpm
gives an example of taking the included jetty and tweaking it in a few ways to
make it more production ready by adding init script, configuring JMX, tuning
logging and putting
bq: Now I want to create a new collection and perform a data import from old to
new one.
Let's start there before considering hash ranges. Exactly how do you intend
to do this? Forget about mapping the hash ranges: how do you expect to
move the data?
And, even more important, what is the
Thanks for your answer,
best regards,
Elisabeth
2014-06-12 14:07 GMT+02:00 Alexandre Rafalovitch arafa...@gmail.com:
There is always UpdateRequestProcessor.
Regards,
Alex
On 12/06/2014 7:05 pm, elisabeth benoit elisaelisael...@gmail.com
wrote:
Hello,
Is it possible, in solr
Stop and back up...
It's very unusual to use KeywordTokenizer with WDDF; it's far
more common to use something like StandardTokenizer, WhitespaceTokenizer, etc.
Using KeywordTokenizer along with WDDF kind of works, but is probably not
doing what you
expect. Get familiar with the admin/analysis page
Hi ,
I am using Lucene/Solr and would like to use the Data Import Handler to index
files, but there are millions of files to import, so the indexing process will
take a long time. I decided to import the files month by month, so could you
please provide a suggestion for importing files on a month-by-month basis.
Any time I see a question like this I break out in hives (little pun there).
Solr is _not_ a replacement for Hive. Or any other SQL or SQL-like
engine. Trying to make it into one is almost always a mistake. First I'd ask
why you have to form this query.
Now, while I have very little knowledge of
Partition your files into month-size folders and have DIH work on one
directory at a time
What I'd do is move away from DIH and use SolrJ. That way
1) you can take full control over what you do
2) you can offload the heavy lifting of parsing the various files
(I'm assuming here that you're
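As a rough illustration of the partitioning idea, here is a self-contained sketch (pure Java, no SolrJ; the class name is hypothetical, and LocalDate stands in for real file timestamps) that buckets items by month so each indexing pass handles only one month's worth of files:

```java
import java.time.LocalDate;
import java.time.YearMonth;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical helper: group file dates into per-month buckets so each
// DIH or SolrJ indexing run processes a single month.
public class MonthPartitioner {
    public static Map<YearMonth, List<LocalDate>> partition(List<LocalDate> fileDates) {
        // TreeMap keeps the months in chronological order
        Map<YearMonth, List<LocalDate>> buckets = new TreeMap<>();
        for (LocalDate d : fileDates) {
            buckets.computeIfAbsent(YearMonth.from(d), k -> new ArrayList<>()).add(d);
        }
        return buckets;
    }

    public static void main(String[] args) {
        List<LocalDate> dates = Arrays.asList(
            LocalDate.of(2014, 5, 3), LocalDate.of(2014, 5, 20), LocalDate.of(2014, 6, 1));
        partition(dates).forEach((month, files) ->
            System.out.println(month + " -> " + files.size() + " file(s)"));
    }
}
```

Each bucket can then drive one indexing pass, with a commit at the end of every month's batch.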
You can easily write a JavaScript snippet using the stateless script update
processor and do whatever string manipulation you want on an input value,
and then write extracted strings to whatever field(s) you want. My e-book
has plenty of script examples.
-- Jack Krupansky
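For context, wiring a stateless script processor into solrconfig.xml looks roughly like this (the chain name and script filename are hypothetical; the referenced JavaScript file would live in the config directory and implement the actual string manipulation):

```xml
<!-- Hypothetical update chain: runs a JavaScript snippet on each document
     before it is indexed -->
<updateRequestProcessorChain name="script">
  <processor class="solr.StatelessScriptUpdateProcessorFactory">
    <str name="script">extract-fields.js</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```

The chain is then selected per request with update.chain=script (or set as the default on the update handler).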
-Original
Hi,
I am using Solr 4.8 and SolrJ to do searching, and would like to get the
response of a search query in HTML format. For that purpose I have written this code:
private static final String urlString = "http://localhost:8983/solr";
private SolrServer solrServer;
public SolrJ() {
Hi,
I see that you have an ampersand left over when setting various parameters.
query.set("&wt", "xslt");
should be
query.set("wt", "xslt");
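To make the separator point concrete, here is a small self-contained sketch (plain Java, hypothetical helper class, no SolrJ dependency) that assembles parameters with proper encoding and explicit '&' separators, so names like wt never absorb a stray ampersand:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical helper: builds a query string where '&' only ever appears
// between parameters, never inside a name or value.
public class SolrQueryString {
    public static String build(Map<String, String> params) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (sb.length() > 0) sb.append('&');
            try {
                sb.append(URLEncoder.encode(e.getKey(), "UTF-8"))
                  .append('=')
                  .append(URLEncoder.encode(e.getValue(), "UTF-8"));
            } catch (UnsupportedEncodingException ex) {
                throw new RuntimeException(ex); // UTF-8 always exists
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> p = new LinkedHashMap<>();
        p.put("q", "*:*");
        p.put("wt", "xslt");
        p.put("tr", "example.xsl");
        System.out.println(build(p)); // q=*%3A*&wt=xslt&tr=example.xsl
    }
}
```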
On Thursday, June 12, 2014 6:12 PM, Venkata krishna venkat1...@gmail.com
wrote:
Hi,
I am using solr4.8, solrj for to do searching, would like to
You may have to implement this yourself. In Solr 4.9 you'll be able to
implement your own analytic functions in java and plug them in using the
AnalyticsQuery API. This is a new Solr API for plugging in custom
analytics.
http://heliosearch.org/solrs-new-analyticsquery-api/
Joel Bernstein
Search
Hello,
I've found https://github.com/kawasima/solr-jdbc recently. Haven't checked
it so far, but the idea is fairly cool. I wonder if it can be relevant to
your challenge.
On Thu, Jun 12, 2014 at 9:38 AM, Vivekanand Ittigi vi...@biginfolabs.com
wrote:
Hi,
My requirements is to execute this
Yeah, solr-jdbc does look interesting. Has an Apache license as well.
Joel Bernstein
Search Engineer at Heliosearch
On Thu, Jun 12, 2014 at 1:18 PM, Mikhail Khludnev
mkhlud...@griddynamics.com wrote:
Hello,
I've found https://github.com/kawasima/solr-jdbc recently. Haven't checked
it so
On Thursday, June 12, 2014 at 11:31:07 UTC-7, Diego Marchi wrote:
Hi all,
I have a distributed environment in Solr with 4 cores. Each core has
approx 100m documents. We have been maintaining the database of documents since
version 2 of Solr, I think, so many documents do not respect the
I realize I never responded to this thread, shame on me!
Jorge/Giovanni: Kelvin looks pretty cool -- thanks for sharing it. When we
use Quepid, we sometimes do it at places with existing relevancy test
scripts like Kelvin. Quepid and test scripts tend to satisfy different niches.
In addition to
(NOTE: cross-posted announcement, please confine any replies to
general@lucene)
As you may be aware, ApacheCon will be held this year in Budapest, on
November 17-23. (See http://apachecon.eu for more info.)
### 1 - Call For Papers - June 25
The CFP for the conference is still open, but
: set with the latest solr version. (Now we are running version 4.8 - the
: current schema has a uniqueid field set, while it wasn't present in the
: earlier versions. This unique field is unsurprisingly called id but not
: all the documents have it.)
this is going to be the source of a lot
We've managed to fix our issue, but just in case anyone has the same problem,
I wanted to identify our solution.
We were originally using the version of Tomcat that was packaged with CentOS
(Tomcat 6.0.24). We tried downloading a newer version of Tomcat (7.0.52)
and running Solr there, and this
Thanks Hoss for your reply...
yeah I thought so... and I don't think it's even possible to add the id
field to the documents missing it, right? Also because some of the fields
are not stored, and it is my understanding that that is one of the
requirements for the update query to work... right?
But
Take a look at this:
http://www.slideshare.net/lucenerevolution/wright-nokia-manifoldcfeurocon-2011
Karl has an old Jira patch somewhere for doing the ACLs processing in Solr.
-- Jack Krupansky
-Original Message-
From: lalitjangra
Sent: Thursday, June 12, 2014 9:28 AM
To:
Hi there,
I have a Solr index with 14+ million records. We facet on quite a few fields
with very high-cardinality such as author, person, organization, brand and
document type. Some of the records contain thousands of persons and
organizations. So the person and organization fields can be
The reason is the following :
I have a collection named col1, which has n shards deployed on n machines
(on each machine, one shard with one replica).
Now I want to create col2 , with new config and import data from col1 to
col2.
What I need is that shards on col2 will be on the same
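A sketch of the Collections API call (host names are placeholders): creating col2 with the same numShards yields the same default hash ranges, since Solr splits the 32-bit hash space deterministically by shard count, and createNodeSet restricts placement to the same machines (though the exact shard-to-node ordering is not guaranteed):

```
/admin/collections?action=CREATE&name=col2&numShards=3&replicationFactor=2
    &collection.configName=newconf
    &createNodeSet=host1:8983_solr,host2:8983_solr,host3:8983_solr
```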
Hi,
I am deploying Solr in a larger web application. The standalone solr
instance works fine. The path-prefix I use is raptorslrweb. A standalone
SOLR query to my instance that works is as follows:
http://hostname:8080/raptorslrweb/solr/reviews/select?q=*%3A*&wt=json&indent=true
However, when I
Hello.
- We currently have solr 4 in master-slave mode across 2 DataCenters.
- We are planning to run the system in active-active mode, meaning that
search requests will go to Solr Slaves in both DC-A and DC-B.
- We have a highly available and cross DC database that feeds the
SolrMaster in both
Why are you doing your conversion on Solr side and not on SolrJ
(client) side? Seems more efficient and you can control the lifecycle
of XSLT objects better yourself.
Regards,
Alex.
Personal website: http://www.outerthoughts.com/
Current project: http://www.solr-start.com/ - Accelerating your
Or, ahem, use VelocityResponseWriter :)
On Jun 12, 2014, at 21:07, Alexandre Rafalovitch arafa...@gmail.com wrote:
Why are you doing your conversion on Solr side and not on SolrJ
(client) side? Seems more efficient and you can control the lifecycle
of XSLT objects better yourself.
Hi,
We are trying an implementation where we use a custom PostingsFormat for
one field to write the postings directly to a third party stable storage.
The intention is partial update for this field. But for now, I want to ask
one specific problem regarding merge.
Main Issue:
*
In the