handle.
Upayavira
On Thu, Jul 23, 2015, at 04:52 PM, Aaron Gibbons wrote:
I originally started using Ansible playbooks which did install the JDK
(with the same error), but have been doing manual installs to take
Ansible
completely out of the equation.
Safari wasn't showing the XML
hard (which, unfortunately, is common), you might
find the Velocity response writer an easier, more conventional way to do
it. You will be generating XML, but you'll be treating it much more as
just text output.
Upayavira
On Thu, Jul 23, 2015, at 01:32 PM, Sreekant Sreedharan wrote:
That worked
you are correct, I should have said:
<xsl:template match="doc">
  <ID NewID="{str[@name='id']}" .../>
</xsl:template>
On Thu, Jul 23, 2015, at 10:15 AM, Sreekant Sreedharan wrote:
Well, if you had a result say:
...
<doc>
<str name="id">589587B2B1CA4C4683FC106967E7C326</str>
<str name="ar">EE3YYK</str>
<int
That's really simple, as I said earlier, it could just require:
<xsl:template match="doc">
  <ID NewID="{@id}" ... />
</xsl:template>
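For completeness, a hedged sketch of how such a template might sit in a full stylesheet — the xsl:stylesheet wrapper, the match on Solr's /response/result, and the IDList output element are my assumptions, not from the thread:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Assumed wrapper: match Solr's standard XML response structure -->
  <xsl:template match="/response/result">
    <IDList>
      <xsl:apply-templates select="doc"/>
    </IDList>
  </xsl:template>
  <!-- A per-document template using an attribute value template -->
  <xsl:template match="doc">
    <ID NewID="{str[@name='id']}"/>
  </xsl:template>
</xsl:stylesheet>
```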
Upayavira
On Wed, Jul 22, 2015, at 11:59 AM, Sreekant Sreedharan wrote:
Well I guess I oversimplified things. My goal is to transform a SOLR
response
that looks like
I'd be curious to see the parsed query that you get when adding
debugQuery=true to the URL. I bet that the clustering component is
extracting terms from the parsed query, and perhaps each of those
queries is parsed in some way differently?
Upayavira
On Wed, Jul 22, 2015, at 08:29 PM, Joseph
']//thumbnail
...
</xsl:template>
Dunno if that helps.
Upayavira
On Wed, Jul 22, 2015, at 11:17 AM, Sreekant Sreedharan wrote:
Hi,
I am using the SOLR XSLT response writer. My challenge is to convert
some
fields in the schema from one value to another. I am using a map to do
this.
Here's
How many documents do you have? What makes you think that a 28Gb index
is large? How much memory do you have in your Solr server?
Upayavira
On Wed, Jul 22, 2015, at 11:27 AM, Daniel Holmes wrote:
Hi All
I have a problem with index size in Solr 4.7.2. My OS is Ubuntu 14.10
64-bit.
my fields
and the same Java version in order to succeed at
that. Something weird is going on around that area, it seems.
Make sure you are using the same Java to start Solr as you are using to
run the bin/solr create script, and make sure you are using the same
version of Solr, too.
Upayavira
On Wed, Jul
no way to say take n from here and m from there -
you'd have to do that in your front-end code - or write your own request
handler to do it for you.
Upayavira
Keeping to the user list (the right place for this question).
More information is needed here - how are you getting these documents
into Solr? Are you posting them to /update/extract? Or using DIH, or?
Upayavira
On Tue, Jul 21, 2015, at 06:31 PM, Andrew Musselman wrote:
Dear user and dev lists
+Data+with+Solr+Cell+using+Apache+Tika
You could provide literal.filename=blah/blah
Upayavira
On Tue, Jul 21, 2015, at 07:37 PM, Andrew Musselman wrote:
I'm not sure, it's a remote team but will get more info. For now,
assuming
that a certain directory is specified, like /user/andrew
Are you making sure that every document has a unique ID? Index into an
empty Solr, then look at your maxdocs vs numdocs. If they are different
(maxdocs is higher) then some of your documents have been deleted,
meaning some were overwritten.
That might be a place to look.
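That maxdocs/numdocs check can be scripted; a minimal Python sketch using a made-up sample of the stats a Luke/core-admin response reports (the numbers are invented for illustration — a real check would fetch them over HTTP from a running Solr):

```python
# Deleted-or-overwritten docs = maxDoc - numDocs.
# sample_stats mimics the index section of a Luke/core-admin response.
sample_stats = {"numDocs": 11500, "maxDoc": 12000}

overwritten = sample_stats["maxDoc"] - sample_stats["numDocs"]
if overwritten > 0:
    print(f"{overwritten} documents were overwritten or deleted - check for duplicate IDs")
```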
Upayavira
On Tue, Jul
PM, Upayavira u...@odoko.co.uk wrote:
Solr generally does not interact with the file system in that way (with
the exception of the DIH).
It is the job of the code that pushes a file to Solr to process the
filename and send that along with the request.
See here for more info
Note, when you start up the instances, you can pass in a hostname to use
instead of the IP address. If you are using bin/solr (which you should
be!!) then you can use bin/solr -h my-host-name and that'll be used in
place of the IP.
Upayavira
On Tue, Jul 21, 2015, at 05:45 AM, Erick Erickson
as it isn't
what Solr/Lucene are designed for.
Upayavira
On Tue, Jul 21, 2015, at 06:02 AM, Bhawna Asnani wrote:
Thanks, I tried turning off auto softCommits but that didn't help much.
Still seeing stale results every now and then. Also load on the server
very light. We are running this just on a test
</fieldType>
Note the protected="x" attribute. I suspect if you put Yahoo! into a
file referenced by that attribute, it may survive analysis. I'd be
curious to hear whether it works.
Upayavira
On Tue, Jul 21, 2015, at 12:51 AM, Sathiya N Sundararajan wrote:
Question about WordDelimiterFilter. The search
.
Upayavira
On Tue, Jul 21, 2015, at 05:51 AM, mesenthil1 wrote:
Thanks Erick for clarifying ..
We are not explicitly setting the compositeId. We are using numShards=5
alone as part of the server start up. We are using uuid as unique field.
One sample id is :
possting.mongo-v2.services.com-intl
your chosen programming
language and you should see how to do this.
Upayavira
On Tue, Jul 21, 2015, at 04:12 AM, Zheng Lin Edwin Yeo wrote:
Hi Shawn,
So it means that if my following is in a text file called update.txt,
{"id":"testing_0001",
 "popularity":{"inc":1}}
This text file must still
assume the rest of it will work fine - my test system
didn't have any data in it for me to confirm that.
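As a side note, the atomic-update snippet needs to be valid JSON before Solr will accept it; a small Python sketch of a well-formed payload (field names follow the example in the thread, everything else is assumption):

```python
import json

# Atomic update: increment the popularity field of one document.
doc = {"id": "testing_0001", "popularity": {"inc": 1}}

# /update expects a JSON array of documents.
payload = json.dumps([doc])
print(payload)
```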
Upayavira
other than the inbuilt Jetty, you might end up
with issues later on down the line when developers decide to make an
optimisation or improvement that isn't compatible with the Servlet spec.
Upayavira
On Wed, Jul 15, 2015, at 07:43 AM, Adrian Liew wrote:
Hi all,
Will like to ask your opinion
document and forms a Lucene query based upon them.
It takes the frequency of the terms in your index and in the document
into account when scoring the terms (much like TF/IDF). For this to
really work, you need a reasonable amount of content.
Upayavira
On Tue, Jul 14, 2015, at 07:40 AM, Zheng Lin
There's two ways to tweak MLT. Use the parameters (such as minimum
term frequency) and so on, or use stop words when indexing.
I'd suggest you try those as a means to improve quality!
Upayavira
On Tue, Jul 14, 2015, at 09:28 AM, Zheng Lin Edwin Yeo wrote:
Thanks for your advice. I've indexed
Problems between keyboard and chair are the best kind. They are the
easiest to resolve. If I were you, I'd be feeling *glad* it wasn't a
bug.
Upayavira
On Tue, Jul 14, 2015, at 07:31 AM, Shawn Heisey wrote:
On 7/13/2015 10:02 PM, Erick Erickson wrote:
Uggghh. Not persistence again
You could do
q={!boost b=$b v=$qq}
qq=your query
b=YOUR-FACTOR
If what you want is to provide a value outside.
Also, with later Solrs, you can use ${whatever} syntax in your main
query, which might work for you too.
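A sketch in Python of how those parameters fit together on a request (the boost function and user query are made-up examples):

```python
from urllib.parse import urlencode

# q references $qq (the user query) and $b (the boost) by name;
# Solr resolves them from the other request parameters.
params = {
    "q": "{!boost b=$b v=$qq}",
    "qq": "title:solr",          # assumed user query
    "b": "log(popularity)",      # assumed boost function
}
query_string = urlencode(params)
print("/select?" + query_string)
```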
Upayavira
On Tue, Jul 14, 2015, at 09:28 PM, Olivier Lebra wrote
the bin/ directory.
Would this work?
Upayavira
On Tue, Jul 14, 2015, at 02:53 AM, Adrian Liew wrote:
Hi Edwin,
Sorry for the late reply. Was caught up yesterday.
Yes I did not use the start.jar command and followed this article using
solr.cmd -
http://www.norconex.com/how-to-run-solr5
the latest Solr,
using the JSON facet or the JSON query API, which encapsulate similar
functionality in a JSON snippet.
Upayavira
You need to XML-encode the tags. So instead of <em>, put &lt;em&gt;
and instead of </em> put &lt;/em&gt;.
Upayavira
On Mon, Jul 13, 2015, at 05:19 PM, Paden wrote:
Hello,
I'm trying to get some Solr highlighting going but I've run into a small
problem. When I set the pre and post tags with my own
If you have hundreds of files, the post command (SimplePostTool) can
also push a directory of files up to Solr.
(It is called Simple under the hood, but it is far from simple!)
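For example, something along these lines (collection name and directory are made up, and flags vary by Solr version):

```shell
# Push every file in a directory to the named collection/core
bin/post -c mycollection /path/to/docs/
```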
Upayavira
On Mon, Jul 13, 2015, at 09:28 PM, Alexandre Rafalovitch wrote:
Solr ships with XML processing example
=enum
outside json parameter also doesn't help.
Can you provide the whole exception, including stack trace? This looks
like a bug to me, as it should switch to using the FieldValueCache for
multivalued fields rather than fail to use the FieldCache.
Upayavira
on scoring.
I'll dig into that patch to see if I can work it out.
Upayavira
On Fri, Jul 10, 2015, at 04:15 PM, Mikhail Khludnev wrote:
I've heard that people use
https://issues.apache.org/jira/browse/SOLR-6234
for such purpose - adding scores from fast moving core to the bigger slow
moving
from a key value to a float value faster?
NB. I hope to contribute this if I can make it perform.
Thanks!
Upayavira
Hi Erick,
You are right that I could actually be asking for a stored field. That's
an exceptionally good point, and yes, would suck. Better would be to
retrieve a docValue from document. I'll look into that.
Upayavira
On Fri, Jul 10, 2015, at 06:28 PM, Erick Erickson wrote:
Upayavira:
bq
are not
supposed to think of Solr as a servlet container.
If you *must*, then you can place a war file in the webapps directory
next to solr.war and it will expand and be available when you start
Jetty.
You cannot be sure that this behaviour will work long term.
Upayavira
If the ZooKeeper used isn't visible via the UI, it should be. Does it
show on the main dashboard under 'args'?
Upayavira
On Tue, Jul 7, 2015, at 04:30 AM, Zheng Lin Edwin Yeo wrote:
Thanks Erick for the info.
So mine should be running on external ZooKeeper, since I'm using -
bin\solr.cmd
fields are used and when.
Upayavira
On Tue, Jul 7, 2015, at 06:55 PM, Paden wrote:
Hello,
I'm trying to tune a search handler to get the results that I want. In
the
solrconfig.xml I specify several different query fields for the edismax
query parser but it always seems to use the default fields
But why do you want that?
On Wed, Jul 8, 2015, at 05:31 AM, Lee Chunki wrote:
Hi Markus,
Thank you for your reply.
I have more questions.
what I want to do is sort documents by “tfidf score + function query
score”
there are problems to do this :
* if I use function query (
What happens if you don't specify the df?
On Tue, Jul 7, 2015, at 08:36 PM, Paden wrote:
Well I've just been using an authors name. Last Name, First Name Middle
Initial. Like *Snowman, Frosty T.*
As for the debugging I'm not really seeing anything that would help me
understand why the query
the {!parent} or {!child} queries to select documents based upon
parent/child relationships.
Upayavira
On Mon, Jul 6, 2015, at 04:41 AM, SHANKAR REDDY wrote:
Team,
I have a requirement like getting the list of the child tables along
with
parent records as the below pattern.
id
CoreContainer and EmbeddedSolrServer
File solrXml = new File(solrHome, "solr.xml");
CoreContainer coreContainer = CoreContainer.createAndLoad(solrHome,
solrXml);
EmbeddedSolrServer newServer = new EmbeddedSolrServer(coreContainer,
"myCore");
}
Upayavira
On Sun, Jul 5, 2015
Use bin/solr create to make a new collection.
Upayavira
On Mon, Jul 6, 2015, at 05:17 AM, Zheng Lin Edwin Yeo wrote:
Hi,
I've just migrated to Solr 5.2.1 with external ZooKeeper 3.4.6.
Whenever I try to start Solr using these commands, the Solr servers get
started, but none
into anything other than a brand new segment.
Hence, the idea of using a second level of sharding at the segment level
does not fit with how a lucene index is structured.
Upayavira
you can call the same API as the admin UI does. Pass it strings, it
returns tokens in json/xml/whatever.
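A hedged example of that call (host, core name, and field type are assumptions; /analysis/field is the handler the admin UI's analysis tab uses):

```shell
curl 'http://localhost:8983/solr/mycore/analysis/field?analysis.fieldtype=text_general&analysis.fieldvalue=Hello+Solr&wt=json'
```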
Upayavira
On Tue, Jun 30, 2015, at 06:55 PM, Dinesh Naik wrote:
Hi Alessandro,
Lets say I have 20M documents with 50 fields in each.
I have applied text analysis like compression
We need to work out why your performance is bad without optimise. What
version of Solr are you using? Can you confirm that your config is using
the TieredMergePolicy?
Upayavira
On Tue, Jun 30, 2015, at 04:48 AM, Summer Shire wrote:
Hi Upayavira and Erick,
There are two things we are talking
Use the schema browser on the admin UI, and click the load term info
button. It'll show you the terms in your index.
You can also use the analysis tab which will show you how it would
tokenise stuff for a specific field.
Upayavira
On Mon, Jun 29, 2015, at 06:53 PM, Dinesh Naik wrote:
Hi Eric
) to keep the segment size
under budget.
Upayavira
On Mon, Jun 29, 2015, at 08:55 PM, Toke Eskildsen wrote:
Reitzel, Charles charles.reit...@tiaa-cref.org wrote:
Is there really a good reason to consolidate down to a single segment?
In the scenario spawning this thread it does not seem
- in that sense the index is optimized.
However, future merges become very expensive. The best way to handle
this topic is to leave it to Lucene/Solr to do it for you. Pretend the
optimize option never existed.
This is, of course, assuming you are using something like Solr 3.5+.
Upayavira
On Mon
Bigger question: why are you optimizing? Since 3.6 or so, it generally
hasn't been required; it can even be a bad thing.
Upayavira
On Sun, Jun 28, 2015, at 09:37 PM, Summer Shire wrote:
Hi All,
I have two indexers (Independent processes ) writing to a common solr
core.
If One indexer process
That is one way to implement wildcards, but it isn't the most efficient.
Just index normally, tokenized, and search with an asterisk suffix, e.g.
foo*
This will build a finite state transducer that will make wildcard
handling efficient.
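Not Lucene's actual FST machinery, but a small Python sketch of why a trailing wildcard is cheap: on a sorted term dictionary, every term matching foo* sits in one contiguous range:

```python
from bisect import bisect_left

# Toy sorted term dictionary (invented terms).
terms = sorted(["bar", "fo", "food", "fool", "foot", "forum"])

prefix = "foo"
lo = bisect_left(terms, prefix)             # first term >= "foo"
hi = bisect_left(terms, prefix + "\uffff")  # first term past the prefix range
matches = terms[lo:hi]
print(matches)  # a contiguous slice - no full scan needed
```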
Upayavira
On Sat, Jun 27, 2015, at 11:27 AM, pus wrote:
Hi, I'm
of text showing you precisely where the phrase occurred.
Upayavira
On Fri, Jun 26, 2015, at 01:36 AM, Mike Thomsen wrote:
I need to be able to do exact phrase searching on some documents that are
a
few hundred kb when treated as a single block of text. I'm on 4.10.4 and
it
complains when I try
, you will end up with a huge amount of churn in your index which
will substantially affect performance.
Consider doing some kind of delta update where you only push the
things that have changed.
Upayavira
On Fri, Jun 26, 2015, at 12:15 PM, rbkumar88 wrote:
Hi,
I wanted to run full import
getfile
/clusterstate.json clusterstate.json
You can use -cmd putfile to push it back to Zookeeper. As Erick says,
have all nodes on your cluster down at the time. And as Erick says, this
is not something that people are recommended to be doing generally.
Upayavira
On Wed, Jun 24, 2015, at 07:54
stack trace of the error you are seeing,
not just the error message itself.
Without this extra information, anything we say will be speculation.
Thanks,
Upayavira
On Wed, Jun 24, 2015, at 08:41 PM, sudeepgarg wrote:
Hi,
can someone help me in this regard?
What additional help do you need?
Upayavira
. What I'd suggest is
simply that you try it. You'd know pretty quickly if it were to cause
you issues.
Upayavira
in index order.
Upayavira
On Wed, Jun 24, 2015, at 08:26 PM, Shai Erera wrote:
Ah thanks. I see it was added in 5.1 - is there any other way prior to
that
(like 4.7)?
if not, I guess the only option is to not use fq if we don't intend to
cache it, and on 5.1 use the ^= syntax.
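For reference, my understanding of the ^= syntax mentioned (available from 5.1; field names and values here are made up): it turns a clause into a constant-score query, contributing a fixed score instead of a computed one:

```
q=+category:books^=1.0 +title:solr
```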
Shai
On Wed, Jun 24, 2015, at 03:27 PM, Upayavira wrote:
On Wed, Jun 24, 2015, at 02:50 PM, sudeep kumar wrote:
I want to know what the impact of disabling term vectors would be on an
existing production environment, I mean how new segments are created and
how old segments will merge with new segments because
. Generally,
you shouldn't be optimizing these days - it actually, ironically, makes
things much less optimal.
Upayavira
resultset.
I've no idea how this would perform, but I'd expect it to be better than
the grouping option.
Upayavira
Do you have a managed-schema file, or such?
You may have used the configs that have a managed schema, i.e. one that
allows you to change the schema via HTTP.
Upayavira
On Wed, Jun 17, 2015, at 02:33 PM, TK Solr wrote:
With Solr 5.2.0, I ran:
bin/solr create -c foo
This created solrconfig.xml
Thanks Ramkumar, will dig into these next week.
Upayavira
On Wed, Jun 17, 2015, at 02:08 PM, Ramkumar R. Aiyengar wrote:
I started with an empty Solr instance and Firefox 38 on Linux. This is
the
trunk source..
There's a 'No cores available. Go and create one' button available in the
old
that, then we can't make it default, and we create a divided experience
between those who want a working UI and those who want the cool new features.
A decent collections API tab really won't take that long I don't think
once we've given the new version a good shake-down.
Upayavira
On Wed, Jun 17, 2015, at 02:50 PM, Anshum
On Wed, Jun 17, 2015, at 02:49 PM, TK Solr wrote:
On 6/17/15, 2:35 PM, Upayavira wrote:
Do you have a managed-schema file, or such?
You may have used the configs that have a managed schema, i.e. one that
allows you to change the schema via HTTP.
I do see a file named managed-schema
We can get things like this in. If you want, feel free to have a go. As
much as I want to work on funky new stuff, I really need to focus on
finishing stuff first.
Upayavira
On Wed, Jun 17, 2015, at 02:53 PM, Anshum Gupta wrote:
Also, while you are at it, it'd be good to get SOLR-4777 in so
the
joined field in the result.
Upayavira
On Jun 6, Advait Suhas Pandit wrote:
Hi,
We have some master data and some content data. Master data would be
things like userid, name, email id etc.
Our content data for example is a blog.
The blog has certain fields which are comma separated ids
I think it makes it bold on bold, which won't be particularly visible.
On Tue, Jun 16, 2015, at 06:52 AM, Sznajder ForMailingList wrote:
Hi,
I was testing the highlight feature and played with the techproducts
example.
It appears that the highlighting works on Mozilla Firefox, but not on
been
found so far.
Keep the bug reports coming!!
Upayavira
On Mon, Jun 15, 2015, at 01:53 AM, Erick Erickson wrote:
And anyone who, you know, really likes working with UI code, please
help make it better!
As of Solr 5.2, there is a new version of the Admin UI available, and
several
admin
pane. I'd love to add HDFS support to the UI if there were APIs worth
exposing (I haven't dug into HDFS support yet).
Make sense?
Upayavira
On Mon, Jun 15, 2015, at 07:49 AM, Mark Miller wrote:
I didn't really follow this issue - what was the motivation for the
rewrite?
Is it entirely
When in 2012? I'd give it a go with Solr 3.6 if you don't want to modify
the library.
Upayavira
On Sun, Jun 14, 2015, at 04:14 AM, Zheng Lin Edwin Yeo wrote:
I'm still trying to find out which version it is compatible for, but the
document which I've followed is written in 2012.
http
Oooh, yes. Thx! Keep them coming!
Upayavira
On Sat, Jun 13, 2015, at 04:52 AM, William Bell wrote:
1. With the angular index.html, when selecting a CORE, the right side of
the screen does not refresh and show info for the core I selected.
2. It looks like it just needs whitespace
abbr
Use the analysis tab of the admin UI to try out your sentence against
the text_general analyzer. See how your sentence is analysed at index
and query time.
Upayavira
On Sat, Jun 13, 2015, at 10:54 AM, Test Test wrote:
Hi,
I have solr document, composed like this, with 2 fields : id = 1details
on every shard/replica
of that original index. It cannot be sharded.
HTH
Upayavira
On Thu, Jun 11, 2015, at 06:06 PM, Reitzel, Charles wrote:
So long as the fields are indexed, I think performance should be ok.
Personally, I would also look at using a single document per user with a
multi-valued
that was :-( Hopefully he is listening.
Upayavira
On Fri, Jun 12, 2015, at 07:25 AM, vineet yadav wrote:
Hi,
I am using keepword filter to identify key phrases. I have made following
schema changes in schema.xml
<!-- added field -->
<field name="keyphrase_words" type="keyphraseType" stored="true"
indexed="true"
for this list of IDs. The more IDs there are, the worse
the performance.
Upayavira
will definitely be needed to make it work.
Upayavira
On Fri, Jun 12, 2015, at 08:28 AM, Zheng Lin Edwin Yeo wrote:
I'm trying to use Paoding to index Chinese characters in Solr.
I'm using Solr 5.1, have downloaded the dictionary to shard1\dic and
shard2\dic, and have configured the following
. You are advised to download
a PDF for your specific version if you want to be sure to get directions
relevant to your version.
Upayavira
On Fri, Jun 12, 2015, at 07:22 PM, Phanindra R wrote:
Hi,
According to
https://cwiki.apache.org/confluence/display/solr/Faceting#Faceting
what is being queried for.
Also, as a corollary to this, you can use the schema browser (or
faceting for that matter) to view what terms are being indexed, to see
if they should match.
HTH
Upayavira
On 11.06.2015 at 12:00, Upayavira wrote:
Have you used the analysis tab in the admin UI? You can
Yes! It only needs to be done!
On Thu, Jun 11, 2015, at 11:38 AM, Ahmet Arslan wrote:
Hi Upayavira,
I was going to suggest SOLR-3479 to Edwin, I saw your old post.
Regarding your suggestion, there is an existing ticket :
https://issues.apache.org/jira/browse/SOLR-3479
I think SOLR
seeing matches in
your queries.
Upayavira
On Thu, Jun 11, 2015, at 10:26 AM, Thomas Michael Engelke wrote:
Hey,
in german, you can string most nouns together by using hyphens, like
this:
Industrie = industry
Anhänger = trailer
Industrie-Anhänger = trailer for industrial use
Here [1
to optimize in the majority of scenarios. In fact, it
transformed optimizing from being a necessary thing to being a bad
thing in most cases.
So yes, let the algorithm take care of it, so long as you are using the
TieredMergePolicy, which has been the default for over 2 years.
Upayavira
On Thu, Jun 11
(or
some form of summary?) from them.
If they are all Word documents, do they start with a Heading style? In
which case you could extract that. As I say, most likely this will have
to be done outside of Solr.
Upayavira
On Wed, Jun 10, 2015, at 10:31 AM, Zheng Lin Edwin Yeo wrote:
The main
Note the clean= parameter to the DIH. It defaults to true. It will wipe
your index before it runs. Perhaps it succeeded at wiping, but failed to
connect to your database. Hence an empty DB?
clean=true is, IMO, a very dangerous default option.
Upayavira
On Wed, Jun 10, 2015, at 10:59 AM, Midas
I was only speaking about full import regarding the default of
clean=true. However, looking at the source code, it doesn't seem to
differentiate especially between a full and a delta in relation to the
default of clean=true, which would be pretty crappy. However, I'd need
to try it.
Upayavira
not exist right now, but would make a good contribution to
Solr itself, I'd say.
Upayavira
On Wed, Jun 10, 2015, at 09:57 AM, Alessandro Benedetti wrote:
Erick will correct me if I am wrong, but I don't think this function
query exists.
But maybe can be a nice contribution.
It should take
, great - real feedback! :-)
What does the old UI say at that point? Could you use inspect element
in your browser, and paste a few nodes around this for both the old and
the new UI?
We can, and probably should, do this in a JIRA ticket. You willing to
file one?
Many thanks!
Upayavira
it, but useful to confirm.
Thx!
Upayavira
(e.g.
jQuery) to modify the page based upon the results of an Ajax request,
but how to do that is really out of scope of this list.
Upayavira
On Sun, Jun 7, 2015, at 12:33 AM, Tom Running wrote:
Hello,
I have customized my Solr results so that they display only 3 fields: the
document ID, name
their collection statistics and aggregate them into a single
result sounds very complicated, and likely overkill.
Are you needing to collect this information often? Do you have a lot of
collections?
Upayavira
On Fri, Jun 5, 2015, at 06:29 AM, Zheng Lin Edwin Yeo wrote:
I'm trying to write a SolrJ program
, and directing the update there, all
behind the scenes for you.
Upayavira
On Wed, Jun 3, 2015, at 08:15 AM, Ксения Баталова wrote:
Hi!
Thanks for your quick reply.
The problem is that my whole index consists of several parts (several
cores)
and while updating I don't know in advance in which part
Jeetendra,
Just be aware that /browse is great for demo UIs, but shouldn't be used
in production, as it has no authentication.
Also, from a development perspective, it lacks a controller meaning you
may need to use javascript to fix URLs/etc.
Upayavira
On Tue, Jun 2, 2015, at 02:50 PM, Michał
Please post the whole question here. Many people read mail offline -
they won't be able to understand your request.
Thanks!
On Tue, Jun 2, 2015, at 02:43 PM, antoine charron wrote:
Hi, I have a problem with an encoding issue when indexing in Solr. You
can find more information at:
I have many. My SolrCloud code has the app push configs to zookeeper.
I am afk at the mo. Feel free to bug me about it!
Upayavira
On Mon, Jun 1, 2015, at 07:29 PM, Walter Underwood wrote:
Anyone have Chef recipes they like for deploying Solr?
I’d especially appreciate one for uploading
What I'm suggesting is that you have two fields, one for searching, one
for faceting.
You may find you can't use docValues for your field type, in which case
Solr will just use caches to improve faceting performance.
Upayavira
On Sat, May 30, 2015, at 01:50 AM, Aman Tandon wrote:
Hi Upayavira
. Write a little app that pushes docs to
Solr and commits, then look at the file sizes on disk. Then repeat with
more documents, see what impact on file sizes. I suspect you can answer
your question relatively easily.
Upayavira
Use copyField to clone the field for faceting purposes.
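A minimal schema sketch of that pattern (field names and types are made up): search against the analysed field, facet on the string copy:

```xml
<field name="title" type="text_general" indexed="true" stored="true"/>
<field name="title_facet" type="string" indexed="true" stored="false" docValues="true"/>
<copyField source="title" dest="title_facet"/>
```

Then facet with facet.field=title_facet while querying against title.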
Upayavira
On Fri, May 29, 2015, at 08:06 PM, Aman Tandon wrote:
Hi Erick,
Thanks for the suggestion. We are using this query parser plugin (
*SynonymExpandingExtendedDismaxQParserPlugin*) to manage multi-word
synonym. So it does work slower
In your solr home directory, create a lib directory, and put your jar
there. Then you won't have to declare it in solrconfig.xml. That's what
Alan is suggesting.
Upayavira
On Wed, May 27, 2015, at 09:39 AM, adfel70 wrote:
Hi Alan, thanks for the reply.
I am not sure what you meant. Currently
In this case, optimising makes sense: once the index is generated, you
are not updating it.
Upayavira
On Wed, May 27, 2015, at 06:14 AM, Modassar Ather wrote:
Our index has almost 100M documents running on SolrCloud of 5 shards and
each shard has an index size of about 170+GB (for the record
I wonder, Dean, if you are using an older version. Take a look in the bin/
directory of any newer Solr, preferably 5.x and you'll see quite
substantial start scripts.
Upayavira
On Wed, May 27, 2015, at 07:11 PM, Erick Erickson wrote:
Hmmm, this is a little confused I think.
bq: copies all necessary
a
parent doc and each of your colours as child docs, then you could return
which doc matched. You could use the ExpandComponent to retrieve details
of the parent doc (http://heliosearch.org/expand-block-join/)
Dunno if any of that helps.
Upayavira
On Tue, May 26, 2015, at 08:33 AM, Rodolfo
Why is your app tied that closely to Solr? I can understand if you are
talking about SolrJ, but normal usage you use a different application in
a different JVM from Solr.
Upayavira
On Tue, May 26, 2015, at 05:14 AM, Robust Links wrote:
I am stuck in Yet Another Jarmagedon of SOLR
Correct. The relevancy score simply states that we think result #1 is
more relevant than result #2. It doesn't say that #1 is relevant.
The score doesn't have any validity across queries either, as, for
example, a different number of query terms will cause the score to
change.
Upayavira
On Tue