Are you sure the requests are getting queued because the LB is detecting
that Solr won't handle them?
I'm asking because I know that ELB doesn't handle bursts well.
The load balancer needs to warm up, which essentially means it might
be underpowered at the beginning of a burst. It
Glad you got it sorted out!
Michael Della Bitta
Senior Software Engineer
o: +1 646 532 3062
appinions inc.
“The Science of Influence Marketing”
18 East 41st Street
New York, NY 10017
t: @appinions https://twitter.com/Appinions | g+:
plus.google.com/appinions
https://plus.google.com/u/0/b
happens at query time. Not sure if that's significant for you.
is
the query supposed to retrieve the lower-case version?
(Sorry if this sounds like a naive question, but I have a feeling I'm
missing something really basic here.)
the most.
Solr on HDFS currently doesn't have any sort of rack locality like there is
with, say, HBase colocated on the HDFS nodes. So even with Solr installed on
the same nodes as your HDFS datanodes, you can expect some remote IO.
time,
but on the other hand, you don't have to maintain a Zookeeper ensemble or
devote brain cells to understanding collections/shards/etc.
Benson:
Are you trying to run independent invocations of Solr for every node?
Otherwise, you'd just want to create an 8-shard collection with
maxShardsPerNode set to 8 (or more, I guess).
You're probably launching Solr using the older version of Java somehow. You
should make sure your PATH and JAVA_HOME variables point at your Java 8
install from the point of view of the script or configuration that launches
Solr.
Hope that helps.
At the layer right before you send that XML out, add a fallback option on
error that sends each document one at a time if there's a failure with the
batch.
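A minimal sketch of that fallback pattern. The sendBatch and sendDocument methods are hypothetical stand-ins for your actual HTTP client code, and the "bad document" check just simulates a server-side rejection:

```java
import java.util.Arrays;
import java.util.List;

public class BatchFallback {
    // Simulated batch send: the whole request fails if any document is bad.
    static void sendBatch(List<String> docs) {
        if (docs.stream().anyMatch(d -> d.contains("bad"))) {
            throw new RuntimeException("batch rejected");
        }
    }

    // Simulated single-document send: returns whether the doc indexed.
    static boolean sendDocument(String doc) {
        return !doc.contains("bad");
    }

    // Try the whole batch first; on failure, retry one document at a time
    // so a single bad document doesn't sink the rest of the batch.
    static int indexWithFallback(List<String> docs) {
        try {
            sendBatch(docs);
            return docs.size();
        } catch (RuntimeException e) {
            int ok = 0;
            for (String doc : docs) {
                if (sendDocument(doc)) ok++;
            }
            return ok;
        }
    }

    public static void main(String[] args) {
        int indexed = indexWithFallback(Arrays.asList("doc1", "bad-doc", "doc3"));
        System.out.println(indexed); // 2 of the 3 documents make it in
    }
}
```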
If you're trying to do a bulk ingest of data, I recommend committing less
frequently. Don't soft commit at all until the end of the batch, and hard
commit every 60 seconds.
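In solrconfig.xml terms, that advice might look like the following sketch (values are illustrative; issue an explicit commit yourself once the batch finishes):

```xml
<!-- Hard commit every 60s for durability; don't open a new searcher. -->
<autoCommit>
  <maxTime>60000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<!-- Disable autoSoftCommit during the bulk load; send an explicit
     commit at the end of the batch to make everything visible. -->
<autoSoftCommit>
  <maxTime>-1</maxTime>
</autoSoftCommit>
```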
If you'd like to reduce the number of lines Solr logs, edit the file
example/resources/log4j.properties under Solr's home directory and change
lines that say INFO to WARN.
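For example, the stock file ships with a root logger line like this; switching INFO to WARN quiets most per-request logging (the exact handler names can vary by Solr version):

```
# example/resources/log4j.properties
log4j.rootLogger=WARN, file, CONSOLE
```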
Good call, it could easily be the tlog Nitin is talking about.
As for which definition of high, I was making assumptions as well. :)
Yep, you'll have to increase the heap size for your Tomcat container.
http://stackoverflow.com/questions/6897476/tomcat-7-how-to-set-initial-heap-size-correctly
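On Linux that typically means creating bin/setenv.sh in your Tomcat directory; the sizes below are placeholders, so tune them to your index and query load:

```
# $CATALINA_HOME/bin/setenv.sh (create it if it doesn't exist)
export CATALINA_OPTS="$CATALINA_OPTS -Xms512m -Xmx2g"
```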
if
there are any errors on the Oracle side?
Another way of doing it is by setting the -Dhost=$hostname parameter when
you start Solr.
I would do one of the following:
1. Set a different Solr home for each instance. I'd use the
-Dsolr.solr.home=/d/2 command line switch when launching Solr to do so.
2. RAID 10 the drives. If you expect the Solr instances to get uneven
traffic, pooling the drives will allow a given Solr instance to
The downsides that come to mind:
1. Every write gets amplified by the number of nodes in the cloud. 1000
write requests end up creating 1000*N HTTP calls as the leader forwards
those writes individually to all of the followers in the cloud. Contrast
that with classical replication where only
The Jetty servlet container that Solr uses doesn't understand those
files. It would not use them to determine access, and would likely make
them accessible to web requests in plain text.
On 1/6/15 16:01, Craig Hoffman wrote:
Thanks, Otis. Do you think a .htaccess / .passwd file in the Solr admin
I've been experiencing this problem. Running VisualVM on my instances
shows that they spend a lot of time creating WeakReferences
(org.apache.lucene.util.WeakIdentityMap$IdentityWeakReference that is).
I think what's happening here is the heap's not big enough for Lucene's
caches and it ends up
, Michael Della Bitta wrote:
Only thing you have to worry about (in both the CUSS and the home-grown
case) is that a single bad document in a batch fails the whole batch. It's up
to you to fall back to writing them individually so the rest of the
batch makes it in.
With CUSS, your program will never
Tom:
ConcurrentUpdateSolrServer isn't magic or anything. You could pretty
trivially write something that takes batches of your XML documents,
combines them into a single add message (multiple <doc> tags in the <add>
section), sends them up to Solr, and achieves some of the same speed
benefits.
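A sketch of that combining step. It just concatenates documents into one add message; the field names are made up for illustration, and you'd POST the result to /update with Content-Type: text/xml:

```java
import java.util.Arrays;
import java.util.List;

public class BatchXml {
    // Render one document as an XML <doc> element (fields are examples).
    static String docXml(String id, String title) {
        return "<doc>"
             + "<field name=\"id\">" + id + "</field>"
             + "<field name=\"title\">" + title + "</field>"
             + "</doc>";
    }

    // Wrap many <doc> elements in a single <add> message.
    static String addMessage(List<String> docs) {
        return "<add>" + String.join("", docs) + "</add>";
    }

    public static void main(String[] args) {
        String batch = addMessage(Arrays.asList(
            docXml("1", "first"),
            docXml("2", "second")));
        // One HTTP request now carries both documents.
        System.out.println(batch);
    }
}
```

(In real code you'd also want to XML-escape the field values.)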
Hi, Manohar,
1. Do the posting list and term list of the index reside in memory? If
not, how do I load them into memory? I don't want to load the entire data, like
using DocumentCache. Nor do I want to use RAMDirectoryFactory, as the data
will be lost if you restart
If you use MMapDirectory, Lucene
Good discussion topic.
I'm wondering if Solr doesn't need some sort of "shoot the other node in
the head" functionality.
We recently ran into one of those failure modes that only AWS can dream up,
where for an extended amount of time, two nodes in the same placement
group couldn't talk to one
We're achieving some success by treating aliases as collections and
collections as shards.
More specifically, there's a read alias that spans all the collections,
and a write alias that points at the 'latest' collection. Every week, I
create a new collection, add it to the read alias, and
You could also find a natural key that doesn't look like an ID and
create a name-based (Type 3) UUID out of it, with something like Java's
nameUUIDFromBytes:
https://docs.oracle.com/javase/7/docs/api/java/util/UUID.html#nameUUIDFromBytes%28byte%5B%5D%29
Implementations of this exist in other
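In Java it might look like this; the key format ("source:externalId") is just an example of a natural key you might already have:

```java
import java.nio.charset.StandardCharsets;
import java.util.UUID;

public class NaturalKeyId {
    // Derive a stable, name-based (Type 3) UUID from a natural key.
    static UUID idFor(String naturalKey) {
        return UUID.nameUUIDFromBytes(naturalKey.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        UUID a = idFor("crawler:article-12345");
        UUID b = idFor("crawler:article-12345");
        // Name-based UUIDs are deterministic: same key, same UUID, so
        // re-indexing the same record overwrites rather than duplicates.
        System.out.println(a.equals(b)); // true
        System.out.println(a.version()); // 3
    }
}
```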
Hi Michal,
Is there a particular reason to shard your collections like that? If it
was mainly for ease of operations, I'd consider just using CompositeId
to prevent specific types of queries hotspotting particular nodes.
If your ingest rate is fast, you might also consider making each
I generally turn off the console logging when I install Tomcat. It
flushes after every line, unlike the other handlers, and that's sort of
a performance problem (although if you need that, you need that).
Basically, find logging.properties in Tomcat's conf directory, and
change these two
1. The new replica will not begin serving data until it's all there and
caught up. You can watch the replica status on the Cloud screen to see
it catch up; when it's green, you're done. If you're trying to automate
this, you're going to look for the replica that says recovering in
Pretty sure what you need is called KeywordMarkerFilterFactory.
<filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
On 11/5/14 17:24, Tang, Rebecca wrote:
Hi there,
For some hyphenated terms, I want them to stay as is instead of being
tokenized. For example:
http://sematext.com/spm/
You probably just need to put double quotes around the url.
On 10/30/14 15:27, Craig Hoffman wrote:
Thanks! One more question. wget seems to be choking on my URL, in particular the #
and the character. What's the best method of escaping?
http://My Host
Check this out:
http://www.slideshare.net/cloudera/solrhadoopbigdatasearch
On 10/29/14 16:31, Pritesh Patel wrote:
What exactly does this API do?
--Pritesh
We index directly from mappers using SolrJ. It does work, but you pay
the price of having to instantiate all those sockets vs. the way
MapReduceIndexerTool works, where you're writing to an
EmbeddedSolrServer directly in the Reduce task.
You don't *need* to use MapReduceIndexerTool, but it's
toward becoming
proficient with one, I would recommend against it.
On 10/28/14 15:27, S.L wrote:
I'm using Apache Hadoop and Solr; do I need to switch to Cloudera
This doesn't answer your question, but unless something is changed,
you're going to want to set this to false. It causes index corruption at
the moment.
On 10/25/14 03:42, Norgorn wrote:
<bool name="solr.hdfs.blockcache.write.enabled">true</bool>
You want external zookeepers. Partially because you don't want your Solr
garbage collections holding up zookeeper availability, but also because
you don't want your zookeepers going offline if you have to restart Solr
for some reason.
Also, you want 3 or 5 zookeepers, not 4 or 8.
On
I'm curious, could you elaborate on the issue and the partial fix?
Thanks!
On 10/27/14 11:31, Markus Jelsma wrote:
It is an ancient issue. One of the major contributors to the issue was resolved
some versions ago, but we are still seeing it sometimes too; there is nothing to
see in the logs.
Andrei,
I'm wondering if you've considered using Classic replication for this use
case. It seems better suited for it.
Yes, that's what I'm suggesting. It seems a perfect fit for a single shard
collection with an offsite remote that you don't always want to write to.
Hi Scott,
Any chance this could be an IPv6 thing? What if you start both server and
client with this flag:
-Djava.net.preferIPv4Stack=true
take advantage of some of the file format improvements.
However, it is somewhat of a design smell that you can't reindex. In my
experience, it is extremely valuable to be able to reindex your data at
will.
Yes, you can just do something like curl
"http://mysolrserver:mysolrport/solr/mycollectionname/update?optimize=true".
You should expect heavy disk activity while this completes. I wouldn't do
more than one collection at a time.
Grainne,
I would recommend that you do not do this. In fact, I would recommend you not
use NFS as well, although that’s more likely to work, just not ideally. Solr’s
going to do best when it’s working with fast, local storage that the OS can
cache natively.
Yes, there's SolrInputDocumentWritable and MapReduceIndexerTool, plus the
Morphline stuff (check out
https://github.com/markrmiller/solr-map-reduce-example).
1. What version of Solr are you running?
2. Have you made substantial changes to solrconfig.xml?
/CommonQueryParameters#Deep_paging_with_cursorMark
That's the magic knock that will get you what you want.
, but
my hunch is that you'll be happier in general with the behavior of a field
type that does tokenizing and stemming for plain text search anyway.
into a different field that has the keyword
tokenizer?
not really aimed at preserving uptime as
far as I know.
If all you need is better availability, I would start by trying out an
additional replica of each shard on a different box, so each box would be
serving the data for 2 shards and each shard would be available on 2 boxes.
. I
could be wrong. It's probably best if you post your field definition from
your schema.
Also, is this a free-text field, or something that's more like a short
string?
Thanks,
If that's your problem, I bet all you have to do is twiddle on one of the
catenate options, either catenateWords or catenateAll.
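For example, a WordDelimiterFilterFactory with catenateWords switched on also indexes the joined form of split tokens (the other attributes shown are common settings, not a prescription):

```xml
<filter class="solr.WordDelimiterFilterFactory"
        generateWordParts="1"
        generateNumberParts="1"
        catenateWords="1"
        catenateAll="0"/>
```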
with 4.8.1 pointing at a CDH 5 HDFS, and a production
cluster with 4.9 as well.
/csug_tuning_solr.html
If anyone has anything to add or correct about these two resources, please
let me know!
Hi Philippe,
You can indeed copy an index like that. The problem probably arises because
4.9.0 is using core discovery by default. This wiki page will shed some
light:
https://wiki.apache.org/solr/Core%20Discovery%20%284.4%20and%20beyond%29
just to be sure?
Thanks,
You'd still need to modify that schema to use the ASCII folding filter.
Alternatively, if you want something off the shelf, you might check out
Sematext's autocomplete product:
http://www.sematext.com/products/autocomplete/index.html
contain the same term?
version of you field for display, so your accented characters would not get
stripped.
propagate over to Solr.
http://www.ngdata.com/on-lily-hbase-hadoop-and-solr/
You need to use this filter in your analysis chain:
http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.ASCIIFoldingFilterFactory
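A field type using it might look like this sketch (the tokenizer choice and type name are illustrative, not required):

```xml
<fieldType name="text_folded" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.ASCIIFoldingFilterFactory"/>
  </analyzer>
</fieldType>
```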
How are you implementing autosuggest? I'm assuming you're querying an
indexed field and getting a stored value back. But there are a wide variety
of ways of doing it.
at this:
http://www.lucenerevolution.org/sites/default/files/Living%20with%20Garbage.pdf
Rahul,
Check out the relevancy FAQ. You probably want to boost that field value at
index time, or use the query elevation component.
http://wiki.apache.org/solr/SolrRelevancyFAQ
to commits and optimizes?
What's the %system load on your nodes? What servlet container are you
using? Are you writing a single document per update, or in batches? How
many clients are attached to your cloud?
for findability reasons and I heard it works out OK.
Alex, maybe you're thinking of constraints put on shard keys?
. Is that a bad idea? It seems like there might be some
overhead to having several going in the same process that could be avoided,
but maybe I'm overcomplicating things.
Thanks,
.
If you're just worried about the segment count, you can tune that in
solrconfig.xml and Solr will merge down your index on the fly as it indexes.
I'm currently playing around with Solr Cloud migration strategies, too. I'm
wondering... when you say zero downtime, do you mean zero *read*
downtime, or zero downtime altogether?
tool to bring yourself up to the new version.
Part of this for me is a migration to HDFSDirectory so there's an added
level of complication there.
I would assume that since you only need to preserve reads, you could cut
over once your collections were created on the new cloud?
Unfortunately, it's not really advisable to expose Solr directly to
the open web.
There are many avenues for DOSing a Solr install otherwise, and depending on
how it's configured, some more intrusive vulnerabilities.
.
or experiences you might be able to share would be helpful.
In the meantime, I'm going to start experimenting with some of these
approaches.
Thanks!
/currency.data file either. Is it possible that you have somehow
used a mismatched JAVA_HOME and tools.jar?
Did you put that attribute on the root element, or somewhere else? The
beginning of solr.xml should look like this:
<?xml version="1.0" encoding="UTF-8" ?>
<solr sharedLib="lib" persistent="true">
Any chance you don't have a persistent="true" attribute in your solr.xml?
rank higher.
Or, you could use the Suggester component or one of the other bolt-on
autocomplete components instead.
Maybe you should post your current field definition and let us know
specifically what you're trying to achieve?
We've definitely looked at Luwak before... nice to hear it might be brought
closer into the Solr ecosystem!
There's an example of using curl to make a REST call to update a core on
this page:
https://wiki.apache.org/solr/UpdateXmlMessages
If that doesn't help, please let us know what error you're receiving.
Just a thought: If your users can send updates and you can't trust them,
how can you keep them from deleting all your data?
I would consider using a servlet filter to inspect the request. That would
probably be non-trivial if you plan to accept javabin requests as well.
with a maxTime somewhat larger than
your soft commit setting, somewhere in the low minutes range.
no mass
loading. Additionally, we generally do bulk data collection across only 3
days of data, so if you're looking to do a mess of reporting against your
full set, take that into consideration.
if anybody has experience with installing a fairly new
version of Solr, say 4.7 or 4.8, through Cloudera Manager.
Hi Furkan,
If I were to guess, the XML format is more cross-compatible with different
versions of SolrJ. But it might not be intentional.
In any case, feeding your SolrServer a BinaryResponseParser will switch it
over to javabin.
The speed of ingest via HTTP improves greatly once you do two things:
1. Batch multiple documents into a single request.
2. Index with multiple threads at once.
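Both suggestions together can be sketched like this; sendBatch is a hypothetical stand-in for the HTTP POST to Solr's /update handler, and the batch size and thread count are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ParallelIndexer {
    static final AtomicInteger sent = new AtomicInteger();

    // Stand-in for the real HTTP call carrying one batch of documents.
    static void sendBatch(List<String> batch) {
        sent.addAndGet(batch.size());
    }

    // Split the documents into batches and send them from several threads.
    static int index(List<String> docs, int batchSize, int threads) throws InterruptedException {
        sent.set(0);
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < docs.size(); i += batchSize) {
            List<String> batch = docs.subList(i, Math.min(i + batchSize, docs.size()));
            pool.submit(() -> sendBatch(batch));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return sent.get();
    }

    public static void main(String[] args) throws InterruptedException {
        List<String> docs = new ArrayList<>();
        for (int i = 0; i < 1000; i++) docs.add("doc-" + i);
        // 1000 documents go out as 10 batches over 4 threads.
        System.out.println(index(docs, 100, 4));
    }
}
```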
you get an immediate notification of the problem, and to
install some sort of caching server like nscd if you expect to have DNS
resolution failures regularly.
collection with the collections API and the required
bits are in schema.xml and solrconfig.xml, you should be good to go. See
https://wiki.apache.org/solr/SolrCloud#Required_Config
-XX:CMSInitiatingOccupancyFraction ?
Just a shot in the dark, since I'm not familiar with Jetty's logging
statements, but that looks like plain old dropped HTTP sockets to me.
), but I don't know if there's
any definitive information about how to set them appropriately for Solr.
I'm not sure how you're measuring free RAM. Maybe this will help:
http://www.linuxatemyram.com/play.html
Hi Metin,
How many IDs are you supplying in a single query? You could probably
accomplish this easily with boosts if it were few.
Hi,
As for your first question, setting openSearcher to true means you will see
the new docs after every hard commit. Soft and hard commits only become
isolated from one another with that set to false.
Your second problem might be explained by your large heap and garbage
collection. Walking a
to protect the guilty)
The admin handler for replication doesn't seem to be there, but the actual
API seems to work normally.
Hi,
Filter queries don't affect score, so boosting won't have an effect there.
If you want those query terms to get boosted, move them into the q
parameter.
http://wiki.apache.org/solr/CommonQueryParameters#fq
Hope that helps!
should be using.
Here at Appinions, we use mostly m2.2xlarges, but the new i2.xlarges look
pretty tasty primarily because of the SSD, and I'll probably push for a
switch to those when our reservations run out.
http://www.ec2instances.info/