On Wed, Apr 22, 2015 at 4:17 PM, Yonik Seeley ysee...@gmail.com wrote:
On Wed, Apr 22, 2015 at 11:00 AM, didier deshommes dfdes...@gmail.com
wrote:
curl "http://localhost:8983/solr/gettingstarted/select?wt=json&indent=true&q=foundation" -H "Content-type:application/json"
You're telling
A similar problem seems to happen when sending application/json to the
search handler. Solr returns a NullPointerException for some reason:
vagrant@precise64:~/solr-5.1.0$ curl "http://localhost:8983/solr/gettingstarted/select?wt=json&indent=true&q=foundation" -H "Content-type:application/json"
{
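In case it helps, the query string above appears to have lost its & separators somewhere in transit. A sketch of the same request with the separators restored and the URL quoted; the assumption here (mine, not confirmed in the thread) is that the NPE comes from the JSON content type arriving with an empty body, so the header is dropped:

```shell
# Sketch: select request with & separators restored and the URL quoted so
# the shell does not treat & as a background operator. The -H header is
# omitted on the assumption that it is what triggers the NPE.
req='curl "http://localhost:8983/solr/gettingstarted/select?wt=json&indent=true&q=foundation"'
echo "$req"
```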
It would be a huge step forward if one could have several hundreds of Solr
collections, but only have a small portion of them opened/loaded at the
same time. This is similar to Elasticsearch's close index API, listed here:
I added a JIRA issue here: https://issues.apache.org/jira/browse/SOLR-6399
On Thu, May 22, 2014 at 4:16 PM, Erick Erickson erickerick...@gmail.com
wrote:
Age out in this context just means implementing an LRU cache for open
cores. When the cache limit is exceeded, the oldest core is closed
On Thu, May 22, 2014 at 10:30 AM, Erick Erickson erickerick...@gmail.com wrote:
If we manage to extend the lazy core loading from stand-alone to
lazy collection loading in SolrCloud would that satisfy the
use-case? It still doesn't allow manual unloading of the collection,
but the large
Thanks Furkan,
That's exactly what I was looking for.
On Wed, Sep 18, 2013 at 4:21 PM, Furkan KAMACI furkankam...@gmail.com wrote:
Are you looking for this:
http://lucene.472066.n3.nabble.com/SOLR-Cloud-Collection-Management-quesiotn-td4063305.html
On Wednesday, September 18, 2013, didier
Hi,
How do I add a node as a replica to a solrcloud cluster? Here is my
situation: some time ago, I created several collections
with replicationFactor=2. Now I need to add a new replica. I thought just
starting a new node and reusing the same ZooKeeper instance would make it
automatically a
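For later Solr releases (4.8 and up), the Collections API also gained an explicit ADDREPLICA action. A sketch of the request; "mycollection" and "shard1" are placeholder names:

```shell
# Sketch: add a replica through the Collections API (Solr 4.8+).
# Collection and shard names below are placeholders.
req="http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=mycollection&shard=shard1"
echo "$req"
```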
For Solr 4.3.0, I don't think you can pass loadOnStartup to the Collections
API, although the Cores API accepts it. That's been my experience anyway.
On Mon, Aug 5, 2013 at 6:27 AM, Srivatsan ranjith.venkate...@gmail.com wrote:
No errors in zookeeper and solr. I'm using CloudSolrServer for
and it is working for you, let me know
how you got it working!
On Fri, May 3, 2013 at 2:11 PM, didier deshommes dfdes...@gmail.com wrote:
On Fri, May 3, 2013 at 11:18 AM, Erick Erickson
erickerick...@gmail.comwrote:
The cores aren't loaded (or at least shouldn't be) for getting the status
a JIRA if so...
Thanks for reporting!
Erick
On Thu, May 2, 2013 at 1:27 PM, didier deshommes dfdes...@gmail.com
wrote:
Hi,
I've been very interested in the transient core feature of solr to
manage a
large number of cores. I'm especially interested in this use case, that
the
wiki
Hi,
I've been very interested in the transient core feature of solr to manage a
large number of cores. I'm especially interested in this use case, that the
wiki lists at http://wiki.apache.org/solr/LotsOfCores (looks to be down
now):
loadOnStartup=false transient=true: This is really the
I've created an issue and patch here that makes it possible to specify
transient and loadOnStartup on core creation:
https://issues.apache.org/jira/browse/SOLR-4631
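With that patch applied, the CoreAdmin CREATE call could look something like the sketch below; the core name and instanceDir are placeholders:

```shell
# Sketch: create a transient, lazily-loaded core via the CoreAdmin API,
# passing the two properties SOLR-4631 makes settable at creation time.
# "core1" is a placeholder name.
req="http://localhost:8983/solr/admin/cores?action=CREATE&name=core1&instanceDir=core1&transient=true&loadOnStartup=false"
echo "$req"
```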
On Wed, Mar 20, 2013 at 10:14 AM, didier deshommes dfdes...@gmail.com wrote:
Thanks. Is there a way to pass loadOnStartup
think SolrCloud works with the transient stuff.
- Mark
On Mar 19, 2013, at 8:04 PM, didier deshommes dfdes...@gmail.com wrote:
Hi,
I cannot get SolrCloud to respect transientCacheSize when creating
multiple
cores via the web API. I'm running solr 4.2 like this:
java -Dbootstrap_confdir
Hi,
I cannot get SolrCloud to respect transientCacheSize when creating multiple
cores via the web API. I'm running solr 4.2 like this:
java -Dbootstrap_confdir=./solr/collection1/conf
-Dcollection.configName=conf1 -DzkRun -DnumShards=1 -jar start.jar
I'm creating multiple cores via the core admin
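For reference, in the legacy (pre-5.0) solr.xml format, transientCacheSize is an attribute on the cores element; a sketch of what the relevant fragment looks like:

```xml
<!-- Sketch of legacy solr.xml (Solr 4.x): transientCacheSize caps how many
     transient cores stay loaded at once; beyond that, the least recently
     used core is closed. -->
<solr persistent="true">
  <cores adminPath="/admin/cores" transientCacheSize="2">
    ...
  </cores>
</solr>
```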
Consider putting a cache (memcached, redis, etc) *in front* of your
solr slaves. Just make sure to update it when replication occurs.
didier
On Tue, Aug 9, 2011 at 6:07 PM, arian487 akarb...@tagged.com wrote:
I'm wondering if the caches on all the slaves are replicated across (such as
On Thu, Feb 10, 2011 at 4:08 PM, Stijn Vanhoorelbeke
stijn.vanhoorelb...@gmail.com wrote:
Hi,
I've done some stress testing on my solr system (running in the ec2 cloud).
From what I've noticed during the tests, the QTime drops to just 1 or 2 ms
(on an index of ~2 million documents).
Hi there,
I noticed that the java-based replication does not make replication of
multiple cores automatic. For example, if I have a master with 7
cores, any slave I set up has to explicitly know about each of the 7
cores to be able to replicate them. This information is stored in
solr.xml, and
On Thu, Oct 21, 2010 at 3:00 PM, Shawn Heisey s...@elyograg.org wrote:
On 10/21/2010 1:42 PM, didier deshommes wrote:
I noticed that the java-based replication does not make replication of
multiple cores automatic. For example, if I have a master with 7
cores, any slave I set up has
Hi Alexandre,
Have you tried setting a higher headerBufferSize? Look in
etc/jetty.xml and search for 'headerBufferSize'; I think it controls
the size of the url. By default it is 8192.
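A sketch of the relevant setting in etc/jetty.xml (Jetty 6-era, as shipped with older Solr releases); 65536 is just an example value:

```xml
<!-- Sketch: raise the header buffer inside the connector configuration
     so that long request URLs fit; the default is 8192. -->
<Set name="headerBufferSize">65536</Set>
```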
didier
On Wed, Aug 18, 2010 at 2:43 PM, Alexandre Rocco alel...@gmail.com wrote:
Guys,
We are facing an
For XML 1.1 documents, you can check whether any of your documents
contain the restricted characters defined here:
http://www.w3.org/TR/2006/REC-xml11-20060816/#NT-RestrictedChar
If they do, you'll have to remove them.
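One rough way to strip them before indexing; a sketch that covers only the C0 control range (it keeps tab, LF, and CR, which XML allows):

```shell
# Delete the C0 control characters XML restricts, keeping tab (\011),
# LF (\012), and CR (\015). Note: XML 1.1 also restricts #x7F-#x84 and
# #x86-#x9F, which this does not handle.
clean="$(printf 'bad\001text' | tr -d '\000-\010\013\014\016-\037')"
echo "$clean"
```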
didier
On Sun, Jul 18, 2010 at 11:16 AM, robert mena robert.m...@gmail.com wrote:
Have you taken a look at Solr's TermVector component? It's probably
what you want:
http://wiki.apache.org/solr/TermVectorComponent
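A sketch of a request that turns it on (the component must be registered in solrconfig.xml; the field name here is a placeholder):

```shell
# Sketch: enable the TermVectorComponent for a query, asking for term
# frequencies on a placeholder field "text".
req="http://localhost:8983/solr/select?q=*:*&tv=true&tv.tf=true&tv.fl=text"
echo "$req"
```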
didier
On Tue, Jun 15, 2010 at 8:38 AM, sarfaraz masood
sarfarazmasood2...@yahoo.com wrote:
I am Sarfaraz, working on a Search Engine
project which is based on
On Wed, Jan 27, 2010 at 9:48 AM, Matthieu Labour
matthieu_lab...@yahoo.com wrote:
What I am trying to understand is the search/filter algorithm. If I have 1
core with all documents and I search for Paris for userId=123, is lucene
going to first search for all Paris documents and then apply a
Have you tried loading solr instances as you need them and unloading
those that are not being used? I wish I could help more; I don't know
many people running that many cores.
didier
On Sun, Dec 20, 2009 at 2:38 PM, Matthieu Labour matth...@strateer.com wrote:
Hi
I have a solr instance in
On Sun, Oct 25, 2009 at 1:15 PM, Chris Hostetter
hossman_luc...@fucit.org wrote:
: I need some help about the mergeindex command. I have 2 cores A and B
: that I want to merge into a new index RES. A has 100 docs and B 10
: docs. All of B's docs are from A, except that one attribute is
:
Hi there,
I need some help about the mergeindex command. I have 2 cores A and B
that I want to merge into a new index RES. A has 100 docs and B 10
docs. All of B's docs are from A, except that one attribute is
changed. The goal is to bring the updated attributes from B into A.
When I issue the
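For context, a sketch of the CoreAdmin mergeindexes request with the core names from the message. One caveat worth noting: this is a raw Lucene index merge, so documents from B do not replace their older versions from A by uniqueKey, which would explain seeing both copies:

```shell
# Sketch: merge source cores A and B into target core RES via CoreAdmin.
# mergeindexes does not deduplicate by uniqueKey.
req="http://localhost:8983/solr/admin/cores?action=mergeindexes&core=RES&srcCore=A&srcCore=B"
echo "$req"
```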
I am using Solr to index data in a SQL database. Most of the data
doesn't change after initial commit, except for a single boolean field
that indicates whether an item is flagged as 'needing attention'. So
I have a need_attention field in the database that I update whenever a
user marks an item
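In Solr 4.0 and later, atomic updates can flip a single stored field without resending the whole document. A sketch of the JSON update body, using the field name from the message and a placeholder document id:

```shell
# Sketch: atomic-update body that sets only need_attention (Solr 4.0+,
# requires all fields to be stored or docValues). "doc1" is a placeholder.
body='[{"id":"doc1","need_attention":{"set":true}}]'
echo "$body"
```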
Hi there,
We are running solr and allocating 1GB to it and we keep having
OutOfMemoryErrors. We get messages like this:
Error during auto-warming of
key:org.apache.solr.search.queryresult...@c785194d:java.lang.OutOfMemoryError:
Java heap space
at
forgot to
add that we're running a development version of solr (git clone from ~
3 weeks ago).
Thanks,
didier
Francis
-Original Message-
From: didier deshommes [mailto:dfdes...@gmail.com]
Sent: Thursday, September 24, 2009 3:32 PM
To: solr-user@lucene.apache.org
Cc: Andrew