On Wed, Apr 22, 2015 at 4:17 PM, Yonik Seeley wrote:
> On Wed, Apr 22, 2015 at 11:00 AM, didier deshommes
> wrote:
> > curl "http://localhost:8983/solr/gettingstarted/select?wt=json&indent=true&q=foundation"
> > -H "Content-ty
A similar problem seems to happen when sending application/json to the
search handler. Solr returns a NullPointerException for some reason:
vagrant@precise64:~/solr-5.1.0$ curl "http://localhost:8983/solr/gettingstarted/select?wt=json&indent=true&q=foundation" -H "Content-type:application/json"
It would be a huge step forward if one could have several hundreds of Solr
collections, but only have a small portion of them opened/loaded at the
same time. This is similar to ElasticSearch's close index api, listed here:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/indice
I added a JIRA issue here: https://issues.apache.org/jira/browse/SOLR-6399
On Thu, May 22, 2014 at 4:16 PM, Erick Erickson
wrote:
> "Age out" in this context is just implementing a LRU cache for open
> cores. When the cache limit is exceeded, the oldest core is closed
> automatically.
>
> Best,
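The "age out" behavior Erick describes can be sketched with a toy LRU cache; this is illustrative Python, not Solr's actual (Java) transient-core cache, and the names are made up:

```python
from collections import OrderedDict

class TransientCoreCache:
    """Toy LRU sketch: when the cap is exceeded, the least-recently-used
    core is closed automatically, like Solr's transient core cache."""

    def __init__(self, cap):
        self.cap = cap
        self.open_cores = OrderedDict()  # name -> stand-in for a core object

    def get(self, name):
        if name in self.open_cores:
            self.open_cores.move_to_end(name)        # mark most recently used
        else:
            if len(self.open_cores) >= self.cap:
                oldest, _ = self.open_cores.popitem(last=False)
                print(f"closing {oldest}")           # Solr would close() it here
            self.open_cores[name] = f"<core {name}>"
        return self.open_cores[name]
```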
On Thu, May 22, 2014 at 10:30 AM, Erick Erickson wrote:
> If we manage to extend the "lazy core" loading from stand-alone to
> "lazy collection" loading in SolrCloud would that satisfy the
> use-case? It still doesn't allow manual unloading of the collection,
> but the large collection would "age
Thanks Furkan,
That's exactly what I was looking for.
On Wed, Sep 18, 2013 at 4:21 PM, Furkan KAMACI wrote:
> Are you looking for that:
>
> http://lucene.472066.n3.nabble.com/SOLR-Cloud-Collection-Management-quesiotn-td4063305.html
>
> On Wednesday, 18 September 2013, didi
Hi,
How do I add a node as a replica to a solrcloud cluster? Here is my
situation: some time ago, I created several collections
with replicationFactor=2. Now I need to add a new replica. I thought just
starting a new node and re-using the same ZooKeeper instance would make it
automatically a replica
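Just pointing a new node at the same ZooKeeper does not, by itself, add it as a replica; later Solr releases added an explicit ADDREPLICA action to the Collections API. A sketch of building that request (collection, shard, and node names here are made-up placeholders):

```python
from urllib.parse import urlencode

# Hypothetical ADDREPLICA request (Collections API, later Solr releases).
params = {
    "action": "ADDREPLICA",
    "collection": "mycollection",   # assumption: your collection's name
    "shard": "shard1",              # assumption: the shard needing a replica
    "node": "newhost:8983_solr",    # assumption: the new node's name in live_nodes
}
url = "http://localhost:8983/solr/admin/collections?" + urlencode(params)
print(url)  # fetch this with curl or urllib against a live cluster
```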
For Solr 4.3.0, I don't think you can pass loadOnStartup to the Collections
API, although the Cores API accepts it. That's been my experience anyway.
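Building the CoreAdmin request that does accept those flags might look like this; the core and instanceDir names are illustrative:

```python
from urllib.parse import urlencode

# Sketch: passing loadOnStartup/transient through the CoreAdmin API
# (the Collections API in 4.3.0 doesn't forward them, per the message above).
params = {
    "action": "CREATE",
    "name": "core_tmp",          # assumption: your new core's name
    "instanceDir": "core_tmp",   # assumption: its instance directory
    "loadOnStartup": "false",
    "transient": "true",
}
url = "http://localhost:8983/solr/admin/cores?" + urlencode(params)
print(url)
```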
On Mon, Aug 5, 2013 at 6:27 AM, Srivatsan wrote:
> No errors in zookeeper and solr. I'm using CloudSolrServer for creating
> collections as said
working for you, let me know
how you got it working!
On Fri, May 3, 2013 at 2:11 PM, didier deshommes wrote:
>
> On Fri, May 3, 2013 at 11:18 AM, Erick Erickson
> wrote:
>
>> The cores aren't loaded (or at least shouldn't be) for getting the status.
>>
it's been removed from
> the transient cache. Ditto for the create action.
>
> So let's figure out whether you're really seeing loaded cores or not, and
> then
> raise a JIRA if so...
>
> Thanks for reporting!
> Erick
>
> On Thu, May 2, 2013 at 1:27 PM, d
Hi,
I've been very interested in the transient core feature of solr to manage a
large number of cores. I'm especially interested in this use case, that the
wiki lists at http://wiki.apache.org/solr/LotsOfCores (looks to be down
now):
>loadOnStartup=false transient=true: This is really the use-case
I've created an issue and patch here that makes it possible to specify
transient and loadOnStartup on core creation:
https://issues.apache.org/jira/browse/SOLR-4631
On Wed, Mar 20, 2013 at 10:14 AM, didier deshommes wrote:
> Thanks. Is there a way to pass loadOnStartup and/or tran
> I don't think SolrCloud works with the transient stuff.
>
> - Mark
>
> On Mar 19, 2013, at 8:04 PM, didier deshommes wrote:
>
> > Hi,
> > I cannot get Solrcloud to respect transientCacheSize when creating
> multiple
> > cores via the web api. I'm running solr 4.2
Hi,
I cannot get Solrcloud to respect transientCacheSize when creating multiple
cores via the web api. I'm running solr 4.2 like this:
java -Dbootstrap_confdir=./solr/collection1/conf
-Dcollection.configName=conf1 -DzkRun -DnumShards=1 -jar start.jar
I'm creating multiple cores via the core admin
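For context, transientCacheSize is configured in the 4.x-style solr.xml on the <cores> element; a minimal sketch (all values illustrative):

```xml
<solr persistent="true">
  <!-- transientCacheSize caps how many transient cores stay loaded at once -->
  <cores adminPath="/admin/cores" transientCacheSize="2">
    <core name="core1" instanceDir="core1" transient="true" loadOnStartup="false"/>
  </cores>
</solr>
```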
Consider putting a cache (memcached, redis, etc) *in front* of your
solr slaves. Just make sure to update it when replication occurs.
didier
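The look-aside cache idea above can be sketched in a few lines; a plain dict stands in for memcached/redis, and solr_query is whatever client call you already make:

```python
# Sketch: cache in front of the Solr slaves; invalidate when replication runs.
cache = {}

def cached_search(q, solr_query):
    if q not in cache:
        cache[q] = solr_query(q)   # miss: actually hit the Solr slave
    return cache[q]

def on_replication():
    cache.clear()                  # results are stale once a new index arrives
```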
On Tue, Aug 9, 2011 at 6:07 PM, arian487 wrote:
> I'm wondering if the caches on all the slaves are replicated across (such as
> queryResultCache). That i
On Thu, Feb 10, 2011 at 4:08 PM, Stijn Vanhoorelbeke
wrote:
> Hi,
>
> I've done some stress testing onto my solr system ( running in the ec2 cloud
> ).
> From what I've noticed during the tests, the QTime drops to just 1 or 2 ms (
> on a index of ~2 million documents ).
>
> My first thought pointe
On Thu, Oct 21, 2010 at 3:00 PM, Shawn Heisey wrote:
> On 10/21/2010 1:42 PM, didier deshommes wrote:
>>
>> I noticed that the java-based replication does not make replication of
>> multiple core automatic. For example, if I have a master with 7
>> cores, any slave I se
Hi there,
I noticed that the java-based replication does not make replication of
multiple core automatic. For example, if I have a master with 7
cores, any slave I set up has to explicitly know about each of the 7
cores to be able to replicate them. This information is stored in
solr.xml, and sinc
Hi Alexandre,
Have you tried setting a higher headerBufferSize? Look in
etc/jetty.xml and search for 'headerBufferSize'; I think it controls
the size of the url. By default it is 8192.
didier
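For reference, the relevant part of etc/jetty.xml looked roughly like this in the Jetty versions Solr shipped with at the time (the 65536 value is just an example):

```xml
<New class="org.mortbay.jetty.nio.SelectChannelConnector">
  <Set name="port"><SystemProperty name="jetty.port" default="8983"/></Set>
  <!-- raise this if long URLs blow past the default 8192 bytes -->
  <Set name="headerBufferSize">65536</Set>
</New>
```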
On Wed, Aug 18, 2010 at 2:43 PM, Alexandre Rocco wrote:
> Guys,
>
> We are facing an issue executing ve
For xml 1.1 documents, you can view if any of your documents have
these restricted characters defined here:
http://www.w3.org/TR/2006/REC-xml11-20060816/#NT-RestrictedChar
If they are, you'll have to remove them.
didier
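A small sketch of stripping those characters before indexing, using the RestrictedChar ranges from the XML 1.1 spec linked above:

```python
import re

# RestrictedChar per the XML 1.1 spec:
# [#x1-#x8] | [#xB-#xC] | [#xE-#x1F] | [#x7F-#x84] | [#x86-#x9F]
RESTRICTED = re.compile("[\x01-\x08\x0b\x0c\x0e-\x1f\x7f-\x84\x86-\x9f]")

def strip_restricted(text: str) -> str:
    """Remove XML 1.1 restricted characters from a string."""
    return RESTRICTED.sub("", text)
```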
On Sun, Jul 18, 2010 at 11:16 AM, robert mena wrote:
> Hi,
>
> I am doing s
Have you taken a look at Solr's TermVector component? It's probably
what you want:
http://wiki.apache.org/solr/TermVectorComponent
didier
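A sketch of what a TermVectorComponent query looks like; it assumes the component is registered on the handler in solrconfig.xml (the /tvrh handler and core name here are illustrative):

```python
from urllib.parse import urlencode

# Sketch: query a handler with the TermVectorComponent enabled.
params = {
    "q": "text:search",   # assumption: a "text" field in your schema
    "tv": "true",
    "tv.tf": "true",      # include term frequencies
    "tv.df": "true",      # include document frequencies
}
url = "http://localhost:8983/solr/core0/tvrh?" + urlencode(params)
print(url)
```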
On Tue, Jun 15, 2010 at 8:38 AM, sarfaraz masood
wrote:
> I am Sarfaraz, working on a Search Engine
> project which is based on Nutch & Solr. I am trying to
On Wed, Jan 27, 2010 at 9:48 AM, Matthieu Labour
wrote:
> What I am trying to understand is the search/filter algorithm. If I have 1
> core with all documents and I search for "Paris" for userId="123", is lucene
> going to first search for all Paris documents and then apply a filter on the
> u
Have you tried loading solr instances as you need them and unloading
those that are not being used? I wish I could help more; I don't know
many people running that many cores.
didier
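Unloading an idle core goes through the CoreAdmin API; a sketch of that request (the core name is a placeholder):

```python
from urllib.parse import urlencode

# Sketch: CoreAdmin UNLOAD for a core that isn't currently needed.
params = {"action": "UNLOAD", "core": "core_017"}  # assumption: core name
url = "http://localhost:8983/solr/admin/cores?" + urlencode(params)
print(url)  # run against a live Solr; CREATE it again later to reload
```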
On Sun, Dec 20, 2009 at 2:38 PM, Matthieu Labour wrote:
> Hi
> I have a solr instance in which I created 700 c
On Sun, Oct 25, 2009 at 1:15 PM, Chris Hostetter
wrote:
>
> : I need some help about the mergeindex command. I have 2 cores A and B
> : that I want to merge into a new index RES. A has 100 docs and B 10
> : docs. All of B's docs are from A, except that one attribute is
> : changed. The goal is to
Hi there,
I need some help about the mergeindex command. I have 2 cores A and B
that I want to merge into a new index RES. A has 100 docs and B 10
docs. All of B's docs are from A, except that one attribute is
changed. The goal is to bring the updated attributes from B into A.
When I issue the merg
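The mergeindexes command is issued through the CoreAdmin API; a sketch (paths are illustrative, and the source indexes must not change while the merge runs). Note that an index merge is a raw Lucene segment merge: it will not deduplicate B's updated copies of A's documents, so RES would end up with both versions.

```python
from urllib.parse import urlencode

# Sketch: merge cores A and B's index directories into core RES.
params = [
    ("action", "mergeindexes"),
    ("core", "RES"),
    ("indexDir", "/path/to/A/data/index"),  # assumption: A's data dir
    ("indexDir", "/path/to/B/data/index"),  # assumption: B's data dir
]
url = "http://localhost:8983/solr/admin/cores?" + urlencode(params)
print(url)
```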
I am using Solr to index data in a SQL database. Most of the data
doesn't change after initial commit, except for a single boolean field
that indicates whether an item is flagged as 'needing attention'. So
I have a need_attention field in the database that I update whenever a
user marks an item a
Forgot to
add that we're running a development version of solr (git clone from ~
3 weeks ago).
Thanks,
didier
>
> Francis
>
> -Original Message-
> From: didier deshommes [mailto:dfdes...@gmail.com]
> Sent: Thursday, September 24, 2009 3:32 PM
> To: solr-user@lucene
Hi there,
We are running solr and allocating 1GB to it and we keep having
OutOfMemoryErrors. We get messages like this:
Error during auto-warming of
key:org.apache.solr.search.queryresult...@c785194d:java.lang.OutOfMemoryError:
Java heap space
at java.util.Arrays.copyOfRange(Arrays.java:3