A client application which queries Solr needs to increase the number of
simultaneous connections in order to improve performance (in addition
to getting the Solr results, it needs to fetch internal resources like images).
This increment improved client performance, but caused degradation in
The configs are in Zookeeper. So you have to switch your thinking,
it's rather confusing at first.
When you create a collection, you specify a config set, these are usually in
./server/solr/configsets/data_driven_schema,
./server/solr/configsets/techproducts and the like.
The entire conf
Does this apply to Solr 4.10, or only to Solr 5?
--
View this message in context:
http://lucene.472066.n3.nabble.com/increase-connections-on-tomcat-tp4192405p4192436.html
Sent from the Solr - User mailing list archive at Nabble.com.
I removed/commented it out as it was not understandable and not needed for our use.
With Regards
Aman Tandon
On Tue, Mar 10, 2015 at 8:04 PM, Steve Rowe sar...@gmail.com wrote:
Hi Aman,
The stack trace shows that the AddSchemaFieldsUpdateProcessorFactory
specified in data_driven_schema_configs’s
Thanks David for sharing! The custom attribute approach sounds interesting
indeed.
Markus
-Original message-
From: david.w.smi...@gmail.com
Sent: Tuesday 10th March 2015 16:53
To: solr-user@lucene.apache.org
Subject: Re: Delimited payloads input issue
Thanks!
I am using the old way; is there any real reason to switch?
cheers
On 11 March 2015 at 20:18, Shawn Heisey apa...@elyograg.org wrote:
On 3/11/2015 12:25 PM, Karl Kildén wrote:
I am a Solr beginner. Does anyone know how Solr 5.0 determines the max heap
size? I can't find it anywhere.
Thanks a lot Erick.. It will be helpful.
On Wed, Mar 11, 2015 at 9:27 PM, Erick Erickson erickerick...@gmail.com
wrote:
The configs are in Zookeeper. So you have to switch your thinking,
it's rather confusing at first.
When you create a collection, you specify a config set, these are
@Shawn,
I can definitely upgrade to SolrJ 4.x and would prefer that so as to target
4.x cores as well. I'm already on Java 7.
One attempt I made was this
UpdateRequest updateRequest = new UpdateRequest();
updateRequest.setParam("collection", collectionName);
Hello,
I am a Solr beginner. Does anyone know how Solr 5.0 determines the max heap
size? I can't find it anywhere.
Also, where would you activate JMX? I would like to be able to use VisualVM
in the future, I imagine.
I have a custom NSSM setup going that installs it as a Windows service that
simply
I have a SolrJ application that reads from a Redis queue and updates
different collections based on the message content. New collections are
added without my knowledge, so I am creating SolrServer objects on the fly
as follows:
def solrHost = "http://myhost/solr/" (defined at startup)
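A minimal sketch of one way to avoid creating a new server object per message: cache one entry per collection behind a thread-safe map. The class and method names here are hypothetical, and the actual HttpSolrServer/HttpSolrClient construction is left as a comment since it depends on the SolrJ version in use.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical helper: build and cache one base URL per collection instead of
// creating a new object for every queue message. In real code you would cache
// the SolrJ client built from this URL (e.g. new HttpSolrServer(url)) so each
// collection reuses one client and its connection pool.
class CollectionUrlCache {
    private final String solrHost; // e.g. "http://myhost/solr/"
    private final ConcurrentMap<String, String> urls =
            new ConcurrentHashMap<String, String>();

    CollectionUrlCache(String solrHost) {
        this.solrHost = solrHost.endsWith("/") ? solrHost : solrHost + "/";
    }

    String urlFor(String collection) {
        String url = urls.get(collection);
        if (url == null) {
            // Java 7-friendly putIfAbsent pattern (no computeIfAbsent yet)
            String candidate = solrHost + collection;
            String prev = urls.putIfAbsent(collection, candidate);
            url = (prev != null) ? prev : candidate;
        }
        return url;
    }

    public static void main(String[] args) {
        CollectionUrlCache cache = new CollectionUrlCache("http://myhost/solr/");
        System.out.println(cache.urlFor("collection1"));
        // prints http://myhost/solr/collection1
    }
}
```

The map lookup is cheap, so this can sit directly in the queue-processing loop; only the first message for a new collection pays the construction cost.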
On 3/11/2015 12:23 PM, tuxedomoon wrote:
I have a SolrJ application that reads from a Redis queue and updates
different collections based on the message content. New collections are
added without my knowledge, so I am creating SolrServer objects on the fly
as follows:
def solrHost =
AFAICT, what you’re trying to do is take a configset you’ve used in the
past with an older version of Solr and get it to work with a newer Solr
version. If that’s so, perhaps you should start with a configset like
sample_techproducts_configs?
This is exactly what I want to do. Thanks for the advice.
Hi Aman,
So you (randomly?) chose an example configset, commented out parts you didn’t
understand, and now things don’t work?
… Maybe you should review the process you’re using?
Like, don’t start with a configset that will auto-populate the schema for you
with guessed field types if you don’t
On 3/11/2015 12:25 PM, Karl Kildén wrote:
I am a Solr beginner. Does anyone know how Solr 5.0 determines the max heap
size? I can't find it anywhere.
Also, where would you activate JMX? I would like to be able to use VisualVM
in the future, I imagine.
I have a custom NSSM setup going that
Looks like it's happening for any field which is using docValues.
java.lang.IllegalStateException: unexpected docvalues type NONE for field
'title_sort' (expected=SORTED). Use UninvertingReader or index with
docvalues.
Any idea?
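For reference, this exception typically means a sort (or group/facet) was attempted on a field whose index segments contain no docValues. A hypothetical schema.xml entry for the field named in the error might look like the following (the type and attribute values are assumptions, not taken from the poster's schema), and a full reindex is required after enabling docValues:

```
<!-- hypothetical example: enabling docValues so title_sort can be sorted on;
     documents indexed before this change must be reindexed -->
<field name="title_sort" type="string" indexed="true" stored="false" docValues="true"/>
```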
On 3/11/2015 3:35 PM, tuxedomoon wrote:
I can definitely upgrade to SolrJ 4.x and would prefer that so as to target
4.x cores as well. I'm already on Java 7.
One attempt I made was this
UpdateRequest updateRequest = new UpdateRequest();
updateRequest.setParam("collection",
1) The error message you posted doesn't appear to have been copied
verbatim (note: ...or dex...) ... please provide the *exact* error you
are getting -- ideally with the full stack trace from the solr logs.
2) The <field/> you posted doesn't match the field name in your error
message.
3) Providing a
@Shawn
I'm getting the Bad Request again, with the original code snippet I posted,
it appears to be an 'illegal' string field.
SOLR log
-
INFO:
{add=[mgid:arc:content:jokers.com:694d5bf8-ecfd-11e0-aca6-0026b9414f30]} 0 7
Hi,
Thanks Nitin for replying, but won't it be a costly operation to restart all
the nodes?
What I am doing is uploading the configurations again to ZooKeeper
and then reloading my core, and it is working well. So am I missing
something?
With Regards
Aman Tandon
On Wed, Mar 11, 2015 at
When I run the following query,
http://myserver:8990/solr/archives0/select?q=*:*&rows=3&wt=json&ft=id,ymd
The response is
What is the ft parameter that you are sending?
In order to see all stored fields use the parameter fl=*
Or list the field names you need: fl=id,ymd
On Wed, Mar 11, 2015 at 12:35 PM, phi...@free.fr wrote:
When I run the following query,
Hello,
I found the reason: the query to store ymds in SOLR was invalid (json and
literal are concatenated below).
curl -Ss -X POST
'http://myserver:8990/solr/archives0/update/extract?extractFormat=textwt=jsonliteral.ymd=1944-12-31T00:00:00Aliteral.id=159168
Philippe
- Original message -
I meant 'fl'.
--
http://myserver:8990/solr/archives0/select?q=*:*&rows=3&wt=json&fl=*
--
Thanks very much for each of your replies. They resolved my problem and
taught me something important.
I have just discovered that I have another problem, but I guess I have
to open another discussion.
Cheers,
Mirko
On 10/03/15 20:30, Chris Hostetter wrote:
: is a syntactically
On 3/11/2015 12:43 AM, Aman Tandon wrote:
Thanks Nitin for replying, but won't it be a costly operation to restart all
the nodes?
What I am doing is uploading the configurations again to ZooKeeper
and then reloading my core, and it is working well. So am I missing
something?
Yes, that
Thanks Shawn.
except that you should reload the collection, which
will reload all cores for that collection
So I could reload a collection via the Collections API:
http://localhost:8983/solr/admin/collections?action=RELOAD&name=newCollection
right?
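As a sketch only (the host, port, and collection name are placeholders taken from the quoted URL, not a recommendation), the Collections API RELOAD URL can be built safely like this, with the collection name URL-encoded:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

// Sketch: assemble the Collections API RELOAD url for a given collection.
class ReloadUrl {
    static String reloadUrl(String base, String collection)
            throws UnsupportedEncodingException {
        return base + "/admin/collections?action=RELOAD&name="
                + URLEncoder.encode(collection, "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(reloadUrl("http://localhost:8983/solr", "newCollection"));
        // prints http://localhost:8983/solr/admin/collections?action=RELOAD&name=newCollection
    }
}
```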
With Regards
Aman Tandon
On Wed, Mar 11,
Hello,
does anyone know if it is possible to create a directory resource in the solr-jetty
configuration files?
In Tomcat 8, you can do the following:
<PostResources
    className="org.apache.catalina.webresources.DirResourceSet"
Hi, Alexandre,
Thanks for responding.
When I created a new collection (wikingram) using SolrCloud, it gets created
under example/cloud/node* (node1, node2).
I used the *schema.xml and solrconfig.xml of sample_techproducts_configs*
configuration.
Now, the problem is this:
If I change the
Thanks Walter. This explains a lot.
- MJ
-Original Message-
From: Walter Underwood [mailto:wun...@wunderwood.org]
Sent: Tuesday, March 10, 2015 4:41 PM
To: solr-user@lucene.apache.org
Subject: Re: Cores and and ranking (search quality)
If the documents are distributed randomly across
Which example are you using? Or how are you creating your collection?
If you are using the example, it creates a new directory under
example. If you are creating a new collection with -c, it creates
a new directory under server/solr. The actual files are a bit
deeper than usual to allow for
I am using Solr 4.10.3. Documents are indexed using Apache Nutch 2.3. There
is a field in schema.xml, tstamp, that contains information about when
documents were indexed. This field is not indexed, and is stored only in Solr. I
want to count the number of documents indexed by Nutch in Solr. It is clear that I
Hi Shawn,
I made the changes in my schema.xml and uploaded the configuration from one
of my servers, which is now visible on all the other servers (I confirmed it by
checking from the admin interface).
*My Solr Cloud arch is:*
I have two collections, mcat, intent, in my external zookeeper ensemble of
3.
Hello,
I have switched from Solr 4.10.2 to Solr 5.0.0. In Solr
4.10.2, schema.xml and solrconfig.xml were in the example/solr/conf/ folder.
Where are schema.xml and solrconfig.xml in Solr 5.0.0? I would also like to
know how to configure them in SolrCloud.
Thanks for your reply. Initially, I was under the impression that the issue
was related to grouping, as group queries were failing. Later, when I looked
further, I found that it's happening for any field for which docValues
has been turned on. The second example I took was from another field. Here's a
Didier, I'm starting to look at SOLR-6399
after the core was unloaded, it was absent from the collection list, as
if it never existed. On the other hand, re-issuing a CREATE call with the
same collection restored the collection, along with its data
The collection is still in ZK though?
upon
Hi,
I am evaluating the SolrCloud offering for distributed search.
Do we have any case studies or websites which are using these new features
of SolrCloud?
Any inputs are highly appreciated.
Thanks
Pradeep
Something like this:
https://www.youtube.com/watch?v=_Erkln5WWLw&list=PLU6n9Voqu_1FM8nmVwiWWDRtsEjlPqhgP
?
Or are you looking for specific setups? I think those are harder to
get (everybody is different...), but it might be worth looking through
the rest of the Solr Revolution videos, I think
On 3/11/2015 6:32 AM, Aman Tandon wrote:
I restarted my complete cluster but the problem is still present. Please help.
*Here are the screenshot urls:*
http://i.imgur.com/QFdg89S.png
http://i.imgur.com/tS0yTNh.png
The first screenshot actually shows the problem, but it may not be
immediately
On 3/11/2015 6:17 AM, Hafiz Shafiq wrote:
I am using Solr 4.10.3. Documents are indexed using Apache Nutch 2.3. There
is a field in schema.xml, tstamp, that contains information about when
documents were indexed. This field is not indexed, and is stored only in Solr. I
want to count the number of documents
Hi Shawn,
As suggested I gave -Dhost=192.168.5.236 on the command line for the server
which was showing 127.0.0.1.
./solr start -c -z 192.168.6.217:2181,192.168.5.81:2181,192.168.5.236:2181 -p 4567 -Dhost=192.168.5.236
On 3/11/2015 4:28 PM, Shawn Heisey wrote:
When I have some time to actually work on the code, I'm going to write
it using 4.x classes because that's what I have immediate access to,
but if you do 5.x, SolrServer becomes SolrClient, and HttpSolrServer
becomes HttpSolrClient.
At the URL below