On 6/4/2015 11:42 PM, pras.venkatesh wrote:
I see that docValues have been available since Lucene 4.0, so can I use docValues with
my current SolrCloud version, 4.8.x?
The reason I am asking is that I have a deployment mechanism and index security
(using a Tomcat valve) all built out based on
Hi Benedetti,
I've set <str name="buildOnStartup">true</str> in my solrconfig.xml
tentatively, and the field which I'm using for suggestions has been set to
stored=true.
However, I still couldn't get any suggestions even after I restart my Solr.
Is there anything else I might have missed out?
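For reference, a minimal suggester sketch for solrconfig.xml — the component name, dictionary name, and source field below are illustrative assumptions, not taken from the original configuration:

```xml
<!-- Hypothetical suggester setup; adjust names and fields to your schema -->
<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">mySuggester</str>
    <str name="lookupImpl">FuzzyLookupFactory</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <!-- the field suggestions are built from; must be stored -->
    <str name="field">title</str>
    <str name="suggestAnalyzerFieldType">string</str>
    <str name="buildOnStartup">true</str>
  </lst>
</searchComponent>
<requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
  <lst name="defaults">
    <str name="suggest">true</str>
    <str name="suggest.dictionary">mySuggester</str>
  </lst>
  <arr name="components">
    <str>suggest</str>
  </arr>
</requestHandler>
```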
In
On 6/5/2015 12:32 AM, Chandima Dileepa wrote:
According to the wiki, I got to know that integrating Solr with Tomcat can no
longer be done starting with release 5.0.0. Should I run Solr as a
standalone server?
Yes.
There's a lot more detail, but read this first:
https://wiki.apache.org/solr/WhyNoWar
That's why we are trying to get the user to change something else instead
of the collection name. The collection alias sounds like a good option.
Is there a way to list out all the alias names, or is the only way to
reference the aliases.json file under the Cloud section in the Admin UI?
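Besides the Admin UI, the aliases live in aliases.json in ZooKeeper; a small sketch of reading that file's contents. The `{"collection": {alias: target}}` layout is the usual shape of that file, and the alias and collection names here are made up:

```python
import json

# Hypothetical aliases.json content; the real bytes come from the
# /aliases.json znode in your ZooKeeper ensemble.
raw = '{"collection": {"current_db": "mydb_v2", "old_db": "mydb_v1"}}'

def list_aliases(aliases_json: str) -> dict:
    """Return a mapping of alias name -> target collection name."""
    data = json.loads(aliases_json)
    return data.get("collection", {})

print(list_aliases(raw))
# {'current_db': 'mydb_v2', 'old_db': 'mydb_v1'}
```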
As
On 6/4/2015 11:39 PM, Zheng Lin Edwin Yeo wrote:
The reason is we want to allow flexibility to change the collection name
based on the needs of the users.
For the collection aliasing, does this mean that the user will reference
the collection by the alias name instead of the collection name,
pick up the patch https://issues.apache.org/jira/browse/SOLR-5882 and/or
chase committers.
On Fri, Jun 5, 2015 at 10:35 AM, DorZion dorz...@gmail.com wrote:
Hey,
I'm using Solr 5.0.0 and I'm trying to sort documents with FunctionQueries.
The problem is that I'm trying to sort those documents
Hi,
According to the wiki, I got to know that integrating Solr with Tomcat can no
longer be done starting with release 5.0.0. Should I run Solr as a
standalone server?
Thanks,
Chandima
Hey,
I'm using Solr 5.0.0 and I'm trying to sort documents with FunctionQueries.
The problem is that I'm trying to sort those documents by their child
documents' elements.
Here is an example:
I have three documents, one is parent, the others are child documents
(_childDocuments_).
{
id:
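The truncated example above can be sketched as one parent with nested `_childDocuments_`, the form Solr's update loaders understand. The field names and values here are illustrative, not from the original post:

```python
import json

# Hypothetical parent document carrying two child documents.
parent = {
    "id": "parent_1",
    "type": "parent",
    "_childDocuments_": [
        {"id": "child_1", "type": "child", "score_i": 10},
        {"id": "child_2", "type": "child", "score_i": 25},
    ],
}
print(json.dumps(parent, indent=2))
```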
To verify whether you have values stored, simply run a simple query.
But if it was stored from the beginning, it is probably ok.
Please check the logs as well for anything unusual.
If there is no problem there, I can take a closer look at the config.
Cheers
2015-06-05 11:07 GMT+01:00 Zheng Lin Edwin Yeo
If I've set stored=true for that field, it should be stored already, right? Or
do I have to verify it by other means?
This field has been stored from the beginning. I've also tried to index some new
documents in it, and have also set <str name="buildOnCommit">true</str> for
now, but there are still no suggestions
I'm not so sure this is as bad as it sounds. When your collection is
sharded, no single node knows about the documents in other shards/nodes,
so to find the total number, a query will need to go to every node.
Trying to work out something that does a single request to every node and
combines their
On 6/5/2015 7:00 AM, Upayavira wrote:
I'm not so sure this is as bad as it sounds. When your collection is
sharded, no single node knows about the documents in other shards/nodes,
so to find the total number, a query will need to go to every node.
Trying to work out something that does a single
Any thoughts on this / any configuration items I can check? Could
the 180-second clusterstatus timeout messages that I'm getting be
related? Any issue with running 7 nodes in the ZooKeeper quorum? For
reference, the clusterstatus stack trace is:
org.apache.solr.common.SolrException:
On 6/3/2015 6:39 PM, Joseph Obernberger wrote:
Hi All - I've run into a problem where every once in a while one or more
of the shards (27-shard cluster) will lose their connection to ZooKeeper and
report that updates are disabled. In addition to the CLUSTERSTATUS
timeout errors, which don't seem to
I would need to look at the code to figure out how it works, but I would
imagine that the shards are shuffled randomly among the hosts so that
multiple collections will be evenly distributed across the cluster. It
would take me quite a while to familiarize myself with the code before I
could
On 6/5/2015 1:46 AM, Zheng Lin Edwin Yeo wrote:
That's why we are trying to get the user to change something else instead
of the collection name. The collection alias sounds like a good option.
Is there a way to list out all the alias names, or is the only way to
reference the aliases.json
Dear Solr Users,
I would like to post 1 000 000 records (1 record = 1 file) in one shot
and do the commit at the end.
Is it possible to do that?
I have several directories with 20 000 files in each.
I would like to do:
bin/post -c mydb /DATA
under DATA I have
/DATA/1/*.xml (20 000
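The "post everything, commit once at the end" idea can be sketched as a batching loop. This is only a sketch: the actual HTTP posting and commit are injected as callables (a real sender would POST each batch to the collection's /update endpoint), and the file names below are hypothetical:

```python
from pathlib import Path
from typing import Callable, Iterable, List

def post_in_batches(files: Iterable[Path],
                    send: Callable[[List[Path]], None],
                    commit: Callable[[], None],
                    batch_size: int = 1000) -> int:
    """Send files in fixed-size batches, committing only once at the end."""
    batch: List[Path] = []
    total = 0
    for f in files:
        batch.append(f)
        if len(batch) >= batch_size:
            send(batch)
            total += len(batch)
            batch = []
    if batch:  # flush the final partial batch
        send(batch)
        total += len(batch)
    commit()  # single commit after everything is posted
    return total

# Usage with dummy callables standing in for real HTTP calls:
sent = []
n = post_in_batches((Path(f"{i}.xml") for i in range(2500)),
                    send=sent.append,
                    commit=lambda: sent.append("COMMIT"),
                    batch_size=1000)
print(n, len(sent))
# 2500 4  -> three batches (1000, 1000, 500) plus the commit marker
```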
Thank you Shawn! Yes - it is now a Solr 5.1.0 cloud on 27 nodes and we
use the startup scripts. The current index size is 3.0T - about 115G
per node - index is stored in HDFS which is spread across those 27 nodes
and about (a guess) - 256 spindles. Each node has 26G of HDFS cache
You have to provide a _lot_ more details. You say:
The problem... some data did not get indexed... still sometimes we
found that documents are not getting indexed.
Neither of these should be happening, so I suspect
1> your expectations aren't correct. For instance, in the
master/slave setup you
Are you using the q param? You should use suggest.q, if I remember well!
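The suggest.q advice can be illustrated by building the request URL; the host, collection, handler path, and dictionary name below are assumptions for the sketch:

```python
from urllib.parse import urlencode

# Hypothetical suggest request: note suggest.q, not q, carries the prefix.
params = {
    "suggest": "true",
    "suggest.dictionary": "mySuggester",  # assumed dictionary name
    "suggest.q": "edw",
    "wt": "json",
}
url = "http://localhost:8983/solr/collection1/suggest?" + urlencode(params)
print(url)
```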
On 5 Jun 2015 18:02, Zheng Lin Edwin Yeo edwinye...@gmail.com wrote:
I've tried the queries and they are working fine, but I've found these in the
logs. Not sure if the behavior is correct or not.
INFO - 2015-06-05
Hi Shalin,
Yes, I did read that. But putting jars in the classpath isn't a problem in
our deployment cycle and sounds simpler.
Thanks,
Vaibhav
On Thu, Jun 4, 2015 at 8:34 PM, Shalin Shekhar Mangar
shalinman...@gmail.com wrote:
Since you are moving to Solr 5.x, have you seen
I would like to add this to Shawn's description:
DocValues are only available for specific field types. The types chosen
determine the underlying Lucene docValue type that will be used. The
available Solr field types are:
- StrField and UUIDField.
- If the field is single-valued
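As a minimal sketch of enabling docValues in schema.xml — the field and type names here are my own illustrative choices, not from the original thread:

```xml
<!-- Illustrative schema.xml fragment; field/type names are made up -->
<field name="category" type="string" indexed="true" stored="true" docValues="true"/>
<field name="price" type="tfloat" indexed="true" stored="true" docValues="true"/>

<fieldType name="string" class="solr.StrField" sortMissingLast="true"/>
<fieldType name="tfloat" class="solr.TrieFloatField" precisionStep="8"/>
```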
Ok, thanks for this information!
Le 05/06/2015 17:37, Erick Erickson a écrit :
Picking up on Alessandro's point. While you can post all these docs
and commit at the end, unless you do a hard commit (
openSearcher=true or false doesn't matter), then if your server should
abnormally terminate
Thanks for the link.
So, I launched the post; I will see on Monday if it is ok :)
Le 05/06/2015 17:21, Alessandro Benedetti a écrit :
I can not see any problem in that, but talking about commits I would like
to draw a distinction between Hard and Soft.
Hard commit - durability
Soft commit -
Hi,
In my use case, I am adding documents to Solr through a Spring application using
spring-data-solr. This setup works well with a single Solr instance, but in the
current setup it is a single point of failure. So we decided to use Solr
replication, because we also need centralized search. Therefore we set up two
Picking up on Alessandro's point. While you can post all these docs
and commit at the end, unless you do a hard commit (
openSearcher=true or false doesn't matter), then if your server should
abnormally terminate for _any_ reason, all these docs will be
replayed on startup from the transaction
I've tried the queries and they are working fine, but I've found these in the
logs. Not sure if the behavior is correct or not.
INFO - 2015-06-05 18:06:28.437; [collection1 shard1 core_node1
collection1] org.apache.solr.handler.component.SuggestComponent;
SuggestComponent prepare with :
I want to have real-time indexing and real-time search.
Rgds
AJ
On Jun 5, 2015, at 10:12 PM, Amit Jha shanuu@gmail.com wrote:
Hi,
In my use case, I am adding documents to Solr through a Spring application
using spring-data-solr. This setup works well with a single Solr instance. In
the current setup
Have you verified that you actually have values stored for the field you
want to build suggestions from?
Was the field stored from the beginning, or did you change it?
Have you re-indexed the content after you made the field stored?
Cheers
2015-06-05 10:35 GMT+01:00 Zheng Lin Edwin Yeo
I can not see any problem in that, but talking about commits I would like
to draw a distinction between Hard and Soft.
Hard commit - durability
Soft commit - visibility
I suggest you this interesting reading :
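The durability/visibility split above is often expressed in solrconfig.xml with automatic commits; this fragment (which belongs inside <updateHandler>) is a sketch, and the intervals are illustrative, not recommendations:

```xml
<!-- Hard commit: flush to stable storage; don't open a new searcher -->
<autoCommit>
  <maxTime>60000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

<!-- Soft commit: make newly indexed documents visible to searchers -->
<autoSoftCommit>
  <maxTime>5000</maxTime>
</autoSoftCommit>
```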
bq: Does this remain 'fixed' in ZooKeeper once established, so that restarting
nodes will not affect their shardN assignment?
How could it work otherwise? If restarting a node assigned the index
on that disk to another shard chaos would ensue.
Best,
Erick
On Fri, Jun 5, 2015 at 6:51 AM,
On 6/5/2015 1:38 PM, Amit Jha wrote:
Thanks Erick, what about when a document is committed to the master? Then the
document should be visible from the master. Is that correct?
I was using replication with repeater mode because LBHttpSolrServer can send
write requests to any of the Solr servers, and that Solr
Thanks Erick, what about when a document is committed to the master? Then the
document should be visible from the master. Is that correct?
I was using replication with repeater mode because LBHttpSolrServer can send
write requests to any of the Solr servers, and that Solr should index the
document because it a
Thanks Shawn, for reminding me of CloudSolrServer; yes, I have moved to SolrCloud.
I agree that a repeater is a slave that acts as a master for other slaves. But
it's still a master, and logically it has to obey what a master is supposed to obey.
If 2 servers are masters, that means writing can be done on
Thanks, that was the response I was expecting unfortunately.
We have to stop the cluster to add a node, because Solr is part of a larger
system and we don't support either partial shutdown or dynamic addition within
the larger system.
“it waits for some time to see other nodes but if it finds
On 6/5/2015 2:20 PM, Amit Jha wrote:
Thanks Shawn, for reminding me of CloudSolrServer; yes, I have moved to SolrCloud.
I agree that a repeater is a slave that acts as a master for other slaves. But
it's still a master, and logically it has to obey what a master is supposed to
obey.
If 2 servers are
Thanks everyone. I got the answer.
Rgds
AJ
On Jun 6, 2015, at 7:00 AM, Erick Erickson erickerick...@gmail.com wrote:
bq: if 2 servers are master that means writing can be done on both.
If there's a single piece of documentation that supports this contention,
we'll correct it immediately.
bq: if 2 servers are master that means writing can be done on both.
If there's a single piece of documentation that supports this contention,
we'll correct it immediately. But it's simply not true.
As Shawn says, the entire design behind master/slave
architecture is that there is exactly one
Hi Alessandro,
I'm actually on my dev computer, so I would like to post 1 000 000 XML
files (with a structure defined in my schema.xml).
I have already imported 1 000 000 XML files by using
bin/post -c mydb /DATA0/1 /DATA0/2 /DATA0/3 /DATA0/4 /DATA0/5
where /DATA0/X contains 20 000 XML files (I
Hi Bruno,
I can not see what your challenge is.
Of course you can index your data in the flavour you want and do a commit
whenever you want…
Are those XML files Solr XML?
If not, you would need to use the DIH, the extract update handler, or a
custom indexer application.
Maybe I missed your point…
Give