I see that while merging indexes (I mean optimizing via the admin GUI), my Solr
instance can still respond to select queries. How does that querying
mechanism work (the merge is not finished yet, but my Solr instance
can still return a consistent response)?
Until the new, merged segment is complete, the existing segments remain
untouched and readable.
-- Jack Krupansky
-Original Message- From: Furkan KAMACI
Sent: Wednesday, April 17, 2013 6:28 PM
To: solr-user@lucene.apache.org
Subject: Select Queries While Merging Indexes
I see
Could you give more info about your index size and the technical details of
your machine? Maybe you are indexing more data day by day and your RAM
capacity is not enough anymore?
2013/4/19 qibaoyuan qibaoy...@gmail.com
Hello,
I am using Solr 4.1.0 and I have used SolrCloud in my product. I
I am trying to understand update request processor chains. Do they run one
by one when indexing a document? Can I define multiple update request
processor chains? Also, what are LogUpdateProcessorFactory and
RunUpdateProcessorFactory?
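For context, update request processor chains are declared in solrconfig.xml, and LogUpdateProcessorFactory / RunUpdateProcessorFactory are conventionally the last two processors in a chain. A minimal sketch (the chain name here is made up):

```xml
<updateRequestProcessorChain name="mychain" default="true">
  <!-- custom processors would go here, before the final two -->
  <processor class="solr.LogUpdateProcessorFactory" />
  <!-- RunUpdateProcessorFactory actually executes the add/delete/commit -->
  <processor class="solr.RunUpdateProcessorFactory" />
</updateRequestProcessorChain>
```

Multiple chains can be declared with different names and selected per request with the update.chain parameter.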
Is there any documentation that explains pros and cons of using RAID or
different RAIDS?
Thanks for detailed answers.
2013/4/19 Chris Hostetter hossman_luc...@fucit.org
: I am trying to understand update request processor chains. Do they run
: one by one when indexing a document? Can I define multiple update request
: processor chains? Also what are that
I was reading here: http://wiki.apache.org/solr/NewSolrCloudDesign
It says something about:
*Split_partition*: (params: partition (optional)). The partition is split
into two halves. If the partition parameter is not supplied, the partition
with the largest number of documents is identified as
I know that when using SolrCloud we define the number of shards in the
system. When we start up new Solr instances, each one will be a leader for
a shard, and if I continue to start up new Solr instances (exceeding the
number of shards), each one will be a replica for a leader?
2013/4/20 Shawn Heisey s...@elyograg.org
On 4/20/2013 7:36 AM, Toke Eskildsen wrote:
Furkan KAMACI [furkankam...@gmail.com]:
Is there any documentation that explains pros and cons of using RAID or
different RAIDS?
There's plenty for RAID in general, but I do not know of any in-depth
http://sematext.com/
On Tue, Apr 16, 2013 at 6:31 PM, Furkan KAMACI <
furkankamaci@
> wrote:
Thanks again for your answer. If I find any document about such
comparisons, I would like to read it.
By the way, is there any advantage to using Lucene instead
. And later to 5 nodes with 2 shards and so on. There are
cases where you want some way to make the most of your hardware yet
plan for expansion.
Best
Erick
On Sun, Apr 21, 2013 at 3:51 PM, Furkan KAMACI furkankam...@gmail.com
wrote:
I know that: when using SolrCloud we define the number
Message- From: Furkan KAMACI
Sent: Monday, April 15, 2013 9:38 AM
To: solr-user@lucene.apache.org
Subject: Re: SolrCloud Leaders
It says something here:
https://support.lucidworks.com/entries/22180608-Solr-HA-DR-overview-3-x-and-4-0-SolrCloud-and
says:
Both leaders and replicas index
Thanks for the answers.
2013/4/23 Erick Erickson erickerick...@gmail.com
bq: However, what will happen to those 10 nodes when I specify the replication
factor?
I think they just sit around doing nothing.
Best
Erick
On Mon, Apr 22, 2013 at 7:24 AM, Furkan KAMACI furkankam...@gmail.com
wrote
Oops, Mark, you said: If you use tomcat, this won't work in 4.2 or 4.2.1.
Can you explain more about what won't work on Tomcat and what will change in 4.3?
2013/4/23 Mark Miller markrmil...@gmail.com
If you use jetty - which you should :) It's what we test with. Tomcat only
gets user testing.
If you
, 2013, at 2:02 PM, Furkan KAMACI furkankam...@gmail.com wrote:
Oops, Mark, you said: If you use tomcat, this won't work in 4.2 or
4.2.1.
Can you explain more about what won't work on Tomcat and what will change in 4.3?
2013/4/23 Mark Miller markrmil...@gmail.com
If you use jetty - which you
When I read the SolrCloud wiki, it says something about a cluster
overseer. What is its role in the read and write processes? How can I
see which node is the overseer in my cluster?
Thanks for the explanation.
2013/4/23 Mark Miller markrmil...@gmail.com
On Apr 23, 2013, at 2:53 PM, Furkan KAMACI furkankam...@gmail.com wrote:
When I read the SolrCloud wiki, it says something about a cluster
overseer. What is its role in the read and write processes? How can
I
Hi Mark;
All in all, you are saying that when 4.3 is tagged in the repository (I mean
when it is ready), this feature will work for Tomcat too in a stable version?
2013/4/23 Mark Miller markrmil...@gmail.com
On Apr 23, 2013, at 2:49 PM, Shawn Heisey s...@elyograg.org wrote:
What exactly is the
, Mar 22, 2013 at 8:07 AM, Furkan KAMACI furkankam...@gmail.com
wrote:
If I want to use Solr in a web search engine, what kind of strategy
should
I follow for running Solr? I mean, should I run it via embedded Jetty or
use the war and deploy it to a container? You should consider that I
If I already have a Zookeeper cluster for my HBase cluster, can I use the same
Zookeeper cluster for my SolrCloud too?
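Sharing an ensemble is commonly done by pointing Solr at the same ZooKeeper hosts, optionally with a chroot path to keep Solr's data separate from HBase's. A sketch (the host names and the /solr chroot are assumptions):

```shell
# start a Solr 4.x node against an existing external ZooKeeper ensemble,
# using a /solr chroot so Solr's znodes don't mix with HBase's
java -DzkHost=zk1:2181,zk2:2181,zk3:2181/solr -DnumShards=2 -jar start.jar
```

The chroot node has to exist (or be bootstrapped) before Solr can use it.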
2013/4/23 Timothy Potter thelabd...@gmail.com
Ah cool, thanks for clarifying Chris - some of that multi-config
management stuff gets confusing but much clearer from your
might work
with Jetty and not another container until/unless the issue is reported by
a user and fixed.
- Mark
On Apr 23, 2013, at 3:25 PM, Furkan KAMACI furkankam...@gmail.com wrote:
At first I will work on 100 Solr nodes and I want to use Tomcat as
container and deploy Solr as a war. I
I will use Nutch with MapReduce to crawl huge amounts of data and use SolrCloud
for many users with fast response times. Actually, I wonder about the performance
implications of a separate Zookeeper cluster versus sharing one between HBase and Solr.
2013/4/23 Shawn Heisey s...@elyograg.org
On 4/23/2013 1:46 PM, Furkan
of the processing in Solr happens in Solr/Lucene code
and Jetty (or whatever engine you choose) is a very small part of any
request.
On Tue, Apr 23, 2013 at 1:52 PM, Furkan KAMACI furkankam...@gmail.com
wrote:
Thanks for the answer. If I find something that explains using embedded
Jetty or Jetty
?
2013/4/23 Shawn Heisey s...@elyograg.org
On 4/23/2013 1:52 PM, Furkan KAMACI wrote:
Thanks for the answer. If I find something that explains using embedded
Jetty, standalone Jetty, or Tomcat, it would be nice.
2013/4/23 Mark Miller markrmil...@gmail.com
Tomcat should work just fine in most cases
Thanks for the answers. I will go with embedded Jetty for my SolrCloud. If
I encounter something important, I would like to share my experiences with
you.
2013/4/23 Shawn Heisey s...@elyograg.org
On 4/23/2013 2:25 PM, Furkan KAMACI wrote:
Is there any documentation that explains using Jetty
When I read Lucidworks' Solr Guide I saw that:
Distributed searching does not support the QueryElevationComponent, which
configures the
top results for a given query regardless of Lucene's scoring.
Is that still true for SolrCloud?
On Sun, Apr 14, 2013 at 4:59 PM, Furkan KAMACI furkankam...@gmail.com
wrote:
I have crawled some internet pages and indexed them in Solr.
When I list my results via Solr, I want this: if a page has a URL (my
schema
includes a field for the URL) that ends with .edu
I want to use SolrCloud in my system. I know that there are many automated
operations in SolrCloud, one of which involves the version system of the
documents and thus checking for consistency. When I read the documentation,
I saw that there is a tool called the check index tool for Lucene.
Does it
Thanks. It seems that the wiki should be updated on the Lucidworks side.
2013/4/24 Mark Miller markrmil...@gmail.com
No, I'm fairly sure we added support a year or less back.
- Mark
On Apr 23, 2013, at 5:56 PM, Furkan KAMACI furkankam...@gmail.com wrote:
When I read Lucidworks' Solr Guide I
On Apr 23, 2013, at 3:20 PM, Furkan KAMACI furkankam...@gmail.com wrote:
Hi Mark;
All in all, you are saying that when 4.3 is tagged in the repository (I mean
when it is ready), this feature will work for Tomcat too in a stable version?
2013/4/23 Mark Miller markrmil...@gmail.com
On Apr
Lucidworks Solr Guide says that:
If you are using Sun's JVM, add the -server command-line option when you
start Solr. This tells the JVM that it should optimize for a long running,
server process. If the Java runtime on your system is a JRE, rather than a
full JDK distribution (including javac
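The quoted advice boils down to adding the flag when launching Solr. A sketch with the Jetty example layout (the heap size here is an arbitrary example, not from the guide):

```shell
# tell the JVM to use the server compiler, tuned for long-running processes
java -server -Xmx2g -jar start.jar
```

On a JRE (as opposed to a full JDK) the -server option may not be available, which is the caveat the guide goes on to make.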
Hi;
I am new to Solr and I was using Solr as a war file, deploying it into
Tomcat. However, I decided to use Solr as a jar file with embedded Jetty. I
was doing it like this: when I ran the ant dist target, I got the .war file
of Solr and used it to deploy to Tomcat.
I want to use it as a jar file, like start.jar
absolute path to the relevant urls? That will
cleanly split the problem into 'still not working' and 'wrong relative
path'.
Regards,
Alex.
On Wed, Apr 24, 2013 at 9:02 AM, Furkan KAMACI furkankam...@gmail.com
wrote:
<lib dir="../../../contrib/extraction/lib" regex=".*\.jar" />
<lib dir
to or not.
Erik
On Apr 24, 2013, at 10:05 , Alexandre Rafalovitch wrote:
Have you tried using absolute path to the relevant urls? That will
cleanly split the problem into 'still not working' and 'wrong relative
path'.
Regards,
Alex.
On Wed, Apr 24, 2013 at 9:02 AM, Furkan KAMACI
erik.hatc...@gmail.com
Did you restart after adding those fields and types?
On Apr 24, 2013, at 16:59, Furkan KAMACI furkankam...@gmail.com wrote:
I have added these fields:
<field name="text" type="text_general" indexed="true" stored="true"/>
<dynamicField name="attr_*" type="text_general" indexed="true"
When I try to configure schema.xml within the example folder and start the jar file,
I get this error:
org.apache.solr.common.SolrException: copyField dest :'author_s' is not an
explicit field and doesn't match a dynamicField.
There is nothing about it in the example schema.xml file?
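The error means that no field or dynamicField pattern in schema.xml matches author_s. One hedged fix is to add a matching dynamicField pattern (a sketch, assuming a string field type is already defined in the schema):

```xml
<!-- catches author_s and any other *_s copyField destination -->
<dynamicField name="*_s" type="string" indexed="true" stored="true"/>
```

With this pattern present, the copyField destination resolves and the exception goes away.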
, 2013 at 6:50 PM, Furkan KAMACI furkankam...@gmail.com
wrote:
Here is my definition for the handler:
<requestHandler name="/update/extract" class="solr.extraction.ExtractingRequestHandler">
  <lst name="defaults">
    <str name="fmap.content">text</str>
    <str name="lowernames">true</str>
    <str name="uprefix">attr_
On Wed, Apr 24, 2013 at 7:02 PM, Furkan KAMACI furkankam...@gmail.com
wrote:
When I try to configure schema.xml within the example folder and start the jar
file, I get this error:
org.apache.solr.common.SolrException: copyField dest :'author_s' is not
an explicit field and doesn't match
jetty.
Otis
Solr ElasticSearch Support
http://sematext.com/
On Apr 23, 2013 4:56 PM, Furkan KAMACI furkankam...@gmail.com wrote:
Thanks for the answers. I will go with embedded Jetty for my SolrCloud.
If
I face with something important I would want to share my experiences with
you
Could you explain what you mean by such kind of scripts? What do they
check and do, exactly?
2013/4/25 Toke Eskildsen t...@statsbiblioteket.dk
On Wed, 2013-04-24 at 18:03 +0200, Mark Miller wrote:
On Apr 24, 2013, at 12:00 PM, Mark Miller markrmil...@gmail.com wrote:
it's easier
to get started. That's all. :)
Otis
On Thu, Apr 25, 2013 at 3:30 AM, Furkan KAMACI furkankam...@gmail.com
wrote:
Hi Otis;
You are right. start.jar starts up a Jetty and there is a war file under the
example directory
I have a Zookeeper ensemble with three machines. I have started a cluster
with one shard. However, I decided to change my shard number. I want to
clean the Zookeeper data, but whatever I do I always get one shard and the rest
of the added Solr nodes are replicas.
What should I do?
, Furkan KAMACI furkankam...@gmail.com
wrote:
I have a Zookeeper ensemble with three machines. I have started a
cluster
with one shard. However, I decided to change my shard number. I want to
clean the Zookeeper data, but whatever I do I always get one shard and the rest
of
the added Solr nodes
instances, use the clean
command on / or /solr (whatever the root is in zk for your Solr stuff),
start your Solr instances, create the collection again.
- Mark
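With Solr 4.x, the clean step described above can be done with the bundled ZkCLI tool, roughly like this (a sketch; the classpath, zkhost, and /solr root are assumptions for a default example layout):

```shell
# stop all Solr nodes first, then wipe Solr's znodes from ZooKeeper
java -classpath "example/solr-webapp/webapp/WEB-INF/lib/*" \
  org.apache.solr.cloud.ZkCLI -zkhost localhost:2181 -cmd clean /solr
# then restart the Solr nodes and create the collection again
```

After the clean, the first node started with a new -DnumShards value defines the new shard count.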
On Apr 25, 2013, at 11:27 AM, Furkan KAMACI furkankam...@gmail.com
wrote:
I have a Zookeepeer ensemble with three machines. I
Solr stuff),
start your Solr instances, create the collection again.
- Mark
On Apr 25, 2013, at 11:27 AM, Furkan KAMACI furkankam...@gmail.com
wrote:
I have a Zookeeper ensemble with three machines. I have started a
cluster
with one shard. However, I decided to change my shard number
Ok, it works
2013/4/25 Mark Miller markrmil...@gmail.com
I think it's numShards, not numshards.
- Mark
On Apr 25, 2013, at 12:07 PM, Furkan KAMACI furkankam...@gmail.com
wrote:
Hi;
If you can help, it would be nice:
I have erased the data. I used these commands:
Firstly I do
I use SolrCloud. Let's assume that I want to move all indexes from one
place to another. There may be two reasons for that:
The first one is: I will shut down my whole system and use new machines
with the previous indexes (if it is a must, they may have the same network
topology) anywhere else after
I will use SolrCloud and its main purpose will be rich document indexing.
The Solr example includes this definition:
<requestHandler name="/update/extract"
startup="lazy" class="solr.extraction.ExtractingRequestHandler">
It starts it up lazily. So what are the pros and cons of removing it for my
situation?
I have a large corpus of rich documents, i.e. pdf and doc files. I think
that I can use the example jar of Solr directly. However, for a real-time
environment what should I take care of? Also, how do you send such documents
to Solr to index? I think post.jar does not handle those file types? I
I use Solr 4.2.1 and these are my fields:
<field name="id" type="string" indexed="true" stored="true" required="true"
multiValued="false" />
<field name="text" type="text_general" indexed="true" stored="true"/>
<!-- Common metadata fields, named specifically to match up with
SolrCell metadata when parsing rich
I want to use GrayLog2 to monitor my log files for SolrCloud. However, I
think that GrayLog2 works with log4j and logback, while Solr uses slf4j.
How can I solve this problem, and what log monitoring system do folks
use?
On Fri, Apr 26, 2013 at 11:30 AM, Furkan KAMACI furkankam...@gmail.com
wrote:
I use Solr 4.2.1 and these are my fields:
<field name="id" type="string" indexed="true" stored="true"
required="true"
multiValued="false" />
<field name="text" type="text_general" indexed="true" stored="true"/>
<!-- Common
@cominvent.com
http://wiki.apache.org/solr/post.jar
--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com
Solr Training - www.solrtraining.com
26. apr. 2013 kl. 13:28 skrev Furkan KAMACI furkankam...@gmail.com:
Hi Raymond;
Now I get that error: SimplePostTool: WARNING
file types:
http://wiki.apache.org/solr/ExtractingRequestHandler
-- Jack Krupansky
-Original Message- From: Furkan KAMACI
Sent: Friday, April 26, 2013 4:48 AM
To: solr-user@lucene.apache.org
Subject: Solr Indexing Rich Documents
I have a large corpus of rich documents i.e. pdf
I think that I should start a new thread for my question to help people who
search for the same situation.
2013/4/26 Furkan KAMACI furkankam...@gmail.com
If you can help me it would be nice. I get that error:
SimplePostTool version 1.5
Posting files to base url http://localhost:8983/solr
Could anybody help me with my error? When I try to post documents with
post.jar I get this error:
SimplePostTool version 1.5
Posting files to base url http://localhost:8983/solr/update/extract..
Entering auto mode. File endings considered are
://wiki.apache.org/solr/ExtractingRequestHandler
Again, DO NOT MIX the instructions from the two.
post.jar is designed so that you do not need to know or care exactly how
rich document indexing works.
-- Jack Krupansky
-Original Message- From: Furkan KAMACI
Sent: Friday, April 26, 2013 5:30 AM
I am new to Solr and trying to index rich files. I have defined this in my
schema:
<field name="id" type="string" indexed="true" stored="true" required="true"
multiValued="false" />
and there is a line in my schema:
<uniqueKey>id</uniqueKey>
should I make it like this:
<uniqueKey required="false"></uniqueKey>
for my
for my
I sent some documents to my Solr to be indexed. However, I get this kind of
error:
ERROR: [doc=0579B002] unknown field 'name'
I know that I should define a field named 'name' in my schema. However,
there may be many fields like that. How can I define a generic field that
holds all non-defined
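One hedged way to catch otherwise-undefined fields is a catch-all dynamicField in schema.xml (a sketch; whether to index or store such fields, and which type to use, depends on your needs):

```xml
<!-- any field name not matched elsewhere falls through to this pattern -->
<dynamicField name="*" type="text_general" multiValued="true"
              indexed="true" stored="false"/>
```

An alternative, used elsewhere in this thread, is the uprefix parameter on the extract handler, which maps unknown incoming fields to an attr_* prefix instead.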
Ok, solved
2013/4/26 Raymond Wiker rwi...@gmail.com
On Fri, Apr 26, 2013 at 2:45 PM, Furkan KAMACI furkankam...@gmail.com
wrote:
I use this command to post:
java -Durl=http://localhost:8983/solr/update/extract -Dauto -jar post.jar 523387.pdf
I think you need to have
I did not indicate a URL and that solved it, as you mentioned, because the
default URL does not include /extract
2013/4/26 Furkan KAMACI furkankam...@gmail.com
Ok, solved
2013/4/26 Raymond Wiker rwi...@gmail.com
On Fri, Apr 26, 2013 at 2:45 PM, Furkan KAMACI furkankam...@gmail.com
wrote:
I use
Hi;
I use this:
<copyField source="*" dest="text"/>
however, I want to exclude something, i.e. the author field. How can I do that?
ExtractingRequestHandler feature
of solr.
--- On Fri, 4/26/13, Furkan KAMACI furkankam...@gmail.com wrote:
From: Furkan KAMACI furkankam...@gmail.com
Subject: Re: Solr Indexing Rich Documents
To: solr-user@lucene.apache.org
Date: Friday, April 26, 2013, 3:39 PM
Thanks for the answer, I get
I use this in my Solr 4.2.1:
<dynamicField name="*" type="ignored" multiValued="true"/>
however, can I exclude some patterns from it?
Thanks for the answers.
2013/4/26 Chris Hostetter hossman_luc...@fucit.org
: In short, whether you want to keep the handler is completely independent
of
: the lazy startup option.
I think Jack misread your question -- my interpretation is that you are
asking about the pros/cons of
OK, I asked another question for it and it has solved, thanks.
2013/4/26 Furkan KAMACI furkankam...@gmail.com
Jack, thanks for your answers. OK, when I remove the -Durl parameter I think
it works, thanks. However, I think that I have a problem with my schema. I
get this error:
Apr 26, 2013 3:52
Ok, thanks for the answer.
2013/4/26 Gora Mohanty g...@mimirtech.com
On 26 April 2013 18:38, Furkan KAMACI furkankam...@gmail.com wrote:
I am new to Solr and try to index rich files. I have defined that at my
schema:
[...]
<uniqueKey required="false"></uniqueKey>
This will not work: Please
Thanks, I used uprefix and solved my problem.
2013/4/26 Gora Mohanty g...@mimirtech.com
On 26 April 2013 20:51, Furkan KAMACI furkankam...@gmail.com wrote:
Hi;
I use that:
<copyField source="*" dest="text"/>
however I want to exclude something i.e. author field. How can I do
is and then decide what to keep). It is better to
use explicit patterns or static schema for production use.
-- Jack Krupansky
-Original Message- From: Furkan KAMACI
Sent: Friday, April 26, 2013 11:29 AM
To: solr-user@lucene.apache.org
Subject: Exclude Pattern at Dynamic Field
I use
I have these fields in my schema.xml:
<field name="id" type="string" indexed="true" stored="true" required="true"
multiValued="false" />
<field name="content" type="text_general" indexed="true" stored="true"
multiValued="false"/>
<field name="_version_" type="long" indexed="true" stored="true"/>
<dynamicField name="*" type="ignored"
I am trying to run an embedded Zookeeper ensemble, however I get this error:
Apr 28, 2013 1:02:48 AM org.apache.solr.common.SolrException log
SEVERE: null:java.lang.IllegalArgumentException: port out of range:-1
at java.net.InetSocketAddress.<init>(InetSocketAddress.java:101)
at
OK, I resolved it when I defined a host for the -DzkRun parameter too.
2013/4/28 Furkan KAMACI furkankam...@gmail.com
I am trying to run an embedded Zookeeper ensemble, however I get this error:
Apr 28, 2013 1:02:48 AM org.apache.solr.common.SolrException log
SEVERE
It seems that your solrconfig.xml cannot find the libraries. Here is an
example path from solrconfig.xml:
<lib dir="../../../contrib/extraction/lib" regex=".*\.jar" />
2013/4/29 Krishna Venkateswaran krish...@usc.edu
Hi
I have installed Solr over Apache Tomcat.
I have used Apache Tomcat v6.x for
If you can help me, it would be nice. I have tested crawling on my Amazon
instances and I have a weird situation:
My slave's version is higher than the master's (actually I killed my master
and started it up again at some point)
Replication (Slave) Version Gen Size
Master: 1367243029412 49 1.29 GB
Check your logs when you start up Solr to see if you get this error: There exists
no core with the name collection1. Do you get any error like
core:collection1 could not be created, or something like that?
2013/4/29 Jon Strayer j...@strayer.org
I can't be the only person to run into this, but I can't
I am thinking about this situation:
Let's assume that I am indexing into my SolrCloud. My leader has a higher
version than the replica (I have one leader and one replica for each
shard). If I kill the leader, the replica will become the leader. When I start
up the old leader again, it will be a replica for my
When I look at the admin GUI I see this for a leader:
Replication (Slave) Version Gen Size
Master: 1367309548534 84 779.87 MB
Slave: 1367307649512 82 784.44 MB
and that for a replica:
Replication (Master) Version Gen Size
Master: 1367309548534 84 779.87 MB
isn't it confusing that the leader is a slave and
Hi Folks;
I can back up my indexes at SolrCloud via
http://_master_host_:_port_/solr/replication?command=backup
and it creates a file called snapshot. I know that I should pull that
directory to some other safe place (a backup store). However, what should I do
to make a recovery from that backup file?
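For the replication-handler backups in 4.x, recovery is usually a manual copy: stop the node, replace the index directory contents with the snapshot contents, and restart. A sketch (all paths and the snapshot name are assumptions):

```shell
# stop Solr on this node first
rm -rf /var/solr/data/index/*
cp -r /backups/snapshot.20130430060000/* /var/solr/data/index/
# then start Solr again so it opens the restored index
```

In a SolrCloud setup the restored node should then be allowed to sync with its shard leader as usual.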
I use SolrCloud, Solr 4.2.1.
Here is a detail from my admin page:
Replication (Slave) Version Gen Size
Master: 1367309548534 84 779.87 MB
Slave: 1367307649512 82 784.44 MB
When I use command=abortfetch the file sizes are still not the same. Any idea?
Street, 2nd Floor
New York, NY 10017-6271
www.appinions.com
Where Influence Isn’t a Game
On Tue, Apr 30, 2013 at 8:13 AM, Furkan KAMACI furkankam...@gmail.com
wrote:
I use SolrCloud, 4.2.1 of Solr.
Here is a detail from my admin page:
Replication (Slave) Version Gen Size
Master
On Tue, Apr 30, 2013 at 8:06 AM, Furkan KAMACI furkankam...@gmail.com
wrote:
Hi Folks;
I can backup my indexes at SolrCloud via
http://_master_host_
On Tue, Apr 30, 2013 at 10:33 AM, Furkan KAMACI furkankam...@gmail.com
wrote:
I think that replication occurs after commit by default. It has been a long
time, however there is still a mismatch between the leader and replica
I use Solr 4.2.1
What happens if I unload a core, I mean what does Solr do? Because solr.xml
didn't change, and I think that Solr should write something to somewhere or
delete something from somewhere?
Shawn, why don't they have the same data byte for byte? Can I force the slave
to pull them? I tried but it didn't work.
2013/4/30 Furkan KAMACI furkankam...@gmail.com
I use Solr 4.2.1 as SolrCloud
2013/4/30 Michael Della Bitta michael.della.bi...@appinions.com
I'm a little confused. Are you using
On Tue, Apr 30, 2013 at 11:04 AM, Furkan KAMACI furkankam...@gmail.com
wrote:
I use Solr 4.2.1 as SolrCloud
2013/4/30 Michael Della Bitta michael.della.bi...@appinions.com
I'm a little confused. Are you using
the solr index after the core is
unregistered (asynchronously)
2. deleteDataDir=true -- will delete the entire data directory
3. deleteInstanceDir=true -- will delete the entire instance directory
including configuration files
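Those flags are parameters of the CoreAdmin UNLOAD action; a sketch of the call (host and core name are assumptions):

```shell
curl "http://localhost:8983/solr/admin/cores?action=UNLOAD&core=collection1&deleteIndex=true"
```

With none of the delete flags set, UNLOAD only unregisters the core and leaves its files on disk.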
On Tue, Apr 30, 2013 at 8:43 PM, Furkan KAMACI furkankam...@gmail.com
Oops, it changes solr.xml, OK.
2013/4/30 Furkan KAMACI furkankam...@gmail.com
I have closed my application and restarted it. If it didn't change anything,
could you tell me why the admin page says:
There are no SolrCores running.
Using the Solr Admin UI currently requires at least one SolrCore
On Tue, Apr 30, 2013 at 1:27 PM, Furkan KAMACI furkankam...@gmail.com
wrote:
However I am using SolrCloud with 5 shards. Every leader has
about this want to chime in?
Michael Della Bitta
On Tue, Apr 30, 2013 at 11:03 AM, Furkan KAMACI furkankam...@gmail.com
wrote
not the leader.
Michael Della Bitta
On Tue, Apr 30, 2013 at 1:27 PM, Furkan KAMACI furkankam
some time later it will sync its replica to itself, but nothing has changed.
2013/5/1 Shawn Heisey s...@elyograg.org
On 4/30/2013 8:33 AM, Furkan KAMACI wrote:
I think that replication occurs after commit by default. It has been a long
time, however there is still a mismatch between the leader and replica
or 1,3,2.
Cheers,
Tim
On Tue, Apr 30, 2013 at 12:45 PM, Furkan KAMACI furkankam...@gmail.com
wrote:
I had an index and a tlog folder under my data folder. I also have a snapshot
folder
when I make a backup. However, what do I do next if I want to use the
backup? Will I remove index and tlog
If you use Solr 4.x and SolrCloud, there is no master-slave architecture
as there was before. You can change the autoSoftCommit time and autoCommit time
in solrconfig.xml. Also, you can consider using commitWithin; it is
explained here: http://wiki.apache.org/solr/UpdateXmlMessages
Besides those options, if
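commitWithin can be passed on the update message itself, as the linked wiki page describes. A sketch of an XML update (the field value is made up):

```xml
<!-- ask Solr to make this document visible within 10 seconds -->
<add commitWithin="10000">
  <doc>
    <field name="id">doc1</field>
  </doc>
</add>
```

This lets clients control commit latency per request instead of relying only on the server-side autoCommit settings.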
Sorry, but how do you use the check index tool? Do you use Luke, or does Solr
have built-in functionality?
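CheckIndex is a Lucene command-line tool rather than something exposed in the Solr admin; it is run directly against the index directory, roughly like this (a sketch; the jar path and index path are assumptions):

```shell
java -cp lucene-core-4.2.1.jar org.apache.lucene.index.CheckIndex /var/solr/data/index
# add -fix to drop unrecoverable segments (destructive; back up the index first)
```

Run it only while the index is not being written to.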
2013/5/1 Otis Gospodnetic otis.gospodne...@gmail.com
Was afraid of that and wondering if CheckIndex could regenerate the
segments file based on segments it finds in the index dir?
Otis
Solr
On Wed, May 1, 2013 at 8:59 AM, Furkan KAMACI furkankam...@gmail.com
wrote:
Sorry, but what should I do? Will I copy everything under the snapshot folder
into
the index folder? If I don't run the backup command and just copy the index
folder anywhere
Can you explain more about your document size, shard and replica sizes, and
auto/soft commit time parameters?
2013/5/2 vicky desai vicky.de...@germinait.com
Hi all,
I have recently migrated from solr 3.6 to solr 4.0. The documents in my
core
are getting constantly updated and so I fire a
I use the same folder naming convention as the Solr example for my Solr 4.2.1
cloud. I have a collection1 folder and under it I have a conf folder. When
starting up my first node, I indicate this:
-Dsolr.solr.home=./solr -Dsolr.data.dir=./solr/data -DnumShards=5
Is Near Real Time not supported in SolrCloud?
I mean, when a soft commit occurs at a leader, I think that it doesn't
distribute it to replicas (because it is not on storage; do the in-RAM indexes
get distributed to replicas too?), and when a search query comes, what happens?
Here is a part from wiki:
1) Just forward credentials from the super-request which caused the
inter-solr-node sub-requests
2) Use internal credentials provided to the solr-node by the
administrator at startup
what do you use and is there any code example for it?
not be behind the replica because the old leader would not
come back and take over the leader role. It would be just a replica and it
would replicate the index from whichever node is the leader.
Otis
On Apr 29, 2013 5:31 PM, Furkan KAMACI furkankam