We use the following merge policy on SSDs and are running on physical
machines with a Linux OS.
10, 3, 15, 64
Not sure if it's very aggressive, but it's something we keep to prevent
deleted documents from taking up too much space in our index.
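For context, this kind of merge policy lives in solrconfig.xml. A minimal sketch of what such a block can look like on Solr 4.x, assuming TieredMergePolicy; the element names and values here are illustrative, not the poster's actual settings (the original tags were lost above):

```xml
<!-- Hypothetical TieredMergePolicy block for solrconfig.xml (Solr 4.x).
     Element names and values are illustrative assumptions. -->
<mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
  <int name="maxMergeAtOnce">10</int>
  <int name="segmentsPerTier">10</int>
  <!-- Higher values favor merges that reclaim space from deleted docs. -->
  <double name="reclaimDeletesWeight">3.0</double>
</mergePolicy>
```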
On 5/5/2015 1:15 PM, Rishi Easwaran wrote:
> Thanks for clarifying Lucene segment behaviour. We don't trigger optimize
> externally; could it be an internal Solr optimize? Is there a setting/knob
> to control when optimize occurs?
Optimize never happens automatically, but *merging* does. An optimize is
just an explicit forced merge of the whole index down to a single segment
(or a requested number of segments).
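For what it's worth, an optimize only runs when a client explicitly asks for it; a minimal sketch of such a request, using the standard XML update message format:

```xml
<!-- An optimize runs only when explicitly requested, e.g. by posting this
     update message (or appending optimize=true to an update URL). -->
<optimize maxSegments="1" waitSearcher="true"/>
```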
The only option we are left with is to clean up the entire index to free
up disk space and allow a replica to sync from scratch.
Thanks,
Rishi.
-----Original Message-----
From: Shawn Heisey
To: solr-user
Sent: Tue, May 5, 2015 10:55 am
Subject: Re: Multiple index.timestamp directories using up disk space
On 5/5/2015 7:29 AM, Rishi Easwaran wrote:
> Worried about data loss makes sense. If I get the way Solr behaves, the new
> directory should only have missing/changed segments.
> I guess since our application is extremely write heavy, with lots of inserts
> and deletes, almost every segment is touched.
Sent: Tue, May 5, 2015 4:52 am
Subject: Re: Multiple index.timestamp directories using up disk space
Yes, data loss is the concern. If the recovering replica is not able to
retrieve the files from the leader, it at least has an older copy. Also,
the entire index is not fetched from the leader, only the missing/changed
segments.
What if both leader and follower lose data?
Thanks,
Rishi.
-----Original Message-----
From: Mark Miller
To: solr-user
Sent: Tue, Apr 28, 2015 10:52 am
Subject: Re: Multiple index.timestamp directories using up disk space
If copies of the index are not eventually cleaned up, I'd file a JIRA to
address the issue. Those directories should be removed over time. At times
there will have to be a couple around at the same time and others may take
a while to clean up.
-----Original Message-----
From: Walter Underwood
To: solr-user
Sent: Mon, May 4, 2015 9:50 am
Subject: Re: Multiple index.timestamp directories using up disk space
One segment is in use, being searched. That segment (and others) are merged into
a new segment. After the new segment is ready, searches are switched to it and
the old segments can be deleted.
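The merge-then-swap lifecycle described above can be sketched in simplified form; this is illustrative only (real Lucene merging also handles deletes, commit points, and file reference counting):

```python
# Simplified sketch: old segments stay searchable until the merged
# replacement is ready, then the view is swapped atomically.

def merge_segments(live_segments, to_merge):
    """Return a new segment list where `to_merge` is replaced by one
    merged segment; callers keep searching `live_segments` until they
    switch to the returned list."""
    merged = {"docs": sum(seg["docs"] for seg in to_merge)}
    return [seg for seg in live_segments if seg not in to_merge] + [merged]

segments = [{"docs": 100}, {"docs": 40}, {"docs": 25}]
new_view = merge_segments(segments, segments[1:])
print(new_view)  # [{'docs': 100}, {'docs': 65}]
# Only after searches move to `new_view` are the old segment files deleted.
```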
> Thanks,
> Rishi.
If copies of the index are not eventually cleaned up, I'd file a JIRA to
address the issue. Those directories should be removed over time. At times
there will have to be a couple around at the same time and others may take
a while to clean up.
- Mark
On Tue, Apr 28, 2015 at 3:27 AM Ramkumar R. Ai
SolrCloud does need up to twice the amount of disk space as your usual
index size during replication. Amongst other things, this ensures you have
a full copy of the index at any point. There's no way around this; I would
suggest you provision the additional disk space needed.
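That rule of thumb can be expressed as a simple pre-flight capacity check; a hypothetical sketch (the function and sizes are assumptions, not anything Solr provides):

```python
# Hypothetical capacity check: during replication/recovery a node may
# briefly hold two full copies of a core's index, so free disk should
# cover at least one extra copy of the index.

def replication_headroom_ok(index_bytes: int, free_bytes: int) -> bool:
    return free_bytes >= index_bytes

GIB = 1024 ** 3
print(replication_headroom_ok(40 * GIB, 35 * GIB))  # False: 35 GiB free < 40 GiB index
print(replication_headroom_ok(40 * GIB, 60 * GIB))  # True
```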
On 20 Apr 2015 23:21,
Hi All,
We are seeing this problem with Solr 4.6 and Solr 4.10.3.
For some reason, SolrCloud tries to recover and creates a new index directory
(e.g. index.20150420181214550) while keeping the older index as is. This
creates an issue where the disk space fills up and the shard never ends up
recovering.