Re: SPLITSHARD throwing OutOfMemory Error

2018-10-04 Thread Zheng Lin Edwin Yeo
Hi Atita,

It would be good to consider upgrading so that you can make use of the
newer features, like improved memory consumption and better authentication.

On a side note, it is also good to upgrade to Solr 7 now, as Solr indexes
can only be upgraded from the previous major release version (Solr 6) to
the current major release version (Solr 7). Since you are using Solr 6.1,
when Solr 8 comes around it will not be possible to upgrade directly;
the index will have to be upgraded to Solr 7 first before upgrading to
Solr 8.
http://lucene.apache.org/solr/guide/7_5/indexupgrader-tool.html
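A rough sketch of what that upgrade step looks like; the jar versions and the index path below are hypothetical placeholders, so check the IndexUpgrader documentation linked above for your exact version:

```python
# Build the IndexUpgrader invocation (Lucene ships this tool alongside Solr).
# Jar names and index directory are placeholders, not taken from this thread.
classpath = "lucene-core-7.5.0.jar:lucene-backward-codecs-7.5.0.jar"
index_dir = "/var/solr/data/testcollection_shard1_replica1/data/index"

# -delete-prior-commits removes older commit points so only the upgraded
# commit remains.
cmd = (
    f"java -cp {classpath} org.apache.lucene.index.IndexUpgrader "
    f"-delete-prior-commits {index_dir}"
)
print(cmd)
```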

Regards,
Edwin

On Thu, 4 Oct 2018 at 17:41, Atita Arora  wrote:

> Hi Andrzej,
>
> We have been weighing an upgrade of our Solr for a very long time for a
> lot of other reasons, like better authentication handling, backups using
> CDCR, and the new replication mode; this has probably just given us
> another reason to upgrade.
> Thank you so much for the suggestion; it's good to know that something
> like this exists. We'll find out more about it.
>
> Great day ahead!
>
> Regards,
> Atita


Re: SPLITSHARD throwing OutOfMemory Error

2018-10-04 Thread Atita Arora
Hi Andrzej,

We have been weighing an upgrade of our Solr for a very long time for a
lot of other reasons, like better authentication handling, backups using
CDCR, and the new replication mode; this has probably just given us
another reason to upgrade.
Thank you so much for the suggestion; it's good to know that something
like this exists. We'll find out more about it.

Great day ahead!

Regards,
Atita



On Thu, Oct 4, 2018 at 11:28 AM Andrzej Białecki  wrote:

> I know it’s not much help if you’re stuck with Solr 6.1, but Solr 7.5
> comes with an alternative strategy for SPLITSHARD that doesn’t consume as
> much memory and consumes almost no additional disk space on the leader.
> This strategy can be turned on with the “splitMethod=link” parameter.
>
> —
>
> Andrzej Białecki
>
>


Re: SPLITSHARD throwing OutOfMemory Error

2018-10-04 Thread Andrzej Białecki
I know it’s not much help if you’re stuck with Solr 6.1, but Solr 7.5 comes
with an alternative strategy for SPLITSHARD that doesn’t consume as much memory
and consumes almost no additional disk space on the leader. This strategy
can be turned on with the “splitMethod=link” parameter.
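As a sketch, the parameter just gets added to the usual Collections API call; the host, collection, and shard names below are the ones used elsewhere in this thread, and the exact request is an assumption to be checked against the Solr 7.5 reference guide:

```python
from urllib.parse import urlencode

# Collections API endpoint (host and port as used elsewhere in this thread).
base = "http://localhost:8983/solr/admin/collections"

# SPLITSHARD using the link-based split method (available from Solr 7.5).
params = {
    "action": "SPLITSHARD",
    "collection": "testcollection",
    "shard": "shard1",
    "splitMethod": "link",  # hard-links segment files instead of rewriting them
}

url = base + "?" + urlencode(params)
print(url)
```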

> On 4 Oct 2018, at 10:23, Atita Arora  wrote:
> 
> Hi Edwin,
> 
> Thanks for following up on this.
> 
> So here are the configs :
> 
> Memory - 30G - 20 G to Solr
> Disk - 1TB
> Index = ~ 500G
> 
> I think the reason this could be happening is that during the shard split,
> the unsplit index and the split index both persist on the instance, and
> that may be causing this.
> I actually tried SPLITSHARD on another instance with an index size of 64G,
> and it went through without any issues.
> 
> I would appreciate it if you have any additional information to enlighten
> me on this issue.
> 
> Thanks again.
> 
> Regards,
> 
> Atita
> 

—

Andrzej Białecki



Re: SPLITSHARD throwing OutOfMemory Error

2018-10-04 Thread Atita Arora
Hi Edwin,

Thanks for following up on this.

So here are the configs :

Memory - 30G - 20 G to Solr
Disk - 1TB
Index = ~ 500G

I think the reason this could be happening is that during the shard split,
the unsplit index and the split index both persist on the instance, and
that may be causing this.
I actually tried SPLITSHARD on another instance with an index size of 64G,
and it went through without any issues.

I would appreciate it if you have any additional information to enlighten
me on this issue.

Thanks again.

Regards,

Atita

On Thu, Oct 4, 2018 at 9:47 AM Zheng Lin Edwin Yeo wrote:

> Hi Atita,
>
> What is the amount of memory that you have in your system?
> And what is your index size?
>
> Regards,
> Edwin
>


Re: SPLITSHARD throwing OutOfMemory Error

2018-10-04 Thread Zheng Lin Edwin Yeo
Hi Atita,

What is the amount of memory that you have in your system?
And what is your index size?

Regards,
Edwin

On Tue, 25 Sep 2018 at 22:39, Atita Arora  wrote:

> Hi,
>
> I am working on a test setup with Solr 6.1.0 in cloud mode, with 1 collection
> sharded across 2 shards and no replication. When I trigger a SPLITSHARD
> command, it throws "java.lang.OutOfMemoryError: Java heap space" every time.
> I tried this with multiple heap settings of 8, 12, and 20G, but every time it
> does create the 2 sub-shards and then eventually fails.
> I know the issue https://jira.apache.org/jira/browse/SOLR-5214 has been
> resolved, but the trace looked very similar to that one.
> Also, to ensure that I did not run into the merge exceptions reported in
> that ticket, I tried running optimize before proceeding to split the shard.
> I issued the following commands :
>
> 1.
>
> http://localhost:8983/solr/admin/collections?collection=testcollection&shard=shard1&action=SPLITSHARD
>
> This threw java.lang.OutOfMemoryError: Java heap space
>
> 2.
>
> http://localhost:8983/solr/admin/collections?collection=testcollection&shard=shard1&action=SPLITSHARD&async=1000
>
> Then I ran it with async=1000 and checked the status. Every time, it creates
> the sub-shards but does not split the index.
>
> Is there something that I am not doing correctly?
>
> Please guide.
>
> Thanks,
> Atita
>
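For reference, the status of such an async call can be polled through the same API; a sketch, assuming the request id 1000 used in the message above:

```python
from urllib.parse import urlencode

# Collections API endpoint, as in the SPLITSHARD calls above.
base = "http://localhost:8983/solr/admin/collections"

# REQUESTSTATUS reports the state of the async request submitted with async=1000.
status_url = base + "?" + urlencode({
    "action": "REQUESTSTATUS",
    "requestid": "1000",
})
print(status_url)
```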