Great hint! It looks like it helped!
What a great demonstration of the power of the community!
Br, Margus
> On 22 Mar 2018, at 18:24, Josh Elser wrote:
>
> Hard to say at a glance, but this issue is happening down in the MapReduce
> framework, not in Phoenix itself.
>
> It looks similar to problems I've seen many years ago around
> mapreduce.task.io.sort.mb.
I did not set any split policy. I was under the assumption that
'hbase.hregion.max.filesize' => '107374182400' (100 GB) would take care of it,
and the size was also within 33 GB.
I want to understand this, because even if a split happens it is of no use, as
the first salt byte (1 - 49) is used for placing the keys.
Did you set the split policy to ConstantSizeRegionSplitPolicy?
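If you want to pin it explicitly per table, a minimal sketch using the HBase 1.x
Java Admin API (the table name below is a placeholder) would be along these lines:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class PinSplitPolicy {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Admin admin = conn.getAdmin()) {
                TableName table = TableName.valueOf("MY_SALTED_TABLE"); // placeholder
                HTableDescriptor desc = admin.getTableDescriptor(table);
                // Split only when a store file exceeds MAX_FILESIZE, no other heuristics.
                desc.setRegionSplitPolicyClassName(
                    "org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy");
                // 100 GB before a split is even considered.
                desc.setMaxFileSize(107374182400L);
                // Online alter; some setups still require disable/enable around this.
                admin.modifyTable(table, desc);
            }
        }
    }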
> On Mar 22, 2018, at 2:56 PM, Adi Kadimetla wrote:
>
> Group,
> TABLE - with 50 salt buckets and configured as a time-series table.
>
> Having pre-split it into 50 salt buckets, we disabled region splits by setting
> the max file size for a split to 100 GB.
Group,
TABLE - with 50 salt buckets and configured as a time-series table.
Having pre-split it into 50 salt buckets, we disabled region splits by setting
the max file size for a split to 100 GB.
I see that some of the keys got split, which created stale regions.
No writes are happening to the region.
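For context, the leading salt byte of every row key is a hash of the key modulo
the bucket count, so every write carries a prefix within the pre-split bucket
range. A rough sketch of that idea (not Phoenix's actual hashing):

    // Rough illustration only, not Phoenix's real implementation: the first byte of
    // every salted row key is hash(key) % SALT_BUCKETS, so with 50 buckets every
    // write lands under a leading byte in the pre-split 0..49 range.
    static byte saltByteFor(byte[] rowKey, int saltBuckets) {
        int hash = 0;
        for (byte b : rowKey) {
            hash = 31 * hash + b; // stand-in hash function
        }
        return (byte) ((hash & 0x7fffffff) % saltBuckets);
    }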
Hey Anil,
Are you sure there isn't another exception earlier in the output of your
application? The exception you have here looks more like the JVM was
already shutting down and Phoenix had closed the connection (the
exceptions were about queued tasks being cleared out after the decision to
shut down had already been made).
Hard to say at a glance, but this issue is happening down in the
MapReduce framework, not in Phoenix itself.
It looks similar to problems I've seen many years ago around
mapreduce.task.io.sort.mb. You can try reducing that value. It also may
be related to a bug in your Hadoop version.
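A minimal sketch of lowering it programmatically before submitting the job
(100 MB is the stock Hadoop default); from the command line the equivalent would
be a -Dmapreduce.task.io.sort.mb=... generic option for tools that go through
ToolRunner:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class SortBufferTweak {
        // Returns a job Configuration with a smaller map-side sort buffer.
        static Configuration withSmallerSortBuffer() {
            Configuration conf = HBaseConfiguration.create();
            conf.setInt("mapreduce.task.io.sort.mb", 100); // value in MB; Hadoop default
            return conf;
        }
    }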
Hi Team,
We have upgraded Phoenix from 4.7.0 to 4.11.0 and started noticing the
attached exception.
Can you help me identify the root cause of the exception? Thanks.
Regards,
Anil
2018-03-21 08:13:19,684 ERROR
com.tst.hadoop.flume.writer.inventory.AccountPersistenceImpl: Error querying
Hi
Needed to recreate indexes over a main table containing more than 2.3 x 10^10
records.
I used ASYNC and org.apache.phoenix.mapreduce.index.IndexTool.
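Roughly this pattern (the JDBC URL, table, and index names below are
placeholders, not the real ones):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class AsyncIndexRebuild {
        public static void main(String[] args) throws Exception {
            // Placeholder JDBC URL, table, and index names.
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
                 Statement stmt = conn.createStatement()) {
                // ASYNC creates only the index metadata; population is deferred to the
                // MapReduce-based IndexTool run afterwards.
                stmt.execute("CREATE INDEX IDX_EVENTS_TS ON EVENTS (EVENT_TIME) ASYNC");
            }
            // Then populate it with something along the lines of:
            //   hbase org.apache.phoenix.mapreduce.index.IndexTool \
            //     --data-table EVENTS --index-table IDX_EVENTS_TS --output-path /tmp/IDX_EVENTS_TS
        }
    }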
One index succeeded but the other fails with this stack:
2018-03-20 13:23:16,723 FATAL [IPC Server handler 0 on 43926]