Re: Index population over a table containing 2.3 x 10^10 records

2018-03-22 Thread Margusja
Great hint! Looks like it helped! 

What a great power of community!

Br, Margus

> On 22 Mar 2018, at 18:24, Josh Elser wrote:
> 
> Hard to say at a glance, but this issue is happening down in the MapReduce 
> framework, not in Phoenix itself.
> 
> It looks similar to problems I've seen many years ago around 
> mapreduce.task.io.sort.mb. You can try reducing that value. It also may be 
> related to a bug in your Hadoop version.
> 
> Good luck!
> 
> On 3/22/18 4:37 AM, Margusja wrote:
>> Hi
>> Needed to recreate indexes over a main table containing more than 2.3 x 10^10
>> records.
>> I used ASYNC index creation and org.apache.phoenix.mapreduce.index.IndexTool.
>> One index succeeded, but the other failed with this stack trace:
>> 2018-03-20 13:23:16,723 FATAL [IPC Server handler 0 on 43926]
>> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task:
>> attempt_1521544097253_0004_m_08_0 - exited :
>> java.lang.ArrayIndexOutOfBoundsException
>>     at org.apache.hadoop.mapred.MapTask$MapOutputBuffer$Buffer.write(MapTask.java:1453)
>>     at org.apache.hadoop.mapred.MapTask$MapOutputBuffer$Buffer.write(MapTask.java:1349)
>>     at java.io.DataOutputStream.writeInt(DataOutputStream.java:197)
>>     at org.apache.hadoop.hbase.io.ImmutableBytesWritable.write(ImmutableBytesWritable.java:159)
>>     at org.apache.hadoop.io.serializer.WritableSerialization$WritableSerializer.serialize(WritableSerialization.java:98)
>>     at org.apache.hadoop.io.serializer.WritableSerialization$WritableSerializer.serialize(WritableSerialization.java:82)
>>     at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1149)
>>     at org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:715)
>>     at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
>>     at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
>>     at org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:114)
>>     at org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:48)
>>     at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
>>     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>>     at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:170)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at javax.security.auth.Subject.doAs(Subject.java:422)
>>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
>>     at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:164)
>> Is there any best practice for dealing with situations like this?
>> Br, Margus



Re: Index population over a table containing 2.3 x 10^10 records

2018-03-22 Thread Josh Elser
Hard to say at a glance, but this issue is happening down in the 
MapReduce framework, not in Phoenix itself.


It looks similar to problems I've seen many years ago around 
mapreduce.task.io.sort.mb. You can try reducing that value. It also may 
be related to a bug in your Hadoop version.
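
For example, something along these lines should override it just for the
index job. This is only a sketch: it assumes IndexTool honours generic
Hadoop -D options via ToolRunner, the schema/table/index/path names are
placeholders, and you should pick a value below whatever your cluster
currently uses:

  ${HBASE_HOME}/bin/hbase org.apache.phoenix.mapreduce.index.IndexTool \
      -D mapreduce.task.io.sort.mb=256 \
      --schema MY_SCHEMA \
      --data-table MY_TABLE \
      --index-table MY_IDX \
      --output-path /tmp/MY_IDX_HFILES

If the flag is not picked up, the same property can be lowered in
mapred-site.xml on the cluster instead.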


Good luck!

On 3/22/18 4:37 AM, Margusja wrote:

Hi

Needed to recreate indexes over a main table containing more than 2.3 x
10^10 records.

I used ASYNC index creation and org.apache.phoenix.mapreduce.index.IndexTool.
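
For reference, the flow was roughly as follows (table, column, and path
names here are placeholders, not the real objects):

  -- in sqlline.py: declare the index ASYNC so it is not built at DDL time
  CREATE INDEX MY_IDX ON MY_SCHEMA.MY_TABLE (MY_COL) ASYNC;

  # then populate it with the MapReduce-based IndexTool
  ${HBASE_HOME}/bin/hbase org.apache.phoenix.mapreduce.index.IndexTool \
      --schema MY_SCHEMA --data-table MY_TABLE --index-table MY_IDX \
      --output-path /tmp/MY_IDX_HFILES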


One index succeeded, but the other failed with this stack trace:

2018-03-20 13:23:16,723 FATAL [IPC Server handler 0 on 43926]
org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task:
attempt_1521544097253_0004_m_08_0 - exited :
java.lang.ArrayIndexOutOfBoundsException
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer$Buffer.write(MapTask.java:1453)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer$Buffer.write(MapTask.java:1349)
    at java.io.DataOutputStream.writeInt(DataOutputStream.java:197)
    at org.apache.hadoop.hbase.io.ImmutableBytesWritable.write(ImmutableBytesWritable.java:159)
    at org.apache.hadoop.io.serializer.WritableSerialization$WritableSerializer.serialize(WritableSerialization.java:98)
    at org.apache.hadoop.io.serializer.WritableSerialization$WritableSerializer.serialize(WritableSerialization.java:82)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1149)
    at org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:715)
    at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
    at org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:114)
    at org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:48)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:170)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:164)



Is there any best practice for dealing with situations like this?

Br, Margus