[ https://issues.apache.org/jira/browse/PHOENIX-2553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

James Taylor updated PHOENIX-2553:
----------------------------------
    Assignee: Samarth Jain  (was: Maryann Xue)

> SortMergeJoinIT frequently fails on the Mac
> -------------------------------------------
>
>                 Key: PHOENIX-2553
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-2553
>             Project: Phoenix
>          Issue Type: Bug
>            Reporter: James Taylor
>            Assignee: Samarth Jain
>             Fix For: 4.7.0
>
>
> SortMergeJoinIT frequently fails on the Mac with the following exception:
> {code}
> Tests run: 101, Failures: 0, Errors: 8, Skipped: 0, Time elapsed: 118.326 sec <<< FAILURE! - in org.apache.phoenix.end2end.SortMergeJoinIT
> testJoinWithSubquery[2](org.apache.phoenix.end2end.SortMergeJoinIT)  Time elapsed: 0.278 sec  <<< ERROR!
> org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: Join.ItemTable: java.lang.OutOfMemoryError: unable to create new native thread
>       at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:87)
>       at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:461)
>       at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:11609)
>       at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7356)
>       at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1849)
>       at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1831)
>       at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32209)
>       at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2112)
>       at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>       at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>       at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>       at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.RuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
>       at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:208)
>       at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
>       at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:295)
>       at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:160)
>       at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:155)
>       at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:809)
>       at org.apache.hadoop.hbase.client.HTableWrapper.getScanner(HTableWrapper.java:215)
>       at org.apache.phoenix.schema.stats.StatisticsUtil.readStatistics(StatisticsUtil.java:94)
>       at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:842)
>       at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:476)
>       at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2444)
>       at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2391)
>       at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:444)
>       ... 10 more
> Caused by: java.lang.OutOfMemoryError: unable to create new native thread
>       at java.lang.Thread.start0(Native Method)
>       at java.lang.Thread.start(Thread.java:714)
>       at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:949)
>       at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1360)
>       at org.apache.hadoop.hbase.client.ResultBoundedCompletionService.submit(ResultBoundedCompletionService.java:142)
>       at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.addCallsForCurrentReplica(ScannerCallableWithReplicas.java:290)
>       at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:169)
>       at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:61)
>       at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
>       ... 22 more
>       at org.apache.phoenix.end2end.SortMergeJoinIT.initTable(SortMergeJoinIT.java:92)
> {code}
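>
> One possible way to confirm that the test JVM is simply exhausting OS-level native threads (a diagnostic sketch only, not part of SortMergeJoinIT; the class and method names below are hypothetical) is to log the JVM's thread counts around initTable() using the standard ThreadMXBean API:
> {code}
> import java.lang.management.ManagementFactory;
> import java.lang.management.ThreadMXBean;
>
> public class ThreadCountProbe {
>     // Logs the JVM's live, peak, and daemon thread counts. Comparing the values
>     // before and after test setup shows whether the mini-cluster is accumulating
>     // threads until the OS refuses to create any more.
>     public static void logThreadCounts(String label) {
>         ThreadMXBean threads = ManagementFactory.getThreadMXBean();
>         System.out.println(label
>                 + " live=" + threads.getThreadCount()
>                 + " peak=" + threads.getPeakThreadCount()
>                 + " daemon=" + threads.getDaemonThreadCount());
>     }
>
>     public static void main(String[] args) {
>         logThreadCounts("baseline");
>     }
> }
> {code}
> If the peak count climbs sharply before the error appears, the failure is the OS thread limit rather than JVM heap, which would point at thread accumulation in the mini-cluster or test setup as the thing to fix.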



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
