See <https://builds.apache.org/job/Phoenix-master-matrix/HBASE_PROFILE=2.1/13/display/redirect?page=changes>

Changes:

[kadir] PHOENIX-5743 addendum for multi-column family indexes


------------------------------------------
[...truncated 572.03 KB...]
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.008 s <<< FAILURE! - in org.apache.phoenix.end2end.ViewMetadataIT
[ERROR] org.apache.phoenix.end2end.ViewMetadataIT  Time elapsed: 0.007 s  <<< ERROR!
java.lang.RuntimeException: java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
        at org.apache.phoenix.end2end.ViewMetadataIT.doSetup(ViewMetadataIT.java:98)
Caused by: java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
        at org.apache.phoenix.end2end.ViewMetadataIT.doSetup(ViewMetadataIT.java:98)

[INFO] Running org.apache.phoenix.end2end.AlterTableWithViewsIT
[INFO] Running org.apache.phoenix.end2end.DropIndexedColsIT
[ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.006 s <<< FAILURE! - in org.apache.phoenix.end2end.DropIndexedColsIT
[ERROR] org.apache.phoenix.end2end.DropIndexedColsIT  Time elapsed: 0.005 s  <<< FAILURE!
java.lang.AssertionError: Multiple regions on asf927.gq1.ygridcore.net,37235,1582446465756

[INFO] Running org.apache.phoenix.end2end.DropTableWithViewsIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.065 s - in org.apache.phoenix.end2end.DropTableWithViewsIT
[ERROR] Tests run: 9, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 107.331 s <<< FAILURE! - in org.apache.phoenix.end2end.AlterMultiTenantTableWithViewsIT
[ERROR] testAddPKColumnToBaseTableWhoseViewsHaveIndices(org.apache.phoenix.end2end.AlterMultiTenantTableWithViewsIT)  Time elapsed: 2.562 s  <<< ERROR!
org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: SCHEMA1.N000001: java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
        at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:113)
        at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:2126)
        at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17218)
        at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8265)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2444)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2426)
        at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42286)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Caused by: java.lang.RuntimeException: java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
        at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:200)
        at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:267)
        at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:435)
        at org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:310)
        at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:595)
        at org.apache.phoenix.util.ViewUtil.findRelatedViews(ViewUtil.java:127)
        at org.apache.phoenix.util.ViewUtil.dropChildViews(ViewUtil.java:200)
        at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1767)
        ... 9 more
Caused by: java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
        at java.base/java.lang.Thread.start0(Native Method)
        at java.base/java.lang.Thread.start(Thread.java:803)
        at java.base/java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:937)
        at java.base/java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1343)
        at org.apache.hadoop.hbase.client.ResultBoundedCompletionService.submit(ResultBoundedCompletionService.java:171)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.addCallsForCurrentReplica(ScannerCallableWithReplicas.java:329)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:191)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
        at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:192)
        ... 16 more

        at org.apache.phoenix.end2end.AlterMultiTenantTableWithViewsIT.testAddPKColumnToBaseTableWhoseViewsHaveIndices(AlterMultiTenantTableWithViewsIT.java:295)
Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: SCHEMA1.N000001: java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
        at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:113)
        at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:2126)
        at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17218)
        at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8265)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2444)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2426)
        at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42286)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Caused by: java.lang.RuntimeException: java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
        at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:200)
        at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:267)
        at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:435)
        at org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:310)
        at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:595)
        at org.apache.phoenix.util.ViewUtil.findRelatedViews(ViewUtil.java:127)
        at org.apache.phoenix.util.ViewUtil.dropChildViews(ViewUtil.java:200)
        at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1767)
        ... 9 more
Caused by: java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
        at java.base/java.lang.Thread.start0(Native Method)
        at java.base/java.lang.Thread.start(Thread.java:803)
        at java.base/java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:937)
        at java.base/java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1343)
        at org.apache.hadoop.hbase.client.ResultBoundedCompletionService.submit(ResultBoundedCompletionService.java:171)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.addCallsForCurrentReplica(ScannerCallableWithReplicas.java:329)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:191)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
        at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:192)
        ... 16 more

Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException: org.apache.hadoop.hbase.DoNotRetryIOException: SCHEMA1.N000001: java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
        at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:113)
        at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:2126)
        at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17218)
        at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8265)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2444)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2426)
        at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42286)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Caused by: java.lang.RuntimeException: java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
        at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:200)
        at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:267)
        at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:435)
        at org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:310)
        at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:595)
        at org.apache.phoenix.util.ViewUtil.findRelatedViews(ViewUtil.java:127)
        at org.apache.phoenix.util.ViewUtil.dropChildViews(ViewUtil.java:200)
        at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1767)
        ... 9 more
Caused by: java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
        at java.base/java.lang.Thread.start0(Native Method)
        at java.base/java.lang.Thread.start(Thread.java:803)
        at java.base/java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:937)
        at java.base/java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1343)
        at org.apache.hadoop.hbase.client.ResultBoundedCompletionService.submit(ResultBoundedCompletionService.java:171)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.addCallsForCurrentReplica(ScannerCallableWithReplicas.java:329)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:191)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
        at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:192)
        ... 16 more
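A note on the recurring `java.lang.OutOfMemoryError: unable to create native thread` above: as the message itself suggests, this usually indicates the build host hit a process/thread limit rather than heap exhaustion. A quick way to inspect those limits on a Linux agent is sketched below (illustrative diagnostic commands, not part of this build's output):

```shell
# Per-user process/thread cap; each Java thread counts against this,
# and "unable to create native thread" typically fires when it is hit.
ulimit -u

# Rough count of threads currently running system-wide, for comparison
# against the limit above (-L lists one row per thread on Linux).
ps -eLf 2>/dev/null | wc -l
```

Raising `ulimit -u` for the build user (or reducing test fork/thread counts) is the usual remedy when the count approaches the cap.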


[INFO] Running org.apache.phoenix.end2end.TenantSpecificViewIndexSaltedIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 35.206 s - in org.apache.phoenix.end2end.TenantSpecificViewIndexSaltedIT
[INFO] Running org.apache.phoenix.end2end.index.ViewIndexIT
[WARNING] Tests run: 24, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 93.129 s - in org.apache.phoenix.end2end.index.ViewIndexIT
[ERROR] Tests run: 56, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 5,012.348 s <<< FAILURE! - in org.apache.phoenix.end2end.AlterTableWithViewsIT
[ERROR] testDroppingIndexedColDropsViewIndex[AlterTableWithViewsIT_columnEncoded=false, multiTenant=false, salted=false](org.apache.phoenix.end2end.AlterTableWithViewsIT)  Time elapsed: 1,203.425 s  <<< ERROR!
org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: Operation rpcTimeout: 1 time, servers with issues: asf927.gq1.ygridcore.net,40149,1582446452481
        at org.apache.phoenix.end2end.AlterTableWithViewsIT.testDroppingIndexedColDropsViewIndex(AlterTableWithViewsIT.java:1151)
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: Operation rpcTimeout: 1 time, servers with issues: asf927.gq1.ygridcore.net,40149,1582446452481
        at org.apache.phoenix.end2end.AlterTableWithViewsIT.testDroppingIndexedColDropsViewIndex(AlterTableWithViewsIT.java:1151)

[ERROR] testDroppingIndexedColDropsViewIndex[AlterTableWithViewsIT_columnEncoded=true, multiTenant=false, salted=true](org.apache.phoenix.end2end.AlterTableWithViewsIT)  Time elapsed: 1,202.922 s  <<< ERROR!
org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: Operation rpcTimeout: 1 time, servers with issues: asf927.gq1.ygridcore.net,36725,1582446452114
        at org.apache.phoenix.end2end.AlterTableWithViewsIT.testDroppingIndexedColDropsViewIndex(AlterTableWithViewsIT.java:1151)
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: Operation rpcTimeout: 1 time, servers with issues: asf927.gq1.ygridcore.net,36725,1582446452114
        at org.apache.phoenix.end2end.AlterTableWithViewsIT.testDroppingIndexedColDropsViewIndex(AlterTableWithViewsIT.java:1151)

[ERROR] testDroppingIndexedColDropsViewIndex[AlterTableWithViewsIT_columnEncoded=true, multiTenant=true, salted=false](org.apache.phoenix.end2end.AlterTableWithViewsIT)  Time elapsed: 1,202.843 s  <<< ERROR!
org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: Operation rpcTimeout: 1 time, servers with issues: asf927.gq1.ygridcore.net,38417,1582446451877
        at org.apache.phoenix.end2end.AlterTableWithViewsIT.testDroppingIndexedColDropsViewIndex(AlterTableWithViewsIT.java:1151)
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: Operation rpcTimeout: 1 time, servers with issues: asf927.gq1.ygridcore.net,38417,1582446451877
        at org.apache.phoenix.end2end.AlterTableWithViewsIT.testDroppingIndexedColDropsViewIndex(AlterTableWithViewsIT.java:1151)

[ERROR] testDroppingIndexedColDropsViewIndex[AlterTableWithViewsIT_columnEncoded=true, multiTenant=true, salted=true](org.apache.phoenix.end2end.AlterTableWithViewsIT)  Time elapsed: 1,202.921 s  <<< ERROR!
org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: Operation rpcTimeout: 1 time, servers with issues: asf927.gq1.ygridcore.net,38417,1582446451877
        at org.apache.phoenix.end2end.AlterTableWithViewsIT.testDroppingIndexedColDropsViewIndex(AlterTableWithViewsIT.java:1151)
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: Operation rpcTimeout: 1 time, servers with issues: asf927.gq1.ygridcore.net,38417,1582446451877
        at org.apache.phoenix.end2end.AlterTableWithViewsIT.testDroppingIndexedColDropsViewIndex(AlterTableWithViewsIT.java:1151)

[INFO] 
[INFO] Results:
[INFO] 
[ERROR] Failures: 
[ERROR]   DropIndexedColsIT>SplitSystemCatalogIT.doSetup:60->SplitSystemCatalogIT.splitSystemCatalog:77->BaseTest.splitSystemCatalog:1923->BaseTest.splitTable:1894 Multiple regions on asf927.gq1.ygridcore.net,37235,1582446465756
[ERROR] Errors: 
[ERROR]   AlterMultiTenantTableWithViewsIT.testAddPKColumnToBaseTableWhoseViewsHaveIndices:295 » PhoenixIO
[ERROR]   AlterTableWithViewsIT.testDroppingIndexedColDropsViewIndex:1151 » Commit org.a...
[ERROR]   AlterTableWithViewsIT.testDroppingIndexedColDropsViewIndex:1151 » Commit org.a...
[ERROR]   AlterTableWithViewsIT.testDroppingIndexedColDropsViewIndex:1151 » Commit org.a...
[ERROR]   AlterTableWithViewsIT.testDroppingIndexedColDropsViewIndex:1151 » Commit org.a...
[ERROR]   TenantSpecificViewIndexIT>SplitSystemCatalogIT.doSetup:57->BaseTest.setUpTestDriver:515->BaseTest.setUpTestDriver:521->BaseTest.initAndRegisterTestDriver:660 » PhoenixIO
[ERROR]   ViewIT.doSetup:142->BaseTest.setUpTestDriver:520->BaseTest.checkClusterInitialized:434->BaseTest.setUpTestCluster:448->BaseTest.initMiniCluster:549 » Runtime
[ERROR]   ViewMetadataIT.doSetup:98->BaseTest.setUpTestDriver:520->BaseTest.checkClusterInitialized:434->BaseTest.setUpTestCluster:448->BaseTest.initMiniCluster:549 » Runtime
[INFO] 
[ERROR] Tests run: 101, Failures: 1, Errors: 8, Skipped: 2
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.22.0:verify (ParallelStatsEnabledTest) @ phoenix-core ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for Apache Phoenix 5.1.0-SNAPSHOT:
[INFO] 
[INFO] Phoenix Hbase 2.2.1 compatibility .................. SUCCESS [ 17.821 s]
[INFO] Phoenix Hbase 2.1.6 compatibility .................. SUCCESS [  7.542 s]
[INFO] Phoenix Hbase 2.0.1 compatibility .................. SUCCESS [  8.024 s]
[INFO] Apache Phoenix ..................................... SUCCESS [  1.255 s]
[INFO] Phoenix Core ....................................... FAILURE [  03:08 h]
[INFO] Phoenix - Pherf .................................... SKIPPED
[INFO] Phoenix Client ..................................... SKIPPED
[INFO] Phoenix Server ..................................... SKIPPED
[INFO] Phoenix Assembly ................................... SKIPPED
[INFO] Phoenix - Tracing Web Application .................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  03:08 h
[INFO] Finished at: 2020-02-23T09:53:28Z
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-failsafe-plugin:2.22.0:verify (ParallelStatsEnabledTest) on project phoenix-core: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Phoenix-master-matrix/HBASE_PROFILE=2.1/ws/phoenix-core/target/failsafe-reports> for the individual test results.
[ERROR] Please refer to dump files (if any exist) [date]-jvmRun[N].dump, [date].dumpstream and [date]-jvmRun[N].dumpstream.
[ERROR] org.apache.maven.surefire.booter.SurefireBooterForkException: ExecutionException Error occurred in starting fork, check output in log
[ERROR]         at org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:494)
[ERROR]         at org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:441)
[ERROR]         at org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:293)
[ERROR]         at org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:245)
[ERROR]         at org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1194)
[ERROR]         at org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1022)
[ERROR]         at org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:868)
[ERROR]         at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:137)
[ERROR]         at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:210)
[ERROR]         at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:156)
[ERROR]         at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:148)
[ERROR]         at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:117)
[ERROR]         at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:81)
[ERROR]         at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:56)
[ERROR]         at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
[ERROR]         at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:305)
[ERROR]         at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:192)
[ERROR]         at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:105)
[ERROR]         at org.apache.maven.cli.MavenCli.execute(MavenCli.java:957)
[ERROR]         at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:289)
[ERROR]         at org.apache.maven.cli.MavenCli.main(MavenCli.java:193)
[ERROR]         at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[ERROR]         at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[ERROR]         at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[ERROR]         at java.base/java.lang.reflect.Method.invoke(Method.java:566)
[ERROR]         at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:282)
[ERROR]         at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:225)
[ERROR]         at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:406)
[ERROR]         at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:347)
[ERROR] Caused by: org.apache.maven.surefire.booter.SurefireBooterForkException: Error occurred in starting fork, check output in log
[ERROR]         at org.apache.maven.plugin.surefire.booterclient.ForkStarter.fork(ForkStarter.java:624)
[ERROR]         at org.apache.maven.plugin.surefire.booterclient.ForkStarter.fork(ForkStarter.java:533)
[ERROR]         at org.apache.maven.plugin.surefire.booterclient.ForkStarter.access$600(ForkStarter.java:115)
[ERROR]         at org.apache.maven.plugin.surefire.booterclient.ForkStarter$2.call(ForkStarter.java:429)
[ERROR]         at org.apache.maven.plugin.surefire.booterclient.ForkStarter$2.call(ForkStarter.java:406)
[ERROR]         at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
[ERROR]         at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
[ERROR]         at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
[ERROR]         at java.base/java.lang.Thread.run(Thread.java:834)
[ERROR] 
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <args> -rf :phoenix-core
Build step 'Invoke top-level Maven targets' marked build as failure
Archiving artifacts
Recording test results
