See <https://builds.apache.org/job/Tajo-master-jdk8-nightly/107/>
------------------------------------------
[...truncated 733285 lines...]
2015-08-16 02:26:42,110 WARN: org.apache.hadoop.hdfs.DFSClient (close(669)) -
DFSInputStream has been closed already
2015-08-16 02:26:42,110 WARN: org.apache.hadoop.hdfs.DFSClient (close(669)) -
DFSInputStream has been closed already
2015-08-16 02:26:42,114 INFO: BlockStateChange (logAddStoredBlock(2624)) -
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:34596 is added to
blk_1073748705_7881{UCState=UNDER_CONSTRUCTION, truncateBlock=null,
primaryNodeIndex=-1,
replicas=[ReplicaUC[[DISK]DS-c88053b7-df50-4797-9186-131c6a712883:NORMAL:127.0.0.1:34596|FINALIZED]]}
size 0
2015-08-16 02:26:42,114 INFO: org.apache.tajo.worker.TaskAttemptContext
(setState(142)) - Query status of ta_1439690731887_2052_000001_000000_00 is
changed to TA_SUCCEEDED
2015-08-16 02:26:42,114 INFO: org.apache.tajo.worker.TaskImpl (run(460)) -
ta_1439690731887_2052_000001_000000_00 completed. Worker's task counter -
total:1, succeeded: 1, killed: 0, failed: 0
2015-08-16 02:26:42,115 INFO: org.apache.tajo.querymaster.Stage
(transition(1298)) - Stage - eb_1439690731887_2052_000001 finalize NONE_SHUFFLE
(total=1, success=1, killed=0)
2015-08-16 02:26:42,115 INFO: org.apache.tajo.querymaster.DefaultTaskScheduler
(stop(160)) - Task Scheduler stopped
2015-08-16 02:26:42,115 INFO: org.apache.tajo.querymaster.DefaultTaskScheduler
(run(122)) - TaskScheduler schedulingThread stopped
2015-08-16 02:26:42,115 INFO: org.apache.tajo.querymaster.Stage
(transition(1356)) - Stage completed - eb_1439690731887_2052_000001 (total=1,
success=1, killed=0)
2015-08-16 02:26:42,115 INFO: org.apache.tajo.querymaster.Query (handle(855)) -
Processing q_1439690731887_2052 of type STAGE_COMPLETED
2015-08-16 02:26:42,115 INFO:
org.apache.tajo.engine.planner.global.ParallelExecutionQueue (next(95)) - Next
executable block eb_1439690731887_2052_000002
2015-08-16 02:26:42,115 INFO: org.apache.tajo.querymaster.Query
(transition(802)) - Complete Stage[eb_1439690731887_2052_000001], State:
SUCCEEDED, 1/1.
2015-08-16 02:26:42,116 INFO: org.apache.tajo.querymaster.Query (handle(855)) -
Processing q_1439690731887_2052 of type QUERY_COMPLETED
2015-08-16 02:26:42,116 INFO: org.apache.tajo.worker.TaskManager
(stopExecutionBlock(161)) - Stopped execution block:eb_1439690731887_2052_000001
2015-08-16 02:26:42,116 INFO: org.apache.tajo.querymaster.Query
(finalizeQuery(528)) - Can't find partitions for adding.
2015-08-16 02:26:42,116 INFO: org.apache.tajo.querymaster.Query (handle(873)) -
q_1439690731887_2052 Query Transitioned from QUERY_RUNNING to QUERY_SUCCEEDED
2015-08-16 02:26:42,116 INFO: org.apache.tajo.querymaster.QueryMasterTask
(handle(295)) - Query completion notified from q_1439690731887_2052 final
state: QUERY_SUCCEEDED
2015-08-16 02:26:42,117 INFO: org.apache.tajo.master.QueryInProgress
(heartbeat(252)) - Received QueryMaster
heartbeat:q_1439690731887_2052,state=QUERY_SUCCEEDED,progress=1.0,
queryMaster=asf900.gq1.ygridcore.net
2015-08-16 02:26:42,117 INFO: org.apache.tajo.master.QueryManager
(stopQuery(275)) - Stop QueryInProgress:q_1439690731887_2052
2015-08-16 02:26:42,117 INFO: org.apache.tajo.querymaster.QueryMasterTask
(serviceStop(172)) - Stopping QueryMasterTask:q_1439690731887_2052
2015-08-16 02:26:42,117 INFO: org.apache.tajo.master.QueryInProgress
(stopProgress(117)) - =========================================================
2015-08-16 02:26:42,117 INFO: org.apache.tajo.master.QueryInProgress
(stopProgress(118)) - Stop query:q_1439690731887_2052
2015-08-16 02:26:42,117 INFO: org.apache.tajo.querymaster.QueryMasterTask
(cleanupQuery(471)) - Cleanup resources of all workers. Query:
q_1439690731887_2052, workers: 1
2015-08-16 02:26:42,118 INFO: org.apache.tajo.querymaster.QueryMasterTask
(serviceStop(188)) - Stopped QueryMasterTask:q_1439690731887_2052
2015-08-16 02:26:42,333 WARN: org.apache.hadoop.hdfs.DFSClient (close(669)) -
DFSInputStream has been closed already
2015-08-16 02:26:42,334 WARN: org.apache.hadoop.hdfs.DFSClient (close(669)) -
DFSInputStream has been closed already
2015-08-16 02:26:42,335 INFO: org.apache.tajo.master.TajoMasterClientService
(getQueryResultData(579)) - Send result to client for
6727fcca-1d43-4888-9ab3-a22fe8fb2108,q_1439690731887_2052, 2 rows
2015-08-16 02:26:42,336 INFO: org.apache.tajo.master.TajoMasterClientService
(getQueryResultData(579)) - Send result to client for
6727fcca-1d43-4888-9ab3-a22fe8fb2108,q_1439690731887_2052, 0 rows
2015-08-16 02:26:42,338 INFO: org.apache.tajo.session.SessionManager
(removeSession(86)) - Session 6727fcca-1d43-4888-9ab3-a22fe8fb2108 is removed.
2015-08-16 02:26:42,339 INFO: org.apache.tajo.master.GlobalEngine
(updateQuery(237)) - SQL: DROP TABLE IF EXISTS "TestTajoJdbc".table1
2015-08-16 02:26:42,339 INFO: org.apache.tajo.master.GlobalEngine
(createLogicalPlan(280)) - Non Optimized Query:
-----------------------------
Query Block Graph
-----------------------------
|-#ROOT
-----------------------------
Optimization Log:
-----------------------------
2015-08-16 02:26:42,340 INFO: org.apache.tajo.master.GlobalEngine
(createLogicalPlan(282)) - =============================================
2015-08-16 02:26:42,340 INFO: org.apache.tajo.master.GlobalEngine
(createLogicalPlan(283)) - Optimized Query:
-----------------------------
Query Block Graph
-----------------------------
|-#ROOT
-----------------------------
Optimization Log:
-----------------------------
2015-08-16 02:26:42,340 INFO: org.apache.tajo.master.GlobalEngine
(createLogicalPlan(284)) - =============================================
2015-08-16 02:26:42,340 INFO: org.apache.tajo.master.exec.DDLExecutor
(dropTable(310)) - relation "TestTajoJdbc.table1" is already exists.
2015-08-16 02:26:42,341 INFO: org.apache.tajo.master.GlobalEngine
(updateQuery(237)) - SQL: DROP TABLE IF EXISTS testaltertablepartition
2015-08-16 02:26:42,341 INFO: org.apache.tajo.master.GlobalEngine
(createLogicalPlan(280)) - Non Optimized Query:
-----------------------------
Query Block Graph
-----------------------------
|-#ROOT
-----------------------------
Optimization Log:
-----------------------------
2015-08-16 02:26:42,341 INFO: org.apache.tajo.master.GlobalEngine
(createLogicalPlan(282)) - =============================================
2015-08-16 02:26:42,341 INFO: org.apache.tajo.master.GlobalEngine
(createLogicalPlan(283)) - Optimized Query:
-----------------------------
Query Block Graph
-----------------------------
|-#ROOT
-----------------------------
Optimization Log:
-----------------------------
2015-08-16 02:26:42,341 INFO: org.apache.tajo.master.GlobalEngine
(createLogicalPlan(284)) - =============================================
2015-08-16 02:26:42,342 INFO: org.apache.tajo.catalog.CatalogServer
(dropTable(697)) - relation "TestTajoJdbc.testaltertablepartition" is deleted
from the catalog (127.0.0.1:20683)
2015-08-16 02:26:42,342 INFO: org.apache.tajo.master.exec.DDLExecutor
(dropTable(327)) - relation "TestTajoJdbc.testaltertablepartition" is dropped.
2015-08-16 02:26:42,343 INFO: org.apache.tajo.session.SessionManager
(removeSession(86)) - Session 3361986d-4e7b-4ec1-91b9-aba9ea1764b8 is removed.
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.543 sec - in
org.apache.tajo.jdbc.TestTajoJdbc
2015-08-16 02:26:42,348 INFO: org.apache.tajo.worker.TajoWorker (run(567)) -
============================================
2015-08-16 02:26:42,351 INFO: org.apache.tajo.worker.TajoWorker (run(568)) -
TajoWorker received SIGINT Signal
2015-08-16 02:26:42,351 INFO: org.apache.tajo.worker.TajoWorker (run(569)) -
============================================
eader initialized will read a total of 2 records.
Aug 16, 2015 2:08:48 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next
block
Aug 16, 2015 2:08:48 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in
1 ms. row count = 2
Aug 16, 2015 2:09:04 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore
to file. allocated memory: 26
Aug 16, 2015 2:09:04 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for
[l_orderkey] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [PLAIN,
BIT_PACKED, RLE]
Aug 16, 2015 2:09:04 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 65B for
[l_shipdate_function] BINARY: 1 values, 20B raw, 20B comp, 1 pages, encodings:
[PLAIN, BIT_PACKED, RLE]
Aug 16, 2015 2:09:04 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore
to file. allocated memory: 26
Aug 16, 2015 2:09:04 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for
[l_orderkey] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [PLAIN,
BIT_PACKED, RLE]
Aug 16, 2015 2:09:04 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 65B for
[l_shipdate_function] BINARY: 1 values, 20B raw, 20B comp, 1 pages, encodings:
[PLAIN, BIT_PACKED, RLE]
Aug 16, 2015 2:09:04 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore
to file. allocated memory: 26
Aug 16, 2015 2:09:04 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for
[l_orderkey] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [PLAIN,
BIT_PACKED, RLE]
Aug 16, 2015 2:09:04 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 65B for
[l_shipdate_function] BINARY: 1 values, 20B raw, 20B comp, 1 pages, encodings:
[PLAIN, BIT_PACKED, RLE]
Aug 16, 2015 2:09:04 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore
to file. allocated memory: 26
Aug 16, 2015 2:09:04 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for
[l_orderkey] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [PLAIN,
BIT_PACKED, RLE]
Aug 16, 2015 2:09:04 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 65B for
[l_shipdate_function] BINARY: 1 values, 20B raw, 20B comp, 1 pages, encodings:
[PLAIN, BIT_PACKED, RLE]
Aug 16, 2015 2:09:04 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore
to file. allocated memory: 26
Aug 16, 2015 2:09:04 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for
[l_orderkey] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [PLAIN,
BIT_PACKED, RLE]
Aug 16, 2015 2:09:04 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 65B for
[l_shipdate_function] BINARY: 1 values, 20B raw, 20B comp, 1 pages, encodings:
[PLAIN, BIT_PACKED, RLE]
Aug 16, 2015 2:09:04 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
Initiating action with parallelism: 5
Aug 16, 2015 2:09:04 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
Initiating action with parallelism: 5
Aug 16, 2015 2:09:04 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
Initiating action with parallelism: 5
Aug 16, 2015 2:09:04 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
reading another 1 footers
Aug 16, 2015 2:09:04 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
Initiating action with parallelism: 5
Aug 16, 2015 2:09:04 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
reading another 1 footers
Aug 16, 2015 2:09:04 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
Initiating action with parallelism: 5
Aug 16, 2015 2:09:04 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
reading another 1 footers
Aug 16, 2015 2:09:04 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
Initiating action with parallelism: 5
Aug 16, 2015 2:09:04 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized
will read a total of 1 records.
Aug 16, 2015 2:09:04 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next
block
Aug 16, 2015 2:09:04 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized
will read a total of 1 records.
Aug 16, 2015 2:09:04 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next
block
Aug 16, 2015 2:09:04 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in
1 ms. row count = 1
Aug 16, 2015 2:09:04 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized
will read a total of 1 records.
Aug 16, 2015 2:09:04 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next
block
Aug 16, 2015 2:09:04 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in
1 ms. row count = 1
Aug 16, 2015 2:09:04 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in
1 ms. row count = 1
Aug 16, 2015 2:09:04 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
Initiating action with parallelism: 5
Aug 16, 2015 2:09:04 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
reading another 1 footers
Aug 16, 2015 2:09:04 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
Initiating action with parallelism: 5
Aug 16, 2015 2:09:04 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized
will read a total of 1 records.
Aug 16, 2015 2:09:04 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next
block
Aug 16, 2015 2:09:04 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in
1 ms. row count = 1
Aug 16, 2015 2:09:04 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
Initiating action with parallelism: 5
Aug 16, 2015 2:09:04 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
reading another 1 footers
Aug 16, 2015 2:09:04 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
Initiating action with parallelism: 5
Aug 16, 2015 2:09:04 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized
will read a total of 1 records.
Aug 16, 2015 2:09:04 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next
block
Aug 16, 2015 2:09:04 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in
1 ms. row count = 1
Aug 16, 2015 2:09:07 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore
to file. allocated memory: 212
Aug 16, 2015 2:09:07 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for
[l_orderkey] INT32: 5 values, 10B raw, 10B comp, 1 pages, encodings:
[BIT_PACKED, RLE, PLAIN_DICTIONARY], dic { 3 entries, 12B raw, 3B comp}
Aug 16, 2015 2:09:07 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 123B for
[l_shipdate] BINARY: 5 values, 76B raw, 76B comp, 1 pages, encodings: [PLAIN,
BIT_PACKED, RLE]
Aug 16, 2015 2:09:07 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 123B for
[l_shipdate_function] BINARY: 5 values, 76B raw, 76B comp, 1 pages, encodings:
[PLAIN, BIT_PACKED, RLE]
Aug 16, 2015 2:09:08 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
Initiating action with parallelism: 5
Aug 16, 2015 2:09:08 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
reading another 1 footers
Aug 16, 2015 2:09:08 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
Initiating action with parallelism: 5
Aug 16, 2015 2:09:08 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized
will read a total of 5 records.
Aug 16, 2015 2:09:08 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next
block
Aug 16, 2015 2:09:08 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in
1 ms. row count = 5
2015-08-16 02:26:42,356 INFO: org.mortbay.log (info(67)) - Shutdown hook
executing
2015-08-16 02:26:42,358 INFO: org.apache.tajo.master.TajoMaster (run(538)) -
============================================
2015-08-16 02:26:42,371 INFO: org.apache.tajo.master.TajoMaster (run(539)) -
TajoMaster received SIGINT Signal
2015-08-16 02:26:42,371 INFO: org.apache.tajo.master.TajoMaster (run(540)) -
============================================
2015-08-16 02:26:42,371 INFO: org.mortbay.log (info(67)) - Shutdown hook
complete
2015-08-16 02:26:42,372 INFO: org.apache.tajo.util.history.HistoryWriter
(run(268)) - HistoryWriter_asf900.gq1.ygridcore.net_20687 stopped.
2015-08-16 02:26:42,373 INFO: org.apache.tajo.util.history.HistoryCleaner
(run(136)) - History cleaner stopped
2015-08-16 02:26:42,375 INFO: org.apache.tajo.rpc.NettyServerBase
(shutdown(173)) - Rpc (Tajo-REST) listened on 0:0:0:0:0:0:0:0:20686) shutdown
2015-08-16 02:26:42,375 INFO: org.apache.tajo.ws.rs.TajoRestService
(serviceStop(129)) - Tajo Rest Service stopped.
2015-08-16 02:26:42,375 INFO: org.apache.tajo.catalog.CatalogServer
(serviceStop(178)) - Catalog Server (127.0.0.1:20683) shutdown
2015-08-16 02:26:42,375 INFO: org.apache.tajo.rpc.NettyServerBase
(shutdown(173)) - Rpc (CatalogProtocol) listened on 127.0.0.1:20683) shutdown
2015-08-16 02:26:42,378 INFO: org.apache.tajo.util.history.HistoryWriter
(run(268)) - HistoryWriter_127.0.0.1_20685 stopped.
2015-08-16 02:26:42,384 INFO: BlockStateChange (logAddStoredBlock(2624)) -
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:34596 is added to
blk_1073741834_1010{UCState=UNDER_CONSTRUCTION, truncateBlock=null,
primaryNodeIndex=-1,
replicas=[ReplicaUC[[DISK]DS-19571bff-196c-426d-b6e5-e183147c41bf:NORMAL:127.0.0.1:34596|RBW]]}
size 704
2015-08-16 02:26:42,385 INFO: org.apache.tajo.util.history.HistoryCleaner
(run(136)) - History cleaner stopped
2015-08-16 02:26:42,387 INFO: org.apache.tajo.rpc.NettyServerBase
(shutdown(173)) - Rpc (QueryCoordinatorProtocol) listened on 127.0.0.1:20685)
shutdown
2015-08-16 02:26:42,387 INFO: org.apache.tajo.rpc.NettyServerBase
(shutdown(173)) - Rpc (TajoMasterClientProtocol) listened on 127.0.0.1:20684)
shutdown
2015-08-16 02:26:42,396 INFO: org.apache.tajo.rpc.NettyServerBase
(shutdown(173)) - Rpc (TajoResourceTrackerProtocol) listened on
127.0.0.1:20682) shutdown
2015-08-16 02:26:42,396 INFO: org.apache.tajo.master.TajoMaster
(serviceStop(406)) - Tajo Master main thread exiting
2015-08-16 02:26:42,401 WARN: org.apache.tajo.rpc.NettyClientBase
(doReconnect(198)) - Exception
[org.apache.tajo.ipc.TajoMasterClientProtocol(/127.0.0.1:20684)]:
ConnectException: Connection refused: /127.0.0.1:20684 Try to reconnect :
/127.0.0.1:20684
2015-08-16 02:26:42,426 INFO: org.apache.tajo.worker.NodeStatusUpdater
(serviceStop(111)) - NodeStatusUpdater stopped.
2015-08-16 02:26:42,426 INFO: org.apache.tajo.worker.NodeStatusUpdater
(run(262)) - Heartbeat Thread stopped.
2015-08-16 02:26:42,427 INFO: org.apache.tajo.rpc.NettyServerBase
(shutdown(173)) - Rpc (QueryMasterProtocol) listened on 0:0:0:0:0:0:0:0:20689)
shutdown
2015-08-16 02:26:42,428 INFO:
org.apache.tajo.querymaster.QueryMasterManagerService (serviceStop(106)) -
QueryMasterManagerService stopped
2015-08-16 02:26:42,429 INFO: org.apache.tajo.querymaster.QueryMaster
(run(417)) - QueryMaster heartbeat thread stopped
2015-08-16 02:26:42,431 INFO: org.apache.tajo.querymaster.QueryMaster
(serviceStop(168)) - QueryMaster stopped
2015-08-16 02:26:42,431 INFO: org.apache.tajo.worker.TajoWorkerClientService
(stop(99)) - TajoWorkerClientService stopping
2015-08-16 02:26:42,432 INFO: org.apache.tajo.rpc.NettyServerBase
(shutdown(173)) - Rpc (QueryMasterClientProtocol) listened on
0:0:0:0:0:0:0:0:20688) shutdown
2015-08-16 02:26:42,432 INFO: org.apache.tajo.worker.TajoWorkerClientService
(stop(103)) - TajoWorkerClientService stopped
2015-08-16 02:26:42,432 INFO: org.apache.tajo.rpc.NettyServerBase
(shutdown(173)) - Rpc (TajoWorkerProtocol) listened on 0:0:0:0:0:0:0:0:20687)
shutdown
2015-08-16 02:26:42,432 INFO: org.apache.tajo.worker.TajoWorkerManagerService
(serviceStop(93)) - TajoWorkerManagerService stopped
2015-08-16 02:26:42,433 INFO: org.apache.tajo.worker.TajoWorker
(serviceStop(377)) - TajoWorker main thread exiting
2015-08-16 02:26:43,402 WARN: org.apache.tajo.rpc.NettyClientBase
(doReconnect(198)) - Exception
[org.apache.tajo.ipc.TajoMasterClientProtocol(/127.0.0.1:20684)]:
ConnectException: Connection refused: /127.0.0.1:20684 Try to reconnect :
/127.0.0.1:20684
Results :
Failed tests: 
  TestDDLBuilder.testBuildDDLForExternalTable:64 expected:<...) USING TEXT WITH ('[compression.codec'='org.apache.hadoop.io.compress.GzipCodec', 'text.delimiter'='|]') PARTITION BY COLU...> but was:<...) USING TEXT WITH ('[text.delimiter'='|', 'compression.codec'='org.apache.hadoop.io.compress.GzipCodec]') PARTITION BY COLU...>
  TestDDLBuilder.testBuildDDLForBaseTable:103 expected:<...) USING TEXT WITH ('[compression.codec'='org.apache.hadoop.io.compress.GzipCodec', 'text.delimiter'='|]');> but was:<...) USING TEXT WITH ('[text.delimiter'='|', 'compression.codec'='org.apache.hadoop.io.compress.GzipCodec]');>
  TestDDLBuilder.testBuildDDLQuotedTableName:90 expected:<...) USING TEXT WITH ('[compression.codec'='org.apache.hadoop.io.compress.GzipCodec', 'text.delimiter'='|]') PARTITION BY COLU...> but was:<...) USING TEXT WITH ('[text.delimiter'='|', 'compression.codec'='org.apache.hadoop.io.compress.GzipCodec]') PARTITION BY COLU...>
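All three diffs above contain the same expected and actual WITH-clause properties and differ only in their order, which suggests the DDL builder iterates a hash-based map whose iteration order varies across JDKs. A minimal sketch of one common fix, emitting properties in sorted key order (class and method names here are hypothetical, not Tajo's actual API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.StringJoiner;
import java.util.TreeMap;

public class DeterministicDDL {
    // Render table properties as a WITH (...) clause in sorted key order,
    // so the generated DDL text does not depend on HashMap iteration order.
    static String withClause(Map<String, String> props) {
        StringJoiner clause = new StringJoiner(", ", "WITH (", ")");
        // TreeMap iterates entries in natural (lexicographic) key order.
        for (Map.Entry<String, String> e : new TreeMap<>(props).entrySet()) {
            clause.add("'" + e.getKey() + "'='" + e.getValue() + "'");
        }
        return clause.toString();
    }

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        props.put("text.delimiter", "|");
        props.put("compression.codec", "org.apache.hadoop.io.compress.GzipCodec");
        // compression.codec sorts before text.delimiter,
        // regardless of insertion order.
        System.out.println(withClause(props));
    }
}
```

An alternative with the same effect is to make the tests order-insensitive by parsing the property list back into a map before comparing; either way the assertion stops depending on JDK hash-iteration details.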
Tests run: 1691, Failures: 3, Errors: 0, Skipped: 0
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Tajo Main ......................................... SUCCESS [ 1.546 s]
[INFO] Tajo Project POM .................................. SUCCESS [ 1.104 s]
[INFO] Tajo Maven Plugins ................................ SUCCESS [ 2.558 s]
[INFO] Tajo Common ....................................... SUCCESS [ 21.833 s]
[INFO] Tajo Algebra ...................................... SUCCESS [ 2.205 s]
[INFO] Tajo Catalog Common ............................... SUCCESS [ 4.817 s]
[INFO] Tajo Plan ......................................... SUCCESS [ 6.103 s]
[INFO] Tajo Rpc Common ................................... SUCCESS [ 1.202 s]
[INFO] Tajo Protocol Buffer Rpc .......................... SUCCESS [01:26 min]
[INFO] Tajo Catalog Client ............................... SUCCESS [ 1.369 s]
[INFO] Tajo Catalog Server ............................... SUCCESS [ 9.698 s]
[INFO] Tajo Storage Common ............................... SUCCESS [ 7.940 s]
[INFO] Tajo HDFS Storage ................................. SUCCESS [ 43.625 s]
[INFO] Tajo PullServer ................................... SUCCESS [ 0.982 s]
[INFO] Tajo Client ....................................... SUCCESS [ 2.489 s]
[INFO] Tajo CLI tools .................................... SUCCESS [ 1.640 s]
[INFO] Tajo JDBC Driver .................................. SUCCESS [ 2.477 s]
[INFO] ASM (thirdparty) .................................. SUCCESS [ 1.829 s]
[INFO] Tajo RESTful Container ............................ SUCCESS [ 3.477 s]
[INFO] Tajo Metrics ...................................... SUCCESS [ 1.367 s]
[INFO] Tajo Core ......................................... SUCCESS [ 8.124 s]
[INFO] Tajo RPC .......................................... SUCCESS [ 0.914 s]
[INFO] Tajo Catalog Drivers Hive ......................... SUCCESS [ 10.396 s]
[INFO] Tajo Catalog Drivers .............................. SUCCESS [ 0.044 s]
[INFO] Tajo Catalog ...................................... SUCCESS [ 0.963 s]
[INFO] Tajo HBase Storage ................................ SUCCESS [ 3.211 s]
[INFO] Tajo Storage ...................................... SUCCESS [ 0.929 s]
[INFO] Tajo Distribution ................................. SUCCESS [ 5.339 s]
[INFO] Tajo Cluster Tests ................................ SUCCESS [ 2.128 s]
[INFO] Tajo Core Tests ................................... FAILURE [21:32 min]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 25:29 min
[INFO] Finished at: 2015-08-16T02:26:44+00:00
[INFO] Final Memory: 141M/1931M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on
project tajo-core-tests: There are test failures.
[ERROR]
[ERROR] Please refer to
<https://builds.apache.org/job/Tajo-master-jdk8-nightly/ws/tajo-core-tests/target/surefire-reports>
for the individual test results.
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please
read the following articles:
[ERROR] [Help 1]
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <goals> -rf :tajo-core-tests
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Tajo-master-jdk8-nightly #82
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 58863387 bytes
Compression is 0.0%
Took 19 sec
Recording test results