See <https://builds.apache.org/job/Tajo-master-jdk8-nightly/108/changes>

Changes:

[hyunsik] Bump up to 0.12.0-SNAPSHOT.

[hyunsik] Add missed SNAPSHOT suffix.

------------------------------------------
[...truncated 733298 lines...]
2015-08-17 02:26:03,414 WARN: org.apache.hadoop.hdfs.DFSClient (close(669)) - 
DFSInputStream has been closed already
2015-08-17 02:26:03,414 WARN: org.apache.hadoop.hdfs.DFSClient (close(669)) - 
DFSInputStream has been closed already
2015-08-17 02:26:03,418 INFO: BlockStateChange (logAddStoredBlock(2624)) - 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:35930 is added to 
blk_1073748707_7883{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-e6518683-d5b2-480e-b961-ce5c42e5115a:NORMAL:127.0.0.1:35930|RBW]]}
 size 0
2015-08-17 02:26:03,418 INFO: org.apache.tajo.worker.TaskAttemptContext 
(setState(142)) - Query status of ta_1439777128104_2052_000001_000000_00 is 
changed to TA_SUCCEEDED
2015-08-17 02:26:03,418 INFO: org.apache.tajo.worker.TaskImpl (run(460)) - 
ta_1439777128104_2052_000001_000000_00 completed. Worker's task counter - 
total:1, succeeded: 1, killed: 0, failed: 0
2015-08-17 02:26:03,419 INFO: org.apache.tajo.querymaster.Stage 
(transition(1298)) - Stage - eb_1439777128104_2052_000001 finalize NONE_SHUFFLE 
(total=1, success=1, killed=0)
2015-08-17 02:26:03,419 INFO: org.apache.tajo.querymaster.DefaultTaskScheduler 
(stop(160)) - Task Scheduler stopped
2015-08-17 02:26:03,419 INFO: org.apache.tajo.querymaster.DefaultTaskScheduler 
(run(122)) - TaskScheduler schedulingThread stopped
2015-08-17 02:26:03,419 INFO: org.apache.tajo.querymaster.Stage 
(transition(1356)) - Stage completed - eb_1439777128104_2052_000001 (total=1, 
success=1, killed=0)
2015-08-17 02:26:03,420 INFO: org.apache.tajo.querymaster.Query (handle(855)) - 
Processing q_1439777128104_2052 of type STAGE_COMPLETED
2015-08-17 02:26:03,420 INFO: 
org.apache.tajo.engine.planner.global.ParallelExecutionQueue (next(95)) - Next 
executable block eb_1439777128104_2052_000002
2015-08-17 02:26:03,420 INFO: org.apache.tajo.worker.TaskManager 
(stopExecutionBlock(161)) - Stopped execution block:eb_1439777128104_2052_000001
2015-08-17 02:26:03,420 INFO: org.apache.tajo.querymaster.Query 
(transition(802)) - Complete Stage[eb_1439777128104_2052_000001], State: 
SUCCEEDED, 1/1. 
2015-08-17 02:26:03,420 INFO: org.apache.tajo.querymaster.Query (handle(855)) - 
Processing q_1439777128104_2052 of type QUERY_COMPLETED
2015-08-17 02:26:03,421 INFO: org.apache.tajo.querymaster.Query 
(finalizeQuery(528)) - Can't find partitions for adding.
2015-08-17 02:26:03,421 INFO: org.apache.tajo.querymaster.Query (handle(873)) - 
q_1439777128104_2052 Query Transitioned from QUERY_RUNNING to QUERY_SUCCEEDED
2015-08-17 02:26:03,421 INFO: org.apache.tajo.querymaster.QueryMasterTask 
(handle(295)) - Query completion notified from q_1439777128104_2052 final 
state: QUERY_SUCCEEDED
2015-08-17 02:26:03,421 INFO: org.apache.tajo.master.QueryInProgress 
(heartbeat(252)) - Received QueryMaster 
heartbeat:q_1439777128104_2052,state=QUERY_SUCCEEDED,progress=1.0, 
queryMaster=asf900.gq1.ygridcore.net
2015-08-17 02:26:03,422 INFO: org.apache.tajo.master.QueryManager 
(stopQuery(275)) - Stop QueryInProgress:q_1439777128104_2052
2015-08-17 02:26:03,422 INFO: org.apache.tajo.master.QueryInProgress 
(stopProgress(117)) - =========================================================
2015-08-17 02:26:03,422 INFO: org.apache.tajo.master.QueryInProgress 
(stopProgress(118)) - Stop query:q_1439777128104_2052
2015-08-17 02:26:03,422 INFO: org.apache.tajo.querymaster.QueryMasterTask 
(serviceStop(172)) - Stopping QueryMasterTask:q_1439777128104_2052
2015-08-17 02:26:03,422 INFO: org.apache.tajo.querymaster.QueryMasterTask 
(cleanupQuery(471)) - Cleanup resources of all workers. Query: 
q_1439777128104_2052, workers: 1
2015-08-17 02:26:03,422 INFO: org.apache.tajo.querymaster.QueryMasterTask 
(serviceStop(188)) - Stopped QueryMasterTask:q_1439777128104_2052
2015-08-17 02:26:03,640 WARN: org.apache.hadoop.hdfs.DFSClient (close(669)) - 
DFSInputStream has been closed already
2015-08-17 02:26:03,640 WARN: org.apache.hadoop.hdfs.DFSClient (close(669)) - 
DFSInputStream has been closed already
2015-08-17 02:26:03,642 INFO: org.apache.tajo.master.TajoMasterClientService 
(getQueryResultData(579)) - Send result to client for 
f70274f8-1fa4-42c2-a854-ed2bf2f561f0,q_1439777128104_2052, 2 rows
2015-08-17 02:26:03,643 INFO: org.apache.tajo.master.TajoMasterClientService 
(getQueryResultData(579)) - Send result to client for 
f70274f8-1fa4-42c2-a854-ed2bf2f561f0,q_1439777128104_2052, 0 rows
2015-08-17 02:26:03,644 INFO: org.apache.tajo.session.SessionManager 
(removeSession(86)) - Session f70274f8-1fa4-42c2-a854-ed2bf2f561f0 is removed.
2015-08-17 02:26:03,645 INFO: org.apache.tajo.master.GlobalEngine 
(updateQuery(237)) - SQL: DROP TABLE IF EXISTS "TestTajoJdbc".table1
2015-08-17 02:26:03,645 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(280)) - Non Optimized Query: 

-----------------------------
Query Block Graph
-----------------------------
|-#ROOT
-----------------------------
Optimization Log:
-----------------------------


2015-08-17 02:26:03,646 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(282)) - =============================================
2015-08-17 02:26:03,646 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(283)) - Optimized Query: 

-----------------------------
Query Block Graph
-----------------------------
|-#ROOT
-----------------------------
Optimization Log:
-----------------------------


2015-08-17 02:26:03,646 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(284)) - =============================================
2015-08-17 02:26:03,646 INFO: org.apache.tajo.master.exec.DDLExecutor 
(dropTable(310)) - relation "TestTajoJdbc.table1" is already exists.
2015-08-17 02:26:03,647 INFO: org.apache.tajo.master.GlobalEngine 
(updateQuery(237)) - SQL: DROP TABLE IF EXISTS testaltertablepartition
2015-08-17 02:26:03,647 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(280)) - Non Optimized Query: 

-----------------------------
Query Block Graph
-----------------------------
|-#ROOT
-----------------------------
Optimization Log:
-----------------------------


2015-08-17 02:26:03,647 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(282)) - =============================================
2015-08-17 02:26:03,647 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(283)) - Optimized Query: 

-----------------------------
Query Block Graph
-----------------------------
|-#ROOT
-----------------------------
Optimization Log:
-----------------------------


2015-08-17 02:26:03,647 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(284)) - =============================================
2015-08-17 02:26:03,648 INFO: org.apache.tajo.catalog.CatalogServer 
(dropTable(697)) - relation "TestTajoJdbc.testaltertablepartition" is deleted 
from the catalog (127.0.0.1:38300)
2015-08-17 02:26:03,648 INFO: org.apache.tajo.master.exec.DDLExecutor 
(dropTable(327)) - relation "TestTajoJdbc.testaltertablepartition" is  dropped.
2015-08-17 02:26:03,649 INFO: org.apache.tajo.session.SessionManager 
(removeSession(86)) - Session 5fa8f626-1eff-4560-a6fe-1d0220084375 is removed.
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.542 sec - in 
org.apache.tajo.jdbc.TestTajoJdbc
2015-08-17 02:26:03,655 INFO: org.apache.tajo.worker.TajoWorker (run(567)) - 
============================================
2015-08-17 02:26:03,657 INFO: org.apache.tajo.worker.TajoWorker (run(568)) - 
TajoWorker received SIGINT Signal
2015-08-17 02:26:03,658 INFO: org.apache.tajo.worker.TajoWorker (run(569)) - 
============================================
2015-08-17 02:26:03,658 INFO: org.apache.tajo.session.SessionManager 
(removeSession(86)) - Session 8d61af39-989b-4be2-9b19-9eb04da06e25 is removed.
2015-08-17 02:26:03,659 INFO: org.apache.tajo.master.TajoMaster (run(538)) - 
============================================
2015-08-17 02:26:03,661 INFO: org.mortbay.log (info(67)) - Shutdown hook 
executing
2015-08-17 02:26:03,666 INFO: org.apache.tajo.master.TajoMaster (run(539)) - 
TajoMaster received SIGINT Signal
2015-08-17 02:26:03,668 INFO: org.mortbay.log (info(67)) - Shutdown hook 
complete
2015-08-17 02:26:03,668 INFO: org.apache.tajo.master.TajoMaster (run(540)) - 
============================================
2015-08-17 02:26:03,670 INFO: org.apache.tajo.util.history.HistoryCleaner 
(run(136)) - History cleaner stopped
2015-08-17 02:26:03,670 INFO: org.apache.tajo.util.history.HistoryWriter 
(run(268)) - HistoryWriter_asf900.gq1.ygridcore.net_38304 stopped.
2015-08-17 02:26:03,676 INFO: org.apache.tajo.worker.NodeStatusUpdater 
(run(262)) - Heartbeat Thread stopped.
2015-08-17 02:26:03,676 INFO: org.apache.tajo.worker.NodeStatusUpdater 
(serviceStop(111)) - NodeStatusUpdater stopped.
2015-08-17 02:26:03,674 INFO: org.apache.tajo.session.SessionManager 
(removeSession(86)) - Session 6651846e-8724-49c5-af64-53497fd5ca4b is removed.
2015-08-17 02:26:03,678 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(173)) - Rpc (Tajo-REST) listened on 0:0:0:0:0:0:0:0:38303) shutdown
2015-08-17 02:26:03,680 INFO: org.apache.tajo.ws.rs.TajoRestService 
(serviceStop(129)) - Tajo Rest Service stopped.
ader initialized will read a total of 2 records.
Aug 17, 2015 2:08:52 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next 
block
Aug 17, 2015 2:08:52 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 
1 ms. row count = 2
Aug 17, 2015 2:09:07 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore 
to file. allocated memory: 26
Aug 17, 2015 2:09:07 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for 
[l_orderkey] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [PLAIN, 
RLE, BIT_PACKED]
Aug 17, 2015 2:09:07 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 65B for 
[l_shipdate_function] BINARY: 1 values, 20B raw, 20B comp, 1 pages, encodings: 
[PLAIN, RLE, BIT_PACKED]
Aug 17, 2015 2:09:07 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore 
to file. allocated memory: 26
Aug 17, 2015 2:09:07 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for 
[l_orderkey] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [PLAIN, 
RLE, BIT_PACKED]
Aug 17, 2015 2:09:07 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 65B for 
[l_shipdate_function] BINARY: 1 values, 20B raw, 20B comp, 1 pages, encodings: 
[PLAIN, RLE, BIT_PACKED]
Aug 17, 2015 2:09:07 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore 
to file. allocated memory: 26
Aug 17, 2015 2:09:07 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for 
[l_orderkey] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [PLAIN, 
RLE, BIT_PACKED]
Aug 17, 2015 2:09:07 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 65B for 
[l_shipdate_function] BINARY: 1 values, 20B raw, 20B comp, 1 pages, encodings: 
[PLAIN, RLE, BIT_PACKED]
Aug 17, 2015 2:09:07 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore 
to file. allocated memory: 26
Aug 17, 2015 2:09:07 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for 
[l_orderkey] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [PLAIN, 
RLE, BIT_PACKED]
Aug 17, 2015 2:09:07 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 65B for 
[l_shipdate_function] BINARY: 1 values, 20B raw, 20B comp, 1 pages, encodings: 
[PLAIN, RLE, BIT_PACKED]
Aug 17, 2015 2:09:08 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore 
to file. allocated memory: 26
Aug 17, 2015 2:09:08 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for 
[l_orderkey] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [PLAIN, 
RLE, BIT_PACKED]
Aug 17, 2015 2:09:08 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 65B for 
[l_shipdate_function] BINARY: 1 values, 20B raw, 20B comp, 1 pages, encodings: 
[PLAIN, RLE, BIT_PACKED]
Aug 17, 2015 2:09:08 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Aug 17, 2015 2:09:08 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Aug 17, 2015 2:09:08 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Aug 17, 2015 2:09:08 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
Aug 17, 2015 2:09:08 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Aug 17, 2015 2:09:08 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
Aug 17, 2015 2:09:08 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Aug 17, 2015 2:09:08 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
Aug 17, 2015 2:09:08 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Aug 17, 2015 2:09:08 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized 
will read a total of 1 records.
Aug 17, 2015 2:09:08 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized 
will read a total of 1 records.
Aug 17, 2015 2:09:08 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next 
block
Aug 17, 2015 2:09:08 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next 
block
Aug 17, 2015 2:09:08 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized 
will read a total of 1 records.
Aug 17, 2015 2:09:08 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next 
block
Aug 17, 2015 2:09:08 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 
1 ms. row count = 1
Aug 17, 2015 2:09:08 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 
1 ms. row count = 1
Aug 17, 2015 2:09:08 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 
1 ms. row count = 1
Aug 17, 2015 2:09:08 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Aug 17, 2015 2:09:08 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
Aug 17, 2015 2:09:08 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Aug 17, 2015 2:09:08 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized 
will read a total of 1 records.
Aug 17, 2015 2:09:08 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next 
block
Aug 17, 2015 2:09:08 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 
1 ms. row count = 1
Aug 17, 2015 2:09:08 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Aug 17, 2015 2:09:08 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
Aug 17, 2015 2:09:08 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Aug 17, 2015 2:09:08 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized 
will read a total of 1 records.
Aug 17, 2015 2:09:08 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next 
block
Aug 17, 2015 2:09:08 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 
1 ms. row count = 1
Aug 17, 2015 2:09:11 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore 
to file. allocated memory: 212
Aug 17, 2015 2:09:11 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for 
[l_orderkey] INT32: 5 values, 10B raw, 10B comp, 1 pages, encodings: 
[PLAIN_DICTIONARY, RLE, BIT_PACKED], dic { 3 entries, 12B raw, 3B comp}
Aug 17, 2015 2:09:11 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 123B for 
[l_shipdate] BINARY: 5 values, 76B raw, 76B comp, 1 pages, encodings: [PLAIN, 
RLE, BIT_PACKED]
Aug 17, 2015 2:09:11 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 123B for 
[l_shipdate_function] BINARY: 5 values, 76B raw, 76B comp, 1 pages, encodings: 
[PLAIN, RLE, BIT_PACKED]
Aug 17, 2015 2:09:11 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Aug 17, 2015 2:09:11 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
Aug 17, 2015 2:09:11 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Aug 17, 2015 2:09:11 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized 
will read a total of 5 records.
Aug 17, 2015 2:09:11 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next 
block
Aug 17, 2015 2:09:11 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 
1 ms. row count = 5
2015-08-17 02:26:03,678 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(173)) - Rpc (QueryMasterProtocol) listened on 0:0:0:0:0:0:0:0:38306) 
shutdown
2015-08-17 02:26:03,681 INFO: 
org.apache.tajo.querymaster.QueryMasterManagerService (serviceStop(106)) - 
QueryMasterManagerService stopped
2015-08-17 02:26:03,680 INFO: org.apache.tajo.catalog.CatalogServer 
(serviceStop(178)) - Catalog Server (127.0.0.1:38300) shutdown
2015-08-17 02:26:03,682 INFO: org.apache.tajo.querymaster.QueryMaster 
(run(417)) - QueryMaster heartbeat thread stopped
2015-08-17 02:26:03,682 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(173)) - Rpc (CatalogProtocol) listened on 127.0.0.1:38300) shutdown
2015-08-17 02:26:03,686 INFO: org.apache.tajo.util.history.HistoryWriter 
(run(268)) - HistoryWriter_127.0.0.1_38302 stopped.
2015-08-17 02:26:03,686 INFO: org.apache.tajo.querymaster.QueryMaster 
(serviceStop(168)) - QueryMaster stopped
2015-08-17 02:26:03,686 INFO: org.apache.tajo.worker.TajoWorkerClientService 
(stop(99)) - TajoWorkerClientService stopping
2015-08-17 02:26:03,689 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(173)) - Rpc (QueryMasterClientProtocol) listened on 
0:0:0:0:0:0:0:0:38305) shutdown
2015-08-17 02:26:03,689 INFO: org.apache.tajo.worker.TajoWorkerClientService 
(stop(103)) - TajoWorkerClientService stopped
2015-08-17 02:26:03,689 INFO: BlockStateChange (logAddStoredBlock(2624)) - 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:35930 is added to 
blk_1073741834_1010{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-c2eba7c8-cf74-484e-b92d-8239c4ce64c3:NORMAL:127.0.0.1:35930|RBW]]}
 size 704
2015-08-17 02:26:03,691 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(173)) - Rpc (TajoWorkerProtocol) listened on 0:0:0:0:0:0:0:0:38304) 
shutdown
2015-08-17 02:26:03,691 INFO: org.apache.tajo.worker.TajoWorkerManagerService 
(serviceStop(93)) - TajoWorkerManagerService stopped
2015-08-17 02:26:03,693 INFO: org.apache.tajo.worker.TajoWorker 
(serviceStop(377)) - TajoWorker main thread exiting
2015-08-17 02:26:03,693 INFO: org.apache.tajo.util.history.HistoryCleaner 
(run(136)) - History cleaner stopped
2015-08-17 02:26:03,693 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(173)) - Rpc (QueryCoordinatorProtocol) listened on 127.0.0.1:38302) 
shutdown
2015-08-17 02:26:03,695 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(173)) - Rpc (TajoMasterClientProtocol) listened on 127.0.0.1:38301) 
shutdown
2015-08-17 02:26:03,700 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(173)) - Rpc (TajoResourceTrackerProtocol) listened on 
127.0.0.1:38299) shutdown
2015-08-17 02:26:03,700 INFO: org.apache.tajo.master.TajoMaster 
(serviceStop(406)) - Tajo Master main thread exiting

Results :

Failed tests: 
  TestDDLBuilder.testBuildDDLForExternalTable:64 expected:<...) USING TEXT WITH 
('[compression.codec'='org.apache.hadoop.io.compress.GzipCodec', 
'text.delimiter'='|]') PARTITION BY COLU...> but was:<...) USING TEXT WITH 
('[text.delimiter'='|', 
'compression.codec'='org.apache.hadoop.io.compress.GzipCodec]') PARTITION BY 
COLU...>
  TestDDLBuilder.testBuildDDLForBaseTable:103 expected:<...) USING TEXT WITH 
('[compression.codec'='org.apache.hadoop.io.compress.GzipCodec', 
'text.delimiter'='|]');> but was:<...) USING TEXT WITH ('[text.delimiter'='|', 
'compression.codec'='org.apache.hadoop.io.compress.GzipCodec]');>
  TestDDLBuilder.testBuildDDLQuotedTableName:90 expected:<...) USING TEXT WITH 
('[compression.codec'='org.apache.hadoop.io.compress.GzipCodec', 
'text.delimiter'='|]') PARTITION BY COLU...> but was:<...) USING TEXT WITH 
('[text.delimiter'='|', 
'compression.codec'='org.apache.hadoop.io.compress.GzipCodec]') PARTITION BY 
COLU...>

Tests run: 1691, Failures: 3, Errors: 0, Skipped: 0
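All three TestDDLBuilder failures show the same shape: the expected and actual DDL differ only in the order of the two WITH properties ('compression.codec' vs. 'text.delimiter'), which suggests the builder iterates an unordered map while the golden strings assume a fixed order. A minimal sketch of the usual fix, with a hypothetical `withClause` helper (not Tajo's actual API) that sorts keys before rendering so the output no longer depends on hash iteration order:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class DdlPropertyOrder {
    // Hypothetical helper: renders table properties as a DDL WITH clause.
    // Copying into a TreeMap sorts keys, so the rendered string is
    // deterministic regardless of the insertion or hash order of `props`.
    static String withClause(Map<String, String> props) {
        return new TreeMap<>(props).entrySet().stream()
            .map(e -> "'" + e.getKey() + "'='" + e.getValue() + "'")
            .collect(Collectors.joining(", ", "WITH (", ")"));
    }

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        // Same two properties as in the failing assertions above.
        props.put("text.delimiter", "|");
        props.put("compression.codec", "org.apache.hadoop.io.compress.GzipCodec");
        // 'c' sorts before 't', so compression.codec always comes first,
        // matching the order the tests expect.
        System.out.println(withClause(props));
    }
}
```

With deterministic ordering, golden-string assertions like those in TestDDLBuilder stop being sensitive to JDK HashMap changes (a common cause of such failures when moving to JDK 8).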

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Tajo Main ......................................... SUCCESS [  1.472 s]
[INFO] Tajo Project POM .................................. SUCCESS [  1.202 s]
[INFO] Tajo Maven Plugins ................................ SUCCESS [  2.424 s]
[INFO] Tajo Common ....................................... SUCCESS [ 22.096 s]
[INFO] Tajo Algebra ...................................... SUCCESS [  2.117 s]
[INFO] Tajo Catalog Common ............................... SUCCESS [  4.632 s]
[INFO] Tajo Plan ......................................... SUCCESS [  6.055 s]
[INFO] Tajo Rpc Common ................................... SUCCESS [  1.191 s]
[INFO] Tajo Protocol Buffer Rpc .......................... SUCCESS [01:26 min]
[INFO] Tajo Catalog Client ............................... SUCCESS [  1.303 s]
[INFO] Tajo Catalog Server ............................... SUCCESS [  9.304 s]
[INFO] Tajo Storage Common ............................... SUCCESS [  7.944 s]
[INFO] Tajo HDFS Storage ................................. SUCCESS [ 43.761 s]
[INFO] Tajo PullServer ................................... SUCCESS [  0.799 s]
[INFO] Tajo Client ....................................... SUCCESS [  2.513 s]
[INFO] Tajo CLI tools .................................... SUCCESS [  1.644 s]
[INFO] Tajo JDBC Driver .................................. SUCCESS [  2.428 s]
[INFO] ASM (thirdparty) .................................. SUCCESS [  1.827 s]
[INFO] Tajo RESTful Container ............................ SUCCESS [  3.436 s]
[INFO] Tajo Metrics ...................................... SUCCESS [  1.333 s]
[INFO] Tajo Core ......................................... SUCCESS [  8.131 s]
[INFO] Tajo RPC .......................................... SUCCESS [  0.907 s]
[INFO] Tajo Catalog Drivers Hive ......................... SUCCESS [ 10.215 s]
[INFO] Tajo Catalog Drivers .............................. SUCCESS [  0.044 s]
[INFO] Tajo Catalog ...................................... SUCCESS [  0.960 s]
[INFO] Tajo HBase Storage ................................ SUCCESS [  3.323 s]
[INFO] Tajo Storage ...................................... SUCCESS [  0.930 s]
[INFO] Tajo Distribution ................................. SUCCESS [  5.239 s]
[INFO] Tajo Cluster Tests ................................ SUCCESS [  2.194 s]
[INFO] Tajo Core Tests ................................... FAILURE [20:56 min]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 24:52 min
[INFO] Finished at: 2015-08-17T02:26:05+00:00
[INFO] Final Memory: 142M/2008M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project tajo-core-tests: There are test failures.
[ERROR] 
[ERROR] Please refer to 
<https://builds.apache.org/job/Tajo-master-jdk8-nightly/ws/tajo-core-tests/target/surefire-reports>
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :tajo-core-tests
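Combining Maven's resume hint with the -e flag it mentions, and Surefire's standard -Dtest filter to narrow the run to the failing class, gives a one-liner for reproducing locally. This is a sketch: the original goals are elided as `<goals>` in the log, so `test` below is an assumption, and the command is echoed rather than executed since it needs the Tajo source tree and Maven:

```shell
# Resume from the failing module with full stack traces, running only
# the failing test class. Remove the echo to actually run it.
echo "mvn -e test -rf :tajo-core-tests -Dtest=TestDDLBuilder"
```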
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Tajo-master-jdk8-nightly #82
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 58856592 bytes
Compression is 0.0%
Took 21 sec
Recording test results
